#----------------------------------------------------------------------
#
# pg_proc.dat
# Initial contents of the pg_proc system catalog.
#
# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
# Portions Copyright (c) 1994, Regents of the University of California
#
# src/include/catalog/pg_proc.dat
#
#----------------------------------------------------------------------
[
# Note: every entry in pg_proc.dat is expected to have a 'descr' comment,
# except for functions that implement pg_operator.dat operators and don't
# have a good reason to be called directly rather than via the operator.
# (If you do expect such a function to be used directly, you should
# duplicate the operator's comment.) initdb will supply suitable default
# comments for functions referenced by pg_operator.
# Try to follow the style of existing functions' comments.
# Some recommended conventions:
#
# "I/O" for typinput, typoutput, typreceive, typsend functions
# "I/O typmod" for typmodin, typmodout functions
# "aggregate transition function" for aggtransfn functions, unless
# they are reasonably useful in their own right
# "aggregate final function" for aggfinalfn functions (likewise)
# "convert srctypename to desttypename" for cast functions
# "less-equal-greater" for B-tree comparison functions
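#
# For instance, a minimal entry following these conventions might look like
# the sketch below (hypothetical OID, names, and types, shown only to
# illustrate the format; it is not an actual catalog row):
#
#   { oid => '9999', descr => 'I/O',
#     proname => 'mytypein', prorettype => 'mytype', proargtypes => 'cstring',
#     prosrc => 'mytypein' },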
# Note: pronargs is computed when this file is read, so it does not need
# to be specified in entries here. See AddDefaultValues() in Catalog.pm.
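# (Informally, pronargs is just the count of space-separated type names in
# proargtypes; a rough Perl sketch of that idea -- not the actual Catalog.pm
# code -- would be:
#   my @argtypes = split ' ', $row->{proargtypes} // '';
#   $row->{pronargs} = scalar @argtypes;
# )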
# Once upon a time these entries were ordered by OID. Lately it's often
# been the custom to insert new entries adjacent to related older entries.
# Try to do one or the other though, don't just insert entries at random.
# OIDS 1 - 99
{ oid => '1242', descr => 'I/O',
proname => 'boolin', prorettype => 'bool', proargtypes => 'cstring',
prosrc => 'boolin' },
{ oid => '1243', descr => 'I/O',
proname => 'boolout', prorettype => 'cstring', proargtypes => 'bool',
prosrc => 'boolout' },
{ oid => '1244', descr => 'I/O',
proname => 'byteain', prorettype => 'bytea', proargtypes => 'cstring',
prosrc => 'byteain' },
{ oid => '31', descr => 'I/O',
proname => 'byteaout', prorettype => 'cstring', proargtypes => 'bytea',
prosrc => 'byteaout' },
{ oid => '1245', descr => 'I/O',
proname => 'charin', prorettype => 'char', proargtypes => 'cstring',
prosrc => 'charin' },
{ oid => '33', descr => 'I/O',
proname => 'charout', prorettype => 'cstring', proargtypes => 'char',
prosrc => 'charout' },
{ oid => '34', descr => 'I/O',
proname => 'namein', prorettype => 'name', proargtypes => 'cstring',
prosrc => 'namein' },
{ oid => '35', descr => 'I/O',
proname => 'nameout', prorettype => 'cstring', proargtypes => 'name',
prosrc => 'nameout' },
{ oid => '38', descr => 'I/O',
proname => 'int2in', prorettype => 'int2', proargtypes => 'cstring',
prosrc => 'int2in' },
{ oid => '39', descr => 'I/O',
proname => 'int2out', prorettype => 'cstring', proargtypes => 'int2',
prosrc => 'int2out' },
{ oid => '40', descr => 'I/O',
proname => 'int2vectorin', prorettype => 'int2vector',
proargtypes => 'cstring', prosrc => 'int2vectorin' },
{ oid => '41', descr => 'I/O',
proname => 'int2vectorout', prorettype => 'cstring',
proargtypes => 'int2vector', prosrc => 'int2vectorout' },
{ oid => '42', descr => 'I/O',
proname => 'int4in', prorettype => 'int4', proargtypes => 'cstring',
prosrc => 'int4in' },
{ oid => '43', descr => 'I/O',
proname => 'int4out', prorettype => 'cstring', proargtypes => 'int4',
prosrc => 'int4out' },
{ oid => '44', descr => 'I/O',
proname => 'regprocin', provolatile => 's', prorettype => 'regproc',
proargtypes => 'cstring', prosrc => 'regprocin' },
{ oid => '45', descr => 'I/O',
proname => 'regprocout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regproc', prosrc => 'regprocout' },
{ oid => '3494', descr => 'convert proname to regproc',
proname => 'to_regproc', provolatile => 's', prorettype => 'regproc',
proargtypes => 'text', prosrc => 'to_regproc' },
{ oid => '3479', descr => 'convert proname to regprocedure',
proname => 'to_regprocedure', provolatile => 's',
prorettype => 'regprocedure', proargtypes => 'text',
prosrc => 'to_regprocedure' },
{ oid => '46', descr => 'I/O',
proname => 'textin', prorettype => 'text', proargtypes => 'cstring',
prosrc => 'textin' },
{ oid => '47', descr => 'I/O',
proname => 'textout', prorettype => 'cstring', proargtypes => 'text',
prosrc => 'textout' },
{ oid => '48', descr => 'I/O',
proname => 'tidin', prorettype => 'tid', proargtypes => 'cstring',
prosrc => 'tidin' },
{ oid => '49', descr => 'I/O',
proname => 'tidout', prorettype => 'cstring', proargtypes => 'tid',
prosrc => 'tidout' },
{ oid => '50', descr => 'I/O',
proname => 'xidin', prorettype => 'xid', proargtypes => 'cstring',
prosrc => 'xidin' },
{ oid => '51', descr => 'I/O',
proname => 'xidout', prorettype => 'cstring', proargtypes => 'xid',
prosrc => 'xidout' },
{ oid => '5070', descr => 'I/O',
proname => 'xid8in', prorettype => 'xid8', proargtypes => 'cstring',
prosrc => 'xid8in' },
{ oid => '5081', descr => 'I/O',
proname => 'xid8out', prorettype => 'cstring', proargtypes => 'xid8',
prosrc => 'xid8out' },
{ oid => '5082', descr => 'I/O',
proname => 'xid8recv', prorettype => 'xid8', proargtypes => 'internal',
prosrc => 'xid8recv' },
{ oid => '5083', descr => 'I/O',
proname => 'xid8send', prorettype => 'bytea', proargtypes => 'xid8',
prosrc => 'xid8send' },
{ oid => '52', descr => 'I/O',
proname => 'cidin', prorettype => 'cid', proargtypes => 'cstring',
prosrc => 'cidin' },
{ oid => '53', descr => 'I/O',
proname => 'cidout', prorettype => 'cstring', proargtypes => 'cid',
prosrc => 'cidout' },
{ oid => '54', descr => 'I/O',
proname => 'oidvectorin', prorettype => 'oidvector', proargtypes => 'cstring',
prosrc => 'oidvectorin' },
{ oid => '55', descr => 'I/O',
proname => 'oidvectorout', prorettype => 'cstring',
proargtypes => 'oidvector', prosrc => 'oidvectorout' },
{ oid => '56',
proname => 'boollt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bool bool', prosrc => 'boollt' },
{ oid => '57',
proname => 'boolgt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bool bool', prosrc => 'boolgt' },
{ oid => '60',
proname => 'booleq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bool bool', prosrc => 'booleq' },
{ oid => '61',
proname => 'chareq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'char char', prosrc => 'chareq' },
{ oid => '62',
proname => 'nameeq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name name', prosrc => 'nameeq' },
{ oid => '63',
proname => 'int2eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int2', prosrc => 'int2eq' },
{ oid => '64',
proname => 'int2lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int2', prosrc => 'int2lt' },
{ oid => '65',
proname => 'int4eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int4', prosrc => 'int4eq' },
{ oid => '66',
proname => 'int4lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int4', prosrc => 'int4lt' },
{ oid => '67',
proname => 'texteq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'texteq' },
{ oid => '3696',
proname => 'starts_with', prosupport => 'text_starts_with_support',
proleakproof => 't', prorettype => 'bool', proargtypes => 'text text',
prosrc => 'text_starts_with' },
{ oid => '6242', descr => 'planner support for text_starts_with',
proname => 'text_starts_with_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'text_starts_with_support' },
{ oid => '68',
proname => 'xideq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid xid', prosrc => 'xideq' },
{ oid => '3308',
proname => 'xidneq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid xid', prosrc => 'xidneq' },
{ oid => '5084',
proname => 'xid8eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid8 xid8', prosrc => 'xid8eq' },
{ oid => '5085',
proname => 'xid8ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid8 xid8', prosrc => 'xid8ne' },
{ oid => '5034',
proname => 'xid8lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid8 xid8', prosrc => 'xid8lt' },
{ oid => '5035',
proname => 'xid8gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid8 xid8', prosrc => 'xid8gt' },
{ oid => '5036',
proname => 'xid8le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid8 xid8', prosrc => 'xid8le' },
{ oid => '5037',
proname => 'xid8ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid8 xid8', prosrc => 'xid8ge' },
{ oid => '5096', descr => 'less-equal-greater',
proname => 'xid8cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'xid8 xid8', prosrc => 'xid8cmp' },
{ oid => '5071', descr => 'convert xid8 to xid',
proname => 'xid', prorettype => 'xid', proargtypes => 'xid8',
prosrc => 'xid8toxid' },
{ oid => '5097', descr => 'larger of two',
proname => 'xid8_larger', prorettype => 'xid8', proargtypes => 'xid8 xid8',
prosrc => 'xid8_larger' },
{ oid => '5098', descr => 'smaller of two',
proname => 'xid8_smaller', prorettype => 'xid8', proargtypes => 'xid8 xid8',
prosrc => 'xid8_smaller' },
{ oid => '69',
proname => 'cideq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'cid cid', prosrc => 'cideq' },
{ oid => '70',
proname => 'charne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'char char', prosrc => 'charne' },
{ oid => '1246',
proname => 'charlt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'char char', prosrc => 'charlt' },
{ oid => '72',
proname => 'charle', proleakproof => 't', prorettype => 'bool',
proargtypes => 'char char', prosrc => 'charle' },
{ oid => '73',
proname => 'chargt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'char char', prosrc => 'chargt' },
{ oid => '74',
proname => 'charge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'char char', prosrc => 'charge' },
{ oid => '77', descr => 'convert char to int4',
proname => 'int4', prorettype => 'int4', proargtypes => 'char',
prosrc => 'chartoi4' },
{ oid => '78', descr => 'convert int4 to char',
proname => 'char', prorettype => 'char', proargtypes => 'int4',
prosrc => 'i4tochar' },
{ oid => '79',
proname => 'nameregexeq', prosupport => 'textregexeq_support',
prorettype => 'bool', proargtypes => 'name text', prosrc => 'nameregexeq' },
{ oid => '1252',
proname => 'nameregexne', prorettype => 'bool', proargtypes => 'name text',
prosrc => 'nameregexne' },
{ oid => '1254',
proname => 'textregexeq', prosupport => 'textregexeq_support',
prorettype => 'bool', proargtypes => 'text text', prosrc => 'textregexeq' },
{ oid => '1256',
proname => 'textregexne', prorettype => 'bool', proargtypes => 'text text',
prosrc => 'textregexne' },
{ oid => '1364', descr => 'planner support for textregexeq',
proname => 'textregexeq_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'textregexeq_support' },
{ oid => '1257', descr => 'length',
proname => 'textlen', prorettype => 'int4', proargtypes => 'text',
prosrc => 'textlen' },
{ oid => '1258',
proname => 'textcat', prorettype => 'text', proargtypes => 'text text',
prosrc => 'textcat' },
{ oid => '84',
proname => 'boolne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bool bool', prosrc => 'boolne' },
{ oid => '89', descr => 'PostgreSQL version string',
proname => 'version', provolatile => 's', prorettype => 'text',
proargtypes => '', prosrc => 'pgsql_version' },
{ oid => '86', descr => 'I/O',
proname => 'pg_ddl_command_in', prorettype => 'pg_ddl_command',
proargtypes => 'cstring', prosrc => 'pg_ddl_command_in' },
{ oid => '87', descr => 'I/O',
proname => 'pg_ddl_command_out', prorettype => 'cstring',
proargtypes => 'pg_ddl_command', prosrc => 'pg_ddl_command_out' },
{ oid => '88', descr => 'I/O',
proname => 'pg_ddl_command_recv', prorettype => 'pg_ddl_command',
proargtypes => 'internal', prosrc => 'pg_ddl_command_recv' },
{ oid => '90', descr => 'I/O',
proname => 'pg_ddl_command_send', prorettype => 'bytea',
proargtypes => 'pg_ddl_command', prosrc => 'pg_ddl_command_send' },
# OIDS 100 - 199
{ oid => '101', descr => 'restriction selectivity of = and related operators',
proname => 'eqsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'eqsel' },
{ oid => '102',
descr => 'restriction selectivity of <> and related operators',
proname => 'neqsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'neqsel' },
{ oid => '103',
descr => 'restriction selectivity of < and related operators on scalar datatypes',
proname => 'scalarltsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'scalarltsel' },
{ oid => '104',
descr => 'restriction selectivity of > and related operators on scalar datatypes',
proname => 'scalargtsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'scalargtsel' },
{ oid => '105', descr => 'join selectivity of = and related operators',
proname => 'eqjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal', prosrc => 'eqjoinsel' },
{ oid => '106', descr => 'join selectivity of <> and related operators',
proname => 'neqjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'neqjoinsel' },
{ oid => '107',
descr => 'join selectivity of < and related operators on scalar datatypes',
proname => 'scalarltjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'scalarltjoinsel' },
{ oid => '108',
descr => 'join selectivity of > and related operators on scalar datatypes',
proname => 'scalargtjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'scalargtjoinsel' },
{ oid => '336',
descr => 'restriction selectivity of <= and related operators on scalar datatypes',
proname => 'scalarlesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'scalarlesel' },
{ oid => '337',
descr => 'restriction selectivity of >= and related operators on scalar datatypes',
proname => 'scalargesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'scalargesel' },
{ oid => '386',
descr => 'join selectivity of <= and related operators on scalar datatypes',
proname => 'scalarlejoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'scalarlejoinsel' },
{ oid => '398',
descr => 'join selectivity of >= and related operators on scalar datatypes',
proname => 'scalargejoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'scalargejoinsel' },
{ oid => '109', descr => 'I/O',
proname => 'unknownin', prorettype => 'unknown', proargtypes => 'cstring',
prosrc => 'unknownin' },
{ oid => '110', descr => 'I/O',
proname => 'unknownout', prorettype => 'cstring', proargtypes => 'unknown',
prosrc => 'unknownout' },
{ oid => '115',
proname => 'box_above_eq', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_above_eq' },
{ oid => '116',
proname => 'box_below_eq', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_below_eq' },
{ oid => '117', descr => 'I/O',
proname => 'point_in', prorettype => 'point', proargtypes => 'cstring',
prosrc => 'point_in' },
{ oid => '118', descr => 'I/O',
proname => 'point_out', prorettype => 'cstring', proargtypes => 'point',
prosrc => 'point_out' },
{ oid => '119', descr => 'I/O',
proname => 'lseg_in', prorettype => 'lseg', proargtypes => 'cstring',
prosrc => 'lseg_in' },
{ oid => '120', descr => 'I/O',
proname => 'lseg_out', prorettype => 'cstring', proargtypes => 'lseg',
prosrc => 'lseg_out' },
{ oid => '121', descr => 'I/O',
proname => 'path_in', prorettype => 'path', proargtypes => 'cstring',
prosrc => 'path_in' },
{ oid => '122', descr => 'I/O',
proname => 'path_out', prorettype => 'cstring', proargtypes => 'path',
prosrc => 'path_out' },
{ oid => '123', descr => 'I/O',
proname => 'box_in', prorettype => 'box', proargtypes => 'cstring',
prosrc => 'box_in' },
{ oid => '124', descr => 'I/O',
proname => 'box_out', prorettype => 'cstring', proargtypes => 'box',
prosrc => 'box_out' },
{ oid => '125',
proname => 'box_overlap', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_overlap' },
{ oid => '126',
proname => 'box_ge', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_ge' },
{ oid => '127',
proname => 'box_gt', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_gt' },
{ oid => '128',
proname => 'box_eq', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_eq' },
{ oid => '129',
proname => 'box_lt', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_lt' },
{ oid => '130',
proname => 'box_le', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_le' },
{ oid => '131',
proname => 'point_above', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_above' },
{ oid => '132',
proname => 'point_left', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_left' },
{ oid => '133',
proname => 'point_right', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_right' },
{ oid => '134',
proname => 'point_below', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_below' },
{ oid => '135',
proname => 'point_eq', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_eq' },
{ oid => '136',
proname => 'on_pb', prorettype => 'bool', proargtypes => 'point box',
prosrc => 'on_pb' },
{ oid => '137',
proname => 'on_ppath', prorettype => 'bool', proargtypes => 'point path',
prosrc => 'on_ppath' },
{ oid => '138',
proname => 'box_center', prorettype => 'point', proargtypes => 'box',
prosrc => 'box_center' },
{ oid => '139',
descr => 'restriction selectivity for area-comparison operators',
proname => 'areasel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'areasel' },
{ oid => '140', descr => 'join selectivity for area-comparison operators',
proname => 'areajoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'areajoinsel' },
{ oid => '141',
proname => 'int4mul', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4mul' },
{ oid => '144',
proname => 'int4ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int4', prosrc => 'int4ne' },
{ oid => '145',
proname => 'int2ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int2', prosrc => 'int2ne' },
{ oid => '146',
proname => 'int2gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int2', prosrc => 'int2gt' },
{ oid => '147',
proname => 'int4gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int4', prosrc => 'int4gt' },
{ oid => '148',
proname => 'int2le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int2', prosrc => 'int2le' },
{ oid => '149',
proname => 'int4le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int4', prosrc => 'int4le' },
{ oid => '150',
proname => 'int4ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int4', prosrc => 'int4ge' },
{ oid => '151',
proname => 'int2ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int2', prosrc => 'int2ge' },
{ oid => '152',
proname => 'int2mul', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2mul' },
{ oid => '153',
proname => 'int2div', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2div' },
{ oid => '154',
proname => 'int4div', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4div' },
{ oid => '155',
proname => 'int2mod', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2mod' },
{ oid => '156',
proname => 'int4mod', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4mod' },
{ oid => '157',
proname => 'textne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'textne' },
{ oid => '158',
proname => 'int24eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int4', prosrc => 'int24eq' },
{ oid => '159',
proname => 'int42eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int2', prosrc => 'int42eq' },
{ oid => '160',
proname => 'int24lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int4', prosrc => 'int24lt' },
{ oid => '161',
proname => 'int42lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int2', prosrc => 'int42lt' },
{ oid => '162',
proname => 'int24gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int4', prosrc => 'int24gt' },
{ oid => '163',
proname => 'int42gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int2', prosrc => 'int42gt' },
{ oid => '164',
proname => 'int24ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int4', prosrc => 'int24ne' },
{ oid => '165',
proname => 'int42ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int2', prosrc => 'int42ne' },
{ oid => '166',
proname => 'int24le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int4', prosrc => 'int24le' },
{ oid => '167',
proname => 'int42le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int2', prosrc => 'int42le' },
{ oid => '168',
proname => 'int24ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int4', prosrc => 'int24ge' },
{ oid => '169',
proname => 'int42ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int2', prosrc => 'int42ge' },
{ oid => '170',
proname => 'int24mul', prorettype => 'int4', proargtypes => 'int2 int4',
prosrc => 'int24mul' },
{ oid => '171',
proname => 'int42mul', prorettype => 'int4', proargtypes => 'int4 int2',
prosrc => 'int42mul' },
{ oid => '172',
proname => 'int24div', prorettype => 'int4', proargtypes => 'int2 int4',
prosrc => 'int24div' },
{ oid => '173',
proname => 'int42div', prorettype => 'int4', proargtypes => 'int4 int2',
prosrc => 'int42div' },
{ oid => '176',
proname => 'int2pl', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2pl' },
{ oid => '177',
proname => 'int4pl', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4pl' },
{ oid => '178',
proname => 'int24pl', prorettype => 'int4', proargtypes => 'int2 int4',
prosrc => 'int24pl' },
{ oid => '179',
proname => 'int42pl', prorettype => 'int4', proargtypes => 'int4 int2',
prosrc => 'int42pl' },
{ oid => '180',
proname => 'int2mi', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2mi' },
{ oid => '181',
proname => 'int4mi', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4mi' },
{ oid => '182',
proname => 'int24mi', prorettype => 'int4', proargtypes => 'int2 int4',
prosrc => 'int24mi' },
{ oid => '183',
proname => 'int42mi', prorettype => 'int4', proargtypes => 'int4 int2',
prosrc => 'int42mi' },
{ oid => '184',
proname => 'oideq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oid oid', prosrc => 'oideq' },
{ oid => '185',
proname => 'oidne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oid oid', prosrc => 'oidne' },
{ oid => '186',
proname => 'box_same', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_same' },
{ oid => '187',
proname => 'box_contain', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_contain' },
{ oid => '188',
proname => 'box_left', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_left' },
{ oid => '189',
proname => 'box_overleft', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_overleft' },
{ oid => '190',
proname => 'box_overright', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_overright' },
{ oid => '191',
proname => 'box_right', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_right' },
{ oid => '192',
proname => 'box_contained', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_contained' },
{ oid => '193',
proname => 'box_contain_pt', prorettype => 'bool', proargtypes => 'box point',
prosrc => 'box_contain_pt' },
{ oid => '195', descr => 'I/O',
proname => 'pg_node_tree_in', prorettype => 'pg_node_tree',
proargtypes => 'cstring', prosrc => 'pg_node_tree_in' },
{ oid => '196', descr => 'I/O',
proname => 'pg_node_tree_out', prorettype => 'cstring',
proargtypes => 'pg_node_tree', prosrc => 'pg_node_tree_out' },
{ oid => '197', descr => 'I/O',
proname => 'pg_node_tree_recv', provolatile => 's',
prorettype => 'pg_node_tree', proargtypes => 'internal',
prosrc => 'pg_node_tree_recv' },
{ oid => '198', descr => 'I/O',
proname => 'pg_node_tree_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'pg_node_tree', prosrc => 'pg_node_tree_send' },
# OIDS 200 - 299
{ oid => '200', descr => 'I/O',
proname => 'float4in', prorettype => 'float4', proargtypes => 'cstring',
prosrc => 'float4in' },
{ oid => '201', descr => 'I/O',
proname => 'float4out', prorettype => 'cstring', proargtypes => 'float4',
prosrc => 'float4out' },
{ oid => '202',
proname => 'float4mul', prorettype => 'float4',
proargtypes => 'float4 float4', prosrc => 'float4mul' },
{ oid => '203',
proname => 'float4div', prorettype => 'float4',
proargtypes => 'float4 float4', prosrc => 'float4div' },
{ oid => '204',
proname => 'float4pl', prorettype => 'float4', proargtypes => 'float4 float4',
prosrc => 'float4pl' },
{ oid => '205',
proname => 'float4mi', prorettype => 'float4', proargtypes => 'float4 float4',
prosrc => 'float4mi' },
{ oid => '206',
proname => 'float4um', prorettype => 'float4', proargtypes => 'float4',
prosrc => 'float4um' },
{ oid => '207',
proname => 'float4abs', prorettype => 'float4', proargtypes => 'float4',
prosrc => 'float4abs' },
{ oid => '208', descr => 'aggregate transition function',
proname => 'float4_accum', prorettype => '_float8',
proargtypes => '_float8 float4', prosrc => 'float4_accum' },
{ oid => '209', descr => 'larger of two',
proname => 'float4larger', prorettype => 'float4',
proargtypes => 'float4 float4', prosrc => 'float4larger' },
{ oid => '211', descr => 'smaller of two',
proname => 'float4smaller', prorettype => 'float4',
proargtypes => 'float4 float4', prosrc => 'float4smaller' },
{ oid => '212',
proname => 'int4um', prorettype => 'int4', proargtypes => 'int4',
prosrc => 'int4um' },
{ oid => '213',
proname => 'int2um', prorettype => 'int2', proargtypes => 'int2',
prosrc => 'int2um' },
{ oid => '214', descr => 'I/O',
proname => 'float8in', prorettype => 'float8', proargtypes => 'cstring',
prosrc => 'float8in' },
{ oid => '215', descr => 'I/O',
proname => 'float8out', prorettype => 'cstring', proargtypes => 'float8',
prosrc => 'float8out' },
{ oid => '216',
proname => 'float8mul', prorettype => 'float8',
proargtypes => 'float8 float8', prosrc => 'float8mul' },
{ oid => '217',
proname => 'float8div', prorettype => 'float8',
proargtypes => 'float8 float8', prosrc => 'float8div' },
{ oid => '218',
proname => 'float8pl', prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'float8pl' },
{ oid => '219',
proname => 'float8mi', prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'float8mi' },
{ oid => '220',
proname => 'float8um', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'float8um' },
{ oid => '221',
proname => 'float8abs', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'float8abs' },
{ oid => '222', descr => 'aggregate transition function',
proname => 'float8_accum', prorettype => '_float8',
proargtypes => '_float8 float8', prosrc => 'float8_accum' },
{ oid => '276', descr => 'aggregate combine function',
proname => 'float8_combine', prorettype => '_float8',
proargtypes => '_float8 _float8', prosrc => 'float8_combine' },
{ oid => '223', descr => 'larger of two',
proname => 'float8larger', prorettype => 'float8',
proargtypes => 'float8 float8', prosrc => 'float8larger' },
{ oid => '224', descr => 'smaller of two',
proname => 'float8smaller', prorettype => 'float8',
proargtypes => 'float8 float8', prosrc => 'float8smaller' },
{ oid => '225',
proname => 'lseg_center', prorettype => 'point', proargtypes => 'lseg',
prosrc => 'lseg_center' },
{ oid => '227',
proname => 'poly_center', prorettype => 'point', proargtypes => 'polygon',
prosrc => 'poly_center' },
{ oid => '228', descr => 'round to nearest integer',
proname => 'dround', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dround' },
{ oid => '229', descr => 'truncate to integer',
proname => 'dtrunc', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dtrunc' },
{ oid => '2308', descr => 'nearest integer >= value',
proname => 'ceil', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dceil' },
{ oid => '2320', descr => 'nearest integer >= value',
proname => 'ceiling', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dceil' },
{ oid => '2309', descr => 'nearest integer <= value',
proname => 'floor', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dfloor' },
{ oid => '2310', descr => 'sign of value',
proname => 'sign', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dsign' },
{ oid => '230',
proname => 'dsqrt', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dsqrt' },
{ oid => '231',
proname => 'dcbrt', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dcbrt' },
{ oid => '232',
proname => 'dpow', prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'dpow' },
{ oid => '233', descr => 'natural exponential (e^x)',
proname => 'dexp', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dexp' },
{ oid => '234', descr => 'natural logarithm',
proname => 'dlog1', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dlog1' },
{ oid => '235', descr => 'convert int2 to float8',
proname => 'float8', proleakproof => 't', prorettype => 'float8',
proargtypes => 'int2', prosrc => 'i2tod' },
{ oid => '236', descr => 'convert int2 to float4',
proname => 'float4', proleakproof => 't', prorettype => 'float4',
proargtypes => 'int2', prosrc => 'i2tof' },
{ oid => '237', descr => 'convert float8 to int2',
proname => 'int2', prorettype => 'int2', proargtypes => 'float8',
prosrc => 'dtoi2' },
{ oid => '238', descr => 'convert float4 to int2',
proname => 'int2', prorettype => 'int2', proargtypes => 'float4',
prosrc => 'ftoi2' },
{ oid => '239',
proname => 'line_distance', prorettype => 'float8',
proargtypes => 'line line', prosrc => 'line_distance' },
{ oid => '240',
proname => 'nameeqtext', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'nameeqtext' },
{ oid => '241',
proname => 'namelttext', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'namelttext' },
{ oid => '242',
proname => 'nameletext', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'nameletext' },
{ oid => '243',
proname => 'namegetext', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'namegetext' },
{ oid => '244',
proname => 'namegttext', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'namegttext' },
{ oid => '245',
proname => 'namenetext', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'namenetext' },
{ oid => '246', descr => 'less-equal-greater',
proname => 'btnametextcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'name text', prosrc => 'btnametextcmp' },
{ oid => '247',
proname => 'texteqname', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text name', prosrc => 'texteqname' },
{ oid => '248',
proname => 'textltname', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text name', prosrc => 'textltname' },
{ oid => '249',
proname => 'textlename', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text name', prosrc => 'textlename' },
{ oid => '250',
proname => 'textgename', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text name', prosrc => 'textgename' },
{ oid => '251',
proname => 'textgtname', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text name', prosrc => 'textgtname' },
{ oid => '252',
proname => 'textnename', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text name', prosrc => 'textnename' },
{ oid => '253', descr => 'less-equal-greater',
proname => 'bttextnamecmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'text name', prosrc => 'bttextnamecmp' },
{ oid => '266', descr => 'concatenate name and oid',
proname => 'nameconcatoid', prorettype => 'name', proargtypes => 'name oid',
prosrc => 'nameconcatoid' },
{ oid => '274',
descr => 'current date and time - increments during transactions',
proname => 'timeofday', provolatile => 'v', prorettype => 'text',
proargtypes => '', prosrc => 'timeofday' },
{ oid => '277',
proname => 'inter_sl', prorettype => 'bool', proargtypes => 'lseg line',
prosrc => 'inter_sl' },
{ oid => '278',
proname => 'inter_lb', prorettype => 'bool', proargtypes => 'line box',
prosrc => 'inter_lb' },
{ oid => '279',
proname => 'float48mul', prorettype => 'float8',
proargtypes => 'float4 float8', prosrc => 'float48mul' },
{ oid => '280',
proname => 'float48div', prorettype => 'float8',
proargtypes => 'float4 float8', prosrc => 'float48div' },
{ oid => '281',
proname => 'float48pl', prorettype => 'float8',
proargtypes => 'float4 float8', prosrc => 'float48pl' },
{ oid => '282',
proname => 'float48mi', prorettype => 'float8',
proargtypes => 'float4 float8', prosrc => 'float48mi' },
{ oid => '283',
proname => 'float84mul', prorettype => 'float8',
proargtypes => 'float8 float4', prosrc => 'float84mul' },
{ oid => '284',
proname => 'float84div', prorettype => 'float8',
proargtypes => 'float8 float4', prosrc => 'float84div' },
{ oid => '285',
proname => 'float84pl', prorettype => 'float8',
proargtypes => 'float8 float4', prosrc => 'float84pl' },
{ oid => '286',
proname => 'float84mi', prorettype => 'float8',
proargtypes => 'float8 float4', prosrc => 'float84mi' },
{ oid => '287',
proname => 'float4eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float4', prosrc => 'float4eq' },
{ oid => '288',
proname => 'float4ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float4', prosrc => 'float4ne' },
{ oid => '289',
proname => 'float4lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float4', prosrc => 'float4lt' },
{ oid => '290',
proname => 'float4le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float4', prosrc => 'float4le' },
{ oid => '291',
proname => 'float4gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float4', prosrc => 'float4gt' },
{ oid => '292',
proname => 'float4ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float4', prosrc => 'float4ge' },
{ oid => '293',
proname => 'float8eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float8', prosrc => 'float8eq' },
{ oid => '294',
proname => 'float8ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float8', prosrc => 'float8ne' },
{ oid => '295',
proname => 'float8lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float8', prosrc => 'float8lt' },
{ oid => '296',
proname => 'float8le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float8', prosrc => 'float8le' },
{ oid => '297',
proname => 'float8gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float8', prosrc => 'float8gt' },
{ oid => '298',
proname => 'float8ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float8', prosrc => 'float8ge' },
{ oid => '299',
proname => 'float48eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float8', prosrc => 'float48eq' },
# OIDS 300 - 399
{ oid => '300',
proname => 'float48ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float8', prosrc => 'float48ne' },
{ oid => '301',
proname => 'float48lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float8', prosrc => 'float48lt' },
{ oid => '302',
proname => 'float48le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float8', prosrc => 'float48le' },
{ oid => '303',
proname => 'float48gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float8', prosrc => 'float48gt' },
{ oid => '304',
proname => 'float48ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float4 float8', prosrc => 'float48ge' },
{ oid => '305',
proname => 'float84eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float4', prosrc => 'float84eq' },
{ oid => '306',
proname => 'float84ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float4', prosrc => 'float84ne' },
{ oid => '307',
proname => 'float84lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float4', prosrc => 'float84lt' },
{ oid => '308',
proname => 'float84le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float4', prosrc => 'float84le' },
{ oid => '309',
proname => 'float84gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float4', prosrc => 'float84gt' },
{ oid => '310',
proname => 'float84ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'float8 float4', prosrc => 'float84ge' },
{ oid => '320', descr => 'bucket number of operand in equal-width histogram',
proname => 'width_bucket', prorettype => 'int4',
proargtypes => 'float8 float8 float8 int4', prosrc => 'width_bucket_float8' },
{ oid => '311', descr => 'convert float4 to float8',
proname => 'float8', proleakproof => 't', prorettype => 'float8',
proargtypes => 'float4', prosrc => 'ftod' },
{ oid => '312', descr => 'convert float8 to float4',
proname => 'float4', prorettype => 'float4', proargtypes => 'float8',
prosrc => 'dtof' },
{ oid => '313', descr => 'convert int2 to int4',
proname => 'int4', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int2', prosrc => 'i2toi4' },
{ oid => '314', descr => 'convert int4 to int2',
proname => 'int2', prorettype => 'int2', proargtypes => 'int4',
prosrc => 'i4toi2' },
{ oid => '316', descr => 'convert int4 to float8',
proname => 'float8', proleakproof => 't', prorettype => 'float8',
proargtypes => 'int4', prosrc => 'i4tod' },
{ oid => '317', descr => 'convert float8 to int4',
proname => 'int4', prorettype => 'int4', proargtypes => 'float8',
prosrc => 'dtoi4' },
{ oid => '318', descr => 'convert int4 to float4',
proname => 'float4', proleakproof => 't', prorettype => 'float4',
proargtypes => 'int4', prosrc => 'i4tof' },
{ oid => '319', descr => 'convert float4 to int4',
proname => 'int4', prorettype => 'int4', proargtypes => 'float4',
prosrc => 'ftoi4' },
# Table access method handlers
{ oid => '3', descr => 'row-oriented heap table access method handler',
proname => 'heap_tableam_handler', provolatile => 'v',
prorettype => 'table_am_handler', proargtypes => 'internal',
prosrc => 'heap_tableam_handler' },
# Index access method handlers
{ oid => '330', descr => 'btree index access method handler',
proname => 'bthandler', provolatile => 'v', prorettype => 'index_am_handler',
proargtypes => 'internal', prosrc => 'bthandler' },
{ oid => '331', descr => 'hash index access method handler',
proname => 'hashhandler', provolatile => 'v',
prorettype => 'index_am_handler', proargtypes => 'internal',
prosrc => 'hashhandler' },
{ oid => '332', descr => 'gist index access method handler',
proname => 'gisthandler', provolatile => 'v',
prorettype => 'index_am_handler', proargtypes => 'internal',
prosrc => 'gisthandler' },
{ oid => '333', descr => 'gin index access method handler',
proname => 'ginhandler', provolatile => 'v', prorettype => 'index_am_handler',
proargtypes => 'internal', prosrc => 'ginhandler' },
{ oid => '334', descr => 'spgist index access method handler',
proname => 'spghandler', provolatile => 'v', prorettype => 'index_am_handler',
proargtypes => 'internal', prosrc => 'spghandler' },
{ oid => '335', descr => 'brin index access method handler',
proname => 'brinhandler', provolatile => 'v',
prorettype => 'index_am_handler', proargtypes => 'internal',
prosrc => 'brinhandler' },
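
# BRIN summarization maintenance functions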
{ oid => '3952', descr => 'brin: standalone scan new table pages',
proname => 'brin_summarize_new_values', provolatile => 'v',
proparallel => 'u', prorettype => 'int4', proargtypes => 'regclass',
prosrc => 'brin_summarize_new_values' },
{ oid => '3999', descr => 'brin: standalone scan new table pages',
proname => 'brin_summarize_range', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'regclass int8',
prosrc => 'brin_summarize_range' },
{ oid => '4014', descr => 'brin: desummarize page range',
proname => 'brin_desummarize_range', provolatile => 'v', proparallel => 'u',
prorettype => 'void', proargtypes => 'regclass int8',
prosrc => 'brin_desummarize_range' },
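
# operator class validation and index property inspection functions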
{ oid => '338', descr => 'validate an operator class',
proname => 'amvalidate', provolatile => 'v', prorettype => 'bool',
proargtypes => 'oid', prosrc => 'amvalidate' },
{ oid => '636', descr => 'test property of an index access method',
proname => 'pg_indexam_has_property', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid text',
prosrc => 'pg_indexam_has_property' },
{ oid => '637', descr => 'test property of an index',
proname => 'pg_index_has_property', provolatile => 's', prorettype => 'bool',
proargtypes => 'regclass text', prosrc => 'pg_index_has_property' },
{ oid => '638', descr => 'test property of an index column',
proname => 'pg_index_column_has_property', provolatile => 's',
prorettype => 'bool', proargtypes => 'regclass int4 text',
prosrc => 'pg_index_column_has_property' },
{ oid => '676', descr => 'return name of given index build phase',
proname => 'pg_indexam_progress_phasename', prorettype => 'text',
proargtypes => 'oid int8', prosrc => 'pg_indexam_progress_phasename' },
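
# polygon operator support functions and I/O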
{ oid => '339',
proname => 'poly_same', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_same' },
{ oid => '340',
proname => 'poly_contain', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_contain' },
{ oid => '341',
proname => 'poly_left', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_left' },
{ oid => '342',
proname => 'poly_overleft', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_overleft' },
{ oid => '343',
proname => 'poly_overright', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_overright' },
{ oid => '344',
proname => 'poly_right', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_right' },
{ oid => '345',
proname => 'poly_contained', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_contained' },
{ oid => '346',
proname => 'poly_overlap', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_overlap' },
{ oid => '347', descr => 'I/O',
proname => 'poly_in', prorettype => 'polygon', proargtypes => 'cstring',
prosrc => 'poly_in' },
{ oid => '348', descr => 'I/O',
proname => 'poly_out', prorettype => 'cstring', proargtypes => 'polygon',
prosrc => 'poly_out' },
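
# btree comparison ("less-equal-greater") and sort support functions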
{ oid => '350', descr => 'less-equal-greater',
proname => 'btint2cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int2 int2', prosrc => 'btint2cmp' },
{ oid => '3129', descr => 'sort support',
proname => 'btint2sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'btint2sortsupport' },
{ oid => '351', descr => 'less-equal-greater',
proname => 'btint4cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int4 int4', prosrc => 'btint4cmp' },
{ oid => '3130', descr => 'sort support',
proname => 'btint4sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'btint4sortsupport' },
{ oid => '842', descr => 'less-equal-greater',
proname => 'btint8cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int8 int8', prosrc => 'btint8cmp' },
{ oid => '3131', descr => 'sort support',
proname => 'btint8sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'btint8sortsupport' },
{ oid => '354', descr => 'less-equal-greater',
proname => 'btfloat4cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'float4 float4', prosrc => 'btfloat4cmp' },
{ oid => '3132', descr => 'sort support',
proname => 'btfloat4sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'btfloat4sortsupport' },
{ oid => '355', descr => 'less-equal-greater',
proname => 'btfloat8cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'float8 float8', prosrc => 'btfloat8cmp' },
{ oid => '3133', descr => 'sort support',
proname => 'btfloat8sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'btfloat8sortsupport' },
{ oid => '356', descr => 'less-equal-greater',
proname => 'btoidcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'oid oid', prosrc => 'btoidcmp' },
{ oid => '3134', descr => 'sort support',
proname => 'btoidsortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'btoidsortsupport' },
{ oid => '404', descr => 'less-equal-greater',
proname => 'btoidvectorcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'oidvector oidvector', prosrc => 'btoidvectorcmp' },
{ oid => '358', descr => 'less-equal-greater',
proname => 'btcharcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'char char', prosrc => 'btcharcmp' },
{ oid => '359', descr => 'less-equal-greater',
proname => 'btnamecmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'name name', prosrc => 'btnamecmp' },
{ oid => '3135', descr => 'sort support',
proname => 'btnamesortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'btnamesortsupport' },
{ oid => '360', descr => 'less-equal-greater',
proname => 'bttextcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'text text', prosrc => 'bttextcmp' },
{ oid => '3255', descr => 'sort support',
proname => 'bttextsortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'bttextsortsupport' },
{ oid => '5050', descr => 'equal image',
proname => 'btvarstrequalimage', prorettype => 'bool', proargtypes => 'oid',
prosrc => 'btvarstrequalimage' },
{ oid => '377', descr => 'less-equal-greater',
proname => 'cash_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'money money', prosrc => 'cash_cmp' },
{ oid => '382', descr => 'less-equal-greater',
proname => 'btarraycmp', prorettype => 'int4',
proargtypes => 'anyarray anyarray', prosrc => 'btarraycmp' },
{ oid => '4126', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'int8 int8 int8 bool bool', prosrc => 'in_range_int8_int8' },
{ oid => '4127', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'int4 int4 int8 bool bool', prosrc => 'in_range_int4_int8' },
{ oid => '4128', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'int4 int4 int4 bool bool', prosrc => 'in_range_int4_int4' },
{ oid => '4129', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'int4 int4 int2 bool bool', prosrc => 'in_range_int4_int2' },
{ oid => '4130', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'int2 int2 int8 bool bool', prosrc => 'in_range_int2_int8' },
{ oid => '4131', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'int2 int2 int4 bool bool', prosrc => 'in_range_int2_int4' },
{ oid => '4132', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'int2 int2 int2 bool bool', prosrc => 'in_range_int2_int2' },
{ oid => '4139', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'float8 float8 float8 bool bool',
prosrc => 'in_range_float8_float8' },
{ oid => '4140', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'float4 float4 float8 bool bool',
prosrc => 'in_range_float4_float8' },
{ oid => '4141', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'numeric numeric numeric bool bool',
prosrc => 'in_range_numeric_numeric' },
{ oid => '361',
proname => 'lseg_distance', prorettype => 'float8',
proargtypes => 'lseg lseg', prosrc => 'lseg_distance' },
{ oid => '362',
proname => 'lseg_interpt', prorettype => 'point', proargtypes => 'lseg lseg',
prosrc => 'lseg_interpt' },
{ oid => '363',
proname => 'dist_ps', prorettype => 'float8', proargtypes => 'point lseg',
prosrc => 'dist_ps' },
{ oid => '380',
proname => 'dist_sp', prorettype => 'float8', proargtypes => 'lseg point',
prosrc => 'dist_sp' },
{ oid => '364',
proname => 'dist_pb', prorettype => 'float8', proargtypes => 'point box',
prosrc => 'dist_pb' },
{ oid => '357',
proname => 'dist_bp', prorettype => 'float8', proargtypes => 'box point',
prosrc => 'dist_bp' },
{ oid => '365',
proname => 'dist_sb', prorettype => 'float8', proargtypes => 'lseg box',
prosrc => 'dist_sb' },
{ oid => '381',
proname => 'dist_bs', prorettype => 'float8', proargtypes => 'box lseg',
prosrc => 'dist_bs' },
{ oid => '366',
proname => 'close_ps', prorettype => 'point', proargtypes => 'point lseg',
prosrc => 'close_ps' },
{ oid => '367',
proname => 'close_pb', prorettype => 'point', proargtypes => 'point box',
prosrc => 'close_pb' },
{ oid => '368',
proname => 'close_sb', prorettype => 'point', proargtypes => 'lseg box',
prosrc => 'close_sb' },
{ oid => '369',
proname => 'on_ps', prorettype => 'bool', proargtypes => 'point lseg',
prosrc => 'on_ps' },
{ oid => '370',
proname => 'path_distance', prorettype => 'float8',
proargtypes => 'path path', prosrc => 'path_distance' },
{ oid => '371',
proname => 'dist_ppath', prorettype => 'float8', proargtypes => 'point path',
prosrc => 'dist_ppath' },
{ oid => '421',
proname => 'dist_pathp', prorettype => 'float8', proargtypes => 'path point',
prosrc => 'dist_pathp' },
{ oid => '372',
proname => 'on_sb', prorettype => 'bool', proargtypes => 'lseg box',
prosrc => 'on_sb' },
{ oid => '373',
proname => 'inter_sb', prorettype => 'bool', proargtypes => 'lseg box',
prosrc => 'inter_sb' },
# OIDS 400 - 499
{ oid => '401', descr => 'convert char(n) to text',
proname => 'text', prorettype => 'text', proargtypes => 'bpchar',
prosrc => 'rtrim1' },
{ oid => '406', descr => 'convert name to text',
proname => 'text', proleakproof => 't', prorettype => 'text',
proargtypes => 'name', prosrc => 'name_text' },
{ oid => '407', descr => 'convert text to name',
proname => 'name', proleakproof => 't', prorettype => 'name',
proargtypes => 'text', prosrc => 'text_name' },
{ oid => '408', descr => 'convert name to char(n)',
proname => 'bpchar', prorettype => 'bpchar', proargtypes => 'name',
prosrc => 'name_bpchar' },
{ oid => '409', descr => 'convert char(n) to name',
proname => 'name', proleakproof => 't', prorettype => 'name',
proargtypes => 'bpchar', prosrc => 'bpchar_name' },
{ oid => '449', descr => 'hash',
proname => 'hashint2', prorettype => 'int4', proargtypes => 'int2',
prosrc => 'hashint2' },
{ oid => '441', descr => 'hash',
proname => 'hashint2extended', prorettype => 'int8',
proargtypes => 'int2 int8', prosrc => 'hashint2extended' },
{ oid => '450', descr => 'hash',
proname => 'hashint4', prorettype => 'int4', proargtypes => 'int4',
prosrc => 'hashint4' },
{ oid => '425', descr => 'hash',
proname => 'hashint4extended', prorettype => 'int8',
proargtypes => 'int4 int8', prosrc => 'hashint4extended' },
{ oid => '949', descr => 'hash',
proname => 'hashint8', prorettype => 'int4', proargtypes => 'int8',
prosrc => 'hashint8' },
{ oid => '442', descr => 'hash',
proname => 'hashint8extended', prorettype => 'int8',
proargtypes => 'int8 int8', prosrc => 'hashint8extended' },
{ oid => '451', descr => 'hash',
proname => 'hashfloat4', prorettype => 'int4', proargtypes => 'float4',
prosrc => 'hashfloat4' },
{ oid => '443', descr => 'hash',
proname => 'hashfloat4extended', prorettype => 'int8',
proargtypes => 'float4 int8', prosrc => 'hashfloat4extended' },
{ oid => '452', descr => 'hash',
proname => 'hashfloat8', prorettype => 'int4', proargtypes => 'float8',
prosrc => 'hashfloat8' },
{ oid => '444', descr => 'hash',
proname => 'hashfloat8extended', prorettype => 'int8',
proargtypes => 'float8 int8', prosrc => 'hashfloat8extended' },
{ oid => '453', descr => 'hash',
proname => 'hashoid', prorettype => 'int4', proargtypes => 'oid',
prosrc => 'hashoid' },
{ oid => '445', descr => 'hash',
proname => 'hashoidextended', prorettype => 'int8', proargtypes => 'oid int8',
prosrc => 'hashoidextended' },
{ oid => '454', descr => 'hash',
proname => 'hashchar', prorettype => 'int4', proargtypes => 'char',
prosrc => 'hashchar' },
{ oid => '446', descr => 'hash',
proname => 'hashcharextended', prorettype => 'int8',
proargtypes => 'char int8', prosrc => 'hashcharextended' },
{ oid => '455', descr => 'hash',
proname => 'hashname', prorettype => 'int4', proargtypes => 'name',
prosrc => 'hashname' },
{ oid => '447', descr => 'hash',
proname => 'hashnameextended', prorettype => 'int8',
proargtypes => 'name int8', prosrc => 'hashnameextended' },
{ oid => '400', descr => 'hash',
proname => 'hashtext', prorettype => 'int4', proargtypes => 'text',
prosrc => 'hashtext' },
{ oid => '448', descr => 'hash',
proname => 'hashtextextended', prorettype => 'int8',
proargtypes => 'text int8', prosrc => 'hashtextextended' },
{ oid => '456', descr => 'hash',
proname => 'hashvarlena', prorettype => 'int4', proargtypes => 'internal',
prosrc => 'hashvarlena' },
{ oid => '772', descr => 'hash',
proname => 'hashvarlenaextended', prorettype => 'int8',
proargtypes => 'internal int8', prosrc => 'hashvarlenaextended' },
{ oid => '457', descr => 'hash',
proname => 'hashoidvector', prorettype => 'int4', proargtypes => 'oidvector',
prosrc => 'hashoidvector' },
{ oid => '776', descr => 'hash',
proname => 'hashoidvectorextended', prorettype => 'int8',
proargtypes => 'oidvector int8', prosrc => 'hashoidvectorextended' },
{ oid => '329', descr => 'hash',
proname => 'hash_aclitem', prorettype => 'int4', proargtypes => 'aclitem',
prosrc => 'hash_aclitem' },
{ oid => '777', descr => 'hash',
proname => 'hash_aclitem_extended', prorettype => 'int8',
proargtypes => 'aclitem int8', prosrc => 'hash_aclitem_extended' },
{ oid => '399', descr => 'hash',
proname => 'hashmacaddr', prorettype => 'int4', proargtypes => 'macaddr',
prosrc => 'hashmacaddr' },
{ oid => '778', descr => 'hash',
proname => 'hashmacaddrextended', prorettype => 'int8',
proargtypes => 'macaddr int8', prosrc => 'hashmacaddrextended' },
{ oid => '422', descr => 'hash',
proname => 'hashinet', prorettype => 'int4', proargtypes => 'inet',
prosrc => 'hashinet' },
{ oid => '779', descr => 'hash',
proname => 'hashinetextended', prorettype => 'int8',
proargtypes => 'inet int8', prosrc => 'hashinetextended' },
{ oid => '432', descr => 'hash',
proname => 'hash_numeric', prorettype => 'int4', proargtypes => 'numeric',
prosrc => 'hash_numeric' },
{ oid => '780', descr => 'hash',
proname => 'hash_numeric_extended', prorettype => 'int8',
proargtypes => 'numeric int8', prosrc => 'hash_numeric_extended' },
{ oid => '328', descr => 'hash',
proname => 'hashmacaddr8', prorettype => 'int4', proargtypes => 'macaddr8',
prosrc => 'hashmacaddr8' },
{ oid => '781', descr => 'hash',
proname => 'hashmacaddr8extended', prorettype => 'int8',
proargtypes => 'macaddr8 int8', prosrc => 'hashmacaddr8extended' },
{ oid => '438', descr => 'count the number of NULL arguments',
proname => 'num_nulls', provariadic => 'any', proisstrict => 'f',
prorettype => 'int4', proargtypes => 'any', proallargtypes => '{any}',
proargmodes => '{v}', prosrc => 'pg_num_nulls' },
{ oid => '440', descr => 'count the number of non-NULL arguments',
proname => 'num_nonnulls', provariadic => 'any', proisstrict => 'f',
prorettype => 'int4', proargtypes => 'any', proallargtypes => '{any}',
proargmodes => '{v}', prosrc => 'pg_num_nonnulls' },
{ oid => '458', descr => 'larger of two',
proname => 'text_larger', proleakproof => 't', prorettype => 'text',
proargtypes => 'text text', prosrc => 'text_larger' },
{ oid => '459', descr => 'smaller of two',
proname => 'text_smaller', proleakproof => 't', prorettype => 'text',
proargtypes => 'text text', prosrc => 'text_smaller' },
{ oid => '460', descr => 'I/O',
proname => 'int8in', prorettype => 'int8', proargtypes => 'cstring',
prosrc => 'int8in' },
{ oid => '461', descr => 'I/O',
proname => 'int8out', prorettype => 'cstring', proargtypes => 'int8',
prosrc => 'int8out' },
{ oid => '462',
proname => 'int8um', prorettype => 'int8', proargtypes => 'int8',
prosrc => 'int8um' },
{ oid => '463',
proname => 'int8pl', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8pl' },
{ oid => '464',
proname => 'int8mi', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8mi' },
{ oid => '465',
proname => 'int8mul', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8mul' },
{ oid => '466',
proname => 'int8div', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8div' },
{ oid => '467',
proname => 'int8eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int8', prosrc => 'int8eq' },
{ oid => '468',
proname => 'int8ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int8', prosrc => 'int8ne' },
{ oid => '469',
proname => 'int8lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int8', prosrc => 'int8lt' },
{ oid => '470',
proname => 'int8gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int8', prosrc => 'int8gt' },
{ oid => '471',
proname => 'int8le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int8', prosrc => 'int8le' },
{ oid => '472',
proname => 'int8ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int8', prosrc => 'int8ge' },
{ oid => '474',
proname => 'int84eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int4', prosrc => 'int84eq' },
{ oid => '475',
proname => 'int84ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int4', prosrc => 'int84ne' },
{ oid => '476',
proname => 'int84lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int4', prosrc => 'int84lt' },
{ oid => '477',
proname => 'int84gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int4', prosrc => 'int84gt' },
{ oid => '478',
proname => 'int84le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int4', prosrc => 'int84le' },
{ oid => '479',
proname => 'int84ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int4', prosrc => 'int84ge' },
{ oid => '480', descr => 'convert int8 to int4',
proname => 'int4', prorettype => 'int4', proargtypes => 'int8',
prosrc => 'int84' },
{ oid => '481', descr => 'convert int4 to int8',
proname => 'int8', proleakproof => 't', prorettype => 'int8',
proargtypes => 'int4', prosrc => 'int48' },
{ oid => '482', descr => 'convert int8 to float8',
proname => 'float8', proleakproof => 't', prorettype => 'float8',
proargtypes => 'int8', prosrc => 'i8tod' },
{ oid => '483', descr => 'convert float8 to int8',
proname => 'int8', prorettype => 'int8', proargtypes => 'float8',
prosrc => 'dtoi8' },
# OIDS 500 - 599
# OIDS 600 - 699
{ oid => '626', descr => 'hash',
proname => 'hash_array', prorettype => 'int4', proargtypes => 'anyarray',
prosrc => 'hash_array' },
{ oid => '782', descr => 'hash',
proname => 'hash_array_extended', prorettype => 'int8',
proargtypes => 'anyarray int8', prosrc => 'hash_array_extended' },
{ oid => '652', descr => 'convert int8 to float4',
proname => 'float4', proleakproof => 't', prorettype => 'float4',
proargtypes => 'int8', prosrc => 'i8tof' },
{ oid => '653', descr => 'convert float4 to int8',
proname => 'int8', prorettype => 'int8', proargtypes => 'float4',
prosrc => 'ftoi8' },
{ oid => '714', descr => 'convert int8 to int2',
proname => 'int2', prorettype => 'int2', proargtypes => 'int8',
prosrc => 'int82' },
{ oid => '754', descr => 'convert int2 to int8',
proname => 'int8', proleakproof => 't', prorettype => 'int8',
proargtypes => 'int2', prosrc => 'int28' },
{ oid => '655',
proname => 'namelt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name name', prosrc => 'namelt' },
{ oid => '656',
proname => 'namele', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name name', prosrc => 'namele' },
{ oid => '657',
proname => 'namegt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name name', prosrc => 'namegt' },
{ oid => '658',
proname => 'namege', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name name', prosrc => 'namege' },
{ oid => '659',
proname => 'namene', proleakproof => 't', prorettype => 'bool',
proargtypes => 'name name', prosrc => 'namene' },
{ oid => '668', descr => 'adjust char() to typmod length',
proname => 'bpchar', prorettype => 'bpchar',
proargtypes => 'bpchar int4 bool', prosrc => 'bpchar' },
{ oid => '3097', descr => 'planner support for varchar length coercion',
proname => 'varchar_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'varchar_support' },
{ oid => '669', descr => 'adjust varchar() to typmod length',
proname => 'varchar', prosupport => 'varchar_support',
prorettype => 'varchar', proargtypes => 'varchar int4 bool',
prosrc => 'varchar' },
{ oid => '619',
proname => 'oidvectorne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oidvector oidvector', prosrc => 'oidvectorne' },
{ oid => '677',
proname => 'oidvectorlt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oidvector oidvector', prosrc => 'oidvectorlt' },
{ oid => '678',
proname => 'oidvectorle', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oidvector oidvector', prosrc => 'oidvectorle' },
{ oid => '679',
proname => 'oidvectoreq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oidvector oidvector', prosrc => 'oidvectoreq' },
{ oid => '680',
proname => 'oidvectorge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oidvector oidvector', prosrc => 'oidvectorge' },
{ oid => '681',
proname => 'oidvectorgt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oidvector oidvector', prosrc => 'oidvectorgt' },

# OIDS 700 - 799
{ oid => '710', descr => 'deprecated, use current_user instead',
proname => 'getpgusername', provolatile => 's', prorettype => 'name',
proargtypes => '', prosrc => 'current_user' },
{ oid => '716',
proname => 'oidlt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oid oid', prosrc => 'oidlt' },
{ oid => '717',
proname => 'oidle', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oid oid', prosrc => 'oidle' },
{ oid => '720', descr => 'octet length',
proname => 'octet_length', prorettype => 'int4', proargtypes => 'bytea',
prosrc => 'byteaoctetlen' },
{ oid => '721', descr => 'get byte',
proname => 'get_byte', prorettype => 'int4', proargtypes => 'bytea int4',
prosrc => 'byteaGetByte' },
{ oid => '722', descr => 'set byte',
proname => 'set_byte', prorettype => 'bytea',
proargtypes => 'bytea int4 int4', prosrc => 'byteaSetByte' },
{ oid => '723', descr => 'get bit',
proname => 'get_bit', prorettype => 'int4', proargtypes => 'bytea int8',
prosrc => 'byteaGetBit' },
{ oid => '724', descr => 'set bit',
proname => 'set_bit', prorettype => 'bytea', proargtypes => 'bytea int8 int4',
prosrc => 'byteaSetBit' },
{ oid => '749', descr => 'substitute portion of string',
proname => 'overlay', prorettype => 'bytea',
proargtypes => 'bytea bytea int4 int4', prosrc => 'byteaoverlay' },
{ oid => '752', descr => 'substitute portion of string',
proname => 'overlay', prorettype => 'bytea',
proargtypes => 'bytea bytea int4', prosrc => 'byteaoverlay_no_len' },
{ oid => '6163', descr => 'number of set bits',
proname => 'bit_count', prorettype => 'int8', proargtypes => 'bytea',
prosrc => 'bytea_bit_count' },
{ oid => '725',
proname => 'dist_pl', prorettype => 'float8', proargtypes => 'point line',
prosrc => 'dist_pl' },
{ oid => '702',
proname => 'dist_lp', prorettype => 'float8', proargtypes => 'line point',
prosrc => 'dist_lp' },
{ oid => '727',
proname => 'dist_sl', prorettype => 'float8', proargtypes => 'lseg line',
prosrc => 'dist_sl' },
{ oid => '704',
proname => 'dist_ls', prorettype => 'float8', proargtypes => 'line lseg',
prosrc => 'dist_ls' },
{ oid => '728',
proname => 'dist_cpoly', prorettype => 'float8',
proargtypes => 'circle polygon', prosrc => 'dist_cpoly' },
{ oid => '785',
proname => 'dist_polyc', prorettype => 'float8',
proargtypes => 'polygon circle', prosrc => 'dist_polyc' },
{ oid => '729',
proname => 'poly_distance', prorettype => 'float8',
proargtypes => 'polygon polygon', prosrc => 'poly_distance' },
{ oid => '3275',
proname => 'dist_ppoly', prorettype => 'float8',
proargtypes => 'point polygon', prosrc => 'dist_ppoly' },
{ oid => '3292',
proname => 'dist_polyp', prorettype => 'float8',
proargtypes => 'polygon point', prosrc => 'dist_polyp' },
{ oid => '3290',
proname => 'dist_cpoint', prorettype => 'float8',
proargtypes => 'circle point', prosrc => 'dist_cpoint' },
{ oid => '740',
proname => 'text_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'text_lt' },
{ oid => '741',
proname => 'text_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'text_le' },
{ oid => '742',
proname => 'text_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'text_gt' },
{ oid => '743',
proname => 'text_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'text_ge' },
{ oid => '745', descr => 'current user name',
proname => 'current_user', provolatile => 's', prorettype => 'name',
proargtypes => '', prosrc => 'current_user' },
{ oid => '746', descr => 'session user name',
proname => 'session_user', provolatile => 's', prorettype => 'name',
proargtypes => '', prosrc => 'session_user' },
{ oid => '6311', descr => 'system user name',
proname => 'system_user', provolatile => 's', prorettype => 'text',
proargtypes => '', prosrc => 'system_user' },
{ oid => '744',
proname => 'array_eq', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'array_eq' },
{ oid => '390',
proname => 'array_ne', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'array_ne' },
{ oid => '391',
proname => 'array_lt', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'array_lt' },
{ oid => '392',
proname => 'array_gt', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'array_gt' },
{ oid => '393',
proname => 'array_le', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'array_le' },
{ oid => '396',
proname => 'array_ge', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'array_ge' },
{ oid => '747', descr => 'array dimensions',
proname => 'array_dims', prorettype => 'text', proargtypes => 'anyarray',
prosrc => 'array_dims' },
{ oid => '748', descr => 'number of array dimensions',
proname => 'array_ndims', prorettype => 'int4', proargtypes => 'anyarray',
prosrc => 'array_ndims' },
{ oid => '750', descr => 'I/O',
proname => 'array_in', provolatile => 's', prorettype => 'anyarray',
proargtypes => 'cstring oid int4', prosrc => 'array_in' },
{ oid => '751', descr => 'I/O',
proname => 'array_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'anyarray', prosrc => 'array_out' },
{ oid => '2091', descr => 'array lower dimension',
proname => 'array_lower', prorettype => 'int4',
proargtypes => 'anyarray int4', prosrc => 'array_lower' },
{ oid => '2092', descr => 'array upper dimension',
proname => 'array_upper', prorettype => 'int4',
proargtypes => 'anyarray int4', prosrc => 'array_upper' },
{ oid => '2176', descr => 'array length',
proname => 'array_length', prorettype => 'int4',
proargtypes => 'anyarray int4', prosrc => 'array_length' },
{ oid => '3179', descr => 'array cardinality',
proname => 'cardinality', prorettype => 'int4', proargtypes => 'anyarray',
prosrc => 'array_cardinality' },
{ oid => '378', descr => 'append element onto end of array',
proname => 'array_append', proisstrict => 'f',
prorettype => 'anycompatiblearray',
proargtypes => 'anycompatiblearray anycompatible', prosrc => 'array_append' },
{ oid => '379', descr => 'prepend element onto front of array',
proname => 'array_prepend', proisstrict => 'f',
prorettype => 'anycompatiblearray',
proargtypes => 'anycompatible anycompatiblearray',
prosrc => 'array_prepend' },
{ oid => '383',
proname => 'array_cat', proisstrict => 'f',
prorettype => 'anycompatiblearray',
proargtypes => 'anycompatiblearray anycompatiblearray',
prosrc => 'array_cat' },
{ oid => '394', descr => 'split delimited text',
proname => 'string_to_array', proisstrict => 'f', prorettype => '_text',
proargtypes => 'text text', prosrc => 'text_to_array' },
{ oid => '376', descr => 'split delimited text, with null string',
proname => 'string_to_array', proisstrict => 'f', prorettype => '_text',
proargtypes => 'text text text', prosrc => 'text_to_array_null' },
{ oid => '6160', descr => 'split delimited text',
proname => 'string_to_table', prorows => '1000', proisstrict => 'f',
proretset => 't', prorettype => 'text', proargtypes => 'text text',
prosrc => 'text_to_table' },
{ oid => '6161', descr => 'split delimited text, with null string',
proname => 'string_to_table', prorows => '1000', proisstrict => 'f',
proretset => 't', prorettype => 'text', proargtypes => 'text text text',
prosrc => 'text_to_table_null' },
{ oid => '395',
descr => 'concatenate array elements, using delimiter, into text',
proname => 'array_to_string', provolatile => 's', prorettype => 'text',
proargtypes => 'anyarray text', prosrc => 'array_to_text' },
{ oid => '384',
descr => 'concatenate array elements, using delimiter and null string, into text',
proname => 'array_to_string', proisstrict => 'f', provolatile => 's',
prorettype => 'text', proargtypes => 'anyarray text text',
prosrc => 'array_to_text_null' },
{ oid => '515', descr => 'larger of two',
proname => 'array_larger', prorettype => 'anyarray',
proargtypes => 'anyarray anyarray', prosrc => 'array_larger' },
{ oid => '516', descr => 'smaller of two',
proname => 'array_smaller', prorettype => 'anyarray',
proargtypes => 'anyarray anyarray', prosrc => 'array_smaller' },
{ oid => '3277', descr => 'returns an offset of value in array',
proname => 'array_position', proisstrict => 'f', prorettype => 'int4',
proargtypes => 'anycompatiblearray anycompatible',
prosrc => 'array_position' },
{ oid => '3278',
descr => 'returns an offset of value in array with start index',
proname => 'array_position', proisstrict => 'f', prorettype => 'int4',
proargtypes => 'anycompatiblearray anycompatible int4',
prosrc => 'array_position_start' },
{ oid => '3279',
descr => 'returns an array of offsets of some value in array',
proname => 'array_positions', proisstrict => 'f', prorettype => '_int4',
proargtypes => 'anycompatiblearray anycompatible',
prosrc => 'array_positions' },
{ oid => '1191', descr => 'array subscripts generator',
proname => 'generate_subscripts', prorows => '1000', proretset => 't',
prorettype => 'int4', proargtypes => 'anyarray int4 bool',
prosrc => 'generate_subscripts' },
{ oid => '1192', descr => 'array subscripts generator',
proname => 'generate_subscripts', prorows => '1000', proretset => 't',
prorettype => 'int4', proargtypes => 'anyarray int4',
prosrc => 'generate_subscripts_nodir' },
{ oid => '1193', descr => 'array constructor with value',
proname => 'array_fill', proisstrict => 'f', prorettype => 'anyarray',
proargtypes => 'anyelement _int4', prosrc => 'array_fill' },
{ oid => '1286', descr => 'array constructor with value',
proname => 'array_fill', proisstrict => 'f', prorettype => 'anyarray',
proargtypes => 'anyelement _int4 _int4',
prosrc => 'array_fill_with_lower_bounds' },
{ oid => '2331', descr => 'expand array to set of rows',
proname => 'unnest', prorows => '100', prosupport => 'array_unnest_support',
proretset => 't', prorettype => 'anyelement', proargtypes => 'anyarray',
prosrc => 'array_unnest' },
{ oid => '3996', descr => 'planner support for array_unnest',
proname => 'array_unnest_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'array_unnest_support' },
{ oid => '3167',
descr => 'remove any occurrences of an element from an array',
proname => 'array_remove', proisstrict => 'f',
prorettype => 'anycompatiblearray',
proargtypes => 'anycompatiblearray anycompatible', prosrc => 'array_remove' },
{ oid => '3168', descr => 'replace any occurrences of an element in an array',
proname => 'array_replace', proisstrict => 'f',
prorettype => 'anycompatiblearray',
proargtypes => 'anycompatiblearray anycompatible anycompatible',
prosrc => 'array_replace' },
{ oid => '2333', descr => 'aggregate transition function',
proname => 'array_agg_transfn', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal anynonarray', prosrc => 'array_agg_transfn' },
{ oid => '6293', descr => 'aggregate combine function',
proname => 'array_agg_combine', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'array_agg_combine' },
{ oid => '6294', descr => 'aggregate serial function',
proname => 'array_agg_serialize', prorettype => 'bytea',
proargtypes => 'internal', prosrc => 'array_agg_serialize' },
{ oid => '6295', descr => 'aggregate deserial function',
proname => 'array_agg_deserialize', prorettype => 'internal',
proargtypes => 'bytea internal', prosrc => 'array_agg_deserialize' },
{ oid => '2334', descr => 'aggregate final function',
proname => 'array_agg_finalfn', proisstrict => 'f', prorettype => 'anyarray',
proargtypes => 'internal anynonarray', prosrc => 'array_agg_finalfn' },
{ oid => '2335', descr => 'concatenate aggregate input into an array',
proname => 'array_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'anyarray', proargtypes => 'anynonarray',
prosrc => 'aggregate_dummy' },
{ oid => '4051', descr => 'aggregate transition function',
proname => 'array_agg_array_transfn', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal anyarray',
prosrc => 'array_agg_array_transfn' },
{ oid => '6296', descr => 'aggregate combine function',
proname => 'array_agg_array_combine', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal internal',
prosrc => 'array_agg_array_combine' },
{ oid => '6297', descr => 'aggregate serial function',
proname => 'array_agg_array_serialize', prorettype => 'bytea',
proargtypes => 'internal', prosrc => 'array_agg_array_serialize' },
{ oid => '6298', descr => 'aggregate deserial function',
proname => 'array_agg_array_deserialize', prorettype => 'internal',
proargtypes => 'bytea internal', prosrc => 'array_agg_array_deserialize' },
{ oid => '4052', descr => 'aggregate final function',
proname => 'array_agg_array_finalfn', proisstrict => 'f',
prorettype => 'anyarray', proargtypes => 'internal anyarray',
prosrc => 'array_agg_array_finalfn' },
{ oid => '4053', descr => 'concatenate aggregate input into an array',
proname => 'array_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'anyarray', proargtypes => 'anyarray',
prosrc => 'aggregate_dummy' },
{ oid => '3218',
descr => 'bucket number of operand given a sorted array of bucket lower bounds',
proname => 'width_bucket', prorettype => 'int4',
proargtypes => 'anycompatible anycompatiblearray',
prosrc => 'width_bucket_array' },
{ oid => '6172', descr => 'remove last N elements of array',
proname => 'trim_array', prorettype => 'anyarray',
proargtypes => 'anyarray int4', prosrc => 'trim_array' },
{ oid => '6215', descr => 'shuffle array',
proname => 'array_shuffle', provolatile => 'v', prorettype => 'anyarray',
proargtypes => 'anyarray', prosrc => 'array_shuffle' },
{ oid => '6216', descr => 'take samples from array',
proname => 'array_sample', provolatile => 'v', prorettype => 'anyarray',
proargtypes => 'anyarray int4', prosrc => 'array_sample' },
{ oid => '3816', descr => 'array typanalyze',
proname => 'array_typanalyze', provolatile => 's', prorettype => 'bool',
proargtypes => 'internal', prosrc => 'array_typanalyze' },
{ oid => '3817',
descr => 'restriction selectivity for array-containment operators',
proname => 'arraycontsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'arraycontsel' },
{ oid => '3818', descr => 'join selectivity for array-containment operators',
proname => 'arraycontjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'arraycontjoinsel' },
{ oid => '764', descr => 'large object import',
proname => 'lo_import', provolatile => 'v', proparallel => 'u',
prorettype => 'oid', proargtypes => 'text', prosrc => 'be_lo_import' },
{ oid => '767', descr => 'large object import',
proname => 'lo_import', provolatile => 'v', proparallel => 'u',
prorettype => 'oid', proargtypes => 'text oid',
prosrc => 'be_lo_import_with_oid' },
{ oid => '765', descr => 'large object export',
proname => 'lo_export', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'oid text', prosrc => 'be_lo_export' },
{ oid => '766', descr => 'increment',
proname => 'int4inc', prorettype => 'int4', proargtypes => 'int4',
prosrc => 'int4inc' },
{ oid => '768', descr => 'larger of two',
proname => 'int4larger', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4larger' },
{ oid => '769', descr => 'smaller of two',
proname => 'int4smaller', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4smaller' },
{ oid => '770', descr => 'larger of two',
proname => 'int2larger', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2larger' },
{ oid => '771', descr => 'smaller of two',
proname => 'int2smaller', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2smaller' },
# OIDS 800 - 899
{ oid => '846',
proname => 'cash_mul_flt4', prorettype => 'money',
proargtypes => 'money float4', prosrc => 'cash_mul_flt4' },
{ oid => '847',
proname => 'cash_div_flt4', prorettype => 'money',
proargtypes => 'money float4', prosrc => 'cash_div_flt4' },
{ oid => '848',
proname => 'flt4_mul_cash', prorettype => 'money',
proargtypes => 'float4 money', prosrc => 'flt4_mul_cash' },
{ oid => '849', descr => 'position of substring',
proname => 'position', prorettype => 'int4', proargtypes => 'text text',
prosrc => 'textpos' },
{ oid => '850',
proname => 'textlike', prosupport => 'textlike_support', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'textlike' },
{ oid => '1023', descr => 'planner support for textlike',
proname => 'textlike_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'textlike_support' },
{ oid => '851',
proname => 'textnlike', prorettype => 'bool', proargtypes => 'text text',
prosrc => 'textnlike' },
{ oid => '852',
proname => 'int48eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int8', prosrc => 'int48eq' },
{ oid => '853',
proname => 'int48ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int8', prosrc => 'int48ne' },
{ oid => '854',
proname => 'int48lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int8', prosrc => 'int48lt' },
{ oid => '855',
proname => 'int48gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int8', prosrc => 'int48gt' },
{ oid => '856',
proname => 'int48le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int8', prosrc => 'int48le' },
{ oid => '857',
proname => 'int48ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4 int8', prosrc => 'int48ge' },
{ oid => '858',
proname => 'namelike', prosupport => 'textlike_support', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'namelike' },
{ oid => '859',
proname => 'namenlike', prorettype => 'bool', proargtypes => 'name text',
prosrc => 'namenlike' },
{ oid => '860', descr => 'convert char to char(n)',
proname => 'bpchar', prorettype => 'bpchar', proargtypes => 'char',
prosrc => 'char_bpchar' },
{ oid => '861', descr => 'name of the current database',
proname => 'current_database', provolatile => 's', prorettype => 'name',
proargtypes => '', prosrc => 'current_database' },
{ oid => '817', descr => 'get the currently executing query',
proname => 'current_query', proisstrict => 'f', provolatile => 'v',
proparallel => 'r', prorettype => 'text', proargtypes => '',
prosrc => 'current_query' },
{ oid => '3399',
proname => 'int8_mul_cash', prorettype => 'money',
proargtypes => 'int8 money', prosrc => 'int8_mul_cash' },
{ oid => '862',
proname => 'int4_mul_cash', prorettype => 'money',
proargtypes => 'int4 money', prosrc => 'int4_mul_cash' },
{ oid => '863',
proname => 'int2_mul_cash', prorettype => 'money',
proargtypes => 'int2 money', prosrc => 'int2_mul_cash' },
{ oid => '3344',
proname => 'cash_mul_int8', prorettype => 'money',
proargtypes => 'money int8', prosrc => 'cash_mul_int8' },
{ oid => '3345',
proname => 'cash_div_int8', prorettype => 'money',
proargtypes => 'money int8', prosrc => 'cash_div_int8' },
{ oid => '864',
proname => 'cash_mul_int4', prorettype => 'money',
proargtypes => 'money int4', prosrc => 'cash_mul_int4' },
{ oid => '865',
proname => 'cash_div_int4', prorettype => 'money',
proargtypes => 'money int4', prosrc => 'cash_div_int4' },
{ oid => '866',
proname => 'cash_mul_int2', prorettype => 'money',
proargtypes => 'money int2', prosrc => 'cash_mul_int2' },
{ oid => '867',
proname => 'cash_div_int2', prorettype => 'money',
proargtypes => 'money int2', prosrc => 'cash_div_int2' },
{ oid => '886', descr => 'I/O',
proname => 'cash_in', provolatile => 's', prorettype => 'money',
proargtypes => 'cstring', prosrc => 'cash_in' },
{ oid => '887', descr => 'I/O',
proname => 'cash_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'money', prosrc => 'cash_out' },
{ oid => '888',
proname => 'cash_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'money money', prosrc => 'cash_eq' },
{ oid => '889',
proname => 'cash_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'money money', prosrc => 'cash_ne' },
{ oid => '890',
proname => 'cash_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'money money', prosrc => 'cash_lt' },
{ oid => '891',
proname => 'cash_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'money money', prosrc => 'cash_le' },
{ oid => '892',
proname => 'cash_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'money money', prosrc => 'cash_gt' },
{ oid => '893',
proname => 'cash_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'money money', prosrc => 'cash_ge' },
{ oid => '894',
proname => 'cash_pl', prorettype => 'money', proargtypes => 'money money',
prosrc => 'cash_pl' },
{ oid => '895',
proname => 'cash_mi', prorettype => 'money', proargtypes => 'money money',
prosrc => 'cash_mi' },
{ oid => '896',
proname => 'cash_mul_flt8', prorettype => 'money',
proargtypes => 'money float8', prosrc => 'cash_mul_flt8' },
{ oid => '897',
proname => 'cash_div_flt8', prorettype => 'money',
proargtypes => 'money float8', prosrc => 'cash_div_flt8' },
{ oid => '898', descr => 'larger of two',
proname => 'cashlarger', prorettype => 'money', proargtypes => 'money money',
prosrc => 'cashlarger' },
{ oid => '899', descr => 'smaller of two',
proname => 'cashsmaller', prorettype => 'money', proargtypes => 'money money',
prosrc => 'cashsmaller' },
{ oid => '919',
proname => 'flt8_mul_cash', prorettype => 'money',
proargtypes => 'float8 money', prosrc => 'flt8_mul_cash' },
{ oid => '935', descr => 'output money amount as words',
proname => 'cash_words', prorettype => 'text', proargtypes => 'money',
prosrc => 'cash_words' },
{ oid => '3822',
proname => 'cash_div_cash', prorettype => 'float8',
proargtypes => 'money money', prosrc => 'cash_div_cash' },
{ oid => '3823', descr => 'convert money to numeric',
proname => 'numeric', provolatile => 's', prorettype => 'numeric',
proargtypes => 'money', prosrc => 'cash_numeric' },
{ oid => '3824', descr => 'convert numeric to money',
proname => 'money', provolatile => 's', prorettype => 'money',
proargtypes => 'numeric', prosrc => 'numeric_cash' },
{ oid => '3811', descr => 'convert int4 to money',
proname => 'money', provolatile => 's', prorettype => 'money',
proargtypes => 'int4', prosrc => 'int4_cash' },
{ oid => '3812', descr => 'convert int8 to money',
proname => 'money', provolatile => 's', prorettype => 'money',
proargtypes => 'int8', prosrc => 'int8_cash' },
# OIDS 900 - 999
{ oid => '940', descr => 'modulus',
proname => 'mod', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2mod' },
{ oid => '941', descr => 'modulus',
proname => 'mod', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4mod' },
{ oid => '945',
proname => 'int8mod', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8mod' },
{ oid => '947', descr => 'modulus',
proname => 'mod', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8mod' },
{ oid => '5044', descr => 'greatest common divisor',
proname => 'gcd', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4gcd' },
{ oid => '5045', descr => 'greatest common divisor',
proname => 'gcd', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8gcd' },
{ oid => '5046', descr => 'least common multiple',
proname => 'lcm', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4lcm' },
{ oid => '5047', descr => 'least common multiple',
proname => 'lcm', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8lcm' },
{ oid => '944', descr => 'convert text to char',
proname => 'char', prorettype => 'char', proargtypes => 'text',
prosrc => 'text_char' },
{ oid => '946', descr => 'convert char to text',
proname => 'text', prorettype => 'text', proargtypes => 'char',
prosrc => 'char_text' },
{ oid => '952', descr => 'large object open',
proname => 'lo_open', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'oid int4', prosrc => 'be_lo_open' },
{ oid => '953', descr => 'large object close',
proname => 'lo_close', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'int4', prosrc => 'be_lo_close' },
{ oid => '954', descr => 'large object read',
proname => 'loread', provolatile => 'v', proparallel => 'u',
prorettype => 'bytea', proargtypes => 'int4 int4', prosrc => 'be_loread' },
{ oid => '955', descr => 'large object write',
proname => 'lowrite', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'int4 bytea', prosrc => 'be_lowrite' },
{ oid => '956', descr => 'large object seek',
proname => 'lo_lseek', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'int4 int4 int4',
prosrc => 'be_lo_lseek' },
{ oid => '3170', descr => 'large object seek (64 bit)',
proname => 'lo_lseek64', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => 'int4 int8 int4',
prosrc => 'be_lo_lseek64' },
{ oid => '957', descr => 'large object create',
proname => 'lo_creat', provolatile => 'v', proparallel => 'u',
prorettype => 'oid', proargtypes => 'int4', prosrc => 'be_lo_creat' },
{ oid => '715', descr => 'large object create',
proname => 'lo_create', provolatile => 'v', proparallel => 'u',
prorettype => 'oid', proargtypes => 'oid', prosrc => 'be_lo_create' },
{ oid => '958', descr => 'large object position',
proname => 'lo_tell', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'int4', prosrc => 'be_lo_tell' },
{ oid => '3171', descr => 'large object position (64 bit)',
proname => 'lo_tell64', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => 'int4', prosrc => 'be_lo_tell64' },
{ oid => '1004', descr => 'truncate large object',
proname => 'lo_truncate', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'be_lo_truncate' },
{ oid => '3172', descr => 'truncate large object (64 bit)',
proname => 'lo_truncate64', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'int4 int8',
prosrc => 'be_lo_truncate64' },
{ oid => '3457', descr => 'create new large object with given content',
proname => 'lo_from_bytea', provolatile => 'v', proparallel => 'u',
prorettype => 'oid', proargtypes => 'oid bytea',
prosrc => 'be_lo_from_bytea' },
{ oid => '3458', descr => 'read entire large object',
proname => 'lo_get', provolatile => 'v', proparallel => 'u',
prorettype => 'bytea', proargtypes => 'oid', prosrc => 'be_lo_get' },
{ oid => '3459', descr => 'read large object from offset for length',
proname => 'lo_get', provolatile => 'v', proparallel => 'u',
prorettype => 'bytea', proargtypes => 'oid int8 int4',
prosrc => 'be_lo_get_fragment' },
{ oid => '3460', descr => 'write data at offset',
proname => 'lo_put', provolatile => 'v', proparallel => 'u',
prorettype => 'void', proargtypes => 'oid int8 bytea',
prosrc => 'be_lo_put' },
{ oid => '959',
proname => 'on_pl', prorettype => 'bool', proargtypes => 'point line',
prosrc => 'on_pl' },
{ oid => '960',
proname => 'on_sl', prorettype => 'bool', proargtypes => 'lseg line',
prosrc => 'on_sl' },
{ oid => '961',
proname => 'close_pl', prorettype => 'point', proargtypes => 'point line',
prosrc => 'close_pl' },
{ oid => '964', descr => 'large object unlink (delete)',
proname => 'lo_unlink', provolatile => 'v', proparallel => 'u',
prorettype => 'int4', proargtypes => 'oid', prosrc => 'be_lo_unlink' },
{ oid => '973',
proname => 'path_inter', prorettype => 'bool', proargtypes => 'path path',
prosrc => 'path_inter' },
{ oid => '975', descr => 'box area',
proname => 'area', prorettype => 'float8', proargtypes => 'box',
prosrc => 'box_area' },
{ oid => '976', descr => 'box width',
proname => 'width', prorettype => 'float8', proargtypes => 'box',
prosrc => 'box_width' },
{ oid => '977', descr => 'box height',
proname => 'height', prorettype => 'float8', proargtypes => 'box',
prosrc => 'box_height' },
{ oid => '978',
proname => 'box_distance', prorettype => 'float8', proargtypes => 'box box',
prosrc => 'box_distance' },
{ oid => '979', descr => 'area of a closed path',
proname => 'area', prorettype => 'float8', proargtypes => 'path',
prosrc => 'path_area' },
{ oid => '980',
proname => 'box_intersect', prorettype => 'box', proargtypes => 'box box',
prosrc => 'box_intersect' },
{ oid => '4067', descr => 'bounding box of two boxes',
proname => 'bound_box', prorettype => 'box', proargtypes => 'box box',
prosrc => 'boxes_bound_box' },
{ oid => '981', descr => 'box diagonal',
proname => 'diagonal', prorettype => 'lseg', proargtypes => 'box',
prosrc => 'box_diagonal' },
{ oid => '982',
proname => 'path_n_lt', prorettype => 'bool', proargtypes => 'path path',
prosrc => 'path_n_lt' },
{ oid => '983',
proname => 'path_n_gt', prorettype => 'bool', proargtypes => 'path path',
prosrc => 'path_n_gt' },
{ oid => '984',
proname => 'path_n_eq', prorettype => 'bool', proargtypes => 'path path',
prosrc => 'path_n_eq' },
{ oid => '985',
proname => 'path_n_le', prorettype => 'bool', proargtypes => 'path path',
prosrc => 'path_n_le' },
{ oid => '986',
proname => 'path_n_ge', prorettype => 'bool', proargtypes => 'path path',
prosrc => 'path_n_ge' },
{ oid => '987',
proname => 'path_length', prorettype => 'float8', proargtypes => 'path',
prosrc => 'path_length' },
{ oid => '988',
proname => 'point_ne', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_ne' },
{ oid => '989',
proname => 'point_vert', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_vert' },
{ oid => '990',
proname => 'point_horiz', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_horiz' },
{ oid => '991',
proname => 'point_distance', prorettype => 'float8',
proargtypes => 'point point', prosrc => 'point_distance' },
{ oid => '992', descr => 'slope between points',
proname => 'slope', prorettype => 'float8', proargtypes => 'point point',
prosrc => 'point_slope' },
{ oid => '993', descr => 'convert points to line segment',
proname => 'lseg', prorettype => 'lseg', proargtypes => 'point point',
prosrc => 'lseg_construct' },
{ oid => '994',
proname => 'lseg_intersect', prorettype => 'bool', proargtypes => 'lseg lseg',
prosrc => 'lseg_intersect' },
{ oid => '995',
proname => 'lseg_parallel', prorettype => 'bool', proargtypes => 'lseg lseg',
prosrc => 'lseg_parallel' },
{ oid => '996',
proname => 'lseg_perp', prorettype => 'bool', proargtypes => 'lseg lseg',
prosrc => 'lseg_perp' },
{ oid => '997',
proname => 'lseg_vertical', prorettype => 'bool', proargtypes => 'lseg',
prosrc => 'lseg_vertical' },
{ oid => '998',
proname => 'lseg_horizontal', prorettype => 'bool', proargtypes => 'lseg',
prosrc => 'lseg_horizontal' },
{ oid => '999',
proname => 'lseg_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'lseg lseg', prosrc => 'lseg_eq' },
# OIDS 1000 - 1999
{ oid => '1026', descr => 'adjust timestamp to new time zone',
proname => 'timezone', prorettype => 'timestamp',
proargtypes => 'interval timestamptz', prosrc => 'timestamptz_izone' },
{ oid => '1031', descr => 'I/O',
proname => 'aclitemin', provolatile => 's', prorettype => 'aclitem',
proargtypes => 'cstring', prosrc => 'aclitemin' },
{ oid => '1032', descr => 'I/O',
proname => 'aclitemout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'aclitem', prosrc => 'aclitemout' },
{ oid => '1035', descr => 'add/update ACL item',
proname => 'aclinsert', prorettype => '_aclitem',
proargtypes => '_aclitem aclitem', prosrc => 'aclinsert' },
{ oid => '1036', descr => 'remove ACL item',
proname => 'aclremove', prorettype => '_aclitem',
proargtypes => '_aclitem aclitem', prosrc => 'aclremove' },
{ oid => '1037', descr => 'contains',
proname => 'aclcontains', prorettype => 'bool',
proargtypes => '_aclitem aclitem', prosrc => 'aclcontains' },
{ oid => '1062',
proname => 'aclitemeq', prorettype => 'bool',
proargtypes => 'aclitem aclitem', prosrc => 'aclitem_eq' },
{ oid => '1365', descr => 'make ACL item',
proname => 'makeaclitem', prorettype => 'aclitem',
proargtypes => 'oid oid text bool', prosrc => 'makeaclitem' },
{ oid => '3943',
descr => 'show hardwired default privileges, primarily for use by the information schema',
proname => 'acldefault', prorettype => '_aclitem', proargtypes => 'char oid',
prosrc => 'acldefault_sql' },
{ oid => '1689',
descr => 'convert ACL item array to table, primarily for use by information schema',
proname => 'aclexplode', prorows => '10', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => '_aclitem',
proallargtypes => '{_aclitem,oid,oid,text,bool}',
proargmodes => '{i,o,o,o,o}',
proargnames => '{acl,grantor,grantee,privilege_type,is_grantable}',
prosrc => 'aclexplode' },
{ oid => '1044', descr => 'I/O',
proname => 'bpcharin', prorettype => 'bpchar',
proargtypes => 'cstring oid int4', prosrc => 'bpcharin' },
{ oid => '1045', descr => 'I/O',
proname => 'bpcharout', prorettype => 'cstring', proargtypes => 'bpchar',
prosrc => 'bpcharout' },
{ oid => '2913', descr => 'I/O typmod',
proname => 'bpchartypmodin', prorettype => 'int4', proargtypes => '_cstring',
prosrc => 'bpchartypmodin' },
{ oid => '2914', descr => 'I/O typmod',
proname => 'bpchartypmodout', prorettype => 'cstring', proargtypes => 'int4',
prosrc => 'bpchartypmodout' },
{ oid => '1046', descr => 'I/O',
proname => 'varcharin', prorettype => 'varchar',
proargtypes => 'cstring oid int4', prosrc => 'varcharin' },
{ oid => '1047', descr => 'I/O',
proname => 'varcharout', prorettype => 'cstring', proargtypes => 'varchar',
prosrc => 'varcharout' },
{ oid => '2915', descr => 'I/O typmod',
proname => 'varchartypmodin', prorettype => 'int4', proargtypes => '_cstring',
prosrc => 'varchartypmodin' },
{ oid => '2916', descr => 'I/O typmod',
proname => 'varchartypmodout', prorettype => 'cstring', proargtypes => 'int4',
prosrc => 'varchartypmodout' },
{ oid => '1048',
proname => 'bpchareq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpchareq' },
{ oid => '1049',
proname => 'bpcharlt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpcharlt' },
{ oid => '1050',
proname => 'bpcharle', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpcharle' },
{ oid => '1051',
proname => 'bpchargt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpchargt' },
{ oid => '1052',
proname => 'bpcharge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpcharge' },
{ oid => '1053',
proname => 'bpcharne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpcharne' },
{ oid => '1063', descr => 'larger of two',
proname => 'bpchar_larger', proleakproof => 't', prorettype => 'bpchar',
proargtypes => 'bpchar bpchar', prosrc => 'bpchar_larger' },
{ oid => '1064', descr => 'smaller of two',
proname => 'bpchar_smaller', proleakproof => 't', prorettype => 'bpchar',
proargtypes => 'bpchar bpchar', prosrc => 'bpchar_smaller' },
{ oid => '1078', descr => 'less-equal-greater',
proname => 'bpcharcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'bpchar bpchar', prosrc => 'bpcharcmp' },
{ oid => '3328', descr => 'sort support',
proname => 'bpchar_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'bpchar_sortsupport' },
{ oid => '1080', descr => 'hash',
proname => 'hashbpchar', prorettype => 'int4', proargtypes => 'bpchar',
prosrc => 'hashbpchar' },
{ oid => '972', descr => 'hash',
proname => 'hashbpcharextended', prorettype => 'int8',
proargtypes => 'bpchar int8', prosrc => 'hashbpcharextended' },
{ oid => '1081', descr => 'format a type oid and atttypmod to canonical SQL',
proname => 'format_type', proisstrict => 'f', provolatile => 's',
prorettype => 'text', proargtypes => 'oid int4', prosrc => 'format_type' },
{ oid => '1084', descr => 'I/O',
proname => 'date_in', provolatile => 's', prorettype => 'date',
proargtypes => 'cstring', prosrc => 'date_in' },
{ oid => '1085', descr => 'I/O',
proname => 'date_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'date', prosrc => 'date_out' },
{ oid => '1086',
proname => 'date_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'date date', prosrc => 'date_eq' },
{ oid => '1087',
proname => 'date_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'date date', prosrc => 'date_lt' },
{ oid => '1088',
proname => 'date_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'date date', prosrc => 'date_le' },
{ oid => '1089',
proname => 'date_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'date date', prosrc => 'date_gt' },
{ oid => '1090',
proname => 'date_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'date date', prosrc => 'date_ge' },
{ oid => '1091',
proname => 'date_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'date date', prosrc => 'date_ne' },
{ oid => '1092', descr => 'less-equal-greater',
proname => 'date_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'date date', prosrc => 'date_cmp' },
{ oid => '3136', descr => 'sort support',
proname => 'date_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'date_sortsupport' },
{ oid => '4133', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'date date interval bool bool',
prosrc => 'in_range_date_interval' },
# OIDS 1100 - 1199
{ oid => '1102',
proname => 'time_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'time time', prosrc => 'time_lt' },
{ oid => '1103',
proname => 'time_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'time time', prosrc => 'time_le' },
{ oid => '1104',
proname => 'time_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'time time', prosrc => 'time_gt' },
{ oid => '1105',
proname => 'time_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'time time', prosrc => 'time_ge' },
{ oid => '1106',
proname => 'time_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'time time', prosrc => 'time_ne' },
{ oid => '1107', descr => 'less-equal-greater',
proname => 'time_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'time time', prosrc => 'time_cmp' },
{ oid => '1138', descr => 'larger of two',
proname => 'date_larger', prorettype => 'date', proargtypes => 'date date',
prosrc => 'date_larger' },
{ oid => '1139', descr => 'smaller of two',
proname => 'date_smaller', prorettype => 'date', proargtypes => 'date date',
prosrc => 'date_smaller' },
{ oid => '1140',
proname => 'date_mi', prorettype => 'int4', proargtypes => 'date date',
prosrc => 'date_mi' },
{ oid => '1141',
proname => 'date_pli', prorettype => 'date', proargtypes => 'date int4',
prosrc => 'date_pli' },
{ oid => '1142',
proname => 'date_mii', prorettype => 'date', proargtypes => 'date int4',
prosrc => 'date_mii' },
{ oid => '1143', descr => 'I/O',
proname => 'time_in', provolatile => 's', prorettype => 'time',
proargtypes => 'cstring oid int4', prosrc => 'time_in' },
{ oid => '1144', descr => 'I/O',
proname => 'time_out', prorettype => 'cstring', proargtypes => 'time',
prosrc => 'time_out' },
{ oid => '2909', descr => 'I/O typmod',
proname => 'timetypmodin', prorettype => 'int4', proargtypes => '_cstring',
prosrc => 'timetypmodin' },
{ oid => '2910', descr => 'I/O typmod',
proname => 'timetypmodout', prorettype => 'cstring', proargtypes => 'int4',
prosrc => 'timetypmodout' },
{ oid => '1145',
proname => 'time_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'time time', prosrc => 'time_eq' },
{ oid => '1146',
proname => 'circle_add_pt', prorettype => 'circle',
proargtypes => 'circle point', prosrc => 'circle_add_pt' },
{ oid => '1147',
proname => 'circle_sub_pt', prorettype => 'circle',
proargtypes => 'circle point', prosrc => 'circle_sub_pt' },
{ oid => '1148',
proname => 'circle_mul_pt', prorettype => 'circle',
proargtypes => 'circle point', prosrc => 'circle_mul_pt' },
{ oid => '1149',
proname => 'circle_div_pt', prorettype => 'circle',
proargtypes => 'circle point', prosrc => 'circle_div_pt' },
{ oid => '1150', descr => 'I/O',
proname => 'timestamptz_in', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'cstring oid int4', prosrc => 'timestamptz_in' },
{ oid => '1151', descr => 'I/O',
proname => 'timestamptz_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'timestamptz', prosrc => 'timestamptz_out' },
{ oid => '2907', descr => 'I/O typmod',
proname => 'timestamptztypmodin', prorettype => 'int4',
proargtypes => '_cstring', prosrc => 'timestamptztypmodin' },
{ oid => '2908', descr => 'I/O typmod',
proname => 'timestamptztypmodout', prorettype => 'cstring',
proargtypes => 'int4', prosrc => 'timestamptztypmodout' },
{ oid => '1152',
proname => 'timestamptz_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_eq' },
{ oid => '1153',
proname => 'timestamptz_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_ne' },
{ oid => '1154',
proname => 'timestamptz_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_lt' },
{ oid => '1155',
proname => 'timestamptz_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_le' },
{ oid => '1156',
proname => 'timestamptz_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_ge' },
{ oid => '1157',
proname => 'timestamptz_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_gt' },
{ oid => '1158', descr => 'convert UNIX epoch to timestamptz',
proname => 'to_timestamp', prorettype => 'timestamptz',
proargtypes => 'float8', prosrc => 'float8_timestamptz' },
{ oid => '1159', descr => 'adjust timestamp to new time zone',
proname => 'timezone', prorettype => 'timestamp',
proargtypes => 'text timestamptz', prosrc => 'timestamptz_zone' },
{ oid => '9159', descr => 'adjust timestamp to local time zone',
proname => 'timezone', provolatile => 's', prorettype => 'timestamp',
proargtypes => 'timestamptz', prosrc => 'timestamptz_at_local' },
{ oid => '1160', descr => 'I/O',
proname => 'interval_in', provolatile => 's', prorettype => 'interval',
proargtypes => 'cstring oid int4', prosrc => 'interval_in' },
{ oid => '1161', descr => 'I/O',
proname => 'interval_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'interval', prosrc => 'interval_out' },
{ oid => '2903', descr => 'I/O typmod',
proname => 'intervaltypmodin', prorettype => 'int4',
proargtypes => '_cstring', prosrc => 'intervaltypmodin' },
{ oid => '2904', descr => 'I/O typmod',
proname => 'intervaltypmodout', prorettype => 'cstring',
proargtypes => 'int4', prosrc => 'intervaltypmodout' },
{ oid => '1162',
proname => 'interval_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'interval interval', prosrc => 'interval_eq' },
{ oid => '1163',
proname => 'interval_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'interval interval', prosrc => 'interval_ne' },
{ oid => '1164',
proname => 'interval_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'interval interval', prosrc => 'interval_lt' },
{ oid => '1165',
proname => 'interval_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'interval interval', prosrc => 'interval_le' },
{ oid => '1166',
proname => 'interval_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'interval interval', prosrc => 'interval_ge' },
{ oid => '1167',
proname => 'interval_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'interval interval', prosrc => 'interval_gt' },
{ oid => '1168',
proname => 'interval_um', prorettype => 'interval', proargtypes => 'interval',
prosrc => 'interval_um' },
{ oid => '1169',
proname => 'interval_pl', prorettype => 'interval',
proargtypes => 'interval interval', prosrc => 'interval_pl' },
{ oid => '1170',
proname => 'interval_mi', prorettype => 'interval',
proargtypes => 'interval interval', prosrc => 'interval_mi' },
{ oid => '1171', descr => 'extract field from timestamp with time zone',
proname => 'date_part', provolatile => 's', prorettype => 'float8',
proargtypes => 'text timestamptz', prosrc => 'timestamptz_part' },
{ oid => '6203', descr => 'extract field from timestamp with time zone',
proname => 'extract', provolatile => 's', prorettype => 'numeric',
proargtypes => 'text timestamptz', prosrc => 'extract_timestamptz' },
{ oid => '1172', descr => 'extract field from interval',
proname => 'date_part', prorettype => 'float8',
proargtypes => 'text interval', prosrc => 'interval_part' },
{ oid => '6204', descr => 'extract field from interval',
proname => 'extract', prorettype => 'numeric', proargtypes => 'text interval',
prosrc => 'extract_interval' },
{ oid => '1174', descr => 'convert date to timestamp with time zone',
proname => 'timestamptz', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'date', prosrc => 'date_timestamptz' },
{ oid => '2711',
descr => 'promote groups of 24 hours to numbers of days and promote groups of 30 days to numbers of months',
proname => 'justify_interval', prorettype => 'interval',
proargtypes => 'interval', prosrc => 'interval_justify_interval' },
{ oid => '1175', descr => 'promote groups of 24 hours to numbers of days',
proname => 'justify_hours', prorettype => 'interval',
proargtypes => 'interval', prosrc => 'interval_justify_hours' },
{ oid => '1295', descr => 'promote groups of 30 days to numbers of months',
proname => 'justify_days', prorettype => 'interval',
proargtypes => 'interval', prosrc => 'interval_justify_days' },
{ oid => '1176', descr => 'convert date and time to timestamp with time zone',
proname => 'timestamptz', prolang => 'sql', provolatile => 's',
prorettype => 'timestamptz', proargtypes => 'date time',
prosrc => 'see system_functions.sql' },
{ oid => '1178', descr => 'convert timestamp with time zone to date',
proname => 'date', provolatile => 's', prorettype => 'date',
proargtypes => 'timestamptz', prosrc => 'timestamptz_date' },
{ oid => '1181',
descr => 'age of a transaction ID, in transactions before current transaction',
proname => 'age', provolatile => 's', proparallel => 'r',
prorettype => 'int4', proargtypes => 'xid', prosrc => 'xid_age' },
{ oid => '3939',
descr => 'age of a multi-transaction ID, in multi-transactions before current multi-transaction',
proname => 'mxid_age', provolatile => 's', prorettype => 'int4',
proargtypes => 'xid', prosrc => 'mxid_age' },
{ oid => '1188',
proname => 'timestamptz_mi', prorettype => 'interval',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_mi' },
{ oid => '1189',
proname => 'timestamptz_pl_interval', provolatile => 's',
prorettype => 'timestamptz', proargtypes => 'timestamptz interval',
prosrc => 'timestamptz_pl_interval' },
{ oid => '6221', descr => 'add interval to timestamp with time zone',
proname => 'date_add', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'timestamptz interval', prosrc => 'timestamptz_pl_interval' },
{ oid => '6222',
descr => 'add interval to timestamp with time zone in specified time zone',
proname => 'date_add', prorettype => 'timestamptz',
proargtypes => 'timestamptz interval text',
prosrc => 'timestamptz_pl_interval_at_zone' },
{ oid => '1190',
proname => 'timestamptz_mi_interval', provolatile => 's',
prorettype => 'timestamptz', proargtypes => 'timestamptz interval',
prosrc => 'timestamptz_mi_interval' },
{ oid => '6223', descr => 'subtract interval from timestamp with time zone',
proname => 'date_subtract', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'timestamptz interval', prosrc => 'timestamptz_mi_interval' },
{ oid => '6273',
descr => 'subtract interval from timestamp with time zone in specified time zone',
proname => 'date_subtract', prorettype => 'timestamptz',
proargtypes => 'timestamptz interval text',
prosrc => 'timestamptz_mi_interval_at_zone' },
{ oid => '1195', descr => 'smaller of two',
proname => 'timestamptz_smaller', prorettype => 'timestamptz',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_smaller' },
{ oid => '1196', descr => 'larger of two',
proname => 'timestamptz_larger', prorettype => 'timestamptz',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_larger' },
{ oid => '1197', descr => 'smaller of two',
proname => 'interval_smaller', prorettype => 'interval',
proargtypes => 'interval interval', prosrc => 'interval_smaller' },
{ oid => '1198', descr => 'larger of two',
proname => 'interval_larger', prorettype => 'interval',
proargtypes => 'interval interval', prosrc => 'interval_larger' },
{ oid => '1199', descr => 'date difference preserving months and years',
proname => 'age', prorettype => 'interval',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamptz_age' },

# OIDS 1200 - 1299

{ oid => '3918', descr => 'planner support for interval length coercion',
proname => 'interval_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'interval_support' },
{ oid => '1200', descr => 'adjust interval precision',
proname => 'interval', prosupport => 'interval_support',
prorettype => 'interval', proargtypes => 'interval int4',
prosrc => 'interval_scale' },
{ oid => '1215', descr => 'get description for object id and catalog name',
proname => 'obj_description', prolang => 'sql', procost => '100',
provolatile => 's', prorettype => 'text', proargtypes => 'oid name',
prosrc => 'see system_functions.sql' },
{ oid => '1216', descr => 'get description for table column',
proname => 'col_description', prolang => 'sql', procost => '100',
provolatile => 's', prorettype => 'text', proargtypes => 'oid int4',
prosrc => 'see system_functions.sql' },
{ oid => '1993',
descr => 'get description for object id and shared catalog name',
proname => 'shobj_description', prolang => 'sql', procost => '100',
provolatile => 's', prorettype => 'text', proargtypes => 'oid name',
prosrc => 'see system_functions.sql' },
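# Note: shobj_description() looks up a comment on a shared object by OID plus
# shared-catalog name. A minimal illustration, assuming a comment has been set
# on the current database:
#   SELECT shobj_description(oid, 'pg_database')
#   FROM pg_database WHERE datname = current_database();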
{ oid => '1217',
descr => 'truncate timestamp with time zone to specified units',
proname => 'date_trunc', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'text timestamptz', prosrc => 'timestamptz_trunc' },
{ oid => '1284',
descr => 'truncate timestamp with time zone to specified units in specified time zone',
proname => 'date_trunc', prorettype => 'timestamptz',
proargtypes => 'text timestamptz text', prosrc => 'timestamptz_trunc_zone' },
{ oid => '1218', descr => 'truncate interval to specified units',
proname => 'date_trunc', prorettype => 'interval',
proargtypes => 'text interval', prosrc => 'interval_trunc' },
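# Note: the date_trunc entries above back the SQL-level date_trunc() variants
# for timestamptz (optionally in a named time zone) and for interval, e.g.:
#   SELECT date_trunc('hour', timestamptz '2001-02-16 20:38:40+00');
#   SELECT date_trunc('day', timestamptz '2001-02-16 20:38:40+00', 'UTC');
#   SELECT date_trunc('hour', interval '2 days 3 hours 40 minutes');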
{ oid => '1219', descr => 'increment',
proname => 'int8inc', prorettype => 'int8', proargtypes => 'int8',
prosrc => 'int8inc' },
{ oid => '3546', descr => 'decrement',
proname => 'int8dec', prorettype => 'int8', proargtypes => 'int8',
prosrc => 'int8dec' },
{ oid => '2804', descr => 'increment, ignores second argument',
proname => 'int8inc_any', prorettype => 'int8', proargtypes => 'int8 any',
prosrc => 'int8inc_any' },
{ oid => '3547', descr => 'decrement, ignores second argument',
proname => 'int8dec_any', prorettype => 'int8', proargtypes => 'int8 any',
prosrc => 'int8dec_any' },
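# Note: int8inc/int8dec and their *_any counterparts are aggregate transition
# helpers (used by count() and friends via pg_aggregate) rather than functions
# intended for direct SQL use; the *_any forms ignore their second argument.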
{ oid => '1230',
proname => 'int8abs', prorettype => 'int8', proargtypes => 'int8',
prosrc => 'int8abs' },
{ oid => '1236', descr => 'larger of two',
proname => 'int8larger', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8larger' },
{ oid => '1237', descr => 'smaller of two',
proname => 'int8smaller', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8smaller' },
{ oid => '1238',
proname => 'texticregexeq', prosupport => 'texticregexeq_support',
prorettype => 'bool', proargtypes => 'text text', prosrc => 'texticregexeq' },
{ oid => '1024', descr => 'planner support for texticregexeq',
proname => 'texticregexeq_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'texticregexeq_support' },
{ oid => '1239',
proname => 'texticregexne', prorettype => 'bool', proargtypes => 'text text',
prosrc => 'texticregexne' },
{ oid => '1240',
proname => 'nameicregexeq', prosupport => 'texticregexeq_support',
prorettype => 'bool', proargtypes => 'name text', prosrc => 'nameicregexeq' },
{ oid => '1241',
proname => 'nameicregexne', prorettype => 'bool', proargtypes => 'name text',
prosrc => 'nameicregexne' },
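# Note: texticregexeq/texticregexne and the name* variants provide the
# case-insensitive regular-expression matches behind the ~* and !~*
# operators, e.g.:
#   SELECT 'PostgreSQL' ~* '^postgres';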
{ oid => '1251',
proname => 'int4abs', prorettype => 'int4', proargtypes => 'int4',
prosrc => 'int4abs' },
{ oid => '1253',
proname => 'int2abs', prorettype => 'int2', proargtypes => 'int2',
prosrc => 'int2abs' },
{ oid => '1271', descr => 'intervals overlap?',
proname => 'overlaps', proisstrict => 'f', prorettype => 'bool',
proargtypes => 'timetz timetz timetz timetz', prosrc => 'overlaps_timetz' },
{ oid => '1272',
proname => 'datetime_pl', prorettype => 'timestamp',
proargtypes => 'date time', prosrc => 'datetime_timestamp' },
{ oid => '1273', descr => 'extract field from time with time zone',
proname => 'date_part', prorettype => 'float8', proargtypes => 'text timetz',
prosrc => 'timetz_part' },
{ oid => '6201', descr => 'extract field from time with time zone',
proname => 'extract', prorettype => 'numeric', proargtypes => 'text timetz',
prosrc => 'extract_timetz' },
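# Note: the SQL EXTRACT construct resolves to these numeric-returning
# "extract" functions (the older date_part entries return float8), e.g.:
#   SELECT extract(hour FROM timetz '12:34:56+02');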
{ oid => '1274',
proname => 'int84pl', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int84pl' },
{ oid => '1275',
proname => 'int84mi', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int84mi' },
{ oid => '1276',
proname => 'int84mul', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int84mul' },
{ oid => '1277',
proname => 'int84div', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int84div' },
{ oid => '1278',
proname => 'int48pl', prorettype => 'int8', proargtypes => 'int4 int8',
prosrc => 'int48pl' },
{ oid => '1279',
proname => 'int48mi', prorettype => 'int8', proargtypes => 'int4 int8',
prosrc => 'int48mi' },
{ oid => '1280',
proname => 'int48mul', prorettype => 'int8', proargtypes => 'int4 int8',
prosrc => 'int48mul' },
{ oid => '1281',
proname => 'int48div', prorettype => 'int8', proargtypes => 'int4 int8',
prosrc => 'int48div' },
{ oid => '837',
proname => 'int82pl', prorettype => 'int8', proargtypes => 'int8 int2',
prosrc => 'int82pl' },
{ oid => '838',
proname => 'int82mi', prorettype => 'int8', proargtypes => 'int8 int2',
prosrc => 'int82mi' },
{ oid => '839',
proname => 'int82mul', prorettype => 'int8', proargtypes => 'int8 int2',
prosrc => 'int82mul' },
{ oid => '840',
proname => 'int82div', prorettype => 'int8', proargtypes => 'int8 int2',
prosrc => 'int82div' },
{ oid => '841',
proname => 'int28pl', prorettype => 'int8', proargtypes => 'int2 int8',
prosrc => 'int28pl' },
{ oid => '942',
proname => 'int28mi', prorettype => 'int8', proargtypes => 'int2 int8',
prosrc => 'int28mi' },
{ oid => '943',
proname => 'int28mul', prorettype => 'int8', proargtypes => 'int2 int8',
prosrc => 'int28mul' },
{ oid => '948',
proname => 'int28div', prorettype => 'int8', proargtypes => 'int2 int8',
prosrc => 'int28div' },
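# Note: the int84*/int48*/int82*/int28* entries supply the cross-width
# arithmetic behind +, -, * and / when int2, int4 and int8 operands are
# mixed, e.g.:
#   SELECT 1::int2 + 2::int8, 3::int8 / 2::int4;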
{ oid => '1287', descr => 'convert int8 to oid',
proname => 'oid', prorettype => 'oid', proargtypes => 'int8',
prosrc => 'i8tooid' },
{ oid => '1288', descr => 'convert oid to int8',
proname => 'int8', proleakproof => 't', prorettype => 'int8',
proargtypes => 'oid', prosrc => 'oidtoi8' },
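# Note: i8tooid/oidtoi8 back the casts between int8 and oid, e.g.:
#   SELECT 12345::int8::oid, 12345::oid::int8;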
{ oid => '1291',
descr => 'trigger to suppress updates when new and old records match',
proname => 'suppress_redundant_updates_trigger', provolatile => 'v',
prorettype => 'trigger', proargtypes => '',
prosrc => 'suppress_redundant_updates_trigger' },
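# Note: suppress_redundant_updates_trigger() is meant to be attached as a
# BEFORE UPDATE row-level trigger to skip updates that would not change the
# row; a sketch ("my_table" is a placeholder name):
#   CREATE TRIGGER z_min_update BEFORE UPDATE ON my_table
#   FOR EACH ROW EXECUTE FUNCTION suppress_redundant_updates_trigger();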
{ oid => '1292',
proname => 'tideq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'tid tid', prosrc => 'tideq' },
{ oid => '1294', descr => 'latest tid of a tuple',
proname => 'currtid2', provolatile => 'v', proparallel => 'u',
prorettype => 'tid', proargtypes => 'text tid',
prosrc => 'currtid_byrelname' },
{ oid => '1265',
proname => 'tidne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'tid tid', prosrc => 'tidne' },
{ oid => '2790',
proname => 'tidgt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'tid tid', prosrc => 'tidgt' },
{ oid => '2791',
proname => 'tidlt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'tid tid', prosrc => 'tidlt' },
{ oid => '2792',
proname => 'tidge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'tid tid', prosrc => 'tidge' },
{ oid => '2793',
proname => 'tidle', proleakproof => 't', prorettype => 'bool',
proargtypes => 'tid tid', prosrc => 'tidle' },
{ oid => '2794', descr => 'less-equal-greater',
proname => 'bttidcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'tid tid', prosrc => 'bttidcmp' },
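# Note: tideq/tidne/tidgt/tidlt/tidge/tidle back the comparison operators on
# the tid type, with bttidcmp as the btree-style three-way comparator, e.g.
# ("my_table" is a placeholder name):
#   SELECT ctid FROM my_table WHERE ctid >= '(10,1)'::tid;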
{ oid => '2795', descr => 'larger of two',
proname => 'tidlarger', prorettype => 'tid', proargtypes => 'tid tid',
prosrc => 'tidlarger' },
{ oid => '2796', descr => 'smaller of two',
proname => 'tidsmaller', prorettype => 'tid', proargtypes => 'tid tid',
prosrc => 'tidsmaller' },
{ oid => '2233', descr => 'hash',
proname => 'hashtid', prorettype => 'int4', proargtypes => 'tid',
prosrc => 'hashtid' },
{ oid => '2234', descr => 'hash',
proname => 'hashtidextended', prorettype => 'int8', proargtypes => 'tid int8',
prosrc => 'hashtidextended' },
{ oid => '1296',
proname => 'timedate_pl', prolang => 'sql', prorettype => 'timestamp',
proargtypes => 'time date', prosrc => 'see system_functions.sql' },
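# Note: entries with prosrc => 'see system_functions.sql' are SQL-language
# wrappers; the body recorded here is only a placeholder, with the real
# definition expected to come from src/backend/catalog/system_functions.sql
# at initdb time.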
{ oid => '1297',
proname => 'datetimetz_pl', prorettype => 'timestamptz',
proargtypes => 'date timetz', prosrc => 'datetimetz_timestamptz' },
{ oid => '1298',
proname => 'timetzdate_pl', prolang => 'sql', prorettype => 'timestamptz',
proargtypes => 'timetz date', prosrc => 'see system_functions.sql' },
{ oid => '1299', descr => 'current transaction time',
proname => 'now', provolatile => 's', prorettype => 'timestamptz',
proargtypes => '', prosrc => 'now' },
{ oid => '2647', descr => 'current transaction time',
proname => 'transaction_timestamp', provolatile => 's',
prorettype => 'timestamptz', proargtypes => '', prosrc => 'now' },
{ oid => '2648', descr => 'current statement time',
proname => 'statement_timestamp', provolatile => 's',
prorettype => 'timestamptz', proargtypes => '',
prosrc => 'statement_timestamp' },
{ oid => '2649', descr => 'current clock time',
proname => 'clock_timestamp', provolatile => 'v', prorettype => 'timestamptz',
proargtypes => '', prosrc => 'clock_timestamp' },
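# Note: now() and transaction_timestamp() report the start of the current
# transaction, statement_timestamp() is fixed for the duration of a statement,
# and clock_timestamp() keeps advancing while a statement runs, e.g.:
#   SELECT now() = transaction_timestamp();             -- true
#   SELECT clock_timestamp() >= statement_timestamp();  -- true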
# OIDS 1300 - 1399
{ oid => '1300',
descr => 'restriction selectivity for position-comparison operators',
proname => 'positionsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'positionsel' },
{ oid => '1301',
descr => 'join selectivity for position-comparison operators',
proname => 'positionjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'positionjoinsel' },
{ oid => '1302',
descr => 'restriction selectivity for containment comparison operators',
proname => 'contsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'contsel' },
{ oid => '1303',
descr => 'join selectivity for containment comparison operators',
proname => 'contjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'contjoinsel' },
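# Note: positionsel/contsel and their *joinsel counterparts are planner
# selectivity estimators referenced from pg_operator (oprrest/oprjoin); their
# 'internal' arguments make them uncallable from SQL, but the linkage can be
# inspected, e.g.:
#   SELECT oprname FROM pg_operator WHERE oprrest = 'contsel'::regproc;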
{ oid => '1304', descr => 'intervals overlap?',
proname => 'overlaps', proisstrict => 'f', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz timestamptz timestamptz',
prosrc => 'overlaps_timestamp' },
{ oid => '1305', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz interval timestamptz interval',
prosrc => 'see system_functions.sql' },
{ oid => '1306', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz timestamptz interval',
prosrc => 'see system_functions.sql' },
{ oid => '1307', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz interval timestamptz timestamptz',
prosrc => 'see system_functions.sql' },
{ oid => '1308', descr => 'intervals overlap?',
proname => 'overlaps', proisstrict => 'f', prorettype => 'bool',
proargtypes => 'time time time time', prosrc => 'overlaps_time' },
{ oid => '1309', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
prorettype => 'bool', proargtypes => 'time interval time interval',
prosrc => 'see system_functions.sql' },
{ oid => '1310', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
prorettype => 'bool', proargtypes => 'time time time interval',
prosrc => 'see system_functions.sql' },
{ oid => '1311', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
prorettype => 'bool', proargtypes => 'time interval time time',
prosrc => 'see system_functions.sql' },
{ oid => '1312', descr => 'I/O',
proname => 'timestamp_in', provolatile => 's', prorettype => 'timestamp',
proargtypes => 'cstring oid int4', prosrc => 'timestamp_in' },
{ oid => '1313', descr => 'I/O',
proname => 'timestamp_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'timestamp', prosrc => 'timestamp_out' },
{ oid => '2905', descr => 'I/O typmod',
proname => 'timestamptypmodin', prorettype => 'int4',
proargtypes => '_cstring', prosrc => 'timestamptypmodin' },
{ oid => '2906', descr => 'I/O typmod',
proname => 'timestamptypmodout', prorettype => 'cstring',
proargtypes => 'int4', prosrc => 'timestamptypmodout' },
{ oid => '1314', descr => 'less-equal-greater',
proname => 'timestamptz_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_cmp' },
{ oid => '1315', descr => 'less-equal-greater',
proname => 'interval_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'interval interval', prosrc => 'interval_cmp' },
{ oid => '1316', descr => 'convert timestamp to time',
proname => 'time', prorettype => 'time', proargtypes => 'timestamp',
prosrc => 'timestamp_time' },
{ oid => '1317', descr => 'length',
proname => 'length', prorettype => 'int4', proargtypes => 'text',
prosrc => 'textlen' },
{ oid => '1318', descr => 'character length',
proname => 'length', prorettype => 'int4', proargtypes => 'bpchar',
prosrc => 'bpcharlen' },
{ oid => '1319',
proname => 'xideqint4', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid int4', prosrc => 'xideq' },
{ oid => '3309',
proname => 'xidneqint4', proleakproof => 't', prorettype => 'bool',
proargtypes => 'xid int4', prosrc => 'xidneq' },
{ oid => '1326',
proname => 'interval_div', prorettype => 'interval',
proargtypes => 'interval float8', prosrc => 'interval_div' },
{ oid => '1339', descr => 'base 10 logarithm',
proname => 'dlog10', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dlog10' },
{ oid => '1340', descr => 'base 10 logarithm',
proname => 'log', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dlog10' },
{ oid => '1194', descr => 'base 10 logarithm',
proname => 'log10', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dlog10' },
{ oid => '1341', descr => 'natural logarithm',
proname => 'ln', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dlog1' },
{ oid => '1342', descr => 'round to nearest integer',
proname => 'round', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dround' },
{ oid => '1343', descr => 'truncate to integer',
proname => 'trunc', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dtrunc' },
{ oid => '1344', descr => 'square root',
proname => 'sqrt', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dsqrt' },
{ oid => '1345', descr => 'cube root',
proname => 'cbrt', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dcbrt' },
{ oid => '1346', descr => 'exponentiation',
proname => 'pow', prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'dpow' },
{ oid => '1368', descr => 'exponentiation',
proname => 'power', prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'dpow' },
{ oid => '1347', descr => 'natural exponential (e^x)',
proname => 'exp', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dexp' },
# This form of obj_description is now deprecated, since it will fail if
# OIDs are not unique across system catalogs. Use the other form instead.
{ oid => '1348', descr => 'deprecated, use two-argument form instead',
proname => 'obj_description', prolang => 'sql', procost => '100',
provolatile => 's', prorettype => 'text', proargtypes => 'oid',
prosrc => 'see system_functions.sql' },
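# A minimal usage sketch of the two forms, not part of the catalog data; it
# assumes a table named "my_table" exists and has a COMMENT attached:
#   deprecated:  SELECT obj_description('my_table'::regclass);
#   preferred:   SELECT obj_description('my_table'::regclass, 'pg_class');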
{ oid => '1349', descr => 'print type names of oidvector field',
proname => 'oidvectortypes', provolatile => 's', prorettype => 'text',
proargtypes => 'oidvector', prosrc => 'oidvectortypes' },
{ oid => '1350', descr => 'I/O',
proname => 'timetz_in', provolatile => 's', prorettype => 'timetz',
proargtypes => 'cstring oid int4', prosrc => 'timetz_in' },
{ oid => '1351', descr => 'I/O',
proname => 'timetz_out', prorettype => 'cstring', proargtypes => 'timetz',
prosrc => 'timetz_out' },
{ oid => '2911', descr => 'I/O typmod',
proname => 'timetztypmodin', prorettype => 'int4', proargtypes => '_cstring',
prosrc => 'timetztypmodin' },
{ oid => '2912', descr => 'I/O typmod',
proname => 'timetztypmodout', prorettype => 'cstring', proargtypes => 'int4',
prosrc => 'timetztypmodout' },
{ oid => '1352',
proname => 'timetz_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timetz timetz', prosrc => 'timetz_eq' },
{ oid => '1353',
proname => 'timetz_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timetz timetz', prosrc => 'timetz_ne' },
{ oid => '1354',
proname => 'timetz_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timetz timetz', prosrc => 'timetz_lt' },
{ oid => '1355',
proname => 'timetz_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timetz timetz', prosrc => 'timetz_le' },
{ oid => '1356',
proname => 'timetz_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timetz timetz', prosrc => 'timetz_ge' },
{ oid => '1357',
proname => 'timetz_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timetz timetz', prosrc => 'timetz_gt' },
{ oid => '1358', descr => 'less-equal-greater',
proname => 'timetz_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'timetz timetz', prosrc => 'timetz_cmp' },
{ oid => '1359',
descr => 'convert date and time with time zone to timestamp with time zone',
proname => 'timestamptz', prorettype => 'timestamptz',
proargtypes => 'date timetz', prosrc => 'datetimetz_timestamptz' },
{ oid => '1367', descr => 'character length',
proname => 'character_length', prorettype => 'int4', proargtypes => 'bpchar',
prosrc => 'bpcharlen' },
{ oid => '1369', descr => 'character length',
proname => 'character_length', prorettype => 'int4', proargtypes => 'text',
prosrc => 'textlen' },
{ oid => '1370', descr => 'convert time to interval',
proname => 'interval', proleakproof => 't', prorettype => 'interval',
proargtypes => 'time', prosrc => 'time_interval' },
{ oid => '1372', descr => 'character length',
proname => 'char_length', prorettype => 'int4', proargtypes => 'bpchar',
prosrc => 'bpcharlen' },
{ oid => '1374', descr => 'octet length',
proname => 'octet_length', prorettype => 'int4', proargtypes => 'text',
prosrc => 'textoctetlen' },
{ oid => '1375', descr => 'octet length',
proname => 'octet_length', prorettype => 'int4', proargtypes => 'bpchar',
prosrc => 'bpcharoctetlen' },
{ oid => '1377', descr => 'larger of two',
proname => 'time_larger', prorettype => 'time', proargtypes => 'time time',
prosrc => 'time_larger' },
{ oid => '1378', descr => 'smaller of two',
proname => 'time_smaller', prorettype => 'time', proargtypes => 'time time',
prosrc => 'time_smaller' },
{ oid => '1379', descr => 'larger of two',
proname => 'timetz_larger', prorettype => 'timetz',
proargtypes => 'timetz timetz', prosrc => 'timetz_larger' },
{ oid => '1380', descr => 'smaller of two',
proname => 'timetz_smaller', prorettype => 'timetz',
proargtypes => 'timetz timetz', prosrc => 'timetz_smaller' },
{ oid => '1381', descr => 'character length',
proname => 'char_length', prorettype => 'int4', proargtypes => 'text',
prosrc => 'textlen' },
{ oid => '1384', descr => 'extract field from date',
proname => 'date_part', prolang => 'sql', prorettype => 'float8',
proargtypes => 'text date', prosrc => 'see system_functions.sql' },
{ oid => '6199', descr => 'extract field from date',
proname => 'extract', prorettype => 'numeric', proargtypes => 'text date',
prosrc => 'extract_date' },
{ oid => '1385', descr => 'extract field from time',
proname => 'date_part', prorettype => 'float8', proargtypes => 'text time',
prosrc => 'time_part' },
{ oid => '6200', descr => 'extract field from time',
proname => 'extract', prorettype => 'numeric', proargtypes => 'text time',
prosrc => 'extract_time' },
{ oid => '1386',
descr => 'date difference from today preserving months and years',
proname => 'age', prolang => 'sql', provolatile => 's',
prorettype => 'interval', proargtypes => 'timestamptz',
prosrc => 'see system_functions.sql' },
{ oid => '1388',
descr => 'convert timestamp with time zone to time with time zone',
proname => 'timetz', provolatile => 's', prorettype => 'timetz',
proargtypes => 'timestamptz', prosrc => 'timestamptz_timetz' },
{ oid => '1373', descr => 'finite date?',
proname => 'isfinite', prorettype => 'bool', proargtypes => 'date',
prosrc => 'date_finite' },
{ oid => '1389', descr => 'finite timestamp?',
proname => 'isfinite', prorettype => 'bool', proargtypes => 'timestamptz',
prosrc => 'timestamp_finite' },
{ oid => '1390', descr => 'finite interval?',
proname => 'isfinite', prorettype => 'bool', proargtypes => 'interval',
prosrc => 'interval_finite' },
{ oid => '1376', descr => 'factorial',
proname => 'factorial', prorettype => 'numeric', proargtypes => 'int8',
prosrc => 'numeric_fac' },
{ oid => '1394', descr => 'absolute value',
proname => 'abs', prorettype => 'float4', proargtypes => 'float4',
prosrc => 'float4abs' },
{ oid => '1395', descr => 'absolute value',
proname => 'abs', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'float8abs' },
{ oid => '1396', descr => 'absolute value',
proname => 'abs', prorettype => 'int8', proargtypes => 'int8',
prosrc => 'int8abs' },
{ oid => '1397', descr => 'absolute value',
proname => 'abs', prorettype => 'int4', proargtypes => 'int4',
prosrc => 'int4abs' },
{ oid => '1398', descr => 'absolute value',
proname => 'abs', prorettype => 'int2', proargtypes => 'int2',
prosrc => 'int2abs' },
# OIDS 1400 - 1499
{ oid => '1400', descr => 'convert varchar to name',
proname => 'name', proleakproof => 't', prorettype => 'name',
proargtypes => 'varchar', prosrc => 'text_name' },
{ oid => '1401', descr => 'convert name to varchar',
proname => 'varchar', proleakproof => 't', prorettype => 'varchar',
proargtypes => 'name', prosrc => 'name_text' },
{ oid => '1402', descr => 'current schema name',
proname => 'current_schema', provolatile => 's', proparallel => 'u',
prorettype => 'name', proargtypes => '', prosrc => 'current_schema' },
{ oid => '1403', descr => 'current schema search list',
proname => 'current_schemas', provolatile => 's', proparallel => 'u',
prorettype => '_name', proargtypes => 'bool', prosrc => 'current_schemas' },
{ oid => '1404', descr => 'substitute portion of string',
proname => 'overlay', prorettype => 'text',
proargtypes => 'text text int4 int4', prosrc => 'textoverlay' },
{ oid => '1405', descr => 'substitute portion of string',
proname => 'overlay', prorettype => 'text', proargtypes => 'text text int4',
prosrc => 'textoverlay_no_len' },
{ oid => '1406', descr => 'vertically aligned',
proname => 'isvertical', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_vert' },
{ oid => '1407', descr => 'horizontally aligned',
proname => 'ishorizontal', prorettype => 'bool', proargtypes => 'point point',
prosrc => 'point_horiz' },
{ oid => '1408', descr => 'parallel',
proname => 'isparallel', prorettype => 'bool', proargtypes => 'lseg lseg',
prosrc => 'lseg_parallel' },
{ oid => '1409', descr => 'perpendicular',
proname => 'isperp', prorettype => 'bool', proargtypes => 'lseg lseg',
prosrc => 'lseg_perp' },
{ oid => '1410', descr => 'vertical',
proname => 'isvertical', prorettype => 'bool', proargtypes => 'lseg',
prosrc => 'lseg_vertical' },
{ oid => '1411', descr => 'horizontal',
proname => 'ishorizontal', prorettype => 'bool', proargtypes => 'lseg',
prosrc => 'lseg_horizontal' },
{ oid => '1412', descr => 'parallel',
proname => 'isparallel', prorettype => 'bool', proargtypes => 'line line',
prosrc => 'line_parallel' },
{ oid => '1413', descr => 'perpendicular',
proname => 'isperp', prorettype => 'bool', proargtypes => 'line line',
prosrc => 'line_perp' },
{ oid => '1414', descr => 'vertical',
proname => 'isvertical', prorettype => 'bool', proargtypes => 'line',
prosrc => 'line_vertical' },
{ oid => '1415', descr => 'horizontal',
proname => 'ishorizontal', prorettype => 'bool', proargtypes => 'line',
prosrc => 'line_horizontal' },
{ oid => '1416', descr => 'center of',
proname => 'point', prorettype => 'point', proargtypes => 'circle',
prosrc => 'circle_center' },
{ oid => '1419', descr => 'convert interval to time',
proname => 'time', prorettype => 'time', proargtypes => 'interval',
prosrc => 'interval_time' },
{ oid => '1421', descr => 'convert points to box',
proname => 'box', prorettype => 'box', proargtypes => 'point point',
prosrc => 'points_box' },
{ oid => '1422',
proname => 'box_add', prorettype => 'box', proargtypes => 'box point',
prosrc => 'box_add' },
{ oid => '1423',
proname => 'box_sub', prorettype => 'box', proargtypes => 'box point',
prosrc => 'box_sub' },
{ oid => '1424',
proname => 'box_mul', prorettype => 'box', proargtypes => 'box point',
prosrc => 'box_mul' },
{ oid => '1425',
proname => 'box_div', prorettype => 'box', proargtypes => 'box point',
prosrc => 'box_div' },
{ oid => '1426',
proname => 'path_contain_pt', prolang => 'sql', prorettype => 'bool',
proargtypes => 'path point', prosrc => 'see system_functions.sql' },
{ oid => '1428',
proname => 'poly_contain_pt', prorettype => 'bool',
proargtypes => 'polygon point', prosrc => 'poly_contain_pt' },
{ oid => '1429',
proname => 'pt_contained_poly', prorettype => 'bool',
proargtypes => 'point polygon', prosrc => 'pt_contained_poly' },
{ oid => '1430', descr => 'path closed?',
proname => 'isclosed', prorettype => 'bool', proargtypes => 'path',
prosrc => 'path_isclosed' },
{ oid => '1431', descr => 'path open?',
proname => 'isopen', prorettype => 'bool', proargtypes => 'path',
prosrc => 'path_isopen' },
{ oid => '1432',
proname => 'path_npoints', prorettype => 'int4', proargtypes => 'path',
prosrc => 'path_npoints' },
# pclose and popen might better be named close and open, but that crashes initdb.
# - thomas 97/04/20
{ oid => '1433', descr => 'close path',
proname => 'pclose', prorettype => 'path', proargtypes => 'path',
prosrc => 'path_close' },
{ oid => '1434', descr => 'open path',
proname => 'popen', prorettype => 'path', proargtypes => 'path',
prosrc => 'path_open' },
{ oid => '1435',
proname => 'path_add', prorettype => 'path', proargtypes => 'path path',
prosrc => 'path_add' },
{ oid => '1436',
proname => 'path_add_pt', prorettype => 'path', proargtypes => 'path point',
prosrc => 'path_add_pt' },
{ oid => '1437',
proname => 'path_sub_pt', prorettype => 'path', proargtypes => 'path point',
prosrc => 'path_sub_pt' },
{ oid => '1438',
proname => 'path_mul_pt', prorettype => 'path', proargtypes => 'path point',
prosrc => 'path_mul_pt' },
{ oid => '1439',
proname => 'path_div_pt', prorettype => 'path', proargtypes => 'path point',
prosrc => 'path_div_pt' },
{ oid => '1440', descr => 'convert x, y to point',
proname => 'point', prorettype => 'point', proargtypes => 'float8 float8',
prosrc => 'construct_point' },
{ oid => '1441',
proname => 'point_add', prorettype => 'point', proargtypes => 'point point',
prosrc => 'point_add' },
{ oid => '1442',
proname => 'point_sub', prorettype => 'point', proargtypes => 'point point',
prosrc => 'point_sub' },
{ oid => '1443',
proname => 'point_mul', prorettype => 'point', proargtypes => 'point point',
prosrc => 'point_mul' },
{ oid => '1444',
proname => 'point_div', prorettype => 'point', proargtypes => 'point point',
prosrc => 'point_div' },
{ oid => '1445',
proname => 'poly_npoints', prorettype => 'int4', proargtypes => 'polygon',
prosrc => 'poly_npoints' },
{ oid => '1446', descr => 'convert polygon to bounding box',
proname => 'box', prorettype => 'box', proargtypes => 'polygon',
prosrc => 'poly_box' },
{ oid => '1447', descr => 'convert polygon to path',
proname => 'path', prorettype => 'path', proargtypes => 'polygon',
prosrc => 'poly_path' },
{ oid => '1448', descr => 'convert box to polygon',
proname => 'polygon', prorettype => 'polygon', proargtypes => 'box',
prosrc => 'box_poly' },
{ oid => '1449', descr => 'convert path to polygon',
proname => 'polygon', prorettype => 'polygon', proargtypes => 'path',
prosrc => 'path_poly' },
{ oid => '1450', descr => 'I/O',
proname => 'circle_in', prorettype => 'circle', proargtypes => 'cstring',
prosrc => 'circle_in' },
{ oid => '1451', descr => 'I/O',
proname => 'circle_out', prorettype => 'cstring', proargtypes => 'circle',
prosrc => 'circle_out' },
{ oid => '1452',
proname => 'circle_same', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_same' },
{ oid => '1453',
proname => 'circle_contain', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_contain' },
{ oid => '1454',
proname => 'circle_left', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_left' },
{ oid => '1455',
proname => 'circle_overleft', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_overleft' },
{ oid => '1456',
proname => 'circle_overright', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_overright' },
{ oid => '1457',
proname => 'circle_right', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_right' },
{ oid => '1458',
proname => 'circle_contained', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_contained' },
{ oid => '1459',
proname => 'circle_overlap', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_overlap' },
{ oid => '1460',
proname => 'circle_below', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_below' },
{ oid => '1461',
proname => 'circle_above', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_above' },
{ oid => '1462',
proname => 'circle_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_eq' },
{ oid => '1463',
proname => 'circle_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_ne' },
{ oid => '1464',
proname => 'circle_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_lt' },
{ oid => '1465',
proname => 'circle_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_gt' },
{ oid => '1466',
proname => 'circle_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_le' },
{ oid => '1467',
proname => 'circle_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_ge' },
{ oid => '1468', descr => 'area of circle',
proname => 'area', prorettype => 'float8', proargtypes => 'circle',
prosrc => 'circle_area' },
{ oid => '1469', descr => 'diameter of circle',
proname => 'diameter', prorettype => 'float8', proargtypes => 'circle',
prosrc => 'circle_diameter' },
{ oid => '1470', descr => 'radius of circle',
proname => 'radius', prorettype => 'float8', proargtypes => 'circle',
prosrc => 'circle_radius' },
{ oid => '1471',
proname => 'circle_distance', prorettype => 'float8',
proargtypes => 'circle circle', prosrc => 'circle_distance' },
{ oid => '1472',
proname => 'circle_center', prorettype => 'point', proargtypes => 'circle',
prosrc => 'circle_center' },
{ oid => '1473', descr => 'convert point and radius to circle',
proname => 'circle', prorettype => 'circle', proargtypes => 'point float8',
prosrc => 'cr_circle' },
{ oid => '1474', descr => 'convert polygon to circle',
proname => 'circle', prorettype => 'circle', proargtypes => 'polygon',
prosrc => 'poly_circle' },
{ oid => '1475', descr => 'convert vertex count and circle to polygon',
proname => 'polygon', prorettype => 'polygon', proargtypes => 'int4 circle',
prosrc => 'circle_poly' },
{ oid => '1476',
proname => 'dist_pc', prorettype => 'float8', proargtypes => 'point circle',
prosrc => 'dist_pc' },
{ oid => '1477',
proname => 'circle_contain_pt', prorettype => 'bool',
proargtypes => 'circle point', prosrc => 'circle_contain_pt' },
{ oid => '1478',
proname => 'pt_contained_circle', prorettype => 'bool',
proargtypes => 'point circle', prosrc => 'pt_contained_circle' },
{ oid => '4091', descr => 'convert point to empty box',
proname => 'box', prorettype => 'box', proargtypes => 'point',
prosrc => 'point_box' },
{ oid => '1479', descr => 'convert box to circle',
proname => 'circle', prorettype => 'circle', proargtypes => 'box',
prosrc => 'box_circle' },
{ oid => '1480', descr => 'convert circle to box',
proname => 'box', prorettype => 'box', proargtypes => 'circle',
prosrc => 'circle_box' },
{ oid => '1482',
proname => 'lseg_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'lseg lseg', prosrc => 'lseg_ne' },
{ oid => '1483',
proname => 'lseg_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'lseg lseg', prosrc => 'lseg_lt' },
{ oid => '1484',
proname => 'lseg_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'lseg lseg', prosrc => 'lseg_le' },
{ oid => '1485',
proname => 'lseg_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'lseg lseg', prosrc => 'lseg_gt' },
{ oid => '1486',
proname => 'lseg_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'lseg lseg', prosrc => 'lseg_ge' },
{ oid => '1487',
proname => 'lseg_length', prorettype => 'float8', proargtypes => 'lseg',
prosrc => 'lseg_length' },
{ oid => '1488',
proname => 'close_ls', prorettype => 'point', proargtypes => 'line lseg',
prosrc => 'close_ls' },
{ oid => '1489',
proname => 'close_lseg', prorettype => 'point', proargtypes => 'lseg lseg',
prosrc => 'close_lseg' },
{ oid => '1490', descr => 'I/O',
proname => 'line_in', prorettype => 'line', proargtypes => 'cstring',
prosrc => 'line_in' },
{ oid => '1491', descr => 'I/O',
proname => 'line_out', prorettype => 'cstring', proargtypes => 'line',
prosrc => 'line_out' },
{ oid => '1492',
proname => 'line_eq', prorettype => 'bool', proargtypes => 'line line',
prosrc => 'line_eq' },
{ oid => '1493', descr => 'construct line from points',
proname => 'line', prorettype => 'line', proargtypes => 'point point',
prosrc => 'line_construct_pp' },
{ oid => '1494',
proname => 'line_interpt', prorettype => 'point', proargtypes => 'line line',
prosrc => 'line_interpt' },
{ oid => '1495',
proname => 'line_intersect', prorettype => 'bool', proargtypes => 'line line',
prosrc => 'line_intersect' },
{ oid => '1496',
proname => 'line_parallel', prorettype => 'bool', proargtypes => 'line line',
prosrc => 'line_parallel' },
{ oid => '1497',
proname => 'line_perp', prorettype => 'bool', proargtypes => 'line line',
prosrc => 'line_perp' },
{ oid => '1498',
proname => 'line_vertical', prorettype => 'bool', proargtypes => 'line',
prosrc => 'line_vertical' },
{ oid => '1499',
proname => 'line_horizontal', prorettype => 'bool', proargtypes => 'line',
prosrc => 'line_horizontal' },
# OIDS 1500 - 1599
{ oid => '1530', descr => 'distance between endpoints',
proname => 'length', prorettype => 'float8', proargtypes => 'lseg',
prosrc => 'lseg_length' },
{ oid => '1531', descr => 'sum of path segments',
proname => 'length', prorettype => 'float8', proargtypes => 'path',
prosrc => 'path_length' },
{ oid => '1532', descr => 'center of',
proname => 'point', prorettype => 'point', proargtypes => 'lseg',
prosrc => 'lseg_center' },
{ oid => '1534', descr => 'center of',
proname => 'point', prorettype => 'point', proargtypes => 'box',
prosrc => 'box_center' },
{ oid => '1540', descr => 'center of',
proname => 'point', prorettype => 'point', proargtypes => 'polygon',
prosrc => 'poly_center' },
{ oid => '1541', descr => 'diagonal of',
proname => 'lseg', prorettype => 'lseg', proargtypes => 'box',
prosrc => 'box_diagonal' },
{ oid => '1542', descr => 'center of',
proname => 'center', prorettype => 'point', proargtypes => 'box',
prosrc => 'box_center' },
{ oid => '1543', descr => 'center of',
proname => 'center', prorettype => 'point', proargtypes => 'circle',
prosrc => 'circle_center' },
{ oid => '1544', descr => 'convert circle to 12-vertex polygon',
proname => 'polygon', prolang => 'sql', prorettype => 'polygon',
proargtypes => 'circle', prosrc => 'see system_functions.sql' },
{ oid => '1545', descr => 'number of points',
proname => 'npoints', prorettype => 'int4', proargtypes => 'path',
prosrc => 'path_npoints' },
{ oid => '1556', descr => 'number of points',
proname => 'npoints', prorettype => 'int4', proargtypes => 'polygon',
prosrc => 'poly_npoints' },
{ oid => '1564', descr => 'I/O',
proname => 'bit_in', prorettype => 'bit', proargtypes => 'cstring oid int4',
prosrc => 'bit_in' },
{ oid => '1565', descr => 'I/O',
proname => 'bit_out', prorettype => 'cstring', proargtypes => 'bit',
prosrc => 'bit_out' },
{ oid => '2919', descr => 'I/O typmod',
proname => 'bittypmodin', prorettype => 'int4', proargtypes => '_cstring',
prosrc => 'bittypmodin' },
{ oid => '2920', descr => 'I/O typmod',
proname => 'bittypmodout', prorettype => 'cstring', proargtypes => 'int4',
prosrc => 'bittypmodout' },
{ oid => '1569', descr => 'matches LIKE expression',
proname => 'like', prosupport => 'textlike_support', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'textlike' },
{ oid => '1570', descr => 'does not match LIKE expression',
proname => 'notlike', prorettype => 'bool', proargtypes => 'text text',
prosrc => 'textnlike' },
{ oid => '1571', descr => 'matches LIKE expression',
proname => 'like', prosupport => 'textlike_support', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'namelike' },
{ oid => '1572', descr => 'does not match LIKE expression',
proname => 'notlike', prorettype => 'bool', proargtypes => 'name text',
prosrc => 'namenlike' },
# SEQUENCE functions
{ oid => '1574', descr => 'sequence next value',
proname => 'nextval', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => 'regclass', prosrc => 'nextval_oid' },
{ oid => '1575', descr => 'sequence current value',
proname => 'currval', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => 'regclass', prosrc => 'currval_oid' },
{ oid => '1576', descr => 'set sequence value',
proname => 'setval', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => 'regclass int8',
prosrc => 'setval_oid' },
{ oid => '1765', descr => 'set sequence value and is_called status',
proname => 'setval', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => 'regclass int8 bool',
prosrc => 'setval3_oid' },
{ oid => '3078',
descr => 'sequence parameters, for use by information schema',
proname => 'pg_sequence_parameters', provolatile => 's',
prorettype => 'record', proargtypes => 'oid',
proallargtypes => '{oid,int8,int8,int8,int8,bool,int8,oid}',
proargmodes => '{i,o,o,o,o,o,o,o}',
proargnames => '{sequence_oid,start_value,minimum_value,maximum_value,increment,cycle_option,cache_size,data_type}',
prosrc => 'pg_sequence_parameters' },
{ oid => '4032', descr => 'sequence last value',
proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => 'regclass',
prosrc => 'pg_sequence_last_value' },
{ oid => '275', descr => 'return the next oid for a system table',
proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
prorettype => 'oid', proargtypes => 'regclass name regclass',
prosrc => 'pg_nextoid' },
{ oid => '6241', descr => 'stop making pinned objects during initdb',
proname => 'pg_stop_making_pinned_objects', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => '',
prosrc => 'pg_stop_making_pinned_objects' },
{ oid => '1579', descr => 'I/O',
proname => 'varbit_in', prorettype => 'varbit',
proargtypes => 'cstring oid int4', prosrc => 'varbit_in' },
{ oid => '1580', descr => 'I/O',
proname => 'varbit_out', prorettype => 'cstring', proargtypes => 'varbit',
prosrc => 'varbit_out' },
{ oid => '2902', descr => 'I/O typmod',
proname => 'varbittypmodin', prorettype => 'int4', proargtypes => '_cstring',
prosrc => 'varbittypmodin' },
{ oid => '2921', descr => 'I/O typmod',
proname => 'varbittypmodout', prorettype => 'cstring', proargtypes => 'int4',
prosrc => 'varbittypmodout' },
{ oid => '1581',
proname => 'biteq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bit bit', prosrc => 'biteq' },
{ oid => '1582',
proname => 'bitne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bit bit', prosrc => 'bitne' },
{ oid => '1592',
proname => 'bitge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bit bit', prosrc => 'bitge' },
{ oid => '1593',
proname => 'bitgt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bit bit', prosrc => 'bitgt' },
{ oid => '1594',
proname => 'bitle', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bit bit', prosrc => 'bitle' },
{ oid => '1595',
proname => 'bitlt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bit bit', prosrc => 'bitlt' },
{ oid => '1596', descr => 'less-equal-greater',
proname => 'bitcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'bit bit', prosrc => 'bitcmp' },
{ oid => '1598', descr => 'random value',
proname => 'random', provolatile => 'v', proparallel => 'r',
prorettype => 'float8', proargtypes => '', prosrc => 'drandom' },
{ oid => '6212', descr => 'random value from normal distribution',
proname => 'random_normal', provolatile => 'v', proparallel => 'r',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'drandom_normal' },
{ oid => '1599', descr => 'set random seed',
proname => 'setseed', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => 'float8', prosrc => 'setseed' },
# OIDS 1600 - 1699
{ oid => '1600', descr => 'arcsine',
proname => 'asin', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dasin' },
{ oid => '1601', descr => 'arccosine',
proname => 'acos', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dacos' },
{ oid => '1602', descr => 'arctangent',
proname => 'atan', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'datan' },
{ oid => '1603', descr => 'arctangent, two arguments',
proname => 'atan2', prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'datan2' },
{ oid => '1604', descr => 'sine',
proname => 'sin', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dsin' },
{ oid => '1605', descr => 'cosine',
proname => 'cos', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dcos' },
{ oid => '1606', descr => 'tangent',
proname => 'tan', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dtan' },
{ oid => '1607', descr => 'cotangent',
proname => 'cot', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dcot' },
{ oid => '2731', descr => 'arcsine, degrees',
proname => 'asind', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dasind' },
{ oid => '2732', descr => 'arccosine, degrees',
proname => 'acosd', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dacosd' },
{ oid => '2733', descr => 'arctangent, degrees',
proname => 'atand', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'datand' },
{ oid => '2734', descr => 'arctangent, two arguments, degrees',
proname => 'atan2d', prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'datan2d' },
{ oid => '2735', descr => 'sine, degrees',
proname => 'sind', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dsind' },
{ oid => '2736', descr => 'cosine, degrees',
proname => 'cosd', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dcosd' },
{ oid => '2737', descr => 'tangent, degrees',
proname => 'tand', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dtand' },
{ oid => '2738', descr => 'cotangent, degrees',
proname => 'cotd', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dcotd' },
{ oid => '1608', descr => 'radians to degrees',
proname => 'degrees', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'degrees' },
{ oid => '1609', descr => 'degrees to radians',
proname => 'radians', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'radians' },
{ oid => '1610', descr => 'PI',
proname => 'pi', prorettype => 'float8', proargtypes => '', prosrc => 'dpi' },
{ oid => '2462', descr => 'hyperbolic sine',
proname => 'sinh', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dsinh' },
{ oid => '2463', descr => 'hyperbolic cosine',
proname => 'cosh', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dcosh' },
{ oid => '2464', descr => 'hyperbolic tangent',
proname => 'tanh', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dtanh' },
{ oid => '2465', descr => 'inverse hyperbolic sine',
proname => 'asinh', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dasinh' },
{ oid => '2466', descr => 'inverse hyperbolic cosine',
proname => 'acosh', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'dacosh' },
{ oid => '2467', descr => 'inverse hyperbolic tangent',
proname => 'atanh', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'datanh' },
{ oid => '6219', descr => 'error function',
proname => 'erf', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'derf' },
{ oid => '6220', descr => 'complementary error function',
proname => 'erfc', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'derfc' },
{ oid => '1618',
proname => 'interval_mul', prorettype => 'interval',
proargtypes => 'interval float8', prosrc => 'interval_mul' },
{ oid => '1620', descr => 'convert first char to int4',
proname => 'ascii', prorettype => 'int4', proargtypes => 'text',
prosrc => 'ascii' },
{ oid => '1621', descr => 'convert int4 to char',
proname => 'chr', prorettype => 'text', proargtypes => 'int4',
prosrc => 'chr' },
{ oid => '1622', descr => 'replicate string n times',
proname => 'repeat', prorettype => 'text', proargtypes => 'text int4',
prosrc => 'repeat' },
{ oid => '1623', descr => 'convert SQL regexp pattern to POSIX style',
proname => 'similar_escape', proisstrict => 'f', prorettype => 'text',
proargtypes => 'text text', prosrc => 'similar_escape' },
{ oid => '1986', descr => 'convert SQL regexp pattern to POSIX style',
proname => 'similar_to_escape', prorettype => 'text',
proargtypes => 'text text', prosrc => 'similar_to_escape_2' },
{ oid => '1987', descr => 'convert SQL regexp pattern to POSIX style',
proname => 'similar_to_escape', prorettype => 'text', proargtypes => 'text',
prosrc => 'similar_to_escape_1' },
{ oid => '1624',
proname => 'mul_d_interval', prorettype => 'interval',
proargtypes => 'float8 interval', prosrc => 'mul_d_interval' },
{ oid => '1631',
proname => 'bpcharlike', prosupport => 'textlike_support',
prorettype => 'bool', proargtypes => 'bpchar text', prosrc => 'textlike' },
{ oid => '1632',
proname => 'bpcharnlike', prorettype => 'bool', proargtypes => 'bpchar text',
prosrc => 'textnlike' },
{ oid => '1633',
proname => 'texticlike', prosupport => 'texticlike_support',
prorettype => 'bool', proargtypes => 'text text', prosrc => 'texticlike' },
{ oid => '1025', descr => 'planner support for texticlike',
proname => 'texticlike_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'texticlike_support' },
{ oid => '1634',
proname => 'texticnlike', prorettype => 'bool', proargtypes => 'text text',
prosrc => 'texticnlike' },
{ oid => '1635',
proname => 'nameiclike', prosupport => 'texticlike_support',
prorettype => 'bool', proargtypes => 'name text', prosrc => 'nameiclike' },
{ oid => '1636',
proname => 'nameicnlike', prorettype => 'bool', proargtypes => 'name text',
prosrc => 'nameicnlike' },
{ oid => '1637', descr => 'convert LIKE pattern to use backslash escapes',
proname => 'like_escape', prorettype => 'text', proargtypes => 'text text',
prosrc => 'like_escape' },
{ oid => '1656',
proname => 'bpcharicregexeq', prosupport => 'texticregexeq_support',
prorettype => 'bool', proargtypes => 'bpchar text',
prosrc => 'texticregexeq' },
{ oid => '1657',
proname => 'bpcharicregexne', prorettype => 'bool',
proargtypes => 'bpchar text', prosrc => 'texticregexne' },
{ oid => '1658',
proname => 'bpcharregexeq', prosupport => 'textregexeq_support',
prorettype => 'bool', proargtypes => 'bpchar text', prosrc => 'textregexeq' },
{ oid => '1659',
proname => 'bpcharregexne', prorettype => 'bool',
proargtypes => 'bpchar text', prosrc => 'textregexne' },
{ oid => '1660',
proname => 'bpchariclike', prosupport => 'texticlike_support',
prorettype => 'bool', proargtypes => 'bpchar text', prosrc => 'texticlike' },
{ oid => '1661',
proname => 'bpcharicnlike', prorettype => 'bool',
proargtypes => 'bpchar text', prosrc => 'texticnlike' },
# Oracle Compatibility Related Functions - By Edmund Mergl <E.Mergl@bawue.de>
{ oid => '868', descr => 'position of substring',
proname => 'strpos', prorettype => 'int4', proargtypes => 'text text',
prosrc => 'textpos' },
{ oid => '870', descr => 'lowercase',
proname => 'lower', prorettype => 'text', proargtypes => 'text',
prosrc => 'lower' },
{ oid => '871', descr => 'uppercase',
proname => 'upper', prorettype => 'text', proargtypes => 'text',
prosrc => 'upper' },
{ oid => '872', descr => 'capitalize each word',
proname => 'initcap', prorettype => 'text', proargtypes => 'text',
prosrc => 'initcap' },
{ oid => '873', descr => 'left-pad string to length',
proname => 'lpad', prorettype => 'text', proargtypes => 'text int4 text',
prosrc => 'lpad' },
{ oid => '874', descr => 'right-pad string to length',
proname => 'rpad', prorettype => 'text', proargtypes => 'text int4 text',
prosrc => 'rpad' },
{ oid => '875', descr => 'trim selected characters from left end of string',
proname => 'ltrim', prorettype => 'text', proargtypes => 'text text',
prosrc => 'ltrim' },
{ oid => '876', descr => 'trim selected characters from right end of string',
proname => 'rtrim', prorettype => 'text', proargtypes => 'text text',
prosrc => 'rtrim' },
{ oid => '877', descr => 'extract portion of string',
proname => 'substr', prorettype => 'text', proargtypes => 'text int4 int4',
prosrc => 'text_substr' },
{ oid => '878', descr => 'map a set of characters appearing in string',
proname => 'translate', prorettype => 'text', proargtypes => 'text text text',
prosrc => 'translate' },
{ oid => '879', descr => 'left-pad string to length',
proname => 'lpad', prolang => 'sql', prorettype => 'text',
proargtypes => 'text int4', prosrc => 'see system_functions.sql' },
{ oid => '880', descr => 'right-pad string to length',
proname => 'rpad', prolang => 'sql', prorettype => 'text',
proargtypes => 'text int4', prosrc => 'see system_functions.sql' },
{ oid => '881', descr => 'trim spaces from left end of string',
proname => 'ltrim', prorettype => 'text', proargtypes => 'text',
prosrc => 'ltrim1' },
{ oid => '882', descr => 'trim spaces from right end of string',
proname => 'rtrim', prorettype => 'text', proargtypes => 'text',
prosrc => 'rtrim1' },
{ oid => '883', descr => 'extract portion of string',
proname => 'substr', prorettype => 'text', proargtypes => 'text int4',
prosrc => 'text_substr_no_len' },
{ oid => '884', descr => 'trim selected characters from both ends of string',
proname => 'btrim', prorettype => 'text', proargtypes => 'text text',
prosrc => 'btrim' },
{ oid => '885', descr => 'trim spaces from both ends of string',
proname => 'btrim', prorettype => 'text', proargtypes => 'text',
prosrc => 'btrim1' },
{ oid => '936', descr => 'extract portion of string',
proname => 'substring', prorettype => 'text', proargtypes => 'text int4 int4',
prosrc => 'text_substr' },
{ oid => '937', descr => 'extract portion of string',
proname => 'substring', prorettype => 'text', proargtypes => 'text int4',
prosrc => 'text_substr_no_len' },
{ oid => '2087',
descr => 'replace all occurrences in string of old_substr with new_substr',
proname => 'replace', prorettype => 'text', proargtypes => 'text text text',
prosrc => 'replace_text' },
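# regular expression matching and replacement functions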
{ oid => '2284', descr => 'replace text using regexp',
proname => 'regexp_replace', prorettype => 'text',
proargtypes => 'text text text', prosrc => 'textregexreplace_noopt' },
{ oid => '2285', descr => 'replace text using regexp',
proname => 'regexp_replace', prorettype => 'text',
proargtypes => 'text text text text', prosrc => 'textregexreplace' },
{ oid => '6251', descr => 'replace text using regexp',
proname => 'regexp_replace', prorettype => 'text',
proargtypes => 'text text text int4 int4 text',
prosrc => 'textregexreplace_extended' },
{ oid => '6252', descr => 'replace text using regexp',
proname => 'regexp_replace', prorettype => 'text',
proargtypes => 'text text text int4 int4',
prosrc => 'textregexreplace_extended_no_flags' },
{ oid => '6253', descr => 'replace text using regexp',
proname => 'regexp_replace', prorettype => 'text',
proargtypes => 'text text text int4',
prosrc => 'textregexreplace_extended_no_n' },
{ oid => '3396', descr => 'find first match for regexp',
proname => 'regexp_match', prorettype => '_text', proargtypes => 'text text',
prosrc => 'regexp_match_no_flags' },
{ oid => '3397', descr => 'find first match for regexp',
proname => 'regexp_match', prorettype => '_text',
proargtypes => 'text text text', prosrc => 'regexp_match' },
{ oid => '2763', descr => 'find match(es) for regexp',
proname => 'regexp_matches', prorows => '1', proretset => 't',
prorettype => '_text', proargtypes => 'text text',
prosrc => 'regexp_matches_no_flags' },
{ oid => '2764', descr => 'find match(es) for regexp',
proname => 'regexp_matches', prorows => '10', proretset => 't',
prorettype => '_text', proargtypes => 'text text text',
prosrc => 'regexp_matches' },
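# regexp_count/regexp_instr/regexp_like/regexp_substr; the *_no_* prosrc variants
# handle call forms with trailing optional arguments omitted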
{ oid => '6254', descr => 'count regexp matches',
proname => 'regexp_count', prorettype => 'int4', proargtypes => 'text text',
prosrc => 'regexp_count_no_start' },
{ oid => '6255', descr => 'count regexp matches',
proname => 'regexp_count', prorettype => 'int4',
proargtypes => 'text text int4', prosrc => 'regexp_count_no_flags' },
{ oid => '6256', descr => 'count regexp matches',
proname => 'regexp_count', prorettype => 'int4',
proargtypes => 'text text int4 text', prosrc => 'regexp_count' },
{ oid => '6257', descr => 'position of regexp match',
proname => 'regexp_instr', prorettype => 'int4', proargtypes => 'text text',
prosrc => 'regexp_instr_no_start' },
{ oid => '6258', descr => 'position of regexp match',
proname => 'regexp_instr', prorettype => 'int4',
proargtypes => 'text text int4', prosrc => 'regexp_instr_no_n' },
{ oid => '6259', descr => 'position of regexp match',
proname => 'regexp_instr', prorettype => 'int4',
proargtypes => 'text text int4 int4', prosrc => 'regexp_instr_no_endoption' },
{ oid => '6260', descr => 'position of regexp match',
proname => 'regexp_instr', prorettype => 'int4',
proargtypes => 'text text int4 int4 int4',
prosrc => 'regexp_instr_no_flags' },
{ oid => '6261', descr => 'position of regexp match',
proname => 'regexp_instr', prorettype => 'int4',
proargtypes => 'text text int4 int4 int4 text',
prosrc => 'regexp_instr_no_subexpr' },
{ oid => '6262', descr => 'position of regexp match',
proname => 'regexp_instr', prorettype => 'int4',
proargtypes => 'text text int4 int4 int4 text int4',
prosrc => 'regexp_instr' },
{ oid => '6263', descr => 'test for regexp match',
proname => 'regexp_like', prorettype => 'bool', proargtypes => 'text text',
prosrc => 'regexp_like_no_flags' },
{ oid => '6264', descr => 'test for regexp match',
proname => 'regexp_like', prorettype => 'bool',
proargtypes => 'text text text', prosrc => 'regexp_like' },
{ oid => '6265', descr => 'extract substring that matches regexp',
proname => 'regexp_substr', prorettype => 'text', proargtypes => 'text text',
prosrc => 'regexp_substr_no_start' },
{ oid => '6266', descr => 'extract substring that matches regexp',
proname => 'regexp_substr', prorettype => 'text',
proargtypes => 'text text int4', prosrc => 'regexp_substr_no_n' },
{ oid => '6267', descr => 'extract substring that matches regexp',
proname => 'regexp_substr', prorettype => 'text',
proargtypes => 'text text int4 int4', prosrc => 'regexp_substr_no_flags' },
{ oid => '6268', descr => 'extract substring that matches regexp',
proname => 'regexp_substr', prorettype => 'text',
proargtypes => 'text text int4 int4 text',
prosrc => 'regexp_substr_no_subexpr' },
{ oid => '6269', descr => 'extract substring that matches regexp',
proname => 'regexp_substr', prorettype => 'text',
proargtypes => 'text text int4 int4 text int4', prosrc => 'regexp_substr' },
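# string splitting: split_part by field separator, regexp_split_to_table/array by pattern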
{ oid => '2088', descr => 'split string by field_sep and return field_num',
proname => 'split_part', prorettype => 'text',
proargtypes => 'text text int4', prosrc => 'split_part' },
{ oid => '2765', descr => 'split string by pattern',
proname => 'regexp_split_to_table', prorows => '1000', proretset => 't',
prorettype => 'text', proargtypes => 'text text',
prosrc => 'regexp_split_to_table_no_flags' },
{ oid => '2766', descr => 'split string by pattern',
proname => 'regexp_split_to_table', prorows => '1000', proretset => 't',
prorettype => 'text', proargtypes => 'text text text',
prosrc => 'regexp_split_to_table' },
{ oid => '2767', descr => 'split string by pattern',
proname => 'regexp_split_to_array', prorettype => '_text',
proargtypes => 'text text', prosrc => 'regexp_split_to_array_no_flags' },
{ oid => '2768', descr => 'split string by pattern',
proname => 'regexp_split_to_array', prorettype => '_text',
proargtypes => 'text text text', prosrc => 'regexp_split_to_array' },
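# integer-to-text radix conversions (binary, octal, hexadecimal)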
{ oid => '9030', descr => 'convert int4 number to binary',
proname => 'to_bin', prorettype => 'text', proargtypes => 'int4',
prosrc => 'to_bin32' },
{ oid => '9031', descr => 'convert int8 number to binary',
proname => 'to_bin', prorettype => 'text', proargtypes => 'int8',
prosrc => 'to_bin64' },
{ oid => '9032', descr => 'convert int4 number to oct',
proname => 'to_oct', prorettype => 'text', proargtypes => 'int4',
prosrc => 'to_oct32' },
{ oid => '9033', descr => 'convert int8 number to oct',
proname => 'to_oct', prorettype => 'text', proargtypes => 'int8',
prosrc => 'to_oct64' },
{ oid => '2089', descr => 'convert int4 number to hex',
proname => 'to_hex', prorettype => 'text', proargtypes => 'int4',
prosrc => 'to_hex32' },
{ oid => '2090', descr => 'convert int8 number to hex',
proname => 'to_hex', prorettype => 'text', proargtypes => 'int8',
prosrc => 'to_hex64' },
# for character set encoding support
# return database encoding name
{ oid => '1039', descr => 'encoding name of current database',
proname => 'getdatabaseencoding', provolatile => 's', prorettype => 'name',
proargtypes => '', prosrc => 'getdatabaseencoding' },
# return client encoding name i.e. session encoding
{ oid => '810', descr => 'encoding name of current client',
proname => 'pg_client_encoding', provolatile => 's', prorettype => 'name',
proargtypes => '', prosrc => 'pg_client_encoding' },
{ oid => '1713', descr => 'length of string in specified encoding',
proname => 'length', provolatile => 's', prorettype => 'int4',
proargtypes => 'bytea name', prosrc => 'length_in_encoding' },
{ oid => '1714',
descr => 'convert string with specified source encoding name',
proname => 'convert_from', provolatile => 's', prorettype => 'text',
proargtypes => 'bytea name', prosrc => 'pg_convert_from' },
{ oid => '1717',
descr => 'convert string with specified destination encoding name',
proname => 'convert_to', provolatile => 's', prorettype => 'bytea',
proargtypes => 'text name', prosrc => 'pg_convert_to' },
{ oid => '1813', descr => 'convert string with specified encoding names',
proname => 'convert', provolatile => 's', prorettype => 'bytea',
proargtypes => 'bytea name name', prosrc => 'pg_convert' },
{ oid => '1264', descr => 'convert encoding name to encoding id',
proname => 'pg_char_to_encoding', provolatile => 's', prorettype => 'int4',
proargtypes => 'name', prosrc => 'PG_char_to_encoding' },
{ oid => '1597', descr => 'convert encoding id to encoding name',
proname => 'pg_encoding_to_char', provolatile => 's', prorettype => 'name',
proargtypes => 'int4', prosrc => 'PG_encoding_to_char' },
{ oid => '2319',
descr => 'maximum octet length of a character in given encoding',
proname => 'pg_encoding_max_length', prorettype => 'int4',
proargtypes => 'int4', prosrc => 'pg_encoding_max_length_sql' },
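# comparison functions for oid (greater-than, greater-or-equal)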
{ oid => '1638',
proname => 'oidgt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oid oid', prosrc => 'oidgt' },
{ oid => '1639',
proname => 'oidge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'oid oid', prosrc => 'oidge' },
# System-view support functions
{ oid => '1573', descr => 'source text of a rule',
proname => 'pg_get_ruledef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid', prosrc => 'pg_get_ruledef' },
{ oid => '1640', descr => 'select statement of a view',
proname => 'pg_get_viewdef', provolatile => 's', proparallel => 'r',
prorettype => 'text', proargtypes => 'text',
prosrc => 'pg_get_viewdef_name' },
{ oid => '1641', descr => 'select statement of a view',
proname => 'pg_get_viewdef', provolatile => 's', proparallel => 'r',
prorettype => 'text', proargtypes => 'oid', prosrc => 'pg_get_viewdef' },
{ oid => '1642', descr => 'role name by OID (with fallback)',
proname => 'pg_get_userbyid', provolatile => 's', prorettype => 'name',
proargtypes => 'oid', prosrc => 'pg_get_userbyid' },
{ oid => '1643', descr => 'index description',
proname => 'pg_get_indexdef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid', prosrc => 'pg_get_indexdef' },
{ oid => '3415', descr => 'extended statistics object description',
proname => 'pg_get_statisticsobjdef', provolatile => 's',
prorettype => 'text', proargtypes => 'oid',
prosrc => 'pg_get_statisticsobjdef' },
{ oid => '6174', descr => 'extended statistics columns',
proname => 'pg_get_statisticsobjdef_columns', provolatile => 's',
prorettype => 'text', proargtypes => 'oid',
prosrc => 'pg_get_statisticsobjdef_columns' },
{ oid => '6173', descr => 'extended statistics expressions',
proname => 'pg_get_statisticsobjdef_expressions', provolatile => 's',
prorettype => '_text', proargtypes => 'oid',
prosrc => 'pg_get_statisticsobjdef_expressions' },
{ oid => '3352', descr => 'partition key description',
proname => 'pg_get_partkeydef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid', prosrc => 'pg_get_partkeydef' },
{ oid => '3408', descr => 'partition constraint description',
proname => 'pg_get_partition_constraintdef', provolatile => 's',
prorettype => 'text', proargtypes => 'oid',
prosrc => 'pg_get_partition_constraintdef' },
{ oid => '1662', descr => 'trigger description',
proname => 'pg_get_triggerdef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid', prosrc => 'pg_get_triggerdef' },
{ oid => '1387', descr => 'constraint description',
proname => 'pg_get_constraintdef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid', prosrc => 'pg_get_constraintdef' },
{ oid => '1716', descr => 'deparse an encoded expression',
proname => 'pg_get_expr', provolatile => 's', prorettype => 'text',
proargtypes => 'pg_node_tree oid', prosrc => 'pg_get_expr' },
{ oid => '1665', descr => 'name of sequence for a serial column',
proname => 'pg_get_serial_sequence', provolatile => 's', prorettype => 'text',
proargtypes => 'text text', prosrc => 'pg_get_serial_sequence' },
{ oid => '2098', descr => 'definition of a function',
proname => 'pg_get_functiondef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid', prosrc => 'pg_get_functiondef' },
{ oid => '2162', descr => 'argument list of a function',
proname => 'pg_get_function_arguments', provolatile => 's',
prorettype => 'text', proargtypes => 'oid',
prosrc => 'pg_get_function_arguments' },
{ oid => '2232', descr => 'identity argument list of a function',
proname => 'pg_get_function_identity_arguments', provolatile => 's',
prorettype => 'text', proargtypes => 'oid',
prosrc => 'pg_get_function_identity_arguments' },
{ oid => '2165', descr => 'result type of a function',
proname => 'pg_get_function_result', provolatile => 's', prorettype => 'text',
proargtypes => 'oid', prosrc => 'pg_get_function_result' },
{ oid => '3808', descr => 'function argument default',
proname => 'pg_get_function_arg_default', provolatile => 's',
prorettype => 'text', proargtypes => 'oid int4',
prosrc => 'pg_get_function_arg_default' },
{ oid => '6197', descr => 'function SQL body',
proname => 'pg_get_function_sqlbody', provolatile => 's',
prorettype => 'text', proargtypes => 'oid',
prosrc => 'pg_get_function_sqlbody' },
{ oid => '1686', descr => 'list of SQL keywords',
proname => 'pg_get_keywords', procost => '10', prorows => '500',
proretset => 't', provolatile => 's', prorettype => 'record',
proargtypes => '', proallargtypes => '{text,char,bool,text,text}',
proargmodes => '{o,o,o,o,o}',
proargnames => '{word,catcode,barelabel,catdesc,baredesc}',
prosrc => 'pg_get_keywords' },
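# Illustrative usage (not part of the catalog data): the entry above exposes
# the SQL-callable set-returning function pg_get_keywords(), whose output
# columns are declared by proargnames. For example, listing fully reserved
# keywords might look like:
#   SELECT word, catdesc FROM pg_get_keywords() WHERE catcode = 'R';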
{ oid => '6159', descr => 'list of catalog foreign key relationships',
proname => 'pg_get_catalog_foreign_keys', procost => '10', prorows => '250',
proretset => 't', provolatile => 's', prorettype => 'record',
proargtypes => '',
proallargtypes => '{regclass,_text,regclass,_text,bool,bool}',
proargmodes => '{o,o,o,o,o,o}',
proargnames => '{fktable,fkcols,pktable,pkcols,is_array,is_opt}',
prosrc => 'pg_get_catalog_foreign_keys' },
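# Illustrative usage (not part of the catalog data): on servers that provide
# pg_get_catalog_foreign_keys(), the declared output columns can be queried
# directly, e.g. to list catalog columns that reference pg_proc:
#   SELECT fktable, fkcols FROM pg_get_catalog_foreign_keys()
#   WHERE pktable = 'pg_proc'::regclass;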
{ oid => '2289', descr => 'convert generic options array to name/value table',
proname => 'pg_options_to_table', prorows => '3', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => '_text',
proallargtypes => '{_text,text,text}', proargmodes => '{i,o,o}',
proargnames => '{options_array,option_name,option_value}',
prosrc => 'pg_options_to_table' },
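# Illustrative usage (not part of the catalog data): pg_options_to_table()
# expands a text[] of "name=value" options (such as pg_class.reloptions)
# into name/value rows; the option values below are arbitrary examples:
#   SELECT option_name, option_value
#     FROM pg_options_to_table(ARRAY['fillfactor=70', 'autovacuum_enabled=false']);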
{ oid => '1619', descr => 'type of the argument',
proname => 'pg_typeof', proisstrict => 'f', provolatile => 's',
prorettype => 'regtype', proargtypes => 'any', prosrc => 'pg_typeof' },
{ oid => '3162',
descr => 'collation of the argument; implementation of the COLLATION FOR expression',
proname => 'pg_collation_for', proisstrict => 'f', provolatile => 's',
prorettype => 'text', proargtypes => 'any', prosrc => 'pg_collation_for' },
{ oid => '3842', descr => 'is a relation insertable/updatable/deletable',
proname => 'pg_relation_is_updatable', procost => '10', provolatile => 's',
prorettype => 'int4', proargtypes => 'regclass bool',
prosrc => 'pg_relation_is_updatable' },
{ oid => '3843', descr => 'is a column updatable',
proname => 'pg_column_is_updatable', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'regclass int2 bool',
prosrc => 'pg_column_is_updatable' },
{ oid => '6120', descr => 'oid of replica identity index if any',
proname => 'pg_get_replica_identity_index', procost => '10',
provolatile => 's', prorettype => 'regclass', proargtypes => 'regclass',
prosrc => 'pg_get_replica_identity_index' },
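# Illustrative usage (not part of the catalog data) for the introspection
# helpers above; 'my_view' is a hypothetical relation name, and the int4
# result of pg_relation_is_updatable() is a bitmask of supported events:
#   SELECT pg_typeof(123.45);                                  -- numeric
#   SELECT pg_collation_for('abc'::text);
#   SELECT pg_relation_is_updatable('my_view'::regclass, true);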
# Deferrable unique constraint trigger
{ oid => '1250', descr => 'deferred UNIQUE constraint check',
proname => 'unique_key_recheck', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'unique_key_recheck' },
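# Illustrative context (not part of the catalog data): unique_key_recheck()
# is attached internally by the backend to perform the deferred check for
# constraints declared like this (table and column names hypothetical):
#   CREATE TABLE invoice (invoice_no int UNIQUE DEFERRABLE INITIALLY DEFERRED);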
# Generic referential integrity constraint triggers
{ oid => '1644', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
proname => 'RI_FKey_check_ins', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_check_ins' },
{ oid => '1645', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
proname => 'RI_FKey_check_upd', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_check_upd' },
{ oid => '1646', descr => 'referential integrity ON DELETE CASCADE',
proname => 'RI_FKey_cascade_del', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_cascade_del' },
{ oid => '1647', descr => 'referential integrity ON UPDATE CASCADE',
proname => 'RI_FKey_cascade_upd', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_cascade_upd' },
{ oid => '1648', descr => 'referential integrity ON DELETE RESTRICT',
proname => 'RI_FKey_restrict_del', provolatile => 'v',
prorettype => 'trigger', proargtypes => '',
prosrc => 'RI_FKey_restrict_del' },
{ oid => '1649', descr => 'referential integrity ON UPDATE RESTRICT',
proname => 'RI_FKey_restrict_upd', provolatile => 'v',
prorettype => 'trigger', proargtypes => '',
prosrc => 'RI_FKey_restrict_upd' },
{ oid => '1650', descr => 'referential integrity ON DELETE SET NULL',
proname => 'RI_FKey_setnull_del', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_setnull_del' },
{ oid => '1651', descr => 'referential integrity ON UPDATE SET NULL',
proname => 'RI_FKey_setnull_upd', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_setnull_upd' },
{ oid => '1652', descr => 'referential integrity ON DELETE SET DEFAULT',
proname => 'RI_FKey_setdefault_del', provolatile => 'v',
prorettype => 'trigger', proargtypes => '',
prosrc => 'RI_FKey_setdefault_del' },
{ oid => '1653', descr => 'referential integrity ON UPDATE SET DEFAULT',
proname => 'RI_FKey_setdefault_upd', provolatile => 'v',
prorettype => 'trigger', proargtypes => '',
prosrc => 'RI_FKey_setdefault_upd' },
{ oid => '1654', descr => 'referential integrity ON DELETE NO ACTION',
proname => 'RI_FKey_noaction_del', provolatile => 'v',
prorettype => 'trigger', proargtypes => '',
prosrc => 'RI_FKey_noaction_del' },
{ oid => '1655', descr => 'referential integrity ON UPDATE NO ACTION',
proname => 'RI_FKey_noaction_upd', provolatile => 'v',
prorettype => 'trigger', proargtypes => '',
prosrc => 'RI_FKey_noaction_upd' },
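# Illustrative context (not part of the catalog data): the RI_FKey_* trigger
# functions above are attached automatically when a foreign key is declared;
# the table names below are hypothetical:
#   CREATE TABLE parent (id int PRIMARY KEY);
#   CREATE TABLE child  (parent_id int REFERENCES parent
#                          ON UPDATE CASCADE ON DELETE SET NULL);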
{ oid => '1666',
proname => 'varbiteq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'varbit varbit', prosrc => 'biteq' },
{ oid => '1667',
proname => 'varbitne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'varbit varbit', prosrc => 'bitne' },
{ oid => '1668',
proname => 'varbitge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'varbit varbit', prosrc => 'bitge' },
{ oid => '1669',
proname => 'varbitgt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'varbit varbit', prosrc => 'bitgt' },
{ oid => '1670',
proname => 'varbitle', proleakproof => 't', prorettype => 'bool',
proargtypes => 'varbit varbit', prosrc => 'bitle' },
{ oid => '1671',
proname => 'varbitlt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'varbit varbit', prosrc => 'bitlt' },
{ oid => '1672', descr => 'less-equal-greater',
proname => 'varbitcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'varbit varbit', prosrc => 'bitcmp' },
# avoid the C names bitand and bitor, since they are C++ keywords
{ oid => '1673',
proname => 'bitand', prorettype => 'bit', proargtypes => 'bit bit',
prosrc => 'bit_and' },
{ oid => '1674',
proname => 'bitor', prorettype => 'bit', proargtypes => 'bit bit',
prosrc => 'bit_or' },
{ oid => '1675',
proname => 'bitxor', prorettype => 'bit', proargtypes => 'bit bit',
prosrc => 'bitxor' },
{ oid => '1676',
proname => 'bitnot', prorettype => 'bit', proargtypes => 'bit',
prosrc => 'bitnot' },
{ oid => '1677',
proname => 'bitshiftleft', prorettype => 'bit', proargtypes => 'bit int4',
prosrc => 'bitshiftleft' },
{ oid => '1678',
proname => 'bitshiftright', prorettype => 'bit', proargtypes => 'bit int4',
prosrc => 'bitshiftright' },
{ oid => '1679',
proname => 'bitcat', prorettype => 'varbit', proargtypes => 'varbit varbit',
prosrc => 'bitcat' },
{ oid => '1680', descr => 'extract portion of bitstring',
proname => 'substring', prorettype => 'bit', proargtypes => 'bit int4 int4',
prosrc => 'bitsubstr' },
{ oid => '1681', descr => 'bitstring length',
proname => 'length', prorettype => 'int4', proargtypes => 'bit',
prosrc => 'bitlength' },
{ oid => '1682', descr => 'octet length',
proname => 'octet_length', prorettype => 'int4', proargtypes => 'bit',
prosrc => 'bitoctetlength' },
{ oid => '1683', descr => 'convert int4 to bitstring',
proname => 'bit', prorettype => 'bit', proargtypes => 'int4 int4',
prosrc => 'bitfromint4' },
{ oid => '1684', descr => 'convert bitstring to int4',
proname => 'int4', prorettype => 'int4', proargtypes => 'bit',
prosrc => 'bittoint4' },
{ oid => '1685', descr => 'adjust bit() to typmod length',
proname => 'bit', prorettype => 'bit', proargtypes => 'bit int4 bool',
prosrc => 'bit' },
{ oid => '3158', descr => 'planner support for varbit length coercion',
proname => 'varbit_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'varbit_support' },
{ oid => '1687', descr => 'adjust varbit() to typmod length',
proname => 'varbit', prosupport => 'varbit_support', prorettype => 'varbit',
proargtypes => 'varbit int4 bool', prosrc => 'varbit' },
{ oid => '1698', descr => 'position of sub-bitstring',
proname => 'position', prorettype => 'int4', proargtypes => 'bit bit',
prosrc => 'bitposition' },
{ oid => '1699', descr => 'extract portion of bitstring',
proname => 'substring', prorettype => 'bit', proargtypes => 'bit int4',
prosrc => 'bitsubstr_no_len' },
{ oid => '3030', descr => 'substitute portion of bitstring',
proname => 'overlay', prorettype => 'bit', proargtypes => 'bit bit int4 int4',
prosrc => 'bitoverlay' },
{ oid => '3031', descr => 'substitute portion of bitstring',
proname => 'overlay', prorettype => 'bit', proargtypes => 'bit bit int4',
prosrc => 'bitoverlay_no_len' },
{ oid => '3032', descr => 'get bit',
proname => 'get_bit', prorettype => 'int4', proargtypes => 'bit int4',
prosrc => 'bitgetbit' },
{ oid => '3033', descr => 'set bit',
proname => 'set_bit', prorettype => 'bit', proargtypes => 'bit int4 int4',
prosrc => 'bitsetbit' },
{ oid => '6162', descr => 'number of set bits',
proname => 'bit_count', prorettype => 'int8', proargtypes => 'bit',
prosrc => 'bit_bit_count' },
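# Illustrative usage (not part of the catalog data) for the bit-string
# functions above; bit positions are 0-based for get_bit/set_bit, and
# bit_count() is only present on servers that include the entry with OID 6162:
#   SELECT substring(B'10110111' from 2 for 3);           -- 011
#   SELECT overlay(B'01010101' placing B'1111' from 2);   -- 01111101
#   SELECT position(B'11' in B'000110');                  -- 4
#   SELECT get_bit(B'1101', 0), set_bit(B'1101', 1, 0);   -- 1, 1001
#   SELECT bit_count(B'1101');                            -- 3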
# for macaddr type support
{ oid => '436', descr => 'I/O',
proname => 'macaddr_in', prorettype => 'macaddr', proargtypes => 'cstring',
prosrc => 'macaddr_in' },
{ oid => '437', descr => 'I/O',
proname => 'macaddr_out', prorettype => 'cstring', proargtypes => 'macaddr',
prosrc => 'macaddr_out' },
{ oid => '753', descr => 'MACADDR manufacturer fields',
proname => 'trunc', prorettype => 'macaddr', proargtypes => 'macaddr',
prosrc => 'macaddr_trunc' },
{ oid => '830',
proname => 'macaddr_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_eq' },
{ oid => '831',
proname => 'macaddr_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_lt' },
{ oid => '832',
proname => 'macaddr_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_le' },
{ oid => '833',
proname => 'macaddr_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_gt' },
{ oid => '834',
proname => 'macaddr_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_ge' },
{ oid => '835',
proname => 'macaddr_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_ne' },
{ oid => '836', descr => 'less-equal-greater',
proname => 'macaddr_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_cmp' },
{ oid => '3144',
proname => 'macaddr_not', prorettype => 'macaddr', proargtypes => 'macaddr',
prosrc => 'macaddr_not' },
{ oid => '3145',
proname => 'macaddr_and', prorettype => 'macaddr',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_and' },
{ oid => '3146',
proname => 'macaddr_or', prorettype => 'macaddr',
proargtypes => 'macaddr macaddr', prosrc => 'macaddr_or' },
{ oid => '3359', descr => 'sort support',
proname => 'macaddr_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'macaddr_sortsupport' },
# for macaddr8 type support
{ oid => '4110', descr => 'I/O',
proname => 'macaddr8_in', prorettype => 'macaddr8', proargtypes => 'cstring',
prosrc => 'macaddr8_in' },
{ oid => '4111', descr => 'I/O',
proname => 'macaddr8_out', prorettype => 'cstring', proargtypes => 'macaddr8',
prosrc => 'macaddr8_out' },
{ oid => '4112', descr => 'MACADDR8 manufacturer fields',
proname => 'trunc', prorettype => 'macaddr8', proargtypes => 'macaddr8',
prosrc => 'macaddr8_trunc' },
{ oid => '4113',
proname => 'macaddr8_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_eq' },
{ oid => '4114',
proname => 'macaddr8_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_lt' },
{ oid => '4115',
proname => 'macaddr8_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_le' },
{ oid => '4116',
proname => 'macaddr8_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_gt' },
{ oid => '4117',
proname => 'macaddr8_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_ge' },
{ oid => '4118',
proname => 'macaddr8_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_ne' },
{ oid => '4119', descr => 'less-equal-greater',
proname => 'macaddr8_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_cmp' },
{ oid => '4120',
proname => 'macaddr8_not', prorettype => 'macaddr8',
proargtypes => 'macaddr8', prosrc => 'macaddr8_not' },
{ oid => '4121',
proname => 'macaddr8_and', prorettype => 'macaddr8',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_and' },
{ oid => '4122',
proname => 'macaddr8_or', prorettype => 'macaddr8',
proargtypes => 'macaddr8 macaddr8', prosrc => 'macaddr8_or' },
{ oid => '4123', descr => 'convert macaddr to macaddr8',
proname => 'macaddr8', proleakproof => 't', prorettype => 'macaddr8',
proargtypes => 'macaddr', prosrc => 'macaddrtomacaddr8' },
{ oid => '4124', descr => 'convert macaddr8 to macaddr',
proname => 'macaddr', prorettype => 'macaddr', proargtypes => 'macaddr8',
prosrc => 'macaddr8tomacaddr' },
{ oid => '4125', descr => 'set 7th bit in macaddr8',
proname => 'macaddr8_set7bit', prorettype => 'macaddr8',
proargtypes => 'macaddr8', prosrc => 'macaddr8_set7bit' },
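# Illustrative usage (not part of the catalog data) for the macaddr/macaddr8
# helpers above; the addresses are arbitrary examples:
#   SELECT trunc(macaddr '12:34:56:78:90:ab');             -- zeroes the last 3 bytes
#   SELECT macaddr8(macaddr '08:00:2b:01:02:03');          -- widens EUI-48 to EUI-64
#   SELECT macaddr8_set7bit(macaddr8 '08:00:2b:01:02:03'); -- sets the 7th bit (modified EUI-64)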
# for inet type support
{ oid => '910', descr => 'I/O',
proname => 'inet_in', prorettype => 'inet', proargtypes => 'cstring',
prosrc => 'inet_in' },
{ oid => '911', descr => 'I/O',
proname => 'inet_out', prorettype => 'cstring', proargtypes => 'inet',
prosrc => 'inet_out' },
# for cidr type support
{ oid => '1267', descr => 'I/O',
proname => 'cidr_in', prorettype => 'cidr', proargtypes => 'cstring',
prosrc => 'cidr_in' },
{ oid => '1427', descr => 'I/O',
proname => 'cidr_out', prorettype => 'cstring', proargtypes => 'cidr',
prosrc => 'cidr_out' },
# these are used for both inet and cidr
{ oid => '920',
proname => 'network_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'inet inet', prosrc => 'network_eq' },
{ oid => '921',
proname => 'network_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'inet inet', prosrc => 'network_lt' },
{ oid => '922',
proname => 'network_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'inet inet', prosrc => 'network_le' },
{ oid => '923',
proname => 'network_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'inet inet', prosrc => 'network_gt' },
{ oid => '924',
proname => 'network_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'inet inet', prosrc => 'network_ge' },
{ oid => '925',
proname => 'network_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'inet inet', prosrc => 'network_ne' },
{ oid => '3562', descr => 'larger of two',
proname => 'network_larger', prorettype => 'inet', proargtypes => 'inet inet',
prosrc => 'network_larger' },
{ oid => '3563', descr => 'smaller of two',
proname => 'network_smaller', prorettype => 'inet',
proargtypes => 'inet inet', prosrc => 'network_smaller' },
{ oid => '926', descr => 'less-equal-greater',
proname => 'network_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'inet inet', prosrc => 'network_cmp' },
{ oid => '927',
proname => 'network_sub', prosupport => 'network_subset_support',
prorettype => 'bool', proargtypes => 'inet inet', prosrc => 'network_sub' },
{ oid => '928',
proname => 'network_subeq', prosupport => 'network_subset_support',
prorettype => 'bool', proargtypes => 'inet inet', prosrc => 'network_subeq' },
{ oid => '929',
proname => 'network_sup', prosupport => 'network_subset_support',
prorettype => 'bool', proargtypes => 'inet inet', prosrc => 'network_sup' },
{ oid => '930',
proname => 'network_supeq', prosupport => 'network_subset_support',
prorettype => 'bool', proargtypes => 'inet inet', prosrc => 'network_supeq' },
{ oid => '1173', descr => 'planner support for network_sub/superset',
proname => 'network_subset_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'network_subset_support' },
{ oid => '3551',
proname => 'network_overlap', prorettype => 'bool',
proargtypes => 'inet inet', prosrc => 'network_overlap' },
{ oid => '5033', descr => 'sort support',
proname => 'network_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'network_sortsupport' },
# inet/cidr functions
{ oid => '598', descr => 'abbreviated display of inet value',
proname => 'abbrev', prorettype => 'text', proargtypes => 'inet',
prosrc => 'inet_abbrev' },
{ oid => '599', descr => 'abbreviated display of cidr value',
proname => 'abbrev', prorettype => 'text', proargtypes => 'cidr',
prosrc => 'cidr_abbrev' },
{ oid => '605', descr => 'change netmask of inet',
proname => 'set_masklen', prorettype => 'inet', proargtypes => 'inet int4',
prosrc => 'inet_set_masklen' },
{ oid => '635', descr => 'change netmask of cidr',
proname => 'set_masklen', prorettype => 'cidr', proargtypes => 'cidr int4',
prosrc => 'cidr_set_masklen' },
{ oid => '711', descr => 'address family (4 for IPv4, 6 for IPv6)',
proname => 'family', prorettype => 'int4', proargtypes => 'inet',
prosrc => 'network_family' },
{ oid => '683', descr => 'network part of address',
proname => 'network', prorettype => 'cidr', proargtypes => 'inet',
prosrc => 'network_network' },
{ oid => '696', descr => 'netmask of address',
proname => 'netmask', prorettype => 'inet', proargtypes => 'inet',
prosrc => 'network_netmask' },
{ oid => '697', descr => 'netmask length',
proname => 'masklen', prorettype => 'int4', proargtypes => 'inet',
prosrc => 'network_masklen' },
{ oid => '698', descr => 'broadcast address of network',
proname => 'broadcast', prorettype => 'inet', proargtypes => 'inet',
prosrc => 'network_broadcast' },
{ oid => '699', descr => 'show address octets only',
proname => 'host', prorettype => 'text', proargtypes => 'inet',
prosrc => 'network_host' },
{ oid => '730', descr => 'show all parts of inet/cidr value',
proname => 'text', prorettype => 'text', proargtypes => 'inet',
prosrc => 'network_show' },
{ oid => '1362', descr => 'hostmask of address',
proname => 'hostmask', prorettype => 'inet', proargtypes => 'inet',
prosrc => 'network_hostmask' },
{ oid => '1715', descr => 'convert inet to cidr',
proname => 'cidr', prorettype => 'cidr', proargtypes => 'inet',
prosrc => 'inet_to_cidr' },
{ oid => '2196', descr => 'inet address of the client',
proname => 'inet_client_addr', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'inet', proargtypes => '',
prosrc => 'inet_client_addr' },
{ oid => '2197', descr => 'client\'s port number for this connection',
proname => 'inet_client_port', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'int4', proargtypes => '',
prosrc => 'inet_client_port' },
{ oid => '2198', descr => 'inet address of the server',
proname => 'inet_server_addr', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'inet', proargtypes => '',
prosrc => 'inet_server_addr' },
{ oid => '2199', descr => 'server\'s port number for this connection',
proname => 'inet_server_port', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'int4', proargtypes => '',
prosrc => 'inet_server_port' },
{ oid => '2627',
proname => 'inetnot', prorettype => 'inet', proargtypes => 'inet',
prosrc => 'inetnot' },
{ oid => '2628',
proname => 'inetand', prorettype => 'inet', proargtypes => 'inet inet',
prosrc => 'inetand' },
{ oid => '2629',
proname => 'inetor', prorettype => 'inet', proargtypes => 'inet inet',
prosrc => 'inetor' },
{ oid => '2630',
proname => 'inetpl', prorettype => 'inet', proargtypes => 'inet int8',
prosrc => 'inetpl' },
{ oid => '2631',
proname => 'int8pl_inet', prolang => 'sql', prorettype => 'inet',
proargtypes => 'int8 inet', prosrc => 'see system_functions.sql' },
{ oid => '2632',
proname => 'inetmi_int8', prorettype => 'inet', proargtypes => 'inet int8',
prosrc => 'inetmi_int8' },
{ oid => '2633',
proname => 'inetmi', prorettype => 'int8', proargtypes => 'inet inet',
prosrc => 'inetmi' },
{ oid => '4071', descr => 'are the addresses from the same family?',
proname => 'inet_same_family', prorettype => 'bool',
proargtypes => 'inet inet', prosrc => 'inet_same_family' },
{ oid => '4063',
descr => 'the smallest network which includes both of the given networks',
proname => 'inet_merge', prorettype => 'cidr', proargtypes => 'inet inet',
prosrc => 'inet_merge' },
# GiST support for inet and cidr
{ oid => '3553', descr => 'GiST support',
proname => 'inet_gist_consistent', prorettype => 'bool',
proargtypes => 'internal inet int2 oid internal',
prosrc => 'inet_gist_consistent' },
{ oid => '3554', descr => 'GiST support',
proname => 'inet_gist_union', prorettype => 'inet',
proargtypes => 'internal internal', prosrc => 'inet_gist_union' },
{ oid => '3555', descr => 'GiST support',
proname => 'inet_gist_compress', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'inet_gist_compress' },
{ oid => '3573', descr => 'GiST support',
proname => 'inet_gist_fetch', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'inet_gist_fetch' },
{ oid => '3557', descr => 'GiST support',
proname => 'inet_gist_penalty', prorettype => 'internal',
proargtypes => 'internal internal internal', prosrc => 'inet_gist_penalty' },
{ oid => '3558', descr => 'GiST support',
proname => 'inet_gist_picksplit', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'inet_gist_picksplit' },
{ oid => '3559', descr => 'GiST support',
proname => 'inet_gist_same', prorettype => 'internal',
proargtypes => 'inet inet internal', prosrc => 'inet_gist_same' },
# SP-GiST support for inet and cidr
{ oid => '3795', descr => 'SP-GiST support',
proname => 'inet_spg_config', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'inet_spg_config' },
{ oid => '3796', descr => 'SP-GiST support',
proname => 'inet_spg_choose', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'inet_spg_choose' },
{ oid => '3797', descr => 'SP-GiST support',
proname => 'inet_spg_picksplit', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'inet_spg_picksplit' },
{ oid => '3798', descr => 'SP-GiST support',
proname => 'inet_spg_inner_consistent', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'inet_spg_inner_consistent' },
{ oid => '3799', descr => 'SP-GiST support',
proname => 'inet_spg_leaf_consistent', prorettype => 'bool',
proargtypes => 'internal internal', prosrc => 'inet_spg_leaf_consistent' },
# Selectivity estimation for inet and cidr
{ oid => '3560', descr => 'restriction selectivity for network operators',
proname => 'networksel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'networksel' },
{ oid => '3561', descr => 'join selectivity for network operators',
proname => 'networkjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'networkjoinsel' },
{ oid => '1690',
proname => 'time_mi_time', prorettype => 'interval',
proargtypes => 'time time', prosrc => 'time_mi_time' },
{ oid => '1691',
proname => 'boolle', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bool bool', prosrc => 'boolle' },
{ oid => '1692',
proname => 'boolge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bool bool', prosrc => 'boolge' },
{ oid => '1693', descr => 'less-equal-greater',
proname => 'btboolcmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'bool bool', prosrc => 'btboolcmp' },
{ oid => '1688', descr => 'hash',
proname => 'time_hash', prorettype => 'int4', proargtypes => 'time',
prosrc => 'time_hash' },
{ oid => '3409', descr => 'hash',
proname => 'time_hash_extended', prorettype => 'int8',
proargtypes => 'time int8', prosrc => 'time_hash_extended' },
{ oid => '1696', descr => 'hash',
proname => 'timetz_hash', prorettype => 'int4', proargtypes => 'timetz',
prosrc => 'timetz_hash' },
{ oid => '3410', descr => 'hash',
proname => 'timetz_hash_extended', prorettype => 'int8',
proargtypes => 'timetz int8', prosrc => 'timetz_hash_extended' },
{ oid => '1697', descr => 'hash',
proname => 'interval_hash', prorettype => 'int4', proargtypes => 'interval',
prosrc => 'interval_hash' },
{ oid => '3418', descr => 'hash',
proname => 'interval_hash_extended', prorettype => 'int8',
proargtypes => 'interval int8', prosrc => 'interval_hash_extended' },
# OIDs 1700 - 1799: NUMERIC data type
{ oid => '1701', descr => 'I/O',
proname => 'numeric_in', prorettype => 'numeric',
proargtypes => 'cstring oid int4', prosrc => 'numeric_in' },
{ oid => '1702', descr => 'I/O',
proname => 'numeric_out', prorettype => 'cstring', proargtypes => 'numeric',
prosrc => 'numeric_out' },
{ oid => '2917', descr => 'I/O typmod',
proname => 'numerictypmodin', prorettype => 'int4', proargtypes => '_cstring',
prosrc => 'numerictypmodin' },
{ oid => '2918', descr => 'I/O typmod',
proname => 'numerictypmodout', prorettype => 'cstring', proargtypes => 'int4',
prosrc => 'numerictypmodout' },
{ oid => '3157', descr => 'planner support for numeric length coercion',
proname => 'numeric_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'numeric_support' },
{ oid => '1703', descr => 'adjust numeric to typmod precision/scale',
proname => 'numeric', prosupport => 'numeric_support',
prorettype => 'numeric', proargtypes => 'numeric int4', prosrc => 'numeric' },
{ oid => '1704',
proname => 'numeric_abs', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_abs' },
{ oid => '1705', descr => 'absolute value',
proname => 'abs', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_abs' },
{ oid => '1706', descr => 'sign of value',
proname => 'sign', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_sign' },
{ oid => '1707', descr => 'value rounded to \'scale\'',
proname => 'round', prorettype => 'numeric', proargtypes => 'numeric int4',
prosrc => 'numeric_round' },
{ oid => '1708', descr => 'value rounded to \'scale\' of zero',
proname => 'round', prolang => 'sql', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'see system_functions.sql' },
{ oid => '1709', descr => 'value truncated to \'scale\'',
proname => 'trunc', prorettype => 'numeric', proargtypes => 'numeric int4',
prosrc => 'numeric_trunc' },
{ oid => '1710', descr => 'value truncated to \'scale\' of zero',
proname => 'trunc', prolang => 'sql', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'see system_functions.sql' },
{ oid => '1711', descr => 'nearest integer >= value',
proname => 'ceil', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_ceil' },
{ oid => '2167', descr => 'nearest integer >= value',
proname => 'ceiling', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_ceil' },
{ oid => '1712', descr => 'nearest integer <= value',
proname => 'floor', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_floor' },
{ oid => '1718',
proname => 'numeric_eq', prorettype => 'bool',
proargtypes => 'numeric numeric', prosrc => 'numeric_eq' },
{ oid => '1719',
proname => 'numeric_ne', prorettype => 'bool',
proargtypes => 'numeric numeric', prosrc => 'numeric_ne' },
{ oid => '1720',
proname => 'numeric_gt', prorettype => 'bool',
proargtypes => 'numeric numeric', prosrc => 'numeric_gt' },
{ oid => '1721',
proname => 'numeric_ge', prorettype => 'bool',
proargtypes => 'numeric numeric', prosrc => 'numeric_ge' },
{ oid => '1722',
proname => 'numeric_lt', prorettype => 'bool',
proargtypes => 'numeric numeric', prosrc => 'numeric_lt' },
{ oid => '1723',
proname => 'numeric_le', prorettype => 'bool',
proargtypes => 'numeric numeric', prosrc => 'numeric_le' },
{ oid => '1724',
proname => 'numeric_add', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_add' },
{ oid => '1725',
proname => 'numeric_sub', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_sub' },
{ oid => '1726',
proname => 'numeric_mul', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_mul' },
{ oid => '1727',
proname => 'numeric_div', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_div' },
{ oid => '1728', descr => 'modulus',
proname => 'mod', prorettype => 'numeric', proargtypes => 'numeric numeric',
prosrc => 'numeric_mod' },
{ oid => '1729',
proname => 'numeric_mod', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_mod' },
{ oid => '5048', descr => 'greatest common divisor',
proname => 'gcd', prorettype => 'numeric', proargtypes => 'numeric numeric',
prosrc => 'numeric_gcd' },
{ oid => '5049', descr => 'least common multiple',
proname => 'lcm', prorettype => 'numeric', proargtypes => 'numeric numeric',
prosrc => 'numeric_lcm' },
{ oid => '1730', descr => 'square root',
proname => 'sqrt', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_sqrt' },
{ oid => '1731', descr => 'square root',
proname => 'numeric_sqrt', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_sqrt' },
{ oid => '1732', descr => 'natural exponential (e^x)',
proname => 'exp', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_exp' },
{ oid => '1733', descr => 'natural exponential (e^x)',
proname => 'numeric_exp', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_exp' },
{ oid => '1734', descr => 'natural logarithm',
proname => 'ln', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_ln' },
{ oid => '1735', descr => 'natural logarithm',
proname => 'numeric_ln', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_ln' },
{ oid => '1736', descr => 'logarithm base m of n',
proname => 'log', prorettype => 'numeric', proargtypes => 'numeric numeric',
prosrc => 'numeric_log' },
{ oid => '1737', descr => 'logarithm base m of n',
proname => 'numeric_log', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_log' },
{ oid => '1738', descr => 'exponentiation',
proname => 'pow', prorettype => 'numeric', proargtypes => 'numeric numeric',
prosrc => 'numeric_power' },
{ oid => '2169', descr => 'exponentiation',
proname => 'power', prorettype => 'numeric', proargtypes => 'numeric numeric',
prosrc => 'numeric_power' },
{ oid => '1739',
proname => 'numeric_power', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_power' },
{ oid => '3281', descr => 'number of decimal digits in the fractional part',
proname => 'scale', prorettype => 'int4', proargtypes => 'numeric',
prosrc => 'numeric_scale' },
{ oid => '5042', descr => 'minimum scale needed to represent the value',
proname => 'min_scale', prorettype => 'int4', proargtypes => 'numeric',
prosrc => 'numeric_min_scale' },
{ oid => '5043',
descr => 'numeric with minimum scale needed to represent the value',
proname => 'trim_scale', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_trim_scale' },
{ oid => '1740', descr => 'convert int4 to numeric',
proname => 'numeric', proleakproof => 't', prorettype => 'numeric',
proargtypes => 'int4', prosrc => 'int4_numeric' },
{ oid => '1741', descr => 'base 10 logarithm',
proname => 'log', prolang => 'sql', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'see system_functions.sql' },
{ oid => '1481', descr => 'base 10 logarithm',
proname => 'log10', prolang => 'sql', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'see system_functions.sql' },
{ oid => '1742', descr => 'convert float4 to numeric',
proname => 'numeric', proleakproof => 't', prorettype => 'numeric',
proargtypes => 'float4', prosrc => 'float4_numeric' },
{ oid => '1743', descr => 'convert float8 to numeric',
proname => 'numeric', proleakproof => 't', prorettype => 'numeric',
proargtypes => 'float8', prosrc => 'float8_numeric' },
{ oid => '1744', descr => 'convert numeric to int4',
proname => 'int4', prorettype => 'int4', proargtypes => 'numeric',
prosrc => 'numeric_int4' },
{ oid => '1745', descr => 'convert numeric to float4',
proname => 'float4', prorettype => 'float4', proargtypes => 'numeric',
prosrc => 'numeric_float4' },
{ oid => '1746', descr => 'convert numeric to float8',
proname => 'float8', prorettype => 'float8', proargtypes => 'numeric',
prosrc => 'numeric_float8' },
{ oid => '1973', descr => 'trunc(x/y)',
proname => 'div', prorettype => 'numeric', proargtypes => 'numeric numeric',
prosrc => 'numeric_div_trunc' },
{ oid => '1980', descr => 'trunc(x/y)',
proname => 'numeric_div_trunc', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_div_trunc' },
{ oid => '2170', descr => 'bucket number of operand in equal-width histogram',
proname => 'width_bucket', prorettype => 'int4',
proargtypes => 'numeric numeric numeric int4',
prosrc => 'width_bucket_numeric' },
{ oid => '1747',
proname => 'time_pl_interval', prorettype => 'time',
proargtypes => 'time interval', prosrc => 'time_pl_interval' },
{ oid => '1748',
proname => 'time_mi_interval', prorettype => 'time',
proargtypes => 'time interval', prosrc => 'time_mi_interval' },
{ oid => '1749',
proname => 'timetz_pl_interval', prorettype => 'timetz',
proargtypes => 'timetz interval', prosrc => 'timetz_pl_interval' },
{ oid => '1750',
proname => 'timetz_mi_interval', prorettype => 'timetz',
proargtypes => 'timetz interval', prosrc => 'timetz_mi_interval' },
{ oid => '1764', descr => 'increment by one',
proname => 'numeric_inc', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_inc' },
{ oid => '1766', descr => 'smaller of two',
proname => 'numeric_smaller', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_smaller' },
{ oid => '1767', descr => 'larger of two',
proname => 'numeric_larger', prorettype => 'numeric',
proargtypes => 'numeric numeric', prosrc => 'numeric_larger' },
{ oid => '1769', descr => 'less-equal-greater',
proname => 'numeric_cmp', prorettype => 'int4',
proargtypes => 'numeric numeric', prosrc => 'numeric_cmp' },
{ oid => '3283', descr => 'sort support',
proname => 'numeric_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'numeric_sortsupport' },
{ oid => '1771',
proname => 'numeric_uminus', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'numeric_uminus' },
{ oid => '1779', descr => 'convert numeric to int8',
proname => 'int8', prorettype => 'int8', proargtypes => 'numeric',
prosrc => 'numeric_int8' },
{ oid => '1781', descr => 'convert int8 to numeric',
proname => 'numeric', proleakproof => 't', prorettype => 'numeric',
proargtypes => 'int8', prosrc => 'int8_numeric' },
{ oid => '1782', descr => 'convert int2 to numeric',
proname => 'numeric', proleakproof => 't', prorettype => 'numeric',
proargtypes => 'int2', prosrc => 'int2_numeric' },
{ oid => '1783', descr => 'convert numeric to int2',
proname => 'int2', prorettype => 'int2', proargtypes => 'numeric',
prosrc => 'numeric_int2' },
{ oid => '6103', descr => 'convert numeric to pg_lsn',
proname => 'pg_lsn', prorettype => 'pg_lsn', proargtypes => 'numeric',
prosrc => 'numeric_pg_lsn' },
{ oid => '3556', descr => 'convert jsonb to boolean',
proname => 'bool', prorettype => 'bool', proargtypes => 'jsonb',
prosrc => 'jsonb_bool' },
{ oid => '3449', descr => 'convert jsonb to numeric',
proname => 'numeric', prorettype => 'numeric', proargtypes => 'jsonb',
prosrc => 'jsonb_numeric' },
{ oid => '3450', descr => 'convert jsonb to int2',
proname => 'int2', prorettype => 'int2', proargtypes => 'jsonb',
prosrc => 'jsonb_int2' },
{ oid => '3451', descr => 'convert jsonb to int4',
proname => 'int4', prorettype => 'int4', proargtypes => 'jsonb',
prosrc => 'jsonb_int4' },
{ oid => '3452', descr => 'convert jsonb to int8',
proname => 'int8', prorettype => 'int8', proargtypes => 'jsonb',
prosrc => 'jsonb_int8' },
{ oid => '3453', descr => 'convert jsonb to float4',
proname => 'float4', prorettype => 'float4', proargtypes => 'jsonb',
prosrc => 'jsonb_float4' },
{ oid => '2580', descr => 'convert jsonb to float8',
proname => 'float8', prorettype => 'float8', proargtypes => 'jsonb',
prosrc => 'jsonb_float8' },

# formatting
{ oid => '1770', descr => 'format timestamp with time zone to text',
proname => 'to_char', provolatile => 's', prorettype => 'text',
proargtypes => 'timestamptz text', prosrc => 'timestamptz_to_char' },
{ oid => '1772', descr => 'format numeric to text',
proname => 'to_char', provolatile => 's', prorettype => 'text',
proargtypes => 'numeric text', prosrc => 'numeric_to_char' },
{ oid => '1773', descr => 'format int4 to text',
proname => 'to_char', provolatile => 's', prorettype => 'text',
proargtypes => 'int4 text', prosrc => 'int4_to_char' },
{ oid => '1774', descr => 'format int8 to text',
proname => 'to_char', provolatile => 's', prorettype => 'text',
proargtypes => 'int8 text', prosrc => 'int8_to_char' },
{ oid => '1775', descr => 'format float4 to text',
proname => 'to_char', provolatile => 's', prorettype => 'text',
proargtypes => 'float4 text', prosrc => 'float4_to_char' },
{ oid => '1776', descr => 'format float8 to text',
proname => 'to_char', provolatile => 's', prorettype => 'text',
proargtypes => 'float8 text', prosrc => 'float8_to_char' },
{ oid => '1777', descr => 'convert text to numeric',
proname => 'to_number', provolatile => 's', prorettype => 'numeric',
proargtypes => 'text text', prosrc => 'numeric_to_number' },
{ oid => '1778', descr => 'convert text to timestamp with time zone',
proname => 'to_timestamp', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'text text', prosrc => 'to_timestamp' },
{ oid => '1780', descr => 'convert text to date',
proname => 'to_date', provolatile => 's', prorettype => 'date',
proargtypes => 'text text', prosrc => 'to_date' },
{ oid => '1768', descr => 'format interval to text',
proname => 'to_char', provolatile => 's', prorettype => 'text',
proargtypes => 'interval text', prosrc => 'interval_to_char' },
{ oid => '1282', descr => 'quote an identifier for usage in a querystring',
proname => 'quote_ident', prorettype => 'text', proargtypes => 'text',
prosrc => 'quote_ident' },
{ oid => '1283', descr => 'quote a literal for usage in a querystring',
proname => 'quote_literal', prorettype => 'text', proargtypes => 'text',
prosrc => 'quote_literal' },
{ oid => '1285', descr => 'quote a data value for usage in a querystring',
proname => 'quote_literal', prolang => 'sql', provolatile => 's',
prorettype => 'text', proargtypes => 'anyelement',
prosrc => 'select pg_catalog.quote_literal($1::pg_catalog.text)' },
{ oid => '1289',
descr => 'quote a possibly-null literal for usage in a querystring',
proname => 'quote_nullable', proisstrict => 'f', prorettype => 'text',
proargtypes => 'text', prosrc => 'quote_nullable' },
{ oid => '1290',
descr => 'quote a possibly-null data value for usage in a querystring',
proname => 'quote_nullable', prolang => 'sql', proisstrict => 'f',
provolatile => 's', prorettype => 'text', proargtypes => 'anyelement',
prosrc => 'select pg_catalog.quote_nullable($1::pg_catalog.text)' },
{ oid => '1798', descr => 'I/O',
proname => 'oidin', prorettype => 'oid', proargtypes => 'cstring',
prosrc => 'oidin' },
{ oid => '1799', descr => 'I/O',
proname => 'oidout', prorettype => 'cstring', proargtypes => 'oid',
prosrc => 'oidout' },
{ oid => '3058', descr => 'concatenate values',
proname => 'concat', provariadic => 'any', proisstrict => 'f',
provolatile => 's', prorettype => 'text', proargtypes => 'any',
proallargtypes => '{any}', proargmodes => '{v}', prosrc => 'text_concat' },
{ oid => '3059', descr => 'concatenate values with separators',
proname => 'concat_ws', provariadic => 'any', proisstrict => 'f',
provolatile => 's', prorettype => 'text', proargtypes => 'text any',
proallargtypes => '{text,any}', proargmodes => '{i,v}',
prosrc => 'text_concat_ws' },
{ oid => '3060', descr => 'extract the first n characters',
proname => 'left', prorettype => 'text', proargtypes => 'text int4',
prosrc => 'text_left' },
{ oid => '3061', descr => 'extract the last n characters',
proname => 'right', prorettype => 'text', proargtypes => 'text int4',
prosrc => 'text_right' },
{ oid => '3062', descr => 'reverse text',
proname => 'reverse', prorettype => 'text', proargtypes => 'text',
prosrc => 'text_reverse' },
{ oid => '3539', descr => 'format text message',
proname => 'format', provariadic => 'any', proisstrict => 'f',
provolatile => 's', prorettype => 'text', proargtypes => 'text any',
proallargtypes => '{text,any}', proargmodes => '{i,v}',
prosrc => 'text_format' },
{ oid => '3540', descr => 'format text message',
proname => 'format', proisstrict => 'f', provolatile => 's',
prorettype => 'text', proargtypes => 'text', prosrc => 'text_format_nv' },
{ oid => '1810', descr => 'length in bits',
proname => 'bit_length', prolang => 'sql', prorettype => 'int4',
proargtypes => 'bytea', prosrc => 'see system_functions.sql' },
{ oid => '1811', descr => 'length in bits',
proname => 'bit_length', prolang => 'sql', prorettype => 'int4',
proargtypes => 'text', prosrc => 'see system_functions.sql' },
{ oid => '1812', descr => 'length in bits',
proname => 'bit_length', prolang => 'sql', prorettype => 'int4',
proargtypes => 'bit', prosrc => 'see system_functions.sql' },

# Selectivity estimators for LIKE and related operators
{ oid => '1814', descr => 'restriction selectivity of ILIKE',
proname => 'iclikesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'iclikesel' },
{ oid => '1815', descr => 'restriction selectivity of NOT ILIKE',
proname => 'icnlikesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'icnlikesel' },
{ oid => '1816', descr => 'join selectivity of ILIKE',
proname => 'iclikejoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'iclikejoinsel' },
{ oid => '1817', descr => 'join selectivity of NOT ILIKE',
proname => 'icnlikejoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'icnlikejoinsel' },
{ oid => '1818', descr => 'restriction selectivity of regex match',
proname => 'regexeqsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'regexeqsel' },
{ oid => '1819', descr => 'restriction selectivity of LIKE',
proname => 'likesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'likesel' },
{ oid => '1820',
descr => 'restriction selectivity of case-insensitive regex match',
proname => 'icregexeqsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'icregexeqsel' },
{ oid => '1821', descr => 'restriction selectivity of regex non-match',
proname => 'regexnesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'regexnesel' },
{ oid => '1822', descr => 'restriction selectivity of NOT LIKE',
proname => 'nlikesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'nlikesel' },
{ oid => '1823',
descr => 'restriction selectivity of case-insensitive regex non-match',
proname => 'icregexnesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'icregexnesel' },
{ oid => '1824', descr => 'join selectivity of regex match',
proname => 'regexeqjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'regexeqjoinsel' },
{ oid => '1825', descr => 'join selectivity of LIKE',
proname => 'likejoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'likejoinsel' },
{ oid => '1826', descr => 'join selectivity of case-insensitive regex match',
proname => 'icregexeqjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'icregexeqjoinsel' },
{ oid => '1827', descr => 'join selectivity of regex non-match',
proname => 'regexnejoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'regexnejoinsel' },
{ oid => '1828', descr => 'join selectivity of NOT LIKE',
proname => 'nlikejoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'nlikejoinsel' },
{ oid => '1829',
descr => 'join selectivity of case-insensitive regex non-match',
proname => 'icregexnejoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'icregexnejoinsel' },
{ oid => '3437', descr => 'restriction selectivity of exact prefix',
proname => 'prefixsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'prefixsel' },
{ oid => '3438', descr => 'join selectivity of exact prefix',
proname => 'prefixjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'prefixjoinsel' },

# Aggregate-related functions
{ oid => '1830', descr => 'aggregate final function',
proname => 'float8_avg', prorettype => 'float8', proargtypes => '_float8',
prosrc => 'float8_avg' },
{ oid => '2512', descr => 'aggregate final function',
proname => 'float8_var_pop', prorettype => 'float8', proargtypes => '_float8',
prosrc => 'float8_var_pop' },
{ oid => '1831', descr => 'aggregate final function',
proname => 'float8_var_samp', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_var_samp' },
{ oid => '2513', descr => 'aggregate final function',
proname => 'float8_stddev_pop', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_stddev_pop' },
{ oid => '1832', descr => 'aggregate final function',
proname => 'float8_stddev_samp', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_stddev_samp' },
{ oid => '1833', descr => 'aggregate transition function',
proname => 'numeric_accum', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal numeric', prosrc => 'numeric_accum' },
{ oid => '3341', descr => 'aggregate combine function',
proname => 'numeric_combine', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'numeric_combine' },
{ oid => '2858', descr => 'aggregate transition function',
proname => 'numeric_avg_accum', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal numeric', prosrc => 'numeric_avg_accum' },
{ oid => '3337', descr => 'aggregate combine function',
proname => 'numeric_avg_combine', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal internal',
prosrc => 'numeric_avg_combine' },
{ oid => '2740', descr => 'aggregate serial function',
proname => 'numeric_avg_serialize', prorettype => 'bytea',
proargtypes => 'internal', prosrc => 'numeric_avg_serialize' },
{ oid => '2741', descr => 'aggregate deserial function',
proname => 'numeric_avg_deserialize', prorettype => 'internal',
proargtypes => 'bytea internal', prosrc => 'numeric_avg_deserialize' },
{ oid => '3335', descr => 'aggregate serial function',
proname => 'numeric_serialize', prorettype => 'bytea',
proargtypes => 'internal', prosrc => 'numeric_serialize' },
{ oid => '3336', descr => 'aggregate deserial function',
proname => 'numeric_deserialize', prorettype => 'internal',
proargtypes => 'bytea internal', prosrc => 'numeric_deserialize' },
{ oid => '3548', descr => 'aggregate transition function',
proname => 'numeric_accum_inv', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal numeric', prosrc => 'numeric_accum_inv' },
{ oid => '1834', descr => 'aggregate transition function',
proname => 'int2_accum', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal int2', prosrc => 'int2_accum' },
{ oid => '1835', descr => 'aggregate transition function',
proname => 'int4_accum', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal int4', prosrc => 'int4_accum' },
{ oid => '1836', descr => 'aggregate transition function',
proname => 'int8_accum', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal int8', prosrc => 'int8_accum' },
{ oid => '3338', descr => 'aggregate combine function',
proname => 'numeric_poly_combine', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal internal',
prosrc => 'numeric_poly_combine' },
{ oid => '3339', descr => 'aggregate serial function',
proname => 'numeric_poly_serialize', prorettype => 'bytea',
proargtypes => 'internal', prosrc => 'numeric_poly_serialize' },
{ oid => '3340', descr => 'aggregate deserial function',
proname => 'numeric_poly_deserialize', prorettype => 'internal',
proargtypes => 'bytea internal', prosrc => 'numeric_poly_deserialize' },
{ oid => '2746', descr => 'aggregate transition function',
proname => 'int8_avg_accum', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal int8', prosrc => 'int8_avg_accum' },
{ oid => '3567', descr => 'aggregate transition function',
proname => 'int2_accum_inv', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal int2', prosrc => 'int2_accum_inv' },
{ oid => '3568', descr => 'aggregate transition function',
proname => 'int4_accum_inv', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal int4', prosrc => 'int4_accum_inv' },
{ oid => '3569', descr => 'aggregate transition function',
proname => 'int8_accum_inv', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal int8', prosrc => 'int8_accum_inv' },
{ oid => '3387', descr => 'aggregate transition function',
proname => 'int8_avg_accum_inv', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal int8', prosrc => 'int8_avg_accum_inv' },
{ oid => '2785', descr => 'aggregate combine function',
proname => 'int8_avg_combine', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'int8_avg_combine' },
{ oid => '2786', descr => 'aggregate serial function',
proname => 'int8_avg_serialize', prorettype => 'bytea',
proargtypes => 'internal', prosrc => 'int8_avg_serialize' },
{ oid => '2787', descr => 'aggregate deserial function',
proname => 'int8_avg_deserialize', prorettype => 'internal',
proargtypes => 'bytea internal', prosrc => 'int8_avg_deserialize' },
{ oid => '3324', descr => 'aggregate combine function',
proname => 'int4_avg_combine', prorettype => '_int8',
proargtypes => '_int8 _int8', prosrc => 'int4_avg_combine' },
{ oid => '3178', descr => 'aggregate final function',
proname => 'numeric_sum', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'internal', prosrc => 'numeric_sum' },
{ oid => '1837', descr => 'aggregate final function',
proname => 'numeric_avg', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'internal', prosrc => 'numeric_avg' },
{ oid => '2514', descr => 'aggregate final function',
proname => 'numeric_var_pop', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'internal', prosrc => 'numeric_var_pop' },
{ oid => '1838', descr => 'aggregate final function',
proname => 'numeric_var_samp', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'internal', prosrc => 'numeric_var_samp' },
{ oid => '2596', descr => 'aggregate final function',
proname => 'numeric_stddev_pop', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'internal', prosrc => 'numeric_stddev_pop' },
{ oid => '1839', descr => 'aggregate final function',
proname => 'numeric_stddev_samp', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'internal', prosrc => 'numeric_stddev_samp' },
{ oid => '1840', descr => 'aggregate transition function',
proname => 'int2_sum', proisstrict => 'f', prorettype => 'int8',
proargtypes => 'int8 int2', prosrc => 'int2_sum' },
{ oid => '1841', descr => 'aggregate transition function',
proname => 'int4_sum', proisstrict => 'f', prorettype => 'int8',
proargtypes => 'int8 int4', prosrc => 'int4_sum' },
{ oid => '1842', descr => 'aggregate transition function',
proname => 'int8_sum', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'numeric int8', prosrc => 'int8_sum' },
{ oid => '3388', descr => 'aggregate final function',
proname => 'numeric_poly_sum', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'internal', prosrc => 'numeric_poly_sum' },
{ oid => '3389', descr => 'aggregate final function',
proname => 'numeric_poly_avg', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'internal', prosrc => 'numeric_poly_avg' },
{ oid => '3390', descr => 'aggregate final function',
proname => 'numeric_poly_var_pop', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'internal',
prosrc => 'numeric_poly_var_pop' },
{ oid => '3391', descr => 'aggregate final function',
proname => 'numeric_poly_var_samp', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'internal',
prosrc => 'numeric_poly_var_samp' },
{ oid => '3392', descr => 'aggregate final function',
proname => 'numeric_poly_stddev_pop', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'internal',
prosrc => 'numeric_poly_stddev_pop' },
{ oid => '3393', descr => 'aggregate final function',
proname => 'numeric_poly_stddev_samp', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'internal',
prosrc => 'numeric_poly_stddev_samp' },
{ oid => '1843', descr => 'aggregate transition function',
proname => 'interval_avg_accum', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal interval',
prosrc => 'interval_avg_accum' },
{ oid => '3325', descr => 'aggregate combine function',
proname => 'interval_avg_combine', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal internal',
prosrc => 'interval_avg_combine' },
{ oid => '3549', descr => 'aggregate transition function',
proname => 'interval_avg_accum_inv', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal interval',
prosrc => 'interval_avg_accum_inv' },
{ oid => '8505', descr => 'aggregate serial function',
proname => 'interval_avg_serialize', prorettype => 'bytea',
proargtypes => 'internal', prosrc => 'interval_avg_serialize' },
{ oid => '8506', descr => 'aggregate deserial function',
proname => 'interval_avg_deserialize', prorettype => 'internal',
proargtypes => 'bytea internal', prosrc => 'interval_avg_deserialize' },
{ oid => '1844', descr => 'aggregate final function',
proname => 'interval_avg', proisstrict => 'f', prorettype => 'interval',
proargtypes => 'internal', prosrc => 'interval_avg' },
{ oid => '8507', descr => 'aggregate final function',
proname => 'interval_sum', proisstrict => 'f', prorettype => 'interval',
proargtypes => 'internal', prosrc => 'interval_sum' },
{ oid => '1962', descr => 'aggregate transition function',
proname => 'int2_avg_accum', prorettype => '_int8',
proargtypes => '_int8 int2', prosrc => 'int2_avg_accum' },
{ oid => '1963', descr => 'aggregate transition function',
proname => 'int4_avg_accum', prorettype => '_int8',
proargtypes => '_int8 int4', prosrc => 'int4_avg_accum' },
{ oid => '3570', descr => 'aggregate transition function',
proname => 'int2_avg_accum_inv', prorettype => '_int8',
proargtypes => '_int8 int2', prosrc => 'int2_avg_accum_inv' },
{ oid => '3571', descr => 'aggregate transition function',
proname => 'int4_avg_accum_inv', prorettype => '_int8',
proargtypes => '_int8 int4', prosrc => 'int4_avg_accum_inv' },
{ oid => '1964', descr => 'aggregate final function',
proname => 'int8_avg', prorettype => 'numeric', proargtypes => '_int8',
prosrc => 'int8_avg' },
{ oid => '3572', descr => 'aggregate final function',
proname => 'int2int4_sum', prorettype => 'int8', proargtypes => '_int8',
prosrc => 'int2int4_sum' },
{ oid => '2805', descr => 'aggregate transition function',
proname => 'int8inc_float8_float8', prorettype => 'int8',
proargtypes => 'int8 float8 float8', prosrc => 'int8inc_float8_float8' },
{ oid => '2806', descr => 'aggregate transition function',
proname => 'float8_regr_accum', prorettype => '_float8',
proargtypes => '_float8 float8 float8', prosrc => 'float8_regr_accum' },
{ oid => '3342', descr => 'aggregate combine function',
proname => 'float8_regr_combine', prorettype => '_float8',
proargtypes => '_float8 _float8', prosrc => 'float8_regr_combine' },
{ oid => '2807', descr => 'aggregate final function',
proname => 'float8_regr_sxx', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_regr_sxx' },
{ oid => '2808', descr => 'aggregate final function',
proname => 'float8_regr_syy', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_regr_syy' },
{ oid => '2809', descr => 'aggregate final function',
proname => 'float8_regr_sxy', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_regr_sxy' },
{ oid => '2810', descr => 'aggregate final function',
proname => 'float8_regr_avgx', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_regr_avgx' },
{ oid => '2811', descr => 'aggregate final function',
proname => 'float8_regr_avgy', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_regr_avgy' },
{ oid => '2812', descr => 'aggregate final function',
proname => 'float8_regr_r2', prorettype => 'float8', proargtypes => '_float8',
prosrc => 'float8_regr_r2' },
{ oid => '2813', descr => 'aggregate final function',
proname => 'float8_regr_slope', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_regr_slope' },
{ oid => '2814', descr => 'aggregate final function',
proname => 'float8_regr_intercept', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_regr_intercept' },
{ oid => '2815', descr => 'aggregate final function',
proname => 'float8_covar_pop', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_covar_pop' },
{ oid => '2816', descr => 'aggregate final function',
proname => 'float8_covar_samp', prorettype => 'float8',
proargtypes => '_float8', prosrc => 'float8_covar_samp' },
{ oid => '2817', descr => 'aggregate final function',
proname => 'float8_corr', prorettype => 'float8', proargtypes => '_float8',
prosrc => 'float8_corr' },
{ oid => '3535', descr => 'aggregate transition function',
proname => 'string_agg_transfn', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal text text', prosrc => 'string_agg_transfn' },
{ oid => '6299', descr => 'aggregate combine function',
proname => 'string_agg_combine', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'string_agg_combine' },
{ oid => '6300', descr => 'aggregate serial function',
proname => 'string_agg_serialize', prorettype => 'bytea',
proargtypes => 'internal', prosrc => 'string_agg_serialize' },
{ oid => '6301', descr => 'aggregate deserial function',
proname => 'string_agg_deserialize', prorettype => 'internal',
proargtypes => 'bytea internal', prosrc => 'string_agg_deserialize' },
{ oid => '3536', descr => 'aggregate final function',
proname => 'string_agg_finalfn', proisstrict => 'f', prorettype => 'text',
proargtypes => 'internal', prosrc => 'string_agg_finalfn' },
{ oid => '3538', descr => 'concatenate aggregate input into a string',
proname => 'string_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'text', proargtypes => 'text text',
proargnames => '{value,delimiter}', prosrc => 'aggregate_dummy' },
{ oid => '3543', descr => 'aggregate transition function',
proname => 'bytea_string_agg_transfn', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal bytea bytea',
prosrc => 'bytea_string_agg_transfn' },
{ oid => '3544', descr => 'aggregate final function',
proname => 'bytea_string_agg_finalfn', proisstrict => 'f',
prorettype => 'bytea', proargtypes => 'internal',
prosrc => 'bytea_string_agg_finalfn' },
{ oid => '3545', descr => 'concatenate aggregate input into a bytea',
proname => 'string_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'bytea', proargtypes => 'bytea bytea',
proargnames => '{value,delimiter}', prosrc => 'aggregate_dummy' },
# To ASCII conversion
{ oid => '1845', descr => 'encode text from DB encoding to ASCII text',
proname => 'to_ascii', prorettype => 'text', proargtypes => 'text',
prosrc => 'to_ascii_default' },
{ oid => '1846', descr => 'encode text from encoding to ASCII text',
proname => 'to_ascii', prorettype => 'text', proargtypes => 'text int4',
prosrc => 'to_ascii_enc' },
{ oid => '1847', descr => 'encode text from encoding to ASCII text',
proname => 'to_ascii', prorettype => 'text', proargtypes => 'text name',
prosrc => 'to_ascii_encname' },
{ oid => '1848',
proname => 'interval_pl_time', prolang => 'sql', prorettype => 'time',
proargtypes => 'interval time', prosrc => 'see system_functions.sql' },
{ oid => '1850',
proname => 'int28eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int8', prosrc => 'int28eq' },
{ oid => '1851',
proname => 'int28ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int8', prosrc => 'int28ne' },
{ oid => '1852',
proname => 'int28lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int8', prosrc => 'int28lt' },
{ oid => '1853',
proname => 'int28gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int8', prosrc => 'int28gt' },
{ oid => '1854',
proname => 'int28le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int8', prosrc => 'int28le' },
{ oid => '1855',
proname => 'int28ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int2 int8', prosrc => 'int28ge' },
{ oid => '1856',
proname => 'int82eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int2', prosrc => 'int82eq' },
{ oid => '1857',
proname => 'int82ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int2', prosrc => 'int82ne' },
{ oid => '1858',
proname => 'int82lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int2', prosrc => 'int82lt' },
{ oid => '1859',
proname => 'int82gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int2', prosrc => 'int82gt' },
{ oid => '1860',
proname => 'int82le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int2', prosrc => 'int82le' },
{ oid => '1861',
proname => 'int82ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int8 int2', prosrc => 'int82ge' },
{ oid => '1892',
proname => 'int2and', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2and' },
{ oid => '1893',
proname => 'int2or', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2or' },
{ oid => '1894',
proname => 'int2xor', prorettype => 'int2', proargtypes => 'int2 int2',
prosrc => 'int2xor' },
{ oid => '1895',
proname => 'int2not', prorettype => 'int2', proargtypes => 'int2',
prosrc => 'int2not' },
{ oid => '1896',
proname => 'int2shl', prorettype => 'int2', proargtypes => 'int2 int4',
prosrc => 'int2shl' },
{ oid => '1897',
proname => 'int2shr', prorettype => 'int2', proargtypes => 'int2 int4',
prosrc => 'int2shr' },
{ oid => '1898',
proname => 'int4and', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4and' },
{ oid => '1899',
proname => 'int4or', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4or' },
{ oid => '1900',
proname => 'int4xor', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4xor' },
{ oid => '1901',
proname => 'int4not', prorettype => 'int4', proargtypes => 'int4',
prosrc => 'int4not' },
{ oid => '1902',
proname => 'int4shl', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4shl' },
{ oid => '1903',
proname => 'int4shr', prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'int4shr' },
{ oid => '1904',
proname => 'int8and', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8and' },
{ oid => '1905',
proname => 'int8or', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8or' },
{ oid => '1906',
proname => 'int8xor', prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'int8xor' },
{ oid => '1907',
proname => 'int8not', prorettype => 'int8', proargtypes => 'int8',
prosrc => 'int8not' },
{ oid => '1908',
proname => 'int8shl', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int8shl' },
{ oid => '1909',
proname => 'int8shr', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int8shr' },
{ oid => '1910',
proname => 'int8up', prorettype => 'int8', proargtypes => 'int8',
prosrc => 'int8up' },
{ oid => '1911',
proname => 'int2up', prorettype => 'int2', proargtypes => 'int2',
prosrc => 'int2up' },
{ oid => '1912',
proname => 'int4up', prorettype => 'int4', proargtypes => 'int4',
prosrc => 'int4up' },
{ oid => '1913',
proname => 'float4up', prorettype => 'float4', proargtypes => 'float4',
prosrc => 'float4up' },
{ oid => '1914',
proname => 'float8up', prorettype => 'float8', proargtypes => 'float8',
prosrc => 'float8up' },
{ oid => '1915',
proname => 'numeric_uplus', prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'numeric_uplus' },
{ oid => '1922', descr => 'user privilege on relation by username, rel name',
proname => 'has_table_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text', prosrc => 'has_table_privilege_name_name' },
{ oid => '1923', descr => 'user privilege on relation by username, rel oid',
proname => 'has_table_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'has_table_privilege_name_id' },
{ oid => '1924', descr => 'user privilege on relation by user oid, rel name',
proname => 'has_table_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_table_privilege_id_name' },
{ oid => '1925', descr => 'user privilege on relation by user oid, rel oid',
proname => 'has_table_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'has_table_privilege_id_id' },
{ oid => '1926', descr => 'current user privilege on relation by rel name',
proname => 'has_table_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'has_table_privilege_name' },
{ oid => '1927', descr => 'current user privilege on relation by rel oid',
proname => 'has_table_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'has_table_privilege_id' },
{ oid => '2181', descr => 'user privilege on sequence by username, seq name',
proname => 'has_sequence_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text',
prosrc => 'has_sequence_privilege_name_name' },
{ oid => '2182', descr => 'user privilege on sequence by username, seq oid',
proname => 'has_sequence_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'has_sequence_privilege_name_id' },
{ oid => '2183', descr => 'user privilege on sequence by user oid, seq name',
proname => 'has_sequence_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_sequence_privilege_id_name' },
{ oid => '2184', descr => 'user privilege on sequence by user oid, seq oid',
proname => 'has_sequence_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'has_sequence_privilege_id_id' },
{ oid => '2185', descr => 'current user privilege on sequence by seq name',
proname => 'has_sequence_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'has_sequence_privilege_name' },
{ oid => '2186', descr => 'current user privilege on sequence by seq oid',
proname => 'has_sequence_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'has_sequence_privilege_id' },
{ oid => '3012',
descr => 'user privilege on column by username, rel name, col name',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text text',
prosrc => 'has_column_privilege_name_name_name' },
{ oid => '3013',
descr => 'user privilege on column by username, rel name, col attnum',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text int2 text',
prosrc => 'has_column_privilege_name_name_attnum' },
{ oid => '3014',
descr => 'user privilege on column by username, rel oid, col name',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text text',
prosrc => 'has_column_privilege_name_id_name' },
{ oid => '3015',
descr => 'user privilege on column by username, rel oid, col attnum',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid int2 text',
prosrc => 'has_column_privilege_name_id_attnum' },
{ oid => '3016',
descr => 'user privilege on column by user oid, rel name, col name',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text text',
prosrc => 'has_column_privilege_id_name_name' },
{ oid => '3017',
descr => 'user privilege on column by user oid, rel name, col attnum',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text int2 text',
prosrc => 'has_column_privilege_id_name_attnum' },
{ oid => '3018',
descr => 'user privilege on column by user oid, rel oid, col name',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text text',
prosrc => 'has_column_privilege_id_id_name' },
{ oid => '3019',
descr => 'user privilege on column by user oid, rel oid, col attnum',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid int2 text',
prosrc => 'has_column_privilege_id_id_attnum' },
{ oid => '3020',
descr => 'current user privilege on column by rel name, col name',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text text', prosrc => 'has_column_privilege_name_name' },
{ oid => '3021',
descr => 'current user privilege on column by rel name, col attnum',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text int2 text',
prosrc => 'has_column_privilege_name_attnum' },
{ oid => '3022',
descr => 'current user privilege on column by rel oid, col name',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_column_privilege_id_name' },
{ oid => '3023',
descr => 'current user privilege on column by rel oid, col attnum',
proname => 'has_column_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid int2 text', prosrc => 'has_column_privilege_id_attnum' },
{ oid => '3024',
descr => 'user privilege on any column by username, rel name',
proname => 'has_any_column_privilege', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'name text text',
prosrc => 'has_any_column_privilege_name_name' },
{ oid => '3025', descr => 'user privilege on any column by username, rel oid',
proname => 'has_any_column_privilege', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'name oid text',
prosrc => 'has_any_column_privilege_name_id' },
{ oid => '3026',
descr => 'user privilege on any column by user oid, rel name',
proname => 'has_any_column_privilege', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid text text',
prosrc => 'has_any_column_privilege_id_name' },
{ oid => '3027', descr => 'user privilege on any column by user oid, rel oid',
proname => 'has_any_column_privilege', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid oid text',
prosrc => 'has_any_column_privilege_id_id' },
{ oid => '3028', descr => 'current user privilege on any column by rel name',
proname => 'has_any_column_privilege', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'text text',
prosrc => 'has_any_column_privilege_name' },
{ oid => '3029', descr => 'current user privilege on any column by rel oid',
proname => 'has_any_column_privilege', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid text',
prosrc => 'has_any_column_privilege_id' },
{ oid => '3355', descr => 'I/O',
proname => 'pg_ndistinct_in', prorettype => 'pg_ndistinct',
proargtypes => 'cstring', prosrc => 'pg_ndistinct_in' },
{ oid => '3356', descr => 'I/O',
proname => 'pg_ndistinct_out', prorettype => 'cstring',
proargtypes => 'pg_ndistinct', prosrc => 'pg_ndistinct_out' },
{ oid => '3357', descr => 'I/O',
proname => 'pg_ndistinct_recv', provolatile => 's',
prorettype => 'pg_ndistinct', proargtypes => 'internal',
prosrc => 'pg_ndistinct_recv' },
{ oid => '3358', descr => 'I/O',
proname => 'pg_ndistinct_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'pg_ndistinct', prosrc => 'pg_ndistinct_send' },
{ oid => '3404', descr => 'I/O',
proname => 'pg_dependencies_in', prorettype => 'pg_dependencies',
proargtypes => 'cstring', prosrc => 'pg_dependencies_in' },
{ oid => '3405', descr => 'I/O',
proname => 'pg_dependencies_out', prorettype => 'cstring',
proargtypes => 'pg_dependencies', prosrc => 'pg_dependencies_out' },
{ oid => '3406', descr => 'I/O',
proname => 'pg_dependencies_recv', provolatile => 's',
prorettype => 'pg_dependencies', proargtypes => 'internal',
prosrc => 'pg_dependencies_recv' },
{ oid => '3407', descr => 'I/O',
proname => 'pg_dependencies_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'pg_dependencies', prosrc => 'pg_dependencies_send' },
{ oid => '5018', descr => 'I/O',
proname => 'pg_mcv_list_in', prorettype => 'pg_mcv_list',
proargtypes => 'cstring', prosrc => 'pg_mcv_list_in' },
{ oid => '5019', descr => 'I/O',
proname => 'pg_mcv_list_out', prorettype => 'cstring',
proargtypes => 'pg_mcv_list', prosrc => 'pg_mcv_list_out' },
{ oid => '5020', descr => 'I/O',
proname => 'pg_mcv_list_recv', provolatile => 's',
prorettype => 'pg_mcv_list', proargtypes => 'internal',
prosrc => 'pg_mcv_list_recv' },
{ oid => '5021', descr => 'I/O',
proname => 'pg_mcv_list_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'pg_mcv_list', prosrc => 'pg_mcv_list_send' },
{ oid => '3427', descr => 'details about MCV list items',
proname => 'pg_mcv_list_items', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => 'pg_mcv_list',
proallargtypes => '{pg_mcv_list,int4,_text,_bool,float8,float8}',
proargmodes => '{i,o,o,o,o,o}',
proargnames => '{mcv_list,index,values,nulls,frequency,base_frequency}',
prosrc => 'pg_stats_ext_mcvlist_items' },
{ oid => '1928', descr => 'statistics: number of scans done for table/index',
proname => 'pg_stat_get_numscans', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_numscans' },
{ oid => '6310', descr => 'statistics: time of the last scan for table/index',
proname => 'pg_stat_get_lastscan', provolatile => 's', proparallel => 'r',
prorettype => 'timestamptz', proargtypes => 'oid',
prosrc => 'pg_stat_get_lastscan' },
{ oid => '1929', descr => 'statistics: number of tuples read by seqscan',
proname => 'pg_stat_get_tuples_returned', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_tuples_returned' },
{ oid => '1930', descr => 'statistics: number of tuples fetched by idxscan',
proname => 'pg_stat_get_tuples_fetched', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_tuples_fetched' },
{ oid => '1931', descr => 'statistics: number of tuples inserted',
proname => 'pg_stat_get_tuples_inserted', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_tuples_inserted' },
{ oid => '1932', descr => 'statistics: number of tuples updated',
proname => 'pg_stat_get_tuples_updated', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_tuples_updated' },
{ oid => '1933', descr => 'statistics: number of tuples deleted',
proname => 'pg_stat_get_tuples_deleted', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_tuples_deleted' },
{ oid => '1972', descr => 'statistics: number of tuples hot updated',
proname => 'pg_stat_get_tuples_hot_updated', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_tuples_hot_updated' },
{ oid => '6217',
descr => 'statistics: number of tuples updated onto a new page',
proname => 'pg_stat_get_tuples_newpage_updated', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_tuples_newpage_updated' },
{ oid => '2878', descr => 'statistics: number of live tuples',
proname => 'pg_stat_get_live_tuples', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_live_tuples' },
{ oid => '2879', descr => 'statistics: number of dead tuples',
proname => 'pg_stat_get_dead_tuples', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_dead_tuples' },
{ oid => '3177',
descr => 'statistics: number of tuples changed since last analyze',
proname => 'pg_stat_get_mod_since_analyze', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_mod_since_analyze' },
{ oid => '5053',
descr => 'statistics: number of tuples inserted since last vacuum',
proname => 'pg_stat_get_ins_since_vacuum', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_ins_since_vacuum' },
{ oid => '1934', descr => 'statistics: number of blocks fetched',
proname => 'pg_stat_get_blocks_fetched', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_blocks_fetched' },
{ oid => '1935', descr => 'statistics: number of blocks found in cache',
proname => 'pg_stat_get_blocks_hit', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_blocks_hit' },
{ oid => '2781', descr => 'statistics: last manual vacuum time for a table',
proname => 'pg_stat_get_last_vacuum_time', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'oid',
prosrc => 'pg_stat_get_last_vacuum_time' },
{ oid => '2782', descr => 'statistics: last auto vacuum time for a table',
proname => 'pg_stat_get_last_autovacuum_time', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'oid',
prosrc => 'pg_stat_get_last_autovacuum_time' },
{ oid => '2783', descr => 'statistics: last manual analyze time for a table',
proname => 'pg_stat_get_last_analyze_time', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'oid',
prosrc => 'pg_stat_get_last_analyze_time' },
{ oid => '2784', descr => 'statistics: last auto analyze time for a table',
proname => 'pg_stat_get_last_autoanalyze_time', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'oid',
prosrc => 'pg_stat_get_last_autoanalyze_time' },
{ oid => '3054', descr => 'statistics: number of manual vacuums for a table',
proname => 'pg_stat_get_vacuum_count', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_vacuum_count' },
{ oid => '3055', descr => 'statistics: number of auto vacuums for a table',
proname => 'pg_stat_get_autovacuum_count', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_autovacuum_count' },
{ oid => '3056', descr => 'statistics: number of manual analyzes for a table',
proname => 'pg_stat_get_analyze_count', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_analyze_count' },
{ oid => '3057', descr => 'statistics: number of auto analyzes for a table',
proname => 'pg_stat_get_autoanalyze_count', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_autoanalyze_count' },
{ oid => '1936', descr => 'statistics: currently active backend IDs',
proname => 'pg_stat_get_backend_idset', prorows => '100', proretset => 't',
provolatile => 's', proparallel => 'r', prorettype => 'int4',
proargtypes => '', prosrc => 'pg_stat_get_backend_idset' },
{ oid => '2022',
descr => 'statistics: information about currently active backends',
proname => 'pg_stat_get_activity', prorows => '100', proisstrict => 'f',
proretset => 't', provolatile => 's', proparallel => 'r',
prorettype => 'record', proargtypes => 'int4',
proallargtypes => '{int4,oid,int4,oid,text,text,text,text,text,timestamptz,timestamptz,timestamptz,timestamptz,inet,text,int4,xid,xid,text,bool,text,text,int4,text,numeric,text,bool,text,bool,bool,int4,int8}',
proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{pid,datid,pid,usesysid,application_name,state,query,wait_event_type,wait_event,xact_start,query_start,backend_start,state_change,client_addr,client_hostname,client_port,backend_xid,backend_xmin,backend_type,ssl,sslversion,sslcipher,sslbits,ssl_client_dn,ssl_client_serial,ssl_issuer_dn,gss_auth,gss_princ,gss_enc,gss_delegation,leader_pid,query_id}',
prosrc => 'pg_stat_get_activity' },
{ oid => '8403', descr => 'describe wait events',
proname => 'pg_get_wait_events', procost => '10', prorows => '250',
proretset => 't', provolatile => 'v', prorettype => 'record',
proargtypes => '', proallargtypes => '{text,text,text}',
proargmodes => '{o,o,o}', proargnames => '{type,name,description}',
prosrc => 'pg_get_wait_events' },
{ oid => '3318',
descr => 'statistics: information about progress of backends running maintenance command',
proname => 'pg_stat_get_progress_info', prorows => '100', proretset => 't',
provolatile => 's', proparallel => 'r', prorettype => 'record',
proargtypes => 'text',
proallargtypes => '{text,int4,oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8}',
proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{cmdtype,pid,datid,relid,param1,param2,param3,param4,param5,param6,param7,param8,param9,param10,param11,param12,param13,param14,param15,param16,param17,param18,param19,param20}',
prosrc => 'pg_stat_get_progress_info' },
{ oid => '3099',
descr => 'statistics: information about currently active replication',
proname => 'pg_stat_get_wal_senders', prorows => '10', proisstrict => 'f',
proretset => 't', provolatile => 's', proparallel => 'r',
prorettype => 'record', proargtypes => '',
proallargtypes => '{int4,text,pg_lsn,pg_lsn,pg_lsn,pg_lsn,interval,interval,interval,int4,text,timestamptz}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{pid,state,sent_lsn,write_lsn,flush_lsn,replay_lsn,write_lag,flush_lag,replay_lag,sync_priority,sync_state,reply_time}',
prosrc => 'pg_stat_get_wal_senders' },
{ oid => '3317', descr => 'statistics: information about WAL receiver',
proname => 'pg_stat_get_wal_receiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
proallargtypes => '{int4,text,pg_lsn,int4,pg_lsn,pg_lsn,int4,timestamptz,timestamptz,pg_lsn,timestamptz,text,text,int4,text}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{pid,status,receive_start_lsn,receive_start_tli,written_lsn,flushed_lsn,received_tli,last_msg_send_time,last_msg_receipt_time,latest_end_lsn,latest_end_time,slot_name,sender_host,sender_port,conninfo}',
prosrc => 'pg_stat_get_wal_receiver' },
{ oid => '6169', descr => 'statistics: information about replication slot',
proname => 'pg_stat_get_replication_slot', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => 'text',
proallargtypes => '{text,text,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
proargmodes => '{i,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{slot_name,slot_name,spill_txns,spill_count,spill_bytes,stream_txns,stream_count,stream_bytes,total_txns,total_bytes,stats_reset}',
prosrc => 'pg_stat_get_replication_slot' },
{ oid => '6230', descr => 'statistics: check if a stats object exists',
proname => 'pg_stat_have_stats', provolatile => 'v', proparallel => 'r',
prorettype => 'bool', proargtypes => 'text oid oid',
prosrc => 'pg_stat_have_stats' },
{ oid => '6231', descr => 'statistics: information about subscription stats',
proname => 'pg_stat_get_subscription_stats', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
proallargtypes => '{oid,oid,int8,int8,timestamptz}',
proargmodes => '{i,o,o,o,o}',
proargnames => '{subid,subid,apply_error_count,sync_error_count,stats_reset}',
prosrc => 'pg_stat_get_subscription_stats' },
{ oid => '6118', descr => 'statistics: information about subscription',
proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
proretset => 't', provolatile => 's', proparallel => 'r',
prorettype => 'record', proargtypes => 'oid',
proallargtypes => '{oid,oid,oid,int4,int4,pg_lsn,timestamptz,timestamptz,pg_lsn,timestamptz,text}',
proargmodes => '{i,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{subid,subid,relid,pid,leader_pid,received_lsn,last_msg_send_time,last_msg_receipt_time,latest_end_lsn,latest_end_time,worker_type}',
prosrc => 'pg_stat_get_subscription' },
{ oid => '2026', descr => 'statistics: current backend PID',
proname => 'pg_backend_pid', provolatile => 's', proparallel => 'r',
prorettype => 'int4', proargtypes => '', prosrc => 'pg_backend_pid' },
{ oid => '1937', descr => 'statistics: PID of backend',
proname => 'pg_stat_get_backend_pid', provolatile => 's', proparallel => 'r',
prorettype => 'int4', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_pid' },
{ oid => '1938', descr => 'statistics: database ID of backend',
proname => 'pg_stat_get_backend_dbid', provolatile => 's', proparallel => 'r',
prorettype => 'oid', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_dbid' },
{ oid => '6107', descr => 'statistics: get subtransaction status of backend',
proname => 'pg_stat_get_backend_subxact', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => 'int4',
proallargtypes => '{int4,int4,bool}', proargmodes => '{i,o,o}',
proargnames => '{bid,subxact_count,subxact_overflowed}',
prosrc => 'pg_stat_get_backend_subxact' },
{ oid => '1939', descr => 'statistics: user ID of backend',
proname => 'pg_stat_get_backend_userid', provolatile => 's',
proparallel => 'r', prorettype => 'oid', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_userid' },
{ oid => '1940', descr => 'statistics: current query of backend',
proname => 'pg_stat_get_backend_activity', provolatile => 's',
proparallel => 'r', prorettype => 'text', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_activity' },
{ oid => '2788',
descr => 'statistics: wait event type on which backend is currently waiting',
proname => 'pg_stat_get_backend_wait_event_type', provolatile => 's',
proparallel => 'r', prorettype => 'text', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_wait_event_type' },
{ oid => '2853',
descr => 'statistics: wait event on which backend is currently waiting',
proname => 'pg_stat_get_backend_wait_event', provolatile => 's',
proparallel => 'r', prorettype => 'text', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_wait_event' },
{ oid => '2094',
descr => 'statistics: start time for current query of backend',
proname => 'pg_stat_get_backend_activity_start', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_activity_start' },
{ oid => '2857',
descr => 'statistics: start time for backend\'s current transaction',
proname => 'pg_stat_get_backend_xact_start', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_xact_start' },
{ oid => '1391',
descr => 'statistics: start time for current backend session',
proname => 'pg_stat_get_backend_start', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_start' },
{ oid => '1392',
descr => 'statistics: address of client connected to backend',
proname => 'pg_stat_get_backend_client_addr', provolatile => 's',
proparallel => 'r', prorettype => 'inet', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_client_addr' },
{ oid => '1393',
descr => 'statistics: port number of client connected to backend',
proname => 'pg_stat_get_backend_client_port', provolatile => 's',
proparallel => 'r', prorettype => 'int4', proargtypes => 'int4',
prosrc => 'pg_stat_get_backend_client_port' },
{ oid => '1941', descr => 'statistics: number of backends in database',
proname => 'pg_stat_get_db_numbackends', provolatile => 's',
proparallel => 'r', prorettype => 'int4', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_numbackends' },
{ oid => '1942', descr => 'statistics: transactions committed',
proname => 'pg_stat_get_db_xact_commit', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_xact_commit' },
{ oid => '1943', descr => 'statistics: transactions rolled back',
proname => 'pg_stat_get_db_xact_rollback', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_xact_rollback' },
{ oid => '1944', descr => 'statistics: blocks fetched for database',
proname => 'pg_stat_get_db_blocks_fetched', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_blocks_fetched' },
{ oid => '1945', descr => 'statistics: blocks found in cache for database',
proname => 'pg_stat_get_db_blocks_hit', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_blocks_hit' },
{ oid => '2758', descr => 'statistics: tuples returned for database',
proname => 'pg_stat_get_db_tuples_returned', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_tuples_returned' },
{ oid => '2759', descr => 'statistics: tuples fetched for database',
proname => 'pg_stat_get_db_tuples_fetched', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_tuples_fetched' },
{ oid => '2760', descr => 'statistics: tuples inserted in database',
proname => 'pg_stat_get_db_tuples_inserted', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_tuples_inserted' },
{ oid => '2761', descr => 'statistics: tuples updated in database',
proname => 'pg_stat_get_db_tuples_updated', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_tuples_updated' },
{ oid => '2762', descr => 'statistics: tuples deleted in database',
proname => 'pg_stat_get_db_tuples_deleted', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_tuples_deleted' },
{ oid => '3065',
descr => 'statistics: recovery conflicts in database caused by drop tablespace',
proname => 'pg_stat_get_db_conflict_tablespace', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_conflict_tablespace' },
{ oid => '3066',
descr => 'statistics: recovery conflicts in database caused by relation lock',
proname => 'pg_stat_get_db_conflict_lock', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_conflict_lock' },
{ oid => '3067',
descr => 'statistics: recovery conflicts in database caused by snapshot expiry',
proname => 'pg_stat_get_db_conflict_snapshot', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_conflict_snapshot' },
{ oid => '6309',
descr => 'statistics: recovery conflicts in database caused by logical replication slot',
proname => 'pg_stat_get_db_conflict_logicalslot', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_conflict_logicalslot' },
{ oid => '3068',
descr => 'statistics: recovery conflicts in database caused by shared buffer pin',
proname => 'pg_stat_get_db_conflict_bufferpin', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_conflict_bufferpin' },
{ oid => '3069',
descr => 'statistics: recovery conflicts in database caused by buffer deadlock',
proname => 'pg_stat_get_db_conflict_startup_deadlock', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_conflict_startup_deadlock' },
{ oid => '3070', descr => 'statistics: recovery conflicts in database',
proname => 'pg_stat_get_db_conflict_all', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_conflict_all' },
{ oid => '3152', descr => 'statistics: deadlocks detected in database',
proname => 'pg_stat_get_db_deadlocks', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_deadlocks' },
{ oid => '3426',
descr => 'statistics: checksum failures detected in database',
proname => 'pg_stat_get_db_checksum_failures', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_checksum_failures' },
{ oid => '3428',
descr => 'statistics: when last checksum failure was detected in database',
proname => 'pg_stat_get_db_checksum_last_failure', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_checksum_last_failure' },
{ oid => '3074', descr => 'statistics: last reset for a database',
proname => 'pg_stat_get_db_stat_reset_time', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_stat_reset_time' },
{ oid => '3150', descr => 'statistics: number of temporary files written',
proname => 'pg_stat_get_db_temp_files', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_temp_files' },
{ oid => '3151',
descr => 'statistics: number of bytes in temporary files written',
proname => 'pg_stat_get_db_temp_bytes', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_temp_bytes' },
{ oid => '2844', descr => 'statistics: block read time, in milliseconds',
proname => 'pg_stat_get_db_blk_read_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_blk_read_time' },
{ oid => '2845', descr => 'statistics: block write time, in milliseconds',
proname => 'pg_stat_get_db_blk_write_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_blk_write_time' },
{ oid => '6185', descr => 'statistics: session time, in milliseconds',
proname => 'pg_stat_get_db_session_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_session_time' },
{ oid => '6186', descr => 'statistics: session active time, in milliseconds',
proname => 'pg_stat_get_db_active_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_active_time' },
{ oid => '6187',
descr => 'statistics: session idle in transaction time, in milliseconds',
proname => 'pg_stat_get_db_idle_in_transaction_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_idle_in_transaction_time' },
{ oid => '6188', descr => 'statistics: total number of sessions',
proname => 'pg_stat_get_db_sessions', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions' },
{ oid => '6189',
descr => 'statistics: number of sessions disconnected by the client closing the network connection',
proname => 'pg_stat_get_db_sessions_abandoned', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_abandoned' },
{ oid => '6190',
descr => 'statistics: number of sessions disconnected by fatal errors',
proname => 'pg_stat_get_db_sessions_fatal', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_fatal' },
{ oid => '6191',
descr => 'statistics: number of sessions killed by administrative action',
proname => 'pg_stat_get_db_sessions_killed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_killed' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
proallargtypes => '{int8,text,timestamptz,int8,text,timestamptz,timestamptz}',
proargmodes => '{o,o,o,o,o,o,o}',
proargnames => '{archived_count,last_archived_wal,last_archived_time,failed_count,last_failed_wal,last_failed_time,stats_reset}',
prosrc => 'pg_stat_get_archiver' },
{ oid => '2769',
descr => 'statistics: number of timed checkpoints started by the checkpointer',
proname => 'pg_stat_get_checkpointer_num_timed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_num_timed' },
{ oid => '2770',
descr => 'statistics: number of backend requested checkpoints started by the checkpointer',
proname => 'pg_stat_get_checkpointer_num_requested', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_num_requested' },
{ oid => '8743',
descr => 'statistics: number of timed restartpoints started by the checkpointer',
proname => 'pg_stat_get_checkpointer_restartpoints_timed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_restartpoints_timed' },
{ oid => '8744',
descr => 'statistics: number of backend requested restartpoints started by the checkpointer',
proname => 'pg_stat_get_checkpointer_restartpoints_requested', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_restartpoints_requested' },
{ oid => '8745',
descr => 'statistics: number of backend performed restartpoints',
proname => 'pg_stat_get_checkpointer_restartpoints_performed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_restartpoints_performed' },
{ oid => '2771',
descr => 'statistics: number of buffers written during checkpoints and restartpoints',
proname => 'pg_stat_get_checkpointer_buffers_written', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_buffers_written' },
{ oid => '8206', descr => 'statistics: last reset for the checkpointer',
proname => 'pg_stat_get_checkpointer_stat_reset_time', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_stat_reset_time' },
{ oid => '2772',
descr => 'statistics: number of buffers written by the bgwriter for cleaning dirty buffers',
proname => 'pg_stat_get_bgwriter_buf_written_clean', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => '',
prosrc => 'pg_stat_get_bgwriter_buf_written_clean' },
{ oid => '2773',
descr => 'statistics: number of times the bgwriter stopped processing when it had written too many buffers while cleaning',
proname => 'pg_stat_get_bgwriter_maxwritten_clean', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => '',
prosrc => 'pg_stat_get_bgwriter_maxwritten_clean' },
{ oid => '3075', descr => 'statistics: last reset for the bgwriter',
proname => 'pg_stat_get_bgwriter_stat_reset_time', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => '',
prosrc => 'pg_stat_get_bgwriter_stat_reset_time' },
{ oid => '3160',
descr => 'statistics: checkpoint/restartpoint time spent writing buffers to disk, in milliseconds',
proname => 'pg_stat_get_checkpointer_write_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_write_time' },
{ oid => '3161',
descr => 'statistics: checkpoint/restartpoint time spent synchronizing buffers to disk, in milliseconds',
proname => 'pg_stat_get_checkpointer_sync_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => '',
prosrc => 'pg_stat_get_checkpointer_sync_time' },
{ oid => '2859', descr => 'statistics: number of buffer allocations',
proname => 'pg_stat_get_buf_alloc', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => '', prosrc => 'pg_stat_get_buf_alloc' },
{ oid => '6214', descr => 'statistics: per backend type IO statistics',
proname => 'pg_stat_get_io', prorows => '30', proretset => 't',
provolatile => 'v', proparallel => 'r', prorettype => 'record',
proargtypes => '',
proallargtypes => '{text,text,text,int8,float8,int8,float8,int8,float8,int8,float8,int8,int8,int8,int8,int8,float8,timestamptz}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{backend_type,object,context,reads,read_time,writes,write_time,writebacks,writeback_time,extends,extend_time,op_bytes,hits,evictions,reuses,fsyncs,fsync_time,stats_reset}',
prosrc => 'pg_stat_get_io' },
{ oid => '1136', descr => 'statistics: information about WAL activity',
proname => 'pg_stat_get_wal', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
proallargtypes => '{int8,int8,numeric,int8,int8,int8,float8,float8,timestamptz}',
proargmodes => '{o,o,o,o,o,o,o,o,o}',
proargnames => '{wal_records,wal_fpi,wal_bytes,wal_buffers_full,wal_write,wal_sync,wal_write_time,wal_sync_time,stats_reset}',
prosrc => 'pg_stat_get_wal' },
{ oid => '6248', descr => 'statistics: information about WAL prefetching',
proname => 'pg_stat_get_recovery_prefetch', prorows => '1', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{timestamptz,int8,int8,int8,int8,int8,int8,int4,int4,int4}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o}',
proargnames => '{stats_reset,prefetch,hit,skip_init,skip_new,skip_fpw,skip_rep,wal_distance,block_distance,io_depth}',
prosrc => 'pg_stat_get_recovery_prefetch' },
{ oid => '2306', descr => 'statistics: information about SLRU caches',
proname => 'pg_stat_get_slru', prorows => '100', proisstrict => 'f',
proretset => 't', provolatile => 's', proparallel => 'r',
prorettype => 'record', proargtypes => '',
proallargtypes => '{text,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
proargmodes => '{o,o,o,o,o,o,o,o,o}',
proargnames => '{name,blks_zeroed,blks_hit,blks_read,blks_written,blks_exists,flushes,truncates,stats_reset}',
prosrc => 'pg_stat_get_slru' },
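# Illustrative usage only (not catalog data): pg_stat_get_slru() above is
# set-returning (proretset => 't'), so it can be queried like a table; the
# pg_stat_slru system view exposes the same columns. A minimal example:
#   SELECT name, blks_hit, blks_read, flushes FROM pg_stat_get_slru();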
{ oid => '2978', descr => 'statistics: number of function calls',
proname => 'pg_stat_get_function_calls', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_function_calls' },
{ oid => '2979',
descr => 'statistics: total execution time of function, in milliseconds',
proname => 'pg_stat_get_function_total_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_function_total_time' },
{ oid => '2980',
descr => 'statistics: self execution time of function, in milliseconds',
proname => 'pg_stat_get_function_self_time', provolatile => 's',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_function_self_time' },
{ oid => '3037',
descr => 'statistics: number of scans done for table/index in current transaction',
proname => 'pg_stat_get_xact_numscans', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_numscans' },
{ oid => '3038',
descr => 'statistics: number of tuples read by seqscan in current transaction',
proname => 'pg_stat_get_xact_tuples_returned', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_tuples_returned' },
{ oid => '3039',
descr => 'statistics: number of tuples fetched by idxscan in current transaction',
proname => 'pg_stat_get_xact_tuples_fetched', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_tuples_fetched' },
{ oid => '3040',
descr => 'statistics: number of tuples inserted in current transaction',
proname => 'pg_stat_get_xact_tuples_inserted', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_tuples_inserted' },
{ oid => '3041',
descr => 'statistics: number of tuples updated in current transaction',
proname => 'pg_stat_get_xact_tuples_updated', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_tuples_updated' },
{ oid => '3042',
descr => 'statistics: number of tuples deleted in current transaction',
proname => 'pg_stat_get_xact_tuples_deleted', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_tuples_deleted' },
{ oid => '3043',
descr => 'statistics: number of tuples hot updated in current transaction',
proname => 'pg_stat_get_xact_tuples_hot_updated', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_tuples_hot_updated' },
{ oid => '6218',
descr => 'statistics: number of tuples updated onto a new page in current transaction',
proname => 'pg_stat_get_xact_tuples_newpage_updated', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_tuples_newpage_updated' },
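# Illustrative usage only (not catalog data): the pg_stat_get_xact_* counters
# above take a relation OID; "my_table" below is a hypothetical table name,
# and regclass coerces implicitly to oid. A minimal example:
#   SELECT pg_stat_get_xact_tuples_inserted('my_table'::regclass);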
{ oid => '3044',
descr => 'statistics: number of blocks fetched in current transaction',
proname => 'pg_stat_get_xact_blocks_fetched', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_blocks_fetched' },
{ oid => '3045',
descr => 'statistics: number of blocks found in cache in current transaction',
proname => 'pg_stat_get_xact_blocks_hit', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_blocks_hit' },
{ oid => '3046',
descr => 'statistics: number of function calls in current transaction',
proname => 'pg_stat_get_xact_function_calls', provolatile => 'v',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_function_calls' },
{ oid => '3047',
descr => 'statistics: total execution time of function in current transaction, in milliseconds',
proname => 'pg_stat_get_xact_function_total_time', provolatile => 'v',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_function_total_time' },
{ oid => '3048',
descr => 'statistics: self execution time of function in current transaction, in milliseconds',
proname => 'pg_stat_get_xact_function_self_time', provolatile => 'v',
proparallel => 'r', prorettype => 'float8', proargtypes => 'oid',
prosrc => 'pg_stat_get_xact_function_self_time' },
{ oid => '3788',
descr => 'statistics: timestamp of the current statistics snapshot',
proname => 'pg_stat_get_snapshot_timestamp', provolatile => 's',
proparallel => 'r', prorettype => 'timestamptz', proargtypes => '',
prosrc => 'pg_stat_get_snapshot_timestamp' },
{ oid => '2230',
descr => 'statistics: discard current transaction\'s statistics snapshot',
proname => 'pg_stat_clear_snapshot', proisstrict => 'f', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => '',
prosrc => 'pg_stat_clear_snapshot' },
{ oid => '2137',
descr => 'statistics: force stats to be flushed after the next commit',
proname => 'pg_stat_force_next_flush', proisstrict => 'f', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => '',
prosrc => 'pg_stat_force_next_flush' },
{ oid => '2274',
descr => 'statistics: reset collected statistics for current database',
proname => 'pg_stat_reset', proisstrict => 'f', provolatile => 'v',
prorettype => 'void', proargtypes => '', prosrc => 'pg_stat_reset' },
{ oid => '3775',
descr => 'statistics: reset collected statistics shared across the cluster',
proname => 'pg_stat_reset_shared', proisstrict => 'f', provolatile => 'v',
prorettype => 'void', proargtypes => 'text',
prosrc => 'pg_stat_reset_shared' },
{ oid => '3776',
descr => 'statistics: reset collected statistics for a single table or index in the current database or shared across all databases in the cluster',
proname => 'pg_stat_reset_single_table_counters', provolatile => 'v',
prorettype => 'void', proargtypes => 'oid',
prosrc => 'pg_stat_reset_single_table_counters' },
{ oid => '3777',
descr => 'statistics: reset collected statistics for a single function in the current database',
proname => 'pg_stat_reset_single_function_counters', provolatile => 'v',
prorettype => 'void', proargtypes => 'oid',
prosrc => 'pg_stat_reset_single_function_counters' },
{ oid => '2307',
descr => 'statistics: reset collected statistics for a single SLRU',
proname => 'pg_stat_reset_slru', proisstrict => 'f', provolatile => 'v',
prorettype => 'void', proargtypes => 'text', proargnames => '{target}',
prosrc => 'pg_stat_reset_slru' },
{ oid => '6170',
descr => 'statistics: reset collected statistics for a single replication slot',
proname => 'pg_stat_reset_replication_slot', proisstrict => 'f',
provolatile => 'v', prorettype => 'void', proargtypes => 'text',
prosrc => 'pg_stat_reset_replication_slot' },
{ oid => '6232',
descr => 'statistics: reset collected statistics for a single subscription',
proname => 'pg_stat_reset_subscription_stats', proisstrict => 'f',
provolatile => 'v', prorettype => 'void', proargtypes => 'oid',
prosrc => 'pg_stat_reset_subscription_stats' },
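# Illustrative usage only (not catalog data): the reset functions above return
# void and are simply invoked in a SELECT; "my_table" below is a hypothetical
# table name. A minimal example:
#   SELECT pg_stat_reset();                                          -- current database
#   SELECT pg_stat_reset_single_table_counters('my_table'::regclass);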
{ oid => '3163', descr => 'current trigger depth',
proname => 'pg_trigger_depth', provolatile => 's', proparallel => 'r',
prorettype => 'int4', proargtypes => '', prosrc => 'pg_trigger_depth' },
{ oid => '3778', descr => 'tablespace location',
proname => 'pg_tablespace_location', provolatile => 's', prorettype => 'text',
proargtypes => 'oid', prosrc => 'pg_tablespace_location' },
{ oid => '1946',
descr => 'convert bytea value into some ascii-only text string',
proname => 'encode', prorettype => 'text', proargtypes => 'bytea text',
prosrc => 'binary_encode' },
{ oid => '1947',
descr => 'convert ascii-encoded text string into bytea value',
proname => 'decode', prorettype => 'bytea', proargtypes => 'text text',
prosrc => 'binary_decode' },
{ oid => '1948',
proname => 'byteaeq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bytea bytea', prosrc => 'byteaeq' },
{ oid => '1949',
proname => 'bytealt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bytea bytea', prosrc => 'bytealt' },
{ oid => '1950',
proname => 'byteale', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bytea bytea', prosrc => 'byteale' },
{ oid => '1951',
proname => 'byteagt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bytea bytea', prosrc => 'byteagt' },
{ oid => '1952',
proname => 'byteage', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bytea bytea', prosrc => 'byteage' },
{ oid => '1953',
proname => 'byteane', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bytea bytea', prosrc => 'byteane' },
{ oid => '1954', descr => 'less-equal-greater',
proname => 'byteacmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'bytea bytea', prosrc => 'byteacmp' },
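# Illustrative usage only (not catalog data): the bytea functions above back
# the <, <=, =, >=, >, <> operators and btree support for bytea. For example:
#   SELECT '\x01'::bytea < '\x02'::bytea;           -- uses bytealt, returns true
#   SELECT byteacmp('\x01'::bytea, '\x02'::bytea);  -- returns -1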
{ oid => '3331', descr => 'sort support',
proname => 'bytea_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'bytea_sortsupport' },
{ oid => '3917', descr => 'planner support for timestamp length coercion',
proname => 'timestamp_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'timestamp_support' },
{ oid => '3944', descr => 'planner support for time length coercion',
proname => 'time_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'time_support' },
{ oid => '1961', descr => 'adjust timestamp precision',
proname => 'timestamp', prosupport => 'timestamp_support',
prorettype => 'timestamp', proargtypes => 'timestamp int4',
prosrc => 'timestamp_scale' },
{ oid => '1965', descr => 'larger of two',
proname => 'oidlarger', prorettype => 'oid', proargtypes => 'oid oid',
prosrc => 'oidlarger' },
{ oid => '1966', descr => 'smaller of two',
proname => 'oidsmaller', prorettype => 'oid', proargtypes => 'oid oid',
prosrc => 'oidsmaller' },
{ oid => '1967', descr => 'adjust timestamptz precision',
proname => 'timestamptz', prosupport => 'timestamp_support',
prorettype => 'timestamptz', proargtypes => 'timestamptz int4',
prosrc => 'timestamptz_scale' },
{ oid => '1968', descr => 'adjust time precision',
proname => 'time', prosupport => 'time_support', prorettype => 'time',
proargtypes => 'time int4', prosrc => 'time_scale' },
{ oid => '1969', descr => 'adjust time with time zone precision',
proname => 'timetz', prosupport => 'time_support', prorettype => 'timetz',
proargtypes => 'timetz int4', prosrc => 'timetz_scale' },
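# Illustrative usage only (not catalog data): the "adjust precision" functions
# above implement length coercion, i.e. casts to a typmod-qualified type.
# Examples (results rounded to the requested precision):
#   SELECT timestamp '2024-01-01 12:34:56.789'::timestamp(1);  -- calls timestamp_scale
#   SELECT time '12:34:56.789'::time(0);                       -- calls time_scale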
{ oid => '2003',
proname => 'textanycat', prolang => 'sql', provolatile => 's',
prorettype => 'text', proargtypes => 'text anynonarray',
prosrc => 'select $1 operator(pg_catalog.||) $2::pg_catalog.text' },
{ oid => '2004',
proname => 'anytextcat', prolang => 'sql', provolatile => 's',
prorettype => 'text', proargtypes => 'anynonarray text',
prosrc => 'select $1::pg_catalog.text operator(pg_catalog.||) $2' },
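# Illustrative usage (editorial sketch, not catalog data): the sample operands
# below are assumptions; an expression mixing a non-array value with text is
# expected to resolve to the anytextcat wrapper above.
#   SELECT 42 || ' items';   -- anytextcat -> '42 items'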
{ oid => '2005',
proname => 'bytealike', prosupport => 'textlike_support',
prorettype => 'bool', proargtypes => 'bytea bytea', prosrc => 'bytealike' },
{ oid => '2006',
proname => 'byteanlike', prorettype => 'bool', proargtypes => 'bytea bytea',
prosrc => 'byteanlike' },
{ oid => '2007', descr => 'matches LIKE expression',
proname => 'like', prosupport => 'textlike_support', prorettype => 'bool',
proargtypes => 'bytea bytea', prosrc => 'bytealike' },
{ oid => '2008', descr => 'does not match LIKE expression',
proname => 'notlike', prorettype => 'bool', proargtypes => 'bytea bytea',
prosrc => 'byteanlike' },
{ oid => '2009', descr => 'convert LIKE pattern to use backslash escapes',
proname => 'like_escape', prorettype => 'bytea', proargtypes => 'bytea bytea',
prosrc => 'like_escape_bytea' },
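# Illustrative usage (editorial sketch, not catalog data) for the bytea LIKE
# family above; the sample patterns are assumptions, not catalog content:
#   SELECT 'abc'::bytea LIKE 'a%';       -- expected to reach bytealike  -> true
#   SELECT 'abc'::bytea NOT LIKE '_b%';  -- expected to reach byteanlike -> false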
{ oid => '2010', descr => 'octet length',
proname => 'length', prorettype => 'int4', proargtypes => 'bytea',
prosrc => 'byteaoctetlen' },
{ oid => '2011',
proname => 'byteacat', prorettype => 'bytea', proargtypes => 'bytea bytea',
prosrc => 'byteacat' },
{ oid => '2012', descr => 'extract portion of string',
proname => 'substring', prorettype => 'bytea',
proargtypes => 'bytea int4 int4', prosrc => 'bytea_substr' },
{ oid => '2013', descr => 'extract portion of string',
proname => 'substring', prorettype => 'bytea', proargtypes => 'bytea int4',
prosrc => 'bytea_substr_no_len' },
{ oid => '2085', descr => 'extract portion of string',
proname => 'substr', prorettype => 'bytea', proargtypes => 'bytea int4 int4',
prosrc => 'bytea_substr' },
{ oid => '2086', descr => 'extract portion of string',
proname => 'substr', prorettype => 'bytea', proargtypes => 'bytea int4',
prosrc => 'bytea_substr_no_len' },
{ oid => '2014', descr => 'position of substring',
proname => 'position', prorettype => 'int4', proargtypes => 'bytea bytea',
prosrc => 'byteapos' },
{ oid => '2015', descr => 'trim selected bytes from both ends of string',
proname => 'btrim', prorettype => 'bytea', proargtypes => 'bytea bytea',
prosrc => 'byteatrim' },
{ oid => '6195', descr => 'trim selected bytes from left end of string',
proname => 'ltrim', prorettype => 'bytea', proargtypes => 'bytea bytea',
prosrc => 'bytealtrim' },
{ oid => '6196', descr => 'trim selected bytes from right end of string',
proname => 'rtrim', prorettype => 'bytea', proargtypes => 'bytea bytea',
prosrc => 'byteartrim' },
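# Illustrative usage (editorial sketch, not catalog data) for the bytea
# substring/position/trim entries above; sample values are assumptions:
#   SELECT substring('\x1234567890'::bytea from 3 for 2);      -- bytea_substr -> \x5678
#   SELECT position('\x5678'::bytea in '\x1234567890'::bytea); -- byteapos     -> 3
#   SELECT btrim('\x1234567890'::bytea, '\x9012'::bytea);      -- byteatrim    -> \x345678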
{ oid => '2019', descr => 'convert timestamp with time zone to time',
proname => 'time', provolatile => 's', prorettype => 'time',
proargtypes => 'timestamptz', prosrc => 'timestamptz_time' },
{ oid => '2020', descr => 'truncate timestamp to specified units',
proname => 'date_trunc', prorettype => 'timestamp',
proargtypes => 'text timestamp', prosrc => 'timestamp_trunc' },
{ oid => '6177', descr => 'bin timestamp into specified interval',
proname => 'date_bin', prorettype => 'timestamp',
proargtypes => 'interval timestamp timestamp', prosrc => 'timestamp_bin' },
{ oid => '6178',
descr => 'bin timestamp with time zone into specified interval',
proname => 'date_bin', prorettype => 'timestamptz',
proargtypes => 'interval timestamptz timestamptz',
prosrc => 'timestamptz_bin' },
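# Illustrative usage (editorial sketch, not catalog data) for date_trunc and
# date_bin on timestamp; the sample timestamps are assumptions:
#   SELECT date_trunc('hour', timestamp '2001-02-16 20:38:40');
#     -- timestamp_trunc -> 2001-02-16 20:00:00
#   SELECT date_bin('15 minutes', timestamp '2020-02-11 15:44:17',
#                   timestamp '2001-01-01');
#     -- timestamp_bin   -> 2020-02-11 15:30:00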
{ oid => '2021', descr => 'extract field from timestamp',
proname => 'date_part', prorettype => 'float8',
proargtypes => 'text timestamp', prosrc => 'timestamp_part' },
{ oid => '6202', descr => 'extract field from timestamp',
proname => 'extract', prorettype => 'numeric',
proargtypes => 'text timestamp', prosrc => 'extract_timestamp' },
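# Illustrative usage (editorial sketch, not catalog data): the same field can
# be read through either entry above; only the declared return type differs:
#   SELECT date_part('hour', timestamp '2001-02-16 20:38:40'); -- timestamp_part    -> float8 20
#   SELECT extract(hour from timestamp '2001-02-16 20:38:40'); -- extract_timestamp -> numeric 20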
{ oid => '2024', descr => 'convert date to timestamp',
proname => 'timestamp', prorettype => 'timestamp', proargtypes => 'date',
prosrc => 'date_timestamp' },
{ oid => '2025', descr => 'convert date and time to timestamp',
proname => 'timestamp', prorettype => 'timestamp', proargtypes => 'date time',
prosrc => 'datetime_timestamp' },
{ oid => '2027', descr => 'convert timestamp with time zone to timestamp',
proname => 'timestamp', provolatile => 's', prorettype => 'timestamp',
proargtypes => 'timestamptz', prosrc => 'timestamptz_timestamp' },
{ oid => '2028', descr => 'convert timestamp to timestamp with time zone',
proname => 'timestamptz', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'timestamp', prosrc => 'timestamp_timestamptz' },
{ oid => '2029', descr => 'convert timestamp to date',
proname => 'date', prorettype => 'date', proargtypes => 'timestamp',
prosrc => 'timestamp_date' },
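# Illustrative usage (editorial sketch, not catalog data): the conversion
# entries above are expected to be reached through casts such as these;
# the sample values are assumptions:
#   SELECT CAST(timestamptz '2001-02-16 20:38:40+00' AS timestamp); -- timestamptz_timestamp
#   SELECT CAST(timestamp '2001-02-16 20:38:40' AS date);           -- timestamp_date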
{ oid => '2031',
proname => 'timestamp_mi', prorettype => 'interval',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_mi' },
{ oid => '2032',
proname => 'timestamp_pl_interval', prorettype => 'timestamp',
proargtypes => 'timestamp interval', prosrc => 'timestamp_pl_interval' },
{ oid => '2033',
proname => 'timestamp_mi_interval', prorettype => 'timestamp',
proargtypes => 'timestamp interval', prosrc => 'timestamp_mi_interval' },
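# Illustrative usage (editorial sketch, not catalog data) for the timestamp
# arithmetic entries above; sample operands are assumptions:
#   SELECT timestamp '2001-09-29 03:00' - timestamp '2001-09-27 12:00';
#     -- timestamp_mi          -> interval '1 day 15:00:00'
#   SELECT timestamp '2001-09-28 01:00' + interval '23 hours';
#     -- timestamp_pl_interval -> 2001-09-29 00:00:00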
{ oid => '2035', descr => 'smaller of two',
proname => 'timestamp_smaller', prorettype => 'timestamp',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_smaller' },
{ oid => '2036', descr => 'larger of two',
proname => 'timestamp_larger', prorettype => 'timestamp',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_larger' },
{ oid => '2037', descr => 'adjust time with time zone to new zone',
proname => 'timezone', provolatile => 's', prorettype => 'timetz',
proargtypes => 'text timetz', prosrc => 'timetz_zone' },
{ oid => '2038', descr => 'adjust time with time zone to new zone',
proname => 'timezone', prorettype => 'timetz',
proargtypes => 'interval timetz', prosrc => 'timetz_izone' },
{ oid => '9161', descr => 'adjust time to local time zone',
proname => 'timezone', provolatile => 's', prorettype => 'timetz',
proargtypes => 'timetz', prosrc => 'timetz_at_local' },
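# Illustrative usage (editorial sketch, not catalog data) for the timetz zone
# entries above; the AT LOCAL form is assumed from the presence of
# timetz_at_local:
#   SELECT timetz '10:00:00-05' AT TIME ZONE 'UTC';  -- timetz_zone -> 15:00:00+00
#   SELECT timetz '10:00:00-05' AT LOCAL;            -- timetz_at_local (session time zone)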
{ oid => '2039', descr => 'hash',
proname => 'timestamp_hash', prorettype => 'int4', proargtypes => 'timestamp',
prosrc => 'timestamp_hash' },
{ oid => '3411', descr => 'hash',
proname => 'timestamp_hash_extended', prorettype => 'int8',
proargtypes => 'timestamp int8', prosrc => 'timestamp_hash_extended' },
{ oid => '2041', descr => 'intervals overlap?',
proname => 'overlaps', proisstrict => 'f', prorettype => 'bool',
proargtypes => 'timestamp timestamp timestamp timestamp',
prosrc => 'overlaps_timestamp' },
{ oid => '2042', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
prorettype => 'bool', proargtypes => 'timestamp interval timestamp interval',
prosrc => 'see system_functions.sql' },
{ oid => '2043', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
prorettype => 'bool', proargtypes => 'timestamp timestamp timestamp interval',
prosrc => 'see system_functions.sql' },
{ oid => '2044', descr => 'intervals overlap?',
proname => 'overlaps', prolang => 'sql', proisstrict => 'f',
prorettype => 'bool', proargtypes => 'timestamp interval timestamp timestamp',
prosrc => 'see system_functions.sql' },
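# Illustrative usage (editorial sketch, not catalog data) for the OVERLAPS
# entries above; sample ranges are assumptions:
#   SELECT (timestamp '2001-02-16', timestamp '2001-12-21')
#          OVERLAPS (timestamp '2001-10-30', timestamp '2002-10-30');
#     -- overlaps_timestamp (oid 2041) -> true
#   SELECT (timestamp '2001-02-16', interval '100 days')
#          OVERLAPS (timestamp '2001-10-30', timestamp '2002-10-30');
#     -- SQL wrapper (oid 2044)        -> false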
{ oid => '2045', descr => 'less-equal-greater',
proname => 'timestamp_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_cmp' },
{ oid => '3137', descr => 'sort support',
proname => 'timestamp_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'timestamp_sortsupport' },
{ oid => '4134', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'timestamp timestamp interval bool bool',
prosrc => 'in_range_timestamp_interval' },
{ oid => '4135', descr => 'window RANGE support',
proname => 'in_range', provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz timestamptz interval bool bool',
prosrc => 'in_range_timestamptz_interval' },
{ oid => '4136', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'interval interval interval bool bool',
prosrc => 'in_range_interval_interval' },
{ oid => '4137', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'time time interval bool bool',
prosrc => 'in_range_time_interval' },
{ oid => '4138', descr => 'window RANGE support',
proname => 'in_range', prorettype => 'bool',
proargtypes => 'timetz timetz interval bool bool',
prosrc => 'in_range_timetz_interval' },
{ oid => '2046', descr => 'convert time with time zone to time',
proname => 'time', prorettype => 'time', proargtypes => 'timetz',
prosrc => 'timetz_time' },
{ oid => '2047', descr => 'convert time to time with time zone',
proname => 'timetz', provolatile => 's', prorettype => 'timetz',
proargtypes => 'time', prosrc => 'time_timetz' },
{ oid => '2048', descr => 'finite timestamp?',
proname => 'isfinite', prorettype => 'bool', proargtypes => 'timestamp',
prosrc => 'timestamp_finite' },
{ oid => '2049', descr => 'format timestamp to text',
proname => 'to_char', provolatile => 's', prorettype => 'text',
proargtypes => 'timestamp text', prosrc => 'timestamp_to_char' },
{ oid => '2052',
proname => 'timestamp_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_eq' },
{ oid => '2053',
proname => 'timestamp_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_ne' },
{ oid => '2054',
proname => 'timestamp_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_lt' },
{ oid => '2055',
proname => 'timestamp_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_le' },
{ oid => '2056',
proname => 'timestamp_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_ge' },
{ oid => '2057',
proname => 'timestamp_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_gt' },
{ oid => '2058', descr => 'date difference preserving months and years',
proname => 'age', prorettype => 'interval',
proargtypes => 'timestamp timestamp', prosrc => 'timestamp_age' },
{ oid => '2059',
descr => 'date difference from today preserving months and years',
proname => 'age', prolang => 'sql', provolatile => 's',
prorettype => 'interval', proargtypes => 'timestamp',
prosrc => 'see system_functions.sql' },
{ oid => '2069', descr => 'adjust timestamp to new time zone',
proname => 'timezone', prorettype => 'timestamptz',
proargtypes => 'text timestamp', prosrc => 'timestamp_zone' },
{ oid => '2070', descr => 'adjust timestamp to new time zone',
proname => 'timezone', prorettype => 'timestamptz',
proargtypes => 'interval timestamp', prosrc => 'timestamp_izone' },
{ oid => '9160', descr => 'adjust timestamp to local time zone',
proname => 'timezone', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'timestamp', prosrc => 'timestamp_at_local' },
{ oid => '2071',
proname => 'date_pl_interval', prorettype => 'timestamp',
proargtypes => 'date interval', prosrc => 'date_pl_interval' },
{ oid => '2072',
proname => 'date_mi_interval', prorettype => 'timestamp',
proargtypes => 'date interval', prosrc => 'date_mi_interval' },
{ oid => '2073', descr => 'extract text matching regular expression',
proname => 'substring', prorettype => 'text', proargtypes => 'text text',
prosrc => 'textregexsubstr' },
{ oid => '2074', descr => 'extract text matching SQL regular expression',
proname => 'substring', prolang => 'sql', prorettype => 'text',
proargtypes => 'text text text', prosrc => 'see system_functions.sql' },
{ oid => '2075', descr => 'convert int8 to bitstring',
proname => 'bit', prorettype => 'bit', proargtypes => 'int8 int4',
prosrc => 'bitfromint8' },
{ oid => '2076', descr => 'convert bitstring to int8',
proname => 'int8', prorettype => 'int8', proargtypes => 'bit',
prosrc => 'bittoint8' },
{ oid => '2077', descr => 'SHOW X as a function',
proname => 'current_setting', provolatile => 's', prorettype => 'text',
proargtypes => 'text', prosrc => 'show_config_by_name' },
{ oid => '3294',
descr => 'SHOW X as a function, optionally no error for missing variable',
proname => 'current_setting', provolatile => 's', prorettype => 'text',
proargtypes => 'text bool', prosrc => 'show_config_by_name_missing_ok' },
{ oid => '2078', descr => 'SET X as a function',
proname => 'set_config', proisstrict => 'f', provolatile => 'v',
proparallel => 'u', prorettype => 'text', proargtypes => 'text text bool',
prosrc => 'set_config_by_name' },
{ oid => '2084', descr => 'SHOW ALL as a function',
proname => 'pg_show_all_settings', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,text,text,text,text,text,text,text,text,text,text,_text,text,text,text,int4,bool}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{name,setting,unit,category,short_desc,extra_desc,context,vartype,source,min_val,max_val,enumvals,boot_val,reset_val,sourcefile,sourceline,pending_restart}',
prosrc => 'show_all_settings' },
{ oid => '6240', descr => 'return flags for specified GUC',
proname => 'pg_settings_get_flags', provolatile => 's', prorettype => '_text',
proargtypes => 'text', prosrc => 'pg_settings_get_flags' },
{ oid => '3329', descr => 'show config file settings',
proname => 'pg_show_all_file_settings', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,int4,int4,text,text,bool,text}',
proargmodes => '{o,o,o,o,o,o,o}',
proargnames => '{sourcefile,sourceline,seqno,name,setting,applied,error}',
prosrc => 'show_all_file_settings' },
{ oid => '3401', descr => 'show pg_hba.conf rules',
proname => 'pg_hba_file_rules', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{int4,text,int4,text,_text,_text,text,text,text,_text,text}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{rule_number,file_name,line_number,type,database,user_name,address,netmask,auth_method,options,error}',
prosrc => 'pg_hba_file_rules' },
{ oid => '6250', descr => 'show pg_ident.conf mappings',
proname => 'pg_ident_file_mappings', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{int4,text,int4,text,text,text,text}',
proargmodes => '{o,o,o,o,o,o,o}',
proargnames => '{map_number,file_name,line_number,map_name,sys_name,pg_username,error}',
prosrc => 'pg_ident_file_mappings' },
{ oid => '1371', descr => 'view system lock information',
proname => 'pg_lock_status', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,oid,oid,int4,int2,text,xid,oid,oid,int2,text,int4,text,bool,bool,timestamptz}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{locktype,database,relation,page,tuple,virtualxid,transactionid,classid,objid,objsubid,virtualtransaction,pid,mode,granted,fastpath,waitstart}',
prosrc => 'pg_lock_status' },
{ oid => '2561',
descr => 'get array of PIDs of sessions blocking specified backend PID from acquiring a heavyweight lock',
proname => 'pg_blocking_pids', provolatile => 'v', prorettype => '_int4',
proargtypes => 'int4', prosrc => 'pg_blocking_pids' },
{ oid => '3376',
descr => 'get array of PIDs of sessions blocking specified backend PID from acquiring a safe snapshot',
proname => 'pg_safe_snapshot_blocking_pids', provolatile => 'v',
prorettype => '_int4', proargtypes => 'int4',
prosrc => 'pg_safe_snapshot_blocking_pids' },
{ oid => '3378', descr => 'isolationtester support function',
proname => 'pg_isolation_test_session_is_blocked', provolatile => 'v',
prorettype => 'bool', proargtypes => 'int4 _int4',
prosrc => 'pg_isolation_test_session_is_blocked' },
{ oid => '1065', descr => 'view two-phase transactions',
proname => 'pg_prepared_xact', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{xid,text,timestamptz,oid,oid}',
proargmodes => '{o,o,o,o,o}',
proargnames => '{transaction,gid,prepared,ownerid,dbid}',
prosrc => 'pg_prepared_xact' },
{ oid => '3819', descr => 'view members of a multixactid',
proname => 'pg_get_multixact_members', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => 'xid',
proallargtypes => '{xid,xid,text}', proargmodes => '{i,o,o}',
proargnames => '{multixid,xid,mode}', prosrc => 'pg_get_multixact_members' },
{ oid => '3581', descr => 'get commit timestamp of a transaction',
proname => 'pg_xact_commit_timestamp', provolatile => 'v',
prorettype => 'timestamptz', proargtypes => 'xid',
prosrc => 'pg_xact_commit_timestamp' },
{ oid => '6168',
descr => 'get commit timestamp and replication origin of a transaction',
proname => 'pg_xact_commit_timestamp_origin', provolatile => 'v',
prorettype => 'record', proargtypes => 'xid',
proallargtypes => '{xid,timestamptz,oid}', proargmodes => '{i,o,o}',
proargnames => '{xid,timestamp,roident}',
prosrc => 'pg_xact_commit_timestamp_origin' },
{ oid => '3583',
descr => 'get transaction Id, commit timestamp and replication origin of latest transaction commit',
proname => 'pg_last_committed_xact', provolatile => 'v',
prorettype => 'record', proargtypes => '',
proallargtypes => '{xid,timestamptz,oid}', proargmodes => '{o,o,o}',
proargnames => '{xid,timestamp,roident}',
prosrc => 'pg_last_committed_xact' },
{ oid => '3537', descr => 'get identification of SQL object',
proname => 'pg_describe_object', provolatile => 's', prorettype => 'text',
proargtypes => 'oid oid int4', prosrc => 'pg_describe_object' },
{ oid => '3839',
descr => 'get machine-parseable identification of SQL object',
proname => 'pg_identify_object', provolatile => 's', prorettype => 'record',
proargtypes => 'oid oid int4',
proallargtypes => '{oid,oid,int4,text,text,text,text}',
proargmodes => '{i,i,i,o,o,o,o}',
proargnames => '{classid,objid,objsubid,type,schema,name,identity}',
prosrc => 'pg_identify_object' },
{ oid => '3382',
descr => 'get identification of SQL object for pg_get_object_address()',
proname => 'pg_identify_object_as_address', provolatile => 's',
prorettype => 'record', proargtypes => 'oid oid int4',
proallargtypes => '{oid,oid,int4,text,_text,_text}',
proargmodes => '{i,i,i,o,o,o}',
proargnames => '{classid,objid,objsubid,type,object_names,object_args}',
prosrc => 'pg_identify_object_as_address' },
{ oid => '3954',
descr => 'get OID-based object address from name/args arrays',
proname => 'pg_get_object_address', provolatile => 's',
prorettype => 'record', proargtypes => 'text _text _text',
proallargtypes => '{text,_text,_text,oid,oid,int4}',
proargmodes => '{i,i,i,o,o,o}',
proargnames => '{type,object_names,object_args,classid,objid,objsubid}',
prosrc => 'pg_get_object_address' },
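# search-path object visibility inquiry functions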
{ oid => '2079', descr => 'is table visible in search path?',
proname => 'pg_table_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid', prosrc => 'pg_table_is_visible' },
{ oid => '2080', descr => 'is type visible in search path?',
proname => 'pg_type_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid', prosrc => 'pg_type_is_visible' },
{ oid => '2081', descr => 'is function visible in search path?',
proname => 'pg_function_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_function_is_visible' },
{ oid => '2082', descr => 'is operator visible in search path?',
proname => 'pg_operator_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_operator_is_visible' },
{ oid => '2083', descr => 'is opclass visible in search path?',
proname => 'pg_opclass_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_opclass_is_visible' },
{ oid => '3829', descr => 'is opfamily visible in search path?',
proname => 'pg_opfamily_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_opfamily_is_visible' },
{ oid => '2093', descr => 'is conversion visible in search path?',
proname => 'pg_conversion_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_conversion_is_visible' },
{ oid => '3403', descr => 'is statistics object visible in search path?',
proname => 'pg_statistics_obj_is_visible', procost => '10',
provolatile => 's', prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_statistics_obj_is_visible' },
{ oid => '3756', descr => 'is text search parser visible in search path?',
proname => 'pg_ts_parser_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_ts_parser_is_visible' },
{ oid => '3757', descr => 'is text search dictionary visible in search path?',
proname => 'pg_ts_dict_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_ts_dict_is_visible' },
{ oid => '3768', descr => 'is text search template visible in search path?',
proname => 'pg_ts_template_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_ts_template_is_visible' },
{ oid => '3758',
descr => 'is text search configuration visible in search path?',
proname => 'pg_ts_config_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_ts_config_is_visible' },
{ oid => '3815', descr => 'is collation visible in search path?',
proname => 'pg_collation_is_visible', procost => '10', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_collation_is_visible' },
{ oid => '2854', descr => 'get OID of current session\'s temp schema, if any',
proname => 'pg_my_temp_schema', provolatile => 's', proparallel => 'r',
prorettype => 'oid', proargtypes => '', prosrc => 'pg_my_temp_schema' },
{ oid => '2855', descr => 'is schema another session\'s temp schema?',
proname => 'pg_is_other_temp_schema', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid',
prosrc => 'pg_is_other_temp_schema' },
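# backend signaling functions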
{ oid => '2171', descr => 'cancel a server process\' current query',
proname => 'pg_cancel_backend', provolatile => 'v', prorettype => 'bool',
proargtypes => 'int4', prosrc => 'pg_cancel_backend' },
{ oid => '2096', descr => 'terminate a server process',
proname => 'pg_terminate_backend', provolatile => 'v', prorettype => 'bool',
proargtypes => 'int4 int8', proargnames => '{pid,timeout}',
prosrc => 'pg_terminate_backend' },
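# backup and standby control functions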
{ oid => '2172', descr => 'prepare for taking an online backup',
proname => 'pg_backup_start', provolatile => 'v', proparallel => 'r',
prorettype => 'pg_lsn', proargtypes => 'text bool',
prosrc => 'pg_backup_start' },
{ oid => '2739', descr => 'finish taking an online backup',
proname => 'pg_backup_stop', provolatile => 'v', proparallel => 'r',
prorettype => 'record', proargtypes => 'bool',
proallargtypes => '{bool,pg_lsn,text,text}', proargmodes => '{i,o,o,o}',
proargnames => '{wait_for_archive,lsn,labelfile,spcmapfile}',
prosrc => 'pg_backup_stop' },
{ oid => '3436', descr => 'promote standby server',
proname => 'pg_promote', provolatile => 'v', prorettype => 'bool',
proargtypes => 'bool int4', proargnames => '{wait,wait_seconds}',
prosrc => 'pg_promote' },
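# WAL control and information functions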
{ oid => '2848', descr => 'switch to new wal file',
proname => 'pg_switch_wal', provolatile => 'v', prorettype => 'pg_lsn',
proargtypes => '', prosrc => 'pg_switch_wal' },
{ oid => '6305', descr => 'log details of the current snapshot to WAL',
proname => 'pg_log_standby_snapshot', provolatile => 'v',
prorettype => 'pg_lsn', proargtypes => '',
prosrc => 'pg_log_standby_snapshot' },
{ oid => '3098', descr => 'create a named restore point',
proname => 'pg_create_restore_point', provolatile => 'v',
prorettype => 'pg_lsn', proargtypes => 'text',
prosrc => 'pg_create_restore_point' },
{ oid => '2849', descr => 'current wal write location',
proname => 'pg_current_wal_lsn', provolatile => 'v', prorettype => 'pg_lsn',
proargtypes => '', prosrc => 'pg_current_wal_lsn' },
{ oid => '2852', descr => 'current wal insert location',
proname => 'pg_current_wal_insert_lsn', provolatile => 'v',
prorettype => 'pg_lsn', proargtypes => '',
prosrc => 'pg_current_wal_insert_lsn' },
{ oid => '3330', descr => 'current wal flush location',
proname => 'pg_current_wal_flush_lsn', provolatile => 'v',
prorettype => 'pg_lsn', proargtypes => '',
prosrc => 'pg_current_wal_flush_lsn' },
{ oid => '2850',
descr => 'wal filename and byte offset, given a wal location',
proname => 'pg_walfile_name_offset', prorettype => 'record',
proargtypes => 'pg_lsn', proallargtypes => '{pg_lsn,text,int4}',
proargmodes => '{i,o,o}', proargnames => '{lsn,file_name,file_offset}',
prosrc => 'pg_walfile_name_offset' },
{ oid => '2851', descr => 'wal filename, given a wal location',
proname => 'pg_walfile_name', prorettype => 'text', proargtypes => 'pg_lsn',
prosrc => 'pg_walfile_name' },
{ oid => '6213',
descr => 'sequence number and timeline ID given a wal filename',
proname => 'pg_split_walfile_name', provolatile => 's',
prorettype => 'record', proargtypes => 'text',
proallargtypes => '{text,numeric,int8}', proargmodes => '{i,o,o}',
proargnames => '{file_name,segment_number,timeline_id}',
prosrc => 'pg_split_walfile_name' },
{ oid => '3165', descr => 'difference in bytes, given two wal locations',
proname => 'pg_wal_lsn_diff', prorettype => 'numeric',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_wal_lsn_diff' },
{ oid => '3809', descr => 'export a snapshot',
proname => 'pg_export_snapshot', provolatile => 'v', proparallel => 'u',
prorettype => 'text', proargtypes => '', prosrc => 'pg_export_snapshot' },
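# recovery information functions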
{ oid => '3810', descr => 'true if server is in recovery',
proname => 'pg_is_in_recovery', provolatile => 'v', prorettype => 'bool',
proargtypes => '', prosrc => 'pg_is_in_recovery' },
{ oid => '3820', descr => 'last wal receive location',
proname => 'pg_last_wal_receive_lsn', provolatile => 'v',
prorettype => 'pg_lsn', proargtypes => '',
prosrc => 'pg_last_wal_receive_lsn' },
{ oid => '3821', descr => 'last wal replay location',
proname => 'pg_last_wal_replay_lsn', provolatile => 'v',
prorettype => 'pg_lsn', proargtypes => '',
prosrc => 'pg_last_wal_replay_lsn' },
{ oid => '3830', descr => 'timestamp of last replay xact',
proname => 'pg_last_xact_replay_timestamp', provolatile => 'v',
prorettype => 'timestamptz', proargtypes => '',
prosrc => 'pg_last_xact_replay_timestamp' },
{ oid => '3071', descr => 'pause wal replay',
proname => 'pg_wal_replay_pause', provolatile => 'v', prorettype => 'void',
proargtypes => '', prosrc => 'pg_wal_replay_pause' },
{ oid => '3072', descr => 'resume wal replay, if it was paused',
proname => 'pg_wal_replay_resume', provolatile => 'v', prorettype => 'void',
proargtypes => '', prosrc => 'pg_wal_replay_resume' },
{ oid => '3073', descr => 'true if wal replay is paused',
proname => 'pg_is_wal_replay_paused', provolatile => 'v',
prorettype => 'bool', proargtypes => '',
prosrc => 'pg_is_wal_replay_paused' },
{ oid => '1137', descr => 'get wal replay pause state',
proname => 'pg_get_wal_replay_pause_state', provolatile => 'v',
prorettype => 'text', proargtypes => '',
prosrc => 'pg_get_wal_replay_pause_state' },
{ oid => '6224', descr => 'get resource managers loaded in system',
proname => 'pg_get_wal_resource_managers', prorows => '50', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{int4,text,bool}', proargmodes => '{o,o,o}',
proargnames => '{rm_id, rm_name, rm_builtin}',
prosrc => 'pg_get_wal_resource_managers' },
{ oid => '2621', descr => 'reload configuration files',
proname => 'pg_reload_conf', provolatile => 'v', prorettype => 'bool',
proargtypes => '', prosrc => 'pg_reload_conf' },
{ oid => '2622', descr => 'rotate log file',
proname => 'pg_rotate_logfile', provolatile => 'v', prorettype => 'bool',
proargtypes => '', prosrc => 'pg_rotate_logfile' },
{ oid => '3800', descr => 'current logging collector file location',
proname => 'pg_current_logfile', proisstrict => 'f', provolatile => 'v',
prorettype => 'text', proargtypes => '', prosrc => 'pg_current_logfile' },
{ oid => '3801', descr => 'current logging collector file location',
proname => 'pg_current_logfile', proisstrict => 'f', provolatile => 'v',
prorettype => 'text', proargtypes => 'text',
prosrc => 'pg_current_logfile_1arg' },
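# generic file access functions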
{ oid => '2623', descr => 'get information about file',
proname => 'pg_stat_file', provolatile => 'v', prorettype => 'record',
proargtypes => 'text',
proallargtypes => '{text,int8,timestamptz,timestamptz,timestamptz,timestamptz,bool}',
proargmodes => '{i,o,o,o,o,o,o}',
proargnames => '{filename,size,access,modification,change,creation,isdir}',
prosrc => 'pg_stat_file_1arg' },
{ oid => '3307', descr => 'get information about file',
proname => 'pg_stat_file', provolatile => 'v', prorettype => 'record',
proargtypes => 'text bool',
proallargtypes => '{text,bool,int8,timestamptz,timestamptz,timestamptz,timestamptz,bool}',
proargmodes => '{i,i,o,o,o,o,o,o}',
proargnames => '{filename,missing_ok,size,access,modification,change,creation,isdir}',
prosrc => 'pg_stat_file' },
{ oid => '2624', descr => 'read text from a file',
proname => 'pg_read_file', provolatile => 'v', prorettype => 'text',
proargtypes => 'text int8 int8', prosrc => 'pg_read_file_off_len' },
{ oid => '3293', descr => 'read text from a file',
proname => 'pg_read_file', provolatile => 'v', prorettype => 'text',
proargtypes => 'text int8 int8 bool',
prosrc => 'pg_read_file_off_len_missing' },
{ oid => '3826', descr => 'read text from a file',
proname => 'pg_read_file', provolatile => 'v', prorettype => 'text',
proargtypes => 'text', prosrc => 'pg_read_file_all' },
{ oid => '6208', descr => 'read text from a file',
proname => 'pg_read_file', provolatile => 'v', prorettype => 'text',
proargtypes => 'text bool', prosrc => 'pg_read_file_all_missing' },
{ oid => '3827', descr => 'read bytea from a file',
proname => 'pg_read_binary_file', provolatile => 'v', prorettype => 'bytea',
proargtypes => 'text int8 int8', prosrc => 'pg_read_binary_file_off_len' },
{ oid => '3295', descr => 'read bytea from a file',
proname => 'pg_read_binary_file', provolatile => 'v', prorettype => 'bytea',
proargtypes => 'text int8 int8 bool',
prosrc => 'pg_read_binary_file_off_len_missing' },
{ oid => '3828', descr => 'read bytea from a file',
proname => 'pg_read_binary_file', provolatile => 'v', prorettype => 'bytea',
proargtypes => 'text', prosrc => 'pg_read_binary_file_all' },
{ oid => '6209', descr => 'read bytea from a file',
proname => 'pg_read_binary_file', provolatile => 'v', prorettype => 'bytea',
proargtypes => 'text bool', prosrc => 'pg_read_binary_file_all_missing' },
{ oid => '2625', descr => 'list all files in a directory',
proname => 'pg_ls_dir', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'text', proargtypes => 'text',
prosrc => 'pg_ls_dir_1arg' },
{ oid => '3297', descr => 'list all files in a directory',
proname => 'pg_ls_dir', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'text', proargtypes => 'text bool bool',
prosrc => 'pg_ls_dir' },
{ oid => '2626', descr => 'sleep for the specified time in seconds',
proname => 'pg_sleep', provolatile => 'v', prorettype => 'void',
proargtypes => 'float8', prosrc => 'pg_sleep' },
{ oid => '3935', descr => 'sleep for the specified interval',
proname => 'pg_sleep_for', prolang => 'sql', provolatile => 'v',
prorettype => 'void', proargtypes => 'interval',
prosrc => 'see system_functions.sql' },
{ oid => '3936', descr => 'sleep until the specified time',
proname => 'pg_sleep_until', prolang => 'sql', provolatile => 'v',
prorettype => 'void', proargtypes => 'timestamptz',
prosrc => 'see system_functions.sql' },
{ oid => '315', descr => 'Is JIT compilation available in this session?',
proname => 'pg_jit_available', provolatile => 'v', prorettype => 'bool',
proargtypes => '', prosrc => 'pg_jit_available' },
{ oid => '2971', descr => 'convert boolean to text',
proname => 'text', prorettype => 'text', proargtypes => 'bool',
prosrc => 'booltext' },
# Aggregates (moved here from pg_aggregate for 7.3)
{ oid => '2100',
descr => 'the average (arithmetic mean) as numeric of all bigint values',
proname => 'avg', prokind => 'a', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2101',
descr => 'the average (arithmetic mean) as numeric of all integer values',
proname => 'avg', prokind => 'a', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2102',
descr => 'the average (arithmetic mean) as numeric of all smallint values',
proname => 'avg', prokind => 'a', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2103',
descr => 'the average (arithmetic mean) as numeric of all numeric values',
proname => 'avg', prokind => 'a', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'aggregate_dummy' },
{ oid => '2104',
descr => 'the average (arithmetic mean) as float8 of all float4 values',
proname => 'avg', prokind => 'a', proisstrict => 'f', prorettype => 'float8',
proargtypes => 'float4', prosrc => 'aggregate_dummy' },
{ oid => '2105',
descr => 'the average (arithmetic mean) as float8 of all float8 values',
proname => 'avg', prokind => 'a', proisstrict => 'f', prorettype => 'float8',
proargtypes => 'float8', prosrc => 'aggregate_dummy' },
{ oid => '2106',
descr => 'the average (arithmetic mean) as interval of all interval values',
proname => 'avg', prokind => 'a', proisstrict => 'f',
prorettype => 'interval', proargtypes => 'interval',
prosrc => 'aggregate_dummy' },
{ oid => '2107', descr => 'sum as numeric across all bigint input values',
proname => 'sum', prokind => 'a', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2108', descr => 'sum as bigint across all integer input values',
proname => 'sum', prokind => 'a', proisstrict => 'f', prorettype => 'int8',
proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2109', descr => 'sum as bigint across all smallint input values',
proname => 'sum', prokind => 'a', proisstrict => 'f', prorettype => 'int8',
proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2110', descr => 'sum as float4 across all float4 input values',
proname => 'sum', prokind => 'a', proisstrict => 'f', prorettype => 'float4',
proargtypes => 'float4', prosrc => 'aggregate_dummy' },
{ oid => '2111', descr => 'sum as float8 across all float8 input values',
proname => 'sum', prokind => 'a', proisstrict => 'f', prorettype => 'float8',
proargtypes => 'float8', prosrc => 'aggregate_dummy' },
{ oid => '2112', descr => 'sum as money across all money input values',
proname => 'sum', prokind => 'a', proisstrict => 'f', prorettype => 'money',
proargtypes => 'money', prosrc => 'aggregate_dummy' },
{ oid => '2113', descr => 'sum as interval across all interval input values',
proname => 'sum', prokind => 'a', proisstrict => 'f',
prorettype => 'interval', proargtypes => 'interval',
prosrc => 'aggregate_dummy' },
{ oid => '2114', descr => 'sum as numeric across all numeric input values',
proname => 'sum', prokind => 'a', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'aggregate_dummy' },
{ oid => '2115', descr => 'maximum value of all bigint input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'int8',
proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2116', descr => 'maximum value of all integer input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'int4',
proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2117', descr => 'maximum value of all smallint input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'int2',
proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2118', descr => 'maximum value of all oid input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'oid',
proargtypes => 'oid', prosrc => 'aggregate_dummy' },
{ oid => '2119', descr => 'maximum value of all float4 input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'float4',
proargtypes => 'float4', prosrc => 'aggregate_dummy' },
{ oid => '2120', descr => 'maximum value of all float8 input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'float8',
proargtypes => 'float8', prosrc => 'aggregate_dummy' },
{ oid => '2122', descr => 'maximum value of all date input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'date',
proargtypes => 'date', prosrc => 'aggregate_dummy' },
{ oid => '2123', descr => 'maximum value of all time input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'time',
proargtypes => 'time', prosrc => 'aggregate_dummy' },
{ oid => '2124',
descr => 'maximum value of all time with time zone input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'timetz',
proargtypes => 'timetz', prosrc => 'aggregate_dummy' },
{ oid => '2125', descr => 'maximum value of all money input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'money',
proargtypes => 'money', prosrc => 'aggregate_dummy' },
{ oid => '2126', descr => 'maximum value of all timestamp input values',
proname => 'max', prokind => 'a', proisstrict => 'f',
prorettype => 'timestamp', proargtypes => 'timestamp',
prosrc => 'aggregate_dummy' },
{ oid => '2127',
descr => 'maximum value of all timestamp with time zone input values',
proname => 'max', prokind => 'a', proisstrict => 'f',
prorettype => 'timestamptz', proargtypes => 'timestamptz',
prosrc => 'aggregate_dummy' },
{ oid => '2128', descr => 'maximum value of all interval input values',
proname => 'max', prokind => 'a', proisstrict => 'f',
prorettype => 'interval', proargtypes => 'interval',
prosrc => 'aggregate_dummy' },
{ oid => '2129', descr => 'maximum value of all text input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'text',
proargtypes => 'text', prosrc => 'aggregate_dummy' },
{ oid => '2130', descr => 'maximum value of all numeric input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'aggregate_dummy' },
{ oid => '2050', descr => 'maximum value of all anyarray input values',
proname => 'max', prokind => 'a', proisstrict => 'f',
prorettype => 'anyarray', proargtypes => 'anyarray',
prosrc => 'aggregate_dummy' },
{ oid => '2244', descr => 'maximum value of all bpchar input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'bpchar',
proargtypes => 'bpchar', prosrc => 'aggregate_dummy' },
{ oid => '2797', descr => 'maximum value of all tid input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'tid',
proargtypes => 'tid', prosrc => 'aggregate_dummy' },
{ oid => '3564', descr => 'maximum value of all inet input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'inet',
proargtypes => 'inet', prosrc => 'aggregate_dummy' },
{ oid => '4189', descr => 'maximum value of all pg_lsn input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'pg_lsn',
proargtypes => 'pg_lsn', prosrc => 'aggregate_dummy' },
{ oid => '5099', descr => 'maximum value of all xid8 input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'xid8',
proargtypes => 'xid8', prosrc => 'aggregate_dummy' },
{ oid => '2131', descr => 'minimum value of all bigint input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'int8',
proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2132', descr => 'minimum value of all integer input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'int4',
proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2133', descr => 'minimum value of all smallint input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'int2',
proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2134', descr => 'minimum value of all oid input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'oid',
proargtypes => 'oid', prosrc => 'aggregate_dummy' },
{ oid => '2135', descr => 'minimum value of all float4 input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'float4',
proargtypes => 'float4', prosrc => 'aggregate_dummy' },
{ oid => '2136', descr => 'minimum value of all float8 input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'float8',
proargtypes => 'float8', prosrc => 'aggregate_dummy' },
{ oid => '2138', descr => 'minimum value of all date input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'date',
proargtypes => 'date', prosrc => 'aggregate_dummy' },
{ oid => '2139', descr => 'minimum value of all time input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'time',
proargtypes => 'time', prosrc => 'aggregate_dummy' },
{ oid => '2140',
descr => 'minimum value of all time with time zone input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'timetz',
proargtypes => 'timetz', prosrc => 'aggregate_dummy' },
{ oid => '2141', descr => 'minimum value of all money input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'money',
proargtypes => 'money', prosrc => 'aggregate_dummy' },
{ oid => '2142', descr => 'minimum value of all timestamp input values',
proname => 'min', prokind => 'a', proisstrict => 'f',
prorettype => 'timestamp', proargtypes => 'timestamp',
prosrc => 'aggregate_dummy' },
{ oid => '2143',
descr => 'minimum value of all timestamp with time zone input values',
proname => 'min', prokind => 'a', proisstrict => 'f',
prorettype => 'timestamptz', proargtypes => 'timestamptz',
prosrc => 'aggregate_dummy' },
{ oid => '2144', descr => 'minimum value of all interval input values',
proname => 'min', prokind => 'a', proisstrict => 'f',
prorettype => 'interval', proargtypes => 'interval',
prosrc => 'aggregate_dummy' },
{ oid => '2145', descr => 'minimum value of all text input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'text',
proargtypes => 'text', prosrc => 'aggregate_dummy' },
{ oid => '2146', descr => 'minimum value of all numeric input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'numeric',
proargtypes => 'numeric', prosrc => 'aggregate_dummy' },
{ oid => '2051', descr => 'minimum value of all anyarray input values',
proname => 'min', prokind => 'a', proisstrict => 'f',
prorettype => 'anyarray', proargtypes => 'anyarray',
prosrc => 'aggregate_dummy' },
{ oid => '2245', descr => 'minimum value of all bpchar input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'bpchar',
proargtypes => 'bpchar', prosrc => 'aggregate_dummy' },
{ oid => '2798', descr => 'minimum value of all tid input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'tid',
proargtypes => 'tid', prosrc => 'aggregate_dummy' },
{ oid => '3565', descr => 'minimum value of all inet input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'inet',
proargtypes => 'inet', prosrc => 'aggregate_dummy' },
{ oid => '4190', descr => 'minimum value of all pg_lsn input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'pg_lsn',
proargtypes => 'pg_lsn', prosrc => 'aggregate_dummy' },
{ oid => '5100', descr => 'minimum value of all xid8 input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'xid8',
proargtypes => 'xid8', prosrc => 'aggregate_dummy' },
# count has two forms: count(any) and count(*)
{ oid => '2147',
descr => 'number of input rows for which the input expression is not null',
proname => 'count', prosupport => 'int8inc_support', prokind => 'a',
proisstrict => 'f', prorettype => 'int8', proargtypes => 'any',
prosrc => 'aggregate_dummy' },
{ oid => '2803', descr => 'number of input rows',
proname => 'count', prosupport => 'int8inc_support', prokind => 'a',
proisstrict => 'f', prorettype => 'int8', proargtypes => '',
prosrc => 'aggregate_dummy' },
{ oid => '6236', descr => 'planner support for count run condition',
proname => 'int8inc_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'int8inc_support' },
{ oid => '2718',
descr => 'population variance of bigint input values (square of the population standard deviation)',
proname => 'var_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2719',
descr => 'population variance of integer input values (square of the population standard deviation)',
proname => 'var_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2720',
descr => 'population variance of smallint input values (square of the population standard deviation)',
proname => 'var_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2721',
descr => 'population variance of float4 input values (square of the population standard deviation)',
proname => 'var_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float4',
prosrc => 'aggregate_dummy' },
{ oid => '2722',
descr => 'population variance of float8 input values (square of the population standard deviation)',
proname => 'var_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8',
prosrc => 'aggregate_dummy' },
{ oid => '2723',
descr => 'population variance of numeric input values (square of the population standard deviation)',
proname => 'var_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'aggregate_dummy' },
{ oid => '2641',
descr => 'sample variance of bigint input values (square of the sample standard deviation)',
proname => 'var_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2642',
descr => 'sample variance of integer input values (square of the sample standard deviation)',
proname => 'var_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2643',
descr => 'sample variance of smallint input values (square of the sample standard deviation)',
proname => 'var_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2644',
descr => 'sample variance of float4 input values (square of the sample standard deviation)',
proname => 'var_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float4',
prosrc => 'aggregate_dummy' },
{ oid => '2645',
descr => 'sample variance of float8 input values (square of the sample standard deviation)',
proname => 'var_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8',
prosrc => 'aggregate_dummy' },
{ oid => '2646',
descr => 'sample variance of numeric input values (square of the sample standard deviation)',
proname => 'var_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'aggregate_dummy' },
{ oid => '2148', descr => 'historical alias for var_samp',
proname => 'variance', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2149', descr => 'historical alias for var_samp',
proname => 'variance', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2150', descr => 'historical alias for var_samp',
proname => 'variance', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2151', descr => 'historical alias for var_samp',
proname => 'variance', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float4',
prosrc => 'aggregate_dummy' },
{ oid => '2152', descr => 'historical alias for var_samp',
proname => 'variance', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8',
prosrc => 'aggregate_dummy' },
{ oid => '2153', descr => 'historical alias for var_samp',
proname => 'variance', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'aggregate_dummy' },
{ oid => '2724',
descr => 'population standard deviation of bigint input values',
proname => 'stddev_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2725',
descr => 'population standard deviation of integer input values',
proname => 'stddev_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2726',
descr => 'population standard deviation of smallint input values',
proname => 'stddev_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2727',
descr => 'population standard deviation of float4 input values',
proname => 'stddev_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float4',
prosrc => 'aggregate_dummy' },
{ oid => '2728',
descr => 'population standard deviation of float8 input values',
proname => 'stddev_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8',
prosrc => 'aggregate_dummy' },
{ oid => '2729',
descr => 'population standard deviation of numeric input values',
proname => 'stddev_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'aggregate_dummy' },
{ oid => '2712', descr => 'sample standard deviation of bigint input values',
proname => 'stddev_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2713', descr => 'sample standard deviation of integer input values',
proname => 'stddev_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2714',
descr => 'sample standard deviation of smallint input values',
proname => 'stddev_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2715', descr => 'sample standard deviation of float4 input values',
proname => 'stddev_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float4',
prosrc => 'aggregate_dummy' },
{ oid => '2716', descr => 'sample standard deviation of float8 input values',
proname => 'stddev_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8',
prosrc => 'aggregate_dummy' },
{ oid => '2717', descr => 'sample standard deviation of numeric input values',
proname => 'stddev_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'aggregate_dummy' },
{ oid => '2154', descr => 'historical alias for stddev_samp',
proname => 'stddev', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2155', descr => 'historical alias for stddev_samp',
proname => 'stddev', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2156', descr => 'historical alias for stddev_samp',
proname => 'stddev', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2157', descr => 'historical alias for stddev_samp',
proname => 'stddev', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float4',
prosrc => 'aggregate_dummy' },
{ oid => '2158', descr => 'historical alias for stddev_samp',
proname => 'stddev', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8',
prosrc => 'aggregate_dummy' },
{ oid => '2159', descr => 'historical alias for stddev_samp',
proname => 'stddev', prokind => 'a', proisstrict => 'f',
prorettype => 'numeric', proargtypes => 'numeric',
prosrc => 'aggregate_dummy' },
{ oid => '2818',
descr => 'number of input rows in which both expressions are not null',
proname => 'regr_count', prokind => 'a', proisstrict => 'f',
prorettype => 'int8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2819',
descr => 'sum of squares of the independent variable (sum(X^2) - sum(X)^2/N)',
proname => 'regr_sxx', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2820',
descr => 'sum of squares of the dependent variable (sum(Y^2) - sum(Y)^2/N)',
proname => 'regr_syy', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2821',
descr => 'sum of products of independent times dependent variable (sum(X*Y) - sum(X) * sum(Y)/N)',
proname => 'regr_sxy', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2822', descr => 'average of the independent variable (sum(X)/N)',
proname => 'regr_avgx', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2823', descr => 'average of the dependent variable (sum(Y)/N)',
proname => 'regr_avgy', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2824', descr => 'square of the correlation coefficient',
proname => 'regr_r2', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2825',
descr => 'slope of the least-squares-fit linear equation determined by the (X, Y) pairs',
proname => 'regr_slope', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2826',
descr => 'y-intercept of the least-squares-fit linear equation determined by the (X, Y) pairs',
proname => 'regr_intercept', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2827', descr => 'population covariance',
proname => 'covar_pop', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2828', descr => 'sample covariance',
proname => 'covar_samp', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '2829', descr => 'correlation coefficient',
proname => 'corr', prokind => 'a', proisstrict => 'f', prorettype => 'float8',
proargtypes => 'float8 float8', prosrc => 'aggregate_dummy' },
{ oid => '2160',
proname => 'text_pattern_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'text_pattern_lt' },
{ oid => '2161',
proname => 'text_pattern_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'text_pattern_le' },
{ oid => '2163',
proname => 'text_pattern_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'text_pattern_ge' },
{ oid => '2164',
proname => 'text_pattern_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'text_pattern_gt' },
{ oid => '2166', descr => 'less-equal-greater',
proname => 'bttext_pattern_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'text text', prosrc => 'bttext_pattern_cmp' },
{ oid => '3332', descr => 'sort support',
proname => 'bttext_pattern_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'bttext_pattern_sortsupport' },
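# bpchar pattern comparison functions, the character(n) counterparts of the
# text_pattern_* entries above (collation-independent comparison, used by the
# bpchar_pattern_ops btree opclass)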
{ oid => '2174',
proname => 'bpchar_pattern_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpchar_pattern_lt' },
{ oid => '2175',
proname => 'bpchar_pattern_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpchar_pattern_le' },
{ oid => '2177',
proname => 'bpchar_pattern_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpchar_pattern_ge' },
{ oid => '2178',
proname => 'bpchar_pattern_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'bpchar bpchar', prosrc => 'bpchar_pattern_gt' },
{ oid => '2180', descr => 'less-equal-greater',
proname => 'btbpchar_pattern_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'bpchar bpchar', prosrc => 'btbpchar_pattern_cmp' },
{ oid => '3333', descr => 'sort support',
proname => 'btbpchar_pattern_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'btbpchar_pattern_sortsupport' },
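# cross-type btree comparison functions for the integer types; each returns
# the usual less/equal/greater result as int4 and is marked leakproof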
{ oid => '2188', descr => 'less-equal-greater',
proname => 'btint48cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int4 int8', prosrc => 'btint48cmp' },
{ oid => '2189', descr => 'less-equal-greater',
proname => 'btint84cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int8 int4', prosrc => 'btint84cmp' },
{ oid => '2190', descr => 'less-equal-greater',
proname => 'btint24cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int2 int4', prosrc => 'btint24cmp' },
{ oid => '2191', descr => 'less-equal-greater',
proname => 'btint42cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int4 int2', prosrc => 'btint42cmp' },
{ oid => '2192', descr => 'less-equal-greater',
proname => 'btint28cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int2 int8', prosrc => 'btint28cmp' },
{ oid => '2193', descr => 'less-equal-greater',
proname => 'btint82cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'int8 int2', prosrc => 'btint82cmp' },
{ oid => '2194', descr => 'less-equal-greater',
proname => 'btfloat48cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'float4 float8', prosrc => 'btfloat48cmp' },
{ oid => '2195', descr => 'less-equal-greater',
proname => 'btfloat84cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'float8 float4', prosrc => 'btfloat84cmp' },
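# Illustrative usage sketch (not catalog data): the cross-type btintNN/btfloatNN
# cmp entries above are B-tree support function 1 comparators; each returns a
# negative, zero, or positive int4 and can be called directly for inspection.
#   SELECT btint42cmp(1, 2::int2);                 -- negative (1 < 2)
#   SELECT btfloat48cmp(1.5::float4, 1.5::float8); -- zero (equal)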
{ oid => '2212', descr => 'I/O',
proname => 'regprocedurein', provolatile => 's', prorettype => 'regprocedure',
proargtypes => 'cstring', prosrc => 'regprocedurein' },
{ oid => '2213', descr => 'I/O',
proname => 'regprocedureout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regprocedure', prosrc => 'regprocedureout' },
{ oid => '2214', descr => 'I/O',
proname => 'regoperin', provolatile => 's', prorettype => 'regoper',
proargtypes => 'cstring', prosrc => 'regoperin' },
{ oid => '2215', descr => 'I/O',
proname => 'regoperout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regoper', prosrc => 'regoperout' },
{ oid => '3492', descr => 'convert operator name to regoper',
proname => 'to_regoper', provolatile => 's', prorettype => 'regoper',
proargtypes => 'text', prosrc => 'to_regoper' },
{ oid => '3476', descr => 'convert operator name to regoperator',
proname => 'to_regoperator', provolatile => 's', prorettype => 'regoperator',
proargtypes => 'text', prosrc => 'to_regoperator' },
{ oid => '2216', descr => 'I/O',
proname => 'regoperatorin', provolatile => 's', prorettype => 'regoperator',
proargtypes => 'cstring', prosrc => 'regoperatorin' },
{ oid => '2217', descr => 'I/O',
proname => 'regoperatorout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regoperator', prosrc => 'regoperatorout' },
{ oid => '2218', descr => 'I/O',
proname => 'regclassin', provolatile => 's', prorettype => 'regclass',
proargtypes => 'cstring', prosrc => 'regclassin' },
{ oid => '2219', descr => 'I/O',
proname => 'regclassout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regclass', prosrc => 'regclassout' },
{ oid => '3495', descr => 'convert classname to regclass',
proname => 'to_regclass', provolatile => 's', prorettype => 'regclass',
proargtypes => 'text', prosrc => 'to_regclass' },
{ oid => '4193', descr => 'I/O',
proname => 'regcollationin', provolatile => 's', prorettype => 'regcollation',
proargtypes => 'cstring', prosrc => 'regcollationin' },
{ oid => '4194', descr => 'I/O',
proname => 'regcollationout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regcollation', prosrc => 'regcollationout' },
{ oid => '4195', descr => 'convert classname to regcollation',
proname => 'to_regcollation', provolatile => 's',
prorettype => 'regcollation', proargtypes => 'text',
prosrc => 'to_regcollation' },
{ oid => '2220', descr => 'I/O',
proname => 'regtypein', provolatile => 's', prorettype => 'regtype',
proargtypes => 'cstring', prosrc => 'regtypein' },
{ oid => '2221', descr => 'I/O',
proname => 'regtypeout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regtype', prosrc => 'regtypeout' },
{ oid => '3493', descr => 'convert type name to regtype',
proname => 'to_regtype', provolatile => 's', prorettype => 'regtype',
proargtypes => 'text', prosrc => 'to_regtype' },
{ oid => '8401', descr => 'convert type name to type modifier',
proname => 'to_regtypemod', provolatile => 's', prorettype => 'int4',
proargtypes => 'text', prosrc => 'to_regtypemod' },
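# Illustrative usage sketch (not catalog data): to_regtype() resolves a type
# name, returning NULL rather than erroring when it cannot, and to_regtypemod()
# returns the type modifier encoded in the name.
#   SELECT to_regtype('no_such_type');     -- NULL
#   SELECT to_regtypemod('varchar(32)');   -- 36 (declared length plus header)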
{ oid => '1079', descr => 'convert text to regclass',
proname => 'regclass', provolatile => 's', prorettype => 'regclass',
proargtypes => 'text', prosrc => 'text_regclass' },
{ oid => '4098', descr => 'I/O',
proname => 'regrolein', provolatile => 's', prorettype => 'regrole',
proargtypes => 'cstring', prosrc => 'regrolein' },
{ oid => '4092', descr => 'I/O',
proname => 'regroleout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regrole', prosrc => 'regroleout' },
{ oid => '4093', descr => 'convert role name to regrole',
proname => 'to_regrole', provolatile => 's', prorettype => 'regrole',
proargtypes => 'text', prosrc => 'to_regrole' },
{ oid => '4084', descr => 'I/O',
proname => 'regnamespacein', provolatile => 's', prorettype => 'regnamespace',
proargtypes => 'cstring', prosrc => 'regnamespacein' },
{ oid => '4085', descr => 'I/O',
proname => 'regnamespaceout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regnamespace', prosrc => 'regnamespaceout' },
{ oid => '4086', descr => 'convert namespace name to regnamespace',
proname => 'to_regnamespace', provolatile => 's',
prorettype => 'regnamespace', proargtypes => 'text',
prosrc => 'to_regnamespace' },
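# Illustrative usage sketch (not catalog data): unlike the corresponding
# regclass/regrole/regnamespace casts, the to_reg* functions return NULL when
# the name does not resolve instead of raising an error.
#   SELECT to_regclass('pg_catalog.pg_class');   -- pg_class
#   SELECT to_regrole('no_such_role');           -- NULL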
{ oid => '6210', descr => 'test whether string is valid input for data type',
proname => 'pg_input_is_valid', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'pg_input_is_valid' },
{ oid => '6211',
descr => 'get error details if string is not valid input for data type',
proname => 'pg_input_error_info', provolatile => 's', prorettype => 'record',
proargtypes => 'text text',
proallargtypes => '{text,text,text,text,text,text}',
proargmodes => '{i,i,o,o,o,o}',
proargnames => '{value,type_name,message,detail,hint,sql_error_code}',
prosrc => 'pg_input_error_info' },
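# Illustrative usage sketch (not catalog data): pg_input_is_valid() soft-tests
# a literal against a type's input function, and pg_input_error_info() reports
# why a literal is rejected, without raising an error.
#   SELECT pg_input_is_valid('42', 'integer');    -- true
#   SELECT pg_input_is_valid('abc', 'integer');   -- false
#   SELECT message FROM pg_input_error_info('abc', 'integer');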
{ oid => '1268',
descr => 'parse qualified identifier to array of identifiers',
proname => 'parse_ident', prorettype => '_text', proargtypes => 'text bool',
proargnames => '{str,strict}', prosrc => 'parse_ident' },
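# Illustrative usage sketch (not catalog data): parse_ident() splits a
# possibly-quoted qualified name into its component identifiers.
#   SELECT parse_ident('pg_catalog.pg_class');   -- {pg_catalog,pg_class}
#   SELECT parse_ident('"Mixed Case".tbl');      -- {"Mixed Case",tbl}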
{ oid => '2246', descr => '(internal)',
proname => 'fmgr_internal_validator', provolatile => 's',
prorettype => 'void', proargtypes => 'oid',
prosrc => 'fmgr_internal_validator' },
{ oid => '2247', descr => '(internal)',
proname => 'fmgr_c_validator', provolatile => 's', prorettype => 'void',
proargtypes => 'oid', prosrc => 'fmgr_c_validator' },
{ oid => '2248', descr => '(internal)',
proname => 'fmgr_sql_validator', provolatile => 's', prorettype => 'void',
proargtypes => 'oid', prosrc => 'fmgr_sql_validator' },
{ oid => '2250',
descr => 'user privilege on database by username, database name',
proname => 'has_database_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text',
prosrc => 'has_database_privilege_name_name' },
{ oid => '2251',
descr => 'user privilege on database by username, database oid',
proname => 'has_database_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'has_database_privilege_name_id' },
{ oid => '2252',
descr => 'user privilege on database by user oid, database name',
proname => 'has_database_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_database_privilege_id_name' },
{ oid => '2253',
descr => 'user privilege on database by user oid, database oid',
proname => 'has_database_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'has_database_privilege_id_id' },
{ oid => '2254',
descr => 'current user privilege on database by database name',
proname => 'has_database_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'has_database_privilege_name' },
{ oid => '2255',
descr => 'current user privilege on database by database oid',
proname => 'has_database_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'has_database_privilege_id' },
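# Illustrative usage sketch (not catalog data; role and database names are
# hypothetical): each has_*_privilege() family below follows the same overload
# pattern shown here for databases -- (user name|oid, object name|oid,
# privilege) plus current-user forms.
#   SELECT has_database_privilege('alice', 'appdb', 'CONNECT');
#   SELECT has_database_privilege('appdb', 'CREATE');   -- current user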
{ oid => '2256',
descr => 'user privilege on function by username, function name',
proname => 'has_function_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text',
prosrc => 'has_function_privilege_name_name' },
{ oid => '2257',
descr => 'user privilege on function by username, function oid',
proname => 'has_function_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'has_function_privilege_name_id' },
{ oid => '2258',
descr => 'user privilege on function by user oid, function name',
proname => 'has_function_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_function_privilege_id_name' },
{ oid => '2259',
descr => 'user privilege on function by user oid, function oid',
proname => 'has_function_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'has_function_privilege_id_id' },
{ oid => '2260',
descr => 'current user privilege on function by function name',
proname => 'has_function_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'has_function_privilege_name' },
{ oid => '2261',
descr => 'current user privilege on function by function oid',
proname => 'has_function_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'has_function_privilege_id' },
{ oid => '2262',
descr => 'user privilege on language by username, language name',
proname => 'has_language_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text',
prosrc => 'has_language_privilege_name_name' },
{ oid => '2263',
descr => 'user privilege on language by username, language oid',
proname => 'has_language_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'has_language_privilege_name_id' },
{ oid => '2264',
descr => 'user privilege on language by user oid, language name',
proname => 'has_language_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_language_privilege_id_name' },
{ oid => '2265',
descr => 'user privilege on language by user oid, language oid',
proname => 'has_language_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'has_language_privilege_id_id' },
{ oid => '2266',
descr => 'current user privilege on language by language name',
proname => 'has_language_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'has_language_privilege_name' },
{ oid => '2267',
descr => 'current user privilege on language by language oid',
proname => 'has_language_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'has_language_privilege_id' },
{ oid => '2268', descr => 'user privilege on schema by username, schema name',
proname => 'has_schema_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text', prosrc => 'has_schema_privilege_name_name' },
{ oid => '2269', descr => 'user privilege on schema by username, schema oid',
proname => 'has_schema_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'has_schema_privilege_name_id' },
{ oid => '2270', descr => 'user privilege on schema by user oid, schema name',
proname => 'has_schema_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_schema_privilege_id_name' },
{ oid => '2271', descr => 'user privilege on schema by user oid, schema oid',
proname => 'has_schema_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'has_schema_privilege_id_id' },
{ oid => '2272', descr => 'current user privilege on schema by schema name',
proname => 'has_schema_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'has_schema_privilege_name' },
{ oid => '2273', descr => 'current user privilege on schema by schema oid',
proname => 'has_schema_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'has_schema_privilege_id' },
{ oid => '2390',
descr => 'user privilege on tablespace by username, tablespace name',
proname => 'has_tablespace_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'name text text',
prosrc => 'has_tablespace_privilege_name_name' },
{ oid => '2391',
descr => 'user privilege on tablespace by username, tablespace oid',
proname => 'has_tablespace_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'name oid text',
prosrc => 'has_tablespace_privilege_name_id' },
{ oid => '2392',
descr => 'user privilege on tablespace by user oid, tablespace name',
proname => 'has_tablespace_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid text text',
prosrc => 'has_tablespace_privilege_id_name' },
{ oid => '2393',
descr => 'user privilege on tablespace by user oid, tablespace oid',
proname => 'has_tablespace_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid oid text',
prosrc => 'has_tablespace_privilege_id_id' },
{ oid => '2394',
descr => 'current user privilege on tablespace by tablespace name',
proname => 'has_tablespace_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'text text',
prosrc => 'has_tablespace_privilege_name' },
{ oid => '2395',
descr => 'current user privilege on tablespace by tablespace oid',
proname => 'has_tablespace_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid text',
prosrc => 'has_tablespace_privilege_id' },
{ oid => '3000',
descr => 'user privilege on foreign data wrapper by username, foreign data wrapper name',
proname => 'has_foreign_data_wrapper_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'name text text',
prosrc => 'has_foreign_data_wrapper_privilege_name_name' },
{ oid => '3001',
descr => 'user privilege on foreign data wrapper by username, foreign data wrapper oid',
proname => 'has_foreign_data_wrapper_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'name oid text',
prosrc => 'has_foreign_data_wrapper_privilege_name_id' },
{ oid => '3002',
descr => 'user privilege on foreign data wrapper by user oid, foreign data wrapper name',
proname => 'has_foreign_data_wrapper_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid text text',
prosrc => 'has_foreign_data_wrapper_privilege_id_name' },
{ oid => '3003',
descr => 'user privilege on foreign data wrapper by user oid, foreign data wrapper oid',
proname => 'has_foreign_data_wrapper_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid oid text',
prosrc => 'has_foreign_data_wrapper_privilege_id_id' },
{ oid => '3004',
descr => 'current user privilege on foreign data wrapper by foreign data wrapper name',
proname => 'has_foreign_data_wrapper_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'text text',
prosrc => 'has_foreign_data_wrapper_privilege_name' },
{ oid => '3005',
descr => 'current user privilege on foreign data wrapper by foreign data wrapper oid',
proname => 'has_foreign_data_wrapper_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid text',
prosrc => 'has_foreign_data_wrapper_privilege_id' },
{ oid => '3006', descr => 'user privilege on server by username, server name',
proname => 'has_server_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text', prosrc => 'has_server_privilege_name_name' },
{ oid => '3007', descr => 'user privilege on server by username, server oid',
proname => 'has_server_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'has_server_privilege_name_id' },
{ oid => '3008', descr => 'user privilege on server by user oid, server name',
proname => 'has_server_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_server_privilege_id_name' },
{ oid => '3009', descr => 'user privilege on server by user oid, server oid',
proname => 'has_server_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'has_server_privilege_id_id' },
{ oid => '3010', descr => 'current user privilege on server by server name',
proname => 'has_server_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'has_server_privilege_name' },
{ oid => '3011', descr => 'current user privilege on server by server oid',
proname => 'has_server_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'has_server_privilege_id' },
{ oid => '3138', descr => 'user privilege on type by username, type name',
proname => 'has_type_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text text', prosrc => 'has_type_privilege_name_name' },
{ oid => '3139', descr => 'user privilege on type by username, type oid',
proname => 'has_type_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'has_type_privilege_name_id' },
{ oid => '3140', descr => 'user privilege on type by user oid, type name',
proname => 'has_type_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text text', prosrc => 'has_type_privilege_id_name' },
{ oid => '3141', descr => 'user privilege on type by user oid, type oid',
proname => 'has_type_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'has_type_privilege_id_id' },
{ oid => '3142', descr => 'current user privilege on type by type name',
proname => 'has_type_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'text text', prosrc => 'has_type_privilege_name' },
{ oid => '3143', descr => 'current user privilege on type by type oid',
proname => 'has_type_privilege', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'has_type_privilege_id' },
{ oid => '6205',
descr => 'user privilege on parameter by username, parameter name',
proname => 'has_parameter_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'name text text',
prosrc => 'has_parameter_privilege_name_name' },
{ oid => '6206',
descr => 'user privilege on parameter by user oid, parameter name',
proname => 'has_parameter_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'oid text text',
prosrc => 'has_parameter_privilege_id_name' },
{ oid => '6207',
descr => 'current user privilege on parameter by parameter name',
proname => 'has_parameter_privilege', provolatile => 's',
prorettype => 'bool', proargtypes => 'text text',
prosrc => 'has_parameter_privilege_name' },
{ oid => '2705', descr => 'user privilege on role by username, role name',
proname => 'pg_has_role', provolatile => 's', prorettype => 'bool',
proargtypes => 'name name text', prosrc => 'pg_has_role_name_name' },
{ oid => '2706', descr => 'user privilege on role by username, role oid',
proname => 'pg_has_role', provolatile => 's', prorettype => 'bool',
proargtypes => 'name oid text', prosrc => 'pg_has_role_name_id' },
{ oid => '2707', descr => 'user privilege on role by user oid, role name',
proname => 'pg_has_role', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid name text', prosrc => 'pg_has_role_id_name' },
{ oid => '2708', descr => 'user privilege on role by user oid, role oid',
proname => 'pg_has_role', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid oid text', prosrc => 'pg_has_role_id_id' },
{ oid => '2709', descr => 'current user privilege on role by role name',
proname => 'pg_has_role', provolatile => 's', prorettype => 'bool',
proargtypes => 'name text', prosrc => 'pg_has_role_name' },
{ oid => '2710', descr => 'current user privilege on role by role oid',
proname => 'pg_has_role', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid text', prosrc => 'pg_has_role_id' },
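# Illustrative usage sketch (not catalog data; 'app_owner' is a hypothetical
# role): pg_has_role() tests membership in, or usage of, another role.
#   SELECT pg_has_role('pg_monitor', 'USAGE');               -- current user
#   SELECT pg_has_role('app_owner', 'pg_monitor', 'MEMBER');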
{ oid => '1269',
descr => 'bytes required to store the value, perhaps with compression',
proname => 'pg_column_size', provolatile => 's', prorettype => 'int4',
proargtypes => 'any', prosrc => 'pg_column_size' },
{ oid => '2121', descr => 'compression method for the compressed datum',
proname => 'pg_column_compression', provolatile => 's', prorettype => 'text',
proargtypes => 'any', prosrc => 'pg_column_compression' },
{ oid => '8393', descr => 'chunk ID of on-disk TOASTed value',
proname => 'pg_column_toast_chunk_id', provolatile => 's', prorettype => 'oid',
proargtypes => 'any', prosrc => 'pg_column_toast_chunk_id' },
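# Illustrative usage sketch (not catalog data; table and column names are
# hypothetical): these three functions inspect how an individual value is
# stored on disk.
#   SELECT pg_column_size(body),             -- stored size in bytes
#          pg_column_compression(body),      -- 'pglz', 'lz4', or NULL
#          pg_column_toast_chunk_id(body)    -- TOAST chunk_id, or NULL if inline
#   FROM messages;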
{ oid => '2322',
descr => 'total disk space usage for the specified tablespace',
proname => 'pg_tablespace_size', provolatile => 'v', prorettype => 'int8',
proargtypes => 'oid', prosrc => 'pg_tablespace_size_oid' },
{ oid => '2323',
descr => 'total disk space usage for the specified tablespace',
proname => 'pg_tablespace_size', provolatile => 'v', prorettype => 'int8',
proargtypes => 'name', prosrc => 'pg_tablespace_size_name' },
{ oid => '2324', descr => 'total disk space usage for the specified database',
proname => 'pg_database_size', provolatile => 'v', prorettype => 'int8',
proargtypes => 'oid', prosrc => 'pg_database_size_oid' },
{ oid => '2168', descr => 'total disk space usage for the specified database',
proname => 'pg_database_size', provolatile => 'v', prorettype => 'int8',
proargtypes => 'name', prosrc => 'pg_database_size_name' },
{ oid => '2325',
descr => 'disk space usage for the main fork of the specified table or index',
proname => 'pg_relation_size', prolang => 'sql', provolatile => 'v',
prorettype => 'int8', proargtypes => 'regclass',
prosrc => 'see system_functions.sql' },
{ oid => '2332',
descr => 'disk space usage for the specified fork of a table or index',
proname => 'pg_relation_size', provolatile => 'v', prorettype => 'int8',
proargtypes => 'regclass text', prosrc => 'pg_relation_size' },
{ oid => '2286',
descr => 'total disk space usage for the specified table and associated indexes',
proname => 'pg_total_relation_size', provolatile => 'v', prorettype => 'int8',
proargtypes => 'regclass', prosrc => 'pg_total_relation_size' },
{ oid => '2288',
descr => 'convert a long int to a human readable text using size units',
proname => 'pg_size_pretty', prorettype => 'text', proargtypes => 'int8',
prosrc => 'pg_size_pretty' },
{ oid => '3166',
descr => 'convert a numeric to a human readable text using size units',
proname => 'pg_size_pretty', prorettype => 'text', proargtypes => 'numeric',
prosrc => 'pg_size_pretty_numeric' },
{ oid => '3334',
descr => 'convert a size in human-readable format with size units into bytes',
proname => 'pg_size_bytes', prorettype => 'int8', proargtypes => 'text',
prosrc => 'pg_size_bytes' },
{ oid => '2997',
descr => 'disk space usage for the specified table, including TOAST, free space and visibility map',
proname => 'pg_table_size', provolatile => 'v', prorettype => 'int8',
proargtypes => 'regclass', prosrc => 'pg_table_size' },
{ oid => '2998',
descr => 'disk space usage for all indexes attached to the specified table',
proname => 'pg_indexes_size', provolatile => 'v', prorettype => 'int8',
proargtypes => 'regclass', prosrc => 'pg_indexes_size' },
{ oid => '2999', descr => 'filenode identifier of relation',
proname => 'pg_relation_filenode', provolatile => 's', prorettype => 'oid',
proargtypes => 'regclass', prosrc => 'pg_relation_filenode' },
{ oid => '3454', descr => 'relation OID for filenode and tablespace',
proname => 'pg_filenode_relation', provolatile => 's',
prorettype => 'regclass', proargtypes => 'oid oid',
prosrc => 'pg_filenode_relation' },
{ oid => '3034', descr => 'file path of relation',
proname => 'pg_relation_filepath', provolatile => 's', prorettype => 'text',
proargtypes => 'regclass', prosrc => 'pg_relation_filepath' },
{ oid => '2316', descr => '(internal)',
proname => 'postgresql_fdw_validator', prorettype => 'bool',
proargtypes => '_text oid', prosrc => 'postgresql_fdw_validator' },
{ oid => '2290', descr => 'I/O',
proname => 'record_in', provolatile => 's', prorettype => 'record',
proargtypes => 'cstring oid int4', prosrc => 'record_in' },
{ oid => '2291', descr => 'I/O',
proname => 'record_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'record', prosrc => 'record_out' },
{ oid => '2292', descr => 'I/O',
proname => 'cstring_in', prorettype => 'cstring', proargtypes => 'cstring',
prosrc => 'cstring_in' },
{ oid => '2293', descr => 'I/O',
proname => 'cstring_out', prorettype => 'cstring', proargtypes => 'cstring',
prosrc => 'cstring_out' },
{ oid => '2294', descr => 'I/O',
proname => 'any_in', prorettype => 'any', proargtypes => 'cstring',
prosrc => 'any_in' },
{ oid => '2295', descr => 'I/O',
proname => 'any_out', prorettype => 'cstring', proargtypes => 'any',
prosrc => 'any_out' },
{ oid => '2296', descr => 'I/O',
proname => 'anyarray_in', prorettype => 'anyarray', proargtypes => 'cstring',
prosrc => 'anyarray_in' },
{ oid => '2297', descr => 'I/O',
proname => 'anyarray_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'anyarray', prosrc => 'anyarray_out' },
{ oid => '2298', descr => 'I/O',
proname => 'void_in', prorettype => 'void', proargtypes => 'cstring',
prosrc => 'void_in' },
{ oid => '2299', descr => 'I/O',
proname => 'void_out', prorettype => 'cstring', proargtypes => 'void',
prosrc => 'void_out' },
{ oid => '2300', descr => 'I/O',
proname => 'trigger_in', proisstrict => 'f', prorettype => 'trigger',
proargtypes => 'cstring', prosrc => 'trigger_in' },
{ oid => '2301', descr => 'I/O',
proname => 'trigger_out', prorettype => 'cstring', proargtypes => 'trigger',
prosrc => 'trigger_out' },
{ oid => '3594', descr => 'I/O',
proname => 'event_trigger_in', proisstrict => 'f',
prorettype => 'event_trigger', proargtypes => 'cstring',
prosrc => 'event_trigger_in' },
{ oid => '3595', descr => 'I/O',
proname => 'event_trigger_out', prorettype => 'cstring',
proargtypes => 'event_trigger', prosrc => 'event_trigger_out' },
{ oid => '2302', descr => 'I/O',
proname => 'language_handler_in', proisstrict => 'f',
prorettype => 'language_handler', proargtypes => 'cstring',
prosrc => 'language_handler_in' },
{ oid => '2303', descr => 'I/O',
proname => 'language_handler_out', prorettype => 'cstring',
proargtypes => 'language_handler', prosrc => 'language_handler_out' },
{ oid => '2304', descr => 'I/O',
proname => 'internal_in', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'cstring', prosrc => 'internal_in' },
{ oid => '2305', descr => 'I/O',
proname => 'internal_out', prorettype => 'cstring', proargtypes => 'internal',
prosrc => 'internal_out' },
{ oid => '2312', descr => 'I/O',
proname => 'anyelement_in', prorettype => 'anyelement',
proargtypes => 'cstring', prosrc => 'anyelement_in' },
{ oid => '2313', descr => 'I/O',
proname => 'anyelement_out', prorettype => 'cstring',
proargtypes => 'anyelement', prosrc => 'anyelement_out' },
{ oid => '2398', descr => 'I/O',
proname => 'shell_in', proisstrict => 'f', prorettype => 'void',
proargtypes => 'cstring', prosrc => 'shell_in' },
{ oid => '2399', descr => 'I/O',
proname => 'shell_out', prorettype => 'cstring', proargtypes => 'void',
prosrc => 'shell_out' },
{ oid => '2597', descr => 'I/O',
proname => 'domain_in', proisstrict => 'f', provolatile => 's',
prorettype => 'any', proargtypes => 'cstring oid int4',
prosrc => 'domain_in' },
{ oid => '2598', descr => 'I/O',
proname => 'domain_recv', proisstrict => 'f', provolatile => 's',
prorettype => 'any', proargtypes => 'internal oid int4',
prosrc => 'domain_recv' },
{ oid => '2777', descr => 'I/O',
proname => 'anynonarray_in', prorettype => 'anynonarray',
proargtypes => 'cstring', prosrc => 'anynonarray_in' },
{ oid => '2778', descr => 'I/O',
proname => 'anynonarray_out', prorettype => 'cstring',
proargtypes => 'anynonarray', prosrc => 'anynonarray_out' },
{ oid => '3116', descr => 'I/O',
proname => 'fdw_handler_in', proisstrict => 'f', prorettype => 'fdw_handler',
proargtypes => 'cstring', prosrc => 'fdw_handler_in' },
{ oid => '3117', descr => 'I/O',
proname => 'fdw_handler_out', prorettype => 'cstring',
proargtypes => 'fdw_handler', prosrc => 'fdw_handler_out' },
{ oid => '326', descr => 'I/O',
proname => 'index_am_handler_in', proisstrict => 'f',
prorettype => 'index_am_handler', proargtypes => 'cstring',
prosrc => 'index_am_handler_in' },
{ oid => '327', descr => 'I/O',
proname => 'index_am_handler_out', prorettype => 'cstring',
proargtypes => 'index_am_handler', prosrc => 'index_am_handler_out' },
{ oid => '3311', descr => 'I/O',
proname => 'tsm_handler_in', proisstrict => 'f', prorettype => 'tsm_handler',
proargtypes => 'cstring', prosrc => 'tsm_handler_in' },
{ oid => '3312', descr => 'I/O',
proname => 'tsm_handler_out', prorettype => 'cstring',
proargtypes => 'tsm_handler', prosrc => 'tsm_handler_out' },
{ oid => '267', descr => 'I/O',
proname => 'table_am_handler_in', proisstrict => 'f',
prorettype => 'table_am_handler', proargtypes => 'cstring',
prosrc => 'table_am_handler_in' },
{ oid => '268', descr => 'I/O',
proname => 'table_am_handler_out', prorettype => 'cstring',
proargtypes => 'table_am_handler', prosrc => 'table_am_handler_out' },
{ oid => '5086', descr => 'I/O',
proname => 'anycompatible_in', prorettype => 'anycompatible',
proargtypes => 'cstring', prosrc => 'anycompatible_in' },
{ oid => '5087', descr => 'I/O',
proname => 'anycompatible_out', prorettype => 'cstring',
proargtypes => 'anycompatible', prosrc => 'anycompatible_out' },
{ oid => '5088', descr => 'I/O',
proname => 'anycompatiblearray_in', prorettype => 'anycompatiblearray',
proargtypes => 'cstring', prosrc => 'anycompatiblearray_in' },
{ oid => '5089', descr => 'I/O',
proname => 'anycompatiblearray_out', provolatile => 's',
prorettype => 'cstring', proargtypes => 'anycompatiblearray',
prosrc => 'anycompatiblearray_out' },
{ oid => '5090', descr => 'I/O',
proname => 'anycompatiblearray_recv', provolatile => 's',
prorettype => 'anycompatiblearray', proargtypes => 'internal',
prosrc => 'anycompatiblearray_recv' },
{ oid => '5091', descr => 'I/O',
proname => 'anycompatiblearray_send', provolatile => 's',
prorettype => 'bytea', proargtypes => 'anycompatiblearray',
prosrc => 'anycompatiblearray_send' },
{ oid => '5092', descr => 'I/O',
proname => 'anycompatiblenonarray_in', prorettype => 'anycompatiblenonarray',
proargtypes => 'cstring', prosrc => 'anycompatiblenonarray_in' },
{ oid => '5093', descr => 'I/O',
proname => 'anycompatiblenonarray_out', prorettype => 'cstring',
proargtypes => 'anycompatiblenonarray',
prosrc => 'anycompatiblenonarray_out' },
{ oid => '5094', descr => 'I/O',
proname => 'anycompatiblerange_in', provolatile => 's',
prorettype => 'anycompatiblerange', proargtypes => 'cstring oid int4',
prosrc => 'anycompatiblerange_in' },
{ oid => '5095', descr => 'I/O',
proname => 'anycompatiblerange_out', provolatile => 's',
prorettype => 'cstring', proargtypes => 'anycompatiblerange',
prosrc => 'anycompatiblerange_out' },
{ oid => '4226', descr => 'I/O',
proname => 'anycompatiblemultirange_in', provolatile => 's',
prorettype => 'anycompatiblemultirange', proargtypes => 'cstring oid int4',
prosrc => 'anycompatiblemultirange_in' },
{ oid => '4227', descr => 'I/O',
proname => 'anycompatiblemultirange_out', provolatile => 's',
prorettype => 'cstring', proargtypes => 'anycompatiblemultirange',
prosrc => 'anycompatiblemultirange_out' },
# tablesample method handlers
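# (BERNOULLI samples individual rows with the requested probability; SYSTEM
# samples whole table blocks, which is cheaper but coarser-grained.)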
{ oid => '3313', descr => 'BERNOULLI tablesample method handler',
proname => 'bernoulli', provolatile => 'v', prorettype => 'tsm_handler',
proargtypes => 'internal', prosrc => 'tsm_bernoulli_handler' },
{ oid => '3314', descr => 'SYSTEM tablesample method handler',
proname => 'system', provolatile => 'v', prorettype => 'tsm_handler',
proargtypes => 'internal', prosrc => 'tsm_system_handler' },
# cryptographic
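# md5() returns the digest as hex-encoded text, while the SHA-2 functions
# return the raw digest as bytea.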
{ oid => '2311', descr => 'MD5 hash',
proname => 'md5', proleakproof => 't', prorettype => 'text',
proargtypes => 'text', prosrc => 'md5_text' },
{ oid => '2321', descr => 'MD5 hash',
proname => 'md5', proleakproof => 't', prorettype => 'text',
proargtypes => 'bytea', prosrc => 'md5_bytea' },
{ oid => '3419', descr => 'SHA-224 hash',
proname => 'sha224', proleakproof => 't', prorettype => 'bytea',
proargtypes => 'bytea', prosrc => 'sha224_bytea' },
{ oid => '3420', descr => 'SHA-256 hash',
proname => 'sha256', proleakproof => 't', prorettype => 'bytea',
proargtypes => 'bytea', prosrc => 'sha256_bytea' },
{ oid => '3421', descr => 'SHA-384 hash',
proname => 'sha384', proleakproof => 't', prorettype => 'bytea',
proargtypes => 'bytea', prosrc => 'sha384_bytea' },
{ oid => '3422', descr => 'SHA-512 hash',
proname => 'sha512', proleakproof => 't', prorettype => 'bytea',
proargtypes => 'bytea', prosrc => 'sha512_bytea' },
# crosstype operations for date vs. timestamp and timestamptz
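# The comparisons involving timestamptz are only stable, since converting a
# date to timestamptz depends on the current TimeZone setting.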
{ oid => '2338',
proname => 'date_lt_timestamp', prorettype => 'bool',
proargtypes => 'date timestamp', prosrc => 'date_lt_timestamp' },
{ oid => '2339',
proname => 'date_le_timestamp', prorettype => 'bool',
proargtypes => 'date timestamp', prosrc => 'date_le_timestamp' },
{ oid => '2340',
proname => 'date_eq_timestamp', prorettype => 'bool',
proargtypes => 'date timestamp', prosrc => 'date_eq_timestamp' },
{ oid => '2341',
proname => 'date_gt_timestamp', prorettype => 'bool',
proargtypes => 'date timestamp', prosrc => 'date_gt_timestamp' },
{ oid => '2342',
proname => 'date_ge_timestamp', prorettype => 'bool',
proargtypes => 'date timestamp', prosrc => 'date_ge_timestamp' },
{ oid => '2343',
proname => 'date_ne_timestamp', prorettype => 'bool',
proargtypes => 'date timestamp', prosrc => 'date_ne_timestamp' },
{ oid => '2344', descr => 'less-equal-greater',
proname => 'date_cmp_timestamp', prorettype => 'int4',
proargtypes => 'date timestamp', prosrc => 'date_cmp_timestamp' },
{ oid => '2351',
proname => 'date_lt_timestamptz', provolatile => 's', prorettype => 'bool',
proargtypes => 'date timestamptz', prosrc => 'date_lt_timestamptz' },
{ oid => '2352',
proname => 'date_le_timestamptz', provolatile => 's', prorettype => 'bool',
proargtypes => 'date timestamptz', prosrc => 'date_le_timestamptz' },
{ oid => '2353',
proname => 'date_eq_timestamptz', provolatile => 's', prorettype => 'bool',
proargtypes => 'date timestamptz', prosrc => 'date_eq_timestamptz' },
{ oid => '2354',
proname => 'date_gt_timestamptz', provolatile => 's', prorettype => 'bool',
proargtypes => 'date timestamptz', prosrc => 'date_gt_timestamptz' },
{ oid => '2355',
proname => 'date_ge_timestamptz', provolatile => 's', prorettype => 'bool',
proargtypes => 'date timestamptz', prosrc => 'date_ge_timestamptz' },
{ oid => '2356',
proname => 'date_ne_timestamptz', provolatile => 's', prorettype => 'bool',
proargtypes => 'date timestamptz', prosrc => 'date_ne_timestamptz' },
{ oid => '2357', descr => 'less-equal-greater',
proname => 'date_cmp_timestamptz', provolatile => 's', prorettype => 'int4',
proargtypes => 'date timestamptz', prosrc => 'date_cmp_timestamptz' },
{ oid => '2364',
proname => 'timestamp_lt_date', prorettype => 'bool',
proargtypes => 'timestamp date', prosrc => 'timestamp_lt_date' },
{ oid => '2365',
proname => 'timestamp_le_date', prorettype => 'bool',
proargtypes => 'timestamp date', prosrc => 'timestamp_le_date' },
{ oid => '2366',
proname => 'timestamp_eq_date', prorettype => 'bool',
proargtypes => 'timestamp date', prosrc => 'timestamp_eq_date' },
{ oid => '2367',
proname => 'timestamp_gt_date', prorettype => 'bool',
proargtypes => 'timestamp date', prosrc => 'timestamp_gt_date' },
{ oid => '2368',
proname => 'timestamp_ge_date', prorettype => 'bool',
proargtypes => 'timestamp date', prosrc => 'timestamp_ge_date' },
{ oid => '2369',
proname => 'timestamp_ne_date', prorettype => 'bool',
proargtypes => 'timestamp date', prosrc => 'timestamp_ne_date' },
{ oid => '2370', descr => 'less-equal-greater',
proname => 'timestamp_cmp_date', prorettype => 'int4',
proargtypes => 'timestamp date', prosrc => 'timestamp_cmp_date' },
{ oid => '2377',
proname => 'timestamptz_lt_date', provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz date', prosrc => 'timestamptz_lt_date' },
{ oid => '2378',
proname => 'timestamptz_le_date', provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz date', prosrc => 'timestamptz_le_date' },
{ oid => '2379',
proname => 'timestamptz_eq_date', provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz date', prosrc => 'timestamptz_eq_date' },
{ oid => '2380',
proname => 'timestamptz_gt_date', provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz date', prosrc => 'timestamptz_gt_date' },
{ oid => '2381',
proname => 'timestamptz_ge_date', provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz date', prosrc => 'timestamptz_ge_date' },
{ oid => '2382',
proname => 'timestamptz_ne_date', provolatile => 's', prorettype => 'bool',
proargtypes => 'timestamptz date', prosrc => 'timestamptz_ne_date' },
{ oid => '2383', descr => 'less-equal-greater',
proname => 'timestamptz_cmp_date', provolatile => 's', prorettype => 'int4',
proargtypes => 'timestamptz date', prosrc => 'timestamptz_cmp_date' },
# crosstype operations for timestamp vs. timestamptz
{ oid => '2520',
proname => 'timestamp_lt_timestamptz', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamp timestamptz',
prosrc => 'timestamp_lt_timestamptz' },
{ oid => '2521',
proname => 'timestamp_le_timestamptz', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamp timestamptz',
prosrc => 'timestamp_le_timestamptz' },
{ oid => '2522',
proname => 'timestamp_eq_timestamptz', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamp timestamptz',
prosrc => 'timestamp_eq_timestamptz' },
{ oid => '2523',
proname => 'timestamp_gt_timestamptz', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamp timestamptz',
prosrc => 'timestamp_gt_timestamptz' },
{ oid => '2524',
proname => 'timestamp_ge_timestamptz', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamp timestamptz',
prosrc => 'timestamp_ge_timestamptz' },
{ oid => '2525',
proname => 'timestamp_ne_timestamptz', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamp timestamptz',
prosrc => 'timestamp_ne_timestamptz' },
{ oid => '2526', descr => 'less-equal-greater',
proname => 'timestamp_cmp_timestamptz', provolatile => 's',
prorettype => 'int4', proargtypes => 'timestamp timestamptz',
prosrc => 'timestamp_cmp_timestamptz' },
{ oid => '2527',
proname => 'timestamptz_lt_timestamp', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamptz timestamp',
prosrc => 'timestamptz_lt_timestamp' },
{ oid => '2528',
proname => 'timestamptz_le_timestamp', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamptz timestamp',
prosrc => 'timestamptz_le_timestamp' },
{ oid => '2529',
proname => 'timestamptz_eq_timestamp', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamptz timestamp',
prosrc => 'timestamptz_eq_timestamp' },
{ oid => '2530',
proname => 'timestamptz_gt_timestamp', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamptz timestamp',
prosrc => 'timestamptz_gt_timestamp' },
{ oid => '2531',
proname => 'timestamptz_ge_timestamp', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamptz timestamp',
prosrc => 'timestamptz_ge_timestamp' },
{ oid => '2532',
proname => 'timestamptz_ne_timestamp', provolatile => 's',
prorettype => 'bool', proargtypes => 'timestamptz timestamp',
prosrc => 'timestamptz_ne_timestamp' },
{ oid => '2533', descr => 'less-equal-greater',
proname => 'timestamptz_cmp_timestamp', provolatile => 's',
prorettype => 'int4', proargtypes => 'timestamptz timestamp',
prosrc => 'timestamptz_cmp_timestamp' },
# send/receive functions
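# These implement the binary wire format (pg_type.typreceive/typsend), as
# used by binary COPY and binary-mode protocol parameters and results.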
{ oid => '2400', descr => 'I/O',
proname => 'array_recv', provolatile => 's', prorettype => 'anyarray',
proargtypes => 'internal oid int4', prosrc => 'array_recv' },
{ oid => '2401', descr => 'I/O',
proname => 'array_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'anyarray', prosrc => 'array_send' },
{ oid => '2402', descr => 'I/O',
proname => 'record_recv', provolatile => 's', prorettype => 'record',
proargtypes => 'internal oid int4', prosrc => 'record_recv' },
{ oid => '2403', descr => 'I/O',
proname => 'record_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'record', prosrc => 'record_send' },
{ oid => '2404', descr => 'I/O',
proname => 'int2recv', prorettype => 'int2', proargtypes => 'internal',
prosrc => 'int2recv' },
{ oid => '2405', descr => 'I/O',
proname => 'int2send', prorettype => 'bytea', proargtypes => 'int2',
prosrc => 'int2send' },
{ oid => '2406', descr => 'I/O',
proname => 'int4recv', prorettype => 'int4', proargtypes => 'internal',
prosrc => 'int4recv' },
{ oid => '2407', descr => 'I/O',
proname => 'int4send', prorettype => 'bytea', proargtypes => 'int4',
prosrc => 'int4send' },
{ oid => '2408', descr => 'I/O',
proname => 'int8recv', prorettype => 'int8', proargtypes => 'internal',
prosrc => 'int8recv' },
{ oid => '2409', descr => 'I/O',
proname => 'int8send', prorettype => 'bytea', proargtypes => 'int8',
prosrc => 'int8send' },
{ oid => '2410', descr => 'I/O',
proname => 'int2vectorrecv', prorettype => 'int2vector',
proargtypes => 'internal', prosrc => 'int2vectorrecv' },
{ oid => '2411', descr => 'I/O',
proname => 'int2vectorsend', prorettype => 'bytea',
proargtypes => 'int2vector', prosrc => 'int2vectorsend' },
{ oid => '2412', descr => 'I/O',
proname => 'bytearecv', prorettype => 'bytea', proargtypes => 'internal',
prosrc => 'bytearecv' },
{ oid => '2413', descr => 'I/O',
proname => 'byteasend', prorettype => 'bytea', proargtypes => 'bytea',
prosrc => 'byteasend' },
{ oid => '2414', descr => 'I/O',
proname => 'textrecv', provolatile => 's', prorettype => 'text',
proargtypes => 'internal', prosrc => 'textrecv' },
{ oid => '2415', descr => 'I/O',
proname => 'textsend', provolatile => 's', prorettype => 'bytea',
proargtypes => 'text', prosrc => 'textsend' },
{ oid => '2416', descr => 'I/O',
proname => 'unknownrecv', prorettype => 'unknown', proargtypes => 'internal',
prosrc => 'unknownrecv' },
{ oid => '2417', descr => 'I/O',
proname => 'unknownsend', prorettype => 'bytea', proargtypes => 'unknown',
prosrc => 'unknownsend' },
{ oid => '2418', descr => 'I/O',
proname => 'oidrecv', prorettype => 'oid', proargtypes => 'internal',
prosrc => 'oidrecv' },
{ oid => '2419', descr => 'I/O',
proname => 'oidsend', prorettype => 'bytea', proargtypes => 'oid',
prosrc => 'oidsend' },
{ oid => '2420', descr => 'I/O',
proname => 'oidvectorrecv', prorettype => 'oidvector',
proargtypes => 'internal', prosrc => 'oidvectorrecv' },
{ oid => '2421', descr => 'I/O',
proname => 'oidvectorsend', prorettype => 'bytea', proargtypes => 'oidvector',
prosrc => 'oidvectorsend' },
{ oid => '2422', descr => 'I/O',
proname => 'namerecv', provolatile => 's', prorettype => 'name',
proargtypes => 'internal', prosrc => 'namerecv' },
{ oid => '2423', descr => 'I/O',
proname => 'namesend', provolatile => 's', prorettype => 'bytea',
proargtypes => 'name', prosrc => 'namesend' },
{ oid => '2424', descr => 'I/O',
proname => 'float4recv', prorettype => 'float4', proargtypes => 'internal',
prosrc => 'float4recv' },
{ oid => '2425', descr => 'I/O',
proname => 'float4send', prorettype => 'bytea', proargtypes => 'float4',
prosrc => 'float4send' },
{ oid => '2426', descr => 'I/O',
proname => 'float8recv', prorettype => 'float8', proargtypes => 'internal',
prosrc => 'float8recv' },
{ oid => '2427', descr => 'I/O',
proname => 'float8send', prorettype => 'bytea', proargtypes => 'float8',
prosrc => 'float8send' },
{ oid => '2428', descr => 'I/O',
proname => 'point_recv', prorettype => 'point', proargtypes => 'internal',
prosrc => 'point_recv' },
{ oid => '2429', descr => 'I/O',
proname => 'point_send', prorettype => 'bytea', proargtypes => 'point',
prosrc => 'point_send' },
{ oid => '2430', descr => 'I/O',
proname => 'bpcharrecv', provolatile => 's', prorettype => 'bpchar',
proargtypes => 'internal oid int4', prosrc => 'bpcharrecv' },
{ oid => '2431', descr => 'I/O',
proname => 'bpcharsend', provolatile => 's', prorettype => 'bytea',
proargtypes => 'bpchar', prosrc => 'bpcharsend' },
{ oid => '2432', descr => 'I/O',
proname => 'varcharrecv', provolatile => 's', prorettype => 'varchar',
proargtypes => 'internal oid int4', prosrc => 'varcharrecv' },
{ oid => '2433', descr => 'I/O',
proname => 'varcharsend', provolatile => 's', prorettype => 'bytea',
proargtypes => 'varchar', prosrc => 'varcharsend' },
{ oid => '2434', descr => 'I/O',
proname => 'charrecv', prorettype => 'char', proargtypes => 'internal',
prosrc => 'charrecv' },
{ oid => '2435', descr => 'I/O',
proname => 'charsend', prorettype => 'bytea', proargtypes => 'char',
prosrc => 'charsend' },
{ oid => '2436', descr => 'I/O',
proname => 'boolrecv', prorettype => 'bool', proargtypes => 'internal',
prosrc => 'boolrecv' },
{ oid => '2437', descr => 'I/O',
proname => 'boolsend', prorettype => 'bytea', proargtypes => 'bool',
prosrc => 'boolsend' },
{ oid => '2438', descr => 'I/O',
proname => 'tidrecv', prorettype => 'tid', proargtypes => 'internal',
prosrc => 'tidrecv' },
{ oid => '2439', descr => 'I/O',
proname => 'tidsend', prorettype => 'bytea', proargtypes => 'tid',
prosrc => 'tidsend' },
{ oid => '2440', descr => 'I/O',
proname => 'xidrecv', prorettype => 'xid', proargtypes => 'internal',
prosrc => 'xidrecv' },
{ oid => '2441', descr => 'I/O',
proname => 'xidsend', prorettype => 'bytea', proargtypes => 'xid',
prosrc => 'xidsend' },
{ oid => '2442', descr => 'I/O',
proname => 'cidrecv', prorettype => 'cid', proargtypes => 'internal',
prosrc => 'cidrecv' },
{ oid => '2443', descr => 'I/O',
proname => 'cidsend', prorettype => 'bytea', proargtypes => 'cid',
prosrc => 'cidsend' },
{ oid => '2444', descr => 'I/O',
proname => 'regprocrecv', prorettype => 'regproc', proargtypes => 'internal',
prosrc => 'regprocrecv' },
{ oid => '2445', descr => 'I/O',
proname => 'regprocsend', prorettype => 'bytea', proargtypes => 'regproc',
prosrc => 'regprocsend' },
{ oid => '2446', descr => 'I/O',
proname => 'regprocedurerecv', prorettype => 'regprocedure',
proargtypes => 'internal', prosrc => 'regprocedurerecv' },
{ oid => '2447', descr => 'I/O',
proname => 'regproceduresend', prorettype => 'bytea',
proargtypes => 'regprocedure', prosrc => 'regproceduresend' },
{ oid => '2448', descr => 'I/O',
proname => 'regoperrecv', prorettype => 'regoper', proargtypes => 'internal',
prosrc => 'regoperrecv' },
{ oid => '2449', descr => 'I/O',
proname => 'regopersend', prorettype => 'bytea', proargtypes => 'regoper',
prosrc => 'regopersend' },
{ oid => '2450', descr => 'I/O',
proname => 'regoperatorrecv', prorettype => 'regoperator',
proargtypes => 'internal', prosrc => 'regoperatorrecv' },
{ oid => '2451', descr => 'I/O',
proname => 'regoperatorsend', prorettype => 'bytea',
proargtypes => 'regoperator', prosrc => 'regoperatorsend' },
{ oid => '2452', descr => 'I/O',
proname => 'regclassrecv', prorettype => 'regclass',
proargtypes => 'internal', prosrc => 'regclassrecv' },
{ oid => '2453', descr => 'I/O',
proname => 'regclasssend', prorettype => 'bytea', proargtypes => 'regclass',
prosrc => 'regclasssend' },
{ oid => '4196', descr => 'I/O',
proname => 'regcollationrecv', prorettype => 'regcollation',
proargtypes => 'internal', prosrc => 'regcollationrecv' },
{ oid => '4197', descr => 'I/O',
proname => 'regcollationsend', prorettype => 'bytea',
proargtypes => 'regcollation', prosrc => 'regcollationsend' },
{ oid => '2454', descr => 'I/O',
proname => 'regtyperecv', prorettype => 'regtype', proargtypes => 'internal',
prosrc => 'regtyperecv' },
{ oid => '2455', descr => 'I/O',
proname => 'regtypesend', prorettype => 'bytea', proargtypes => 'regtype',
prosrc => 'regtypesend' },
{ oid => '4094', descr => 'I/O',
proname => 'regrolerecv', prorettype => 'regrole', proargtypes => 'internal',
prosrc => 'regrolerecv' },
{ oid => '4095', descr => 'I/O',
proname => 'regrolesend', prorettype => 'bytea', proargtypes => 'regrole',
prosrc => 'regrolesend' },
{ oid => '4087', descr => 'I/O',
proname => 'regnamespacerecv', prorettype => 'regnamespace',
proargtypes => 'internal', prosrc => 'regnamespacerecv' },
{ oid => '4088', descr => 'I/O',
proname => 'regnamespacesend', prorettype => 'bytea',
proargtypes => 'regnamespace', prosrc => 'regnamespacesend' },
{ oid => '2456', descr => 'I/O',
proname => 'bit_recv', prorettype => 'bit',
proargtypes => 'internal oid int4', prosrc => 'bit_recv' },
{ oid => '2457', descr => 'I/O',
proname => 'bit_send', prorettype => 'bytea', proargtypes => 'bit',
prosrc => 'bit_send' },
{ oid => '2458', descr => 'I/O',
proname => 'varbit_recv', prorettype => 'varbit',
proargtypes => 'internal oid int4', prosrc => 'varbit_recv' },
{ oid => '2459', descr => 'I/O',
proname => 'varbit_send', prorettype => 'bytea', proargtypes => 'varbit',
prosrc => 'varbit_send' },
{ oid => '2460', descr => 'I/O',
proname => 'numeric_recv', prorettype => 'numeric',
proargtypes => 'internal oid int4', prosrc => 'numeric_recv' },
{ oid => '2461', descr => 'I/O',
proname => 'numeric_send', prorettype => 'bytea', proargtypes => 'numeric',
prosrc => 'numeric_send' },
{ oid => '2468', descr => 'I/O',
proname => 'date_recv', prorettype => 'date', proargtypes => 'internal',
prosrc => 'date_recv' },
{ oid => '2469', descr => 'I/O',
proname => 'date_send', prorettype => 'bytea', proargtypes => 'date',
prosrc => 'date_send' },
{ oid => '2470', descr => 'I/O',
proname => 'time_recv', prorettype => 'time',
proargtypes => 'internal oid int4', prosrc => 'time_recv' },
{ oid => '2471', descr => 'I/O',
proname => 'time_send', prorettype => 'bytea', proargtypes => 'time',
prosrc => 'time_send' },
{ oid => '2472', descr => 'I/O',
proname => 'timetz_recv', prorettype => 'timetz',
proargtypes => 'internal oid int4', prosrc => 'timetz_recv' },
{ oid => '2473', descr => 'I/O',
proname => 'timetz_send', prorettype => 'bytea', proargtypes => 'timetz',
prosrc => 'timetz_send' },
{ oid => '2474', descr => 'I/O',
proname => 'timestamp_recv', prorettype => 'timestamp',
proargtypes => 'internal oid int4', prosrc => 'timestamp_recv' },
{ oid => '2475', descr => 'I/O',
proname => 'timestamp_send', prorettype => 'bytea',
proargtypes => 'timestamp', prosrc => 'timestamp_send' },
{ oid => '2476', descr => 'I/O',
proname => 'timestamptz_recv', prorettype => 'timestamptz',
proargtypes => 'internal oid int4', prosrc => 'timestamptz_recv' },
{ oid => '2477', descr => 'I/O',
proname => 'timestamptz_send', prorettype => 'bytea',
proargtypes => 'timestamptz', prosrc => 'timestamptz_send' },
{ oid => '2478', descr => 'I/O',
proname => 'interval_recv', prorettype => 'interval',
proargtypes => 'internal oid int4', prosrc => 'interval_recv' },
{ oid => '2479', descr => 'I/O',
proname => 'interval_send', prorettype => 'bytea', proargtypes => 'interval',
prosrc => 'interval_send' },
{ oid => '2480', descr => 'I/O',
proname => 'lseg_recv', prorettype => 'lseg', proargtypes => 'internal',
prosrc => 'lseg_recv' },
{ oid => '2481', descr => 'I/O',
proname => 'lseg_send', prorettype => 'bytea', proargtypes => 'lseg',
prosrc => 'lseg_send' },
{ oid => '2482', descr => 'I/O',
proname => 'path_recv', prorettype => 'path', proargtypes => 'internal',
prosrc => 'path_recv' },
{ oid => '2483', descr => 'I/O',
proname => 'path_send', prorettype => 'bytea', proargtypes => 'path',
prosrc => 'path_send' },
{ oid => '2484', descr => 'I/O',
proname => 'box_recv', prorettype => 'box', proargtypes => 'internal',
prosrc => 'box_recv' },
{ oid => '2485', descr => 'I/O',
proname => 'box_send', prorettype => 'bytea', proargtypes => 'box',
prosrc => 'box_send' },
{ oid => '2486', descr => 'I/O',
proname => 'poly_recv', prorettype => 'polygon', proargtypes => 'internal',
prosrc => 'poly_recv' },
{ oid => '2487', descr => 'I/O',
proname => 'poly_send', prorettype => 'bytea', proargtypes => 'polygon',
prosrc => 'poly_send' },
{ oid => '2488', descr => 'I/O',
proname => 'line_recv', prorettype => 'line', proargtypes => 'internal',
prosrc => 'line_recv' },
{ oid => '2489', descr => 'I/O',
proname => 'line_send', prorettype => 'bytea', proargtypes => 'line',
prosrc => 'line_send' },
{ oid => '2490', descr => 'I/O',
proname => 'circle_recv', prorettype => 'circle', proargtypes => 'internal',
prosrc => 'circle_recv' },
{ oid => '2491', descr => 'I/O',
proname => 'circle_send', prorettype => 'bytea', proargtypes => 'circle',
prosrc => 'circle_send' },
{ oid => '2492', descr => 'I/O',
proname => 'cash_recv', prorettype => 'money', proargtypes => 'internal',
prosrc => 'cash_recv' },
{ oid => '2493', descr => 'I/O',
proname => 'cash_send', prorettype => 'bytea', proargtypes => 'money',
prosrc => 'cash_send' },
{ oid => '2494', descr => 'I/O',
proname => 'macaddr_recv', prorettype => 'macaddr', proargtypes => 'internal',
prosrc => 'macaddr_recv' },
{ oid => '2495', descr => 'I/O',
proname => 'macaddr_send', prorettype => 'bytea', proargtypes => 'macaddr',
prosrc => 'macaddr_send' },
{ oid => '2496', descr => 'I/O',
proname => 'inet_recv', prorettype => 'inet', proargtypes => 'internal',
prosrc => 'inet_recv' },
{ oid => '2497', descr => 'I/O',
proname => 'inet_send', prorettype => 'bytea', proargtypes => 'inet',
prosrc => 'inet_send' },
{ oid => '2498', descr => 'I/O',
proname => 'cidr_recv', prorettype => 'cidr', proargtypes => 'internal',
prosrc => 'cidr_recv' },
{ oid => '2499', descr => 'I/O',
proname => 'cidr_send', prorettype => 'bytea', proargtypes => 'cidr',
prosrc => 'cidr_send' },
{ oid => '2500', descr => 'I/O',
proname => 'cstring_recv', provolatile => 's', prorettype => 'cstring',
proargtypes => 'internal', prosrc => 'cstring_recv' },
{ oid => '2501', descr => 'I/O',
proname => 'cstring_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'cstring', prosrc => 'cstring_send' },
{ oid => '2502', descr => 'I/O',
proname => 'anyarray_recv', provolatile => 's', prorettype => 'anyarray',
proargtypes => 'internal', prosrc => 'anyarray_recv' },
{ oid => '2503', descr => 'I/O',
proname => 'anyarray_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'anyarray', prosrc => 'anyarray_send' },
{ oid => '3120', descr => 'I/O',
proname => 'void_recv', prorettype => 'void', proargtypes => 'internal',
prosrc => 'void_recv' },
{ oid => '3121', descr => 'I/O',
proname => 'void_send', prorettype => 'bytea', proargtypes => 'void',
prosrc => 'void_send' },
{ oid => '3446', descr => 'I/O',
proname => 'macaddr8_recv', prorettype => 'macaddr8',
proargtypes => 'internal', prosrc => 'macaddr8_recv' },
{ oid => '3447', descr => 'I/O',
proname => 'macaddr8_send', prorettype => 'bytea', proargtypes => 'macaddr8',
prosrc => 'macaddr8_send' },
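# Note (illustrative comment only, not catalog data): the *_send functions above
# are ordinary SQL-callable functions, while the *_recv counterparts take
# "internal" and are reachable only through the binary wire protocol or
# COPY BINARY, e.g.
#   SELECT interval_send(interval '1 hour');   -- returns the bytea wire image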
# System-view support functions with pretty-print option
{ oid => '2504', descr => 'source text of a rule with pretty-print option',
proname => 'pg_get_ruledef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid bool', prosrc => 'pg_get_ruledef_ext' },
{ oid => '2505',
descr => 'select statement of a view with pretty-print option',
proname => 'pg_get_viewdef', provolatile => 's', proparallel => 'r',
prorettype => 'text', proargtypes => 'text bool',
prosrc => 'pg_get_viewdef_name_ext' },
{ oid => '2506',
descr => 'select statement of a view with pretty-print option',
proname => 'pg_get_viewdef', provolatile => 's', proparallel => 'r',
prorettype => 'text', proargtypes => 'oid bool',
prosrc => 'pg_get_viewdef_ext' },
{ oid => '3159',
descr => 'select statement of a view with pretty-printing and specified line wrapping',
proname => 'pg_get_viewdef', provolatile => 's', proparallel => 'r',
prorettype => 'text', proargtypes => 'oid int4',
prosrc => 'pg_get_viewdef_wrap' },
{ oid => '2507',
descr => 'index description (full create statement or single expression) with pretty-print option',
proname => 'pg_get_indexdef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid int4 bool', prosrc => 'pg_get_indexdef_ext' },
{ oid => '2508', descr => 'constraint description with pretty-print option',
proname => 'pg_get_constraintdef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid bool', prosrc => 'pg_get_constraintdef_ext' },
{ oid => '2509',
descr => 'deparse an encoded expression with pretty-print option',
proname => 'pg_get_expr', provolatile => 's', prorettype => 'text',
proargtypes => 'pg_node_tree oid bool', prosrc => 'pg_get_expr_ext' },
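# Illustrative calls of the pretty-print deparse functions above (comment only,
# not catalog data; column number 0 selects the whole index definition and the
# boolean enables pretty-printing):
#   SELECT pg_get_viewdef('pg_tables'::regclass, true);
#   SELECT pg_get_indexdef('pg_class_oid_index'::regclass, 0, true);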
{ oid => '2510', descr => 'get the prepared statements for this session',
proname => 'pg_prepared_statement', prorows => '1000', proretset => 't',
provolatile => 's', proparallel => 'r', prorettype => 'record',
proargtypes => '',
proallargtypes => '{text,text,timestamptz,_regtype,_regtype,bool,int8,int8}',
proargmodes => '{o,o,o,o,o,o,o,o}',
proargnames => '{name,statement,prepare_time,parameter_types,result_types,from_sql,generic_plans,custom_plans}',
prosrc => 'pg_prepared_statement' },
{ oid => '2511', descr => 'get the open cursors for this session',
proname => 'pg_cursor', prorows => '1000', proretset => 't',
provolatile => 's', proparallel => 'r', prorettype => 'record',
proargtypes => '', proallargtypes => '{text,text,bool,bool,bool,timestamptz}',
proargmodes => '{o,o,o,o,o,o}',
proargnames => '{name,statement,is_holdable,is_binary,is_scrollable,creation_time}',
prosrc => 'pg_cursor' },
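# These set-returning functions back the pg_prepared_statements and pg_cursors
# views but can also be called directly; a usage sketch (comment only):
#   SELECT name, statement FROM pg_prepared_statement();
#   SELECT name, is_holdable FROM pg_cursor();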
{ oid => '2599', descr => 'get the available time zone abbreviations',
proname => 'pg_timezone_abbrevs', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,interval,bool}', proargmodes => '{o,o,o}',
proargnames => '{abbrev,utc_offset,is_dst}',
prosrc => 'pg_timezone_abbrevs' },
{ oid => '2856', descr => 'get the available time zone names',
proname => 'pg_timezone_names', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,text,interval,bool}', proargmodes => '{o,o,o,o}',
proargnames => '{name,abbrev,utc_offset,is_dst}',
prosrc => 'pg_timezone_names' },
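# Both time zone SRFs are directly callable as well as being exposed through
# the pg_timezone_abbrevs and pg_timezone_names views; e.g. (comment only):
#   SELECT * FROM pg_timezone_names() WHERE name = 'UTC';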
{ oid => '2730', descr => 'trigger description with pretty-print option',
proname => 'pg_get_triggerdef', provolatile => 's', prorettype => 'text',
proargtypes => 'oid bool', prosrc => 'pg_get_triggerdef_ext' },
# asynchronous notifications
{ oid => '3035',
descr => 'get the channels that the current backend listens to',
proname => 'pg_listening_channels', prorows => '10', proretset => 't',
provolatile => 's', proparallel => 'r', prorettype => 'text',
proargtypes => '', prosrc => 'pg_listening_channels' },
{ oid => '3036', descr => 'send a notification event',
proname => 'pg_notify', proisstrict => 'f', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'text text',
prosrc => 'pg_notify' },
{ oid => '3296',
descr => 'get the fraction of the asynchronous notification queue currently in use',
proname => 'pg_notification_queue_usage', provolatile => 'v',
proparallel => 'r', prorettype => 'float8', proargtypes => '',
prosrc => 'pg_notification_queue_usage' },
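# Usage sketch for the notification functions above (comment only, not catalog
# data; 'my_channel' is a hypothetical channel name):
#   SELECT pg_notify('my_channel', 'hello');
#   SELECT pg_listening_channels();
#   SELECT pg_notification_queue_usage();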
# shared memory usage
{ oid => '5052', descr => 'allocations from the main shared memory segment',
proname => 'pg_get_shmem_allocations', prorows => '50', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,int8,int8,int8}', proargmodes => '{o,o,o,o}',
proargnames => '{name,off,size,allocated_size}',
prosrc => 'pg_get_shmem_allocations' },
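# Directly callable as well as via the pg_shmem_allocations view; e.g.
# (illustrative comment only):
#   SELECT * FROM pg_get_shmem_allocations() ORDER BY allocated_size DESC LIMIT 5;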
# memory context of local backend
{ oid => '2282',
descr => 'information about all memory contexts of local backend',
proname => 'pg_get_backend_memory_contexts', prorows => '100',
proretset => 't', provolatile => 'v', proparallel => 'r',
prorettype => 'record', proargtypes => '',
proallargtypes => '{text,text,text,int4,int8,int8,int8,int8,int8}',
proargmodes => '{o,o,o,o,o,o,o,o,o}',
  proargnames => '{name,ident,parent,level,total_bytes,total_nblocks,free_bytes,free_chunks,used_bytes}',
prosrc => 'pg_get_backend_memory_contexts' },
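# Directly callable as well as via the pg_backend_memory_contexts view; e.g.
# (illustrative comment only):
#   SELECT name, parent, total_bytes FROM pg_get_backend_memory_contexts();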
# logging memory contexts of the specified backend
{ oid => '4543', descr => 'log memory contexts of the specified backend',
proname => 'pg_log_backend_memory_contexts', provolatile => 'v',
prorettype => 'bool', proargtypes => 'int4',
prosrc => 'pg_log_backend_memory_contexts' },
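# Hedged example (comment only, not catalog data; typically superuser-only):
#   SELECT pg_log_backend_memory_contexts(pg_backend_pid());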
# non-persistent series generator
{ oid => '1066', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000',
prosupport => 'generate_series_int4_support', proretset => 't',
prorettype => 'int4', proargtypes => 'int4 int4 int4',
prosrc => 'generate_series_step_int4' },
{ oid => '1067', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000',
prosupport => 'generate_series_int4_support', proretset => 't',
prorettype => 'int4', proargtypes => 'int4 int4',
prosrc => 'generate_series_int4' },
{ oid => '3994', descr => 'planner support for generate_series',
proname => 'generate_series_int4_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'generate_series_int4_support' },
{ oid => '1068', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000',
prosupport => 'generate_series_int8_support', proretset => 't',
prorettype => 'int8', proargtypes => 'int8 int8 int8',
prosrc => 'generate_series_step_int8' },
{ oid => '1069', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000',
prosupport => 'generate_series_int8_support', proretset => 't',
prorettype => 'int8', proargtypes => 'int8 int8',
prosrc => 'generate_series_int8' },
{ oid => '3995', descr => 'planner support for generate_series',
proname => 'generate_series_int8_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'generate_series_int8_support' },
{ oid => '3259', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000', proretset => 't',
prorettype => 'numeric', proargtypes => 'numeric numeric numeric',
prosrc => 'generate_series_step_numeric' },
{ oid => '3260', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000', proretset => 't',
prorettype => 'numeric', proargtypes => 'numeric numeric',
prosrc => 'generate_series_numeric' },
{ oid => '938', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000', proretset => 't',
prorettype => 'timestamp', proargtypes => 'timestamp timestamp interval',
prosrc => 'generate_series_timestamp' },
{ oid => '939', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'timestamptz timestamptz interval',
prosrc => 'generate_series_timestamptz' },
{ oid => '6274', descr => 'non-persistent series generator',
proname => 'generate_series', prorows => '1000', proretset => 't',
prorettype => 'timestamptz',
proargtypes => 'timestamptz timestamptz interval text',
prosrc => 'generate_series_timestamptz_at_zone' },
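# Illustrative calls of the generate_series variants above (comment only, not
# catalog data; the trailing text argument of the four-parameter timestamptz
# form names a time zone):
#   SELECT generate_series(1, 10, 2);
#   SELECT generate_series(now(), now() + interval '1 day', interval '6 hours');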
# boolean aggregates
{ oid => '2515', descr => 'aggregate transition function',
proname => 'booland_statefunc', prorettype => 'bool',
proargtypes => 'bool bool', prosrc => 'booland_statefunc' },
{ oid => '2516', descr => 'aggregate transition function',
proname => 'boolor_statefunc', prorettype => 'bool',
proargtypes => 'bool bool', prosrc => 'boolor_statefunc' },
{ oid => '3496', descr => 'aggregate transition function',
proname => 'bool_accum', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal bool', prosrc => 'bool_accum' },
{ oid => '3497', descr => 'aggregate transition function',
proname => 'bool_accum_inv', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal bool', prosrc => 'bool_accum_inv' },
{ oid => '3498', descr => 'aggregate final function',
proname => 'bool_alltrue', prorettype => 'bool', proargtypes => 'internal',
prosrc => 'bool_alltrue' },
{ oid => '3499', descr => 'aggregate final function',
proname => 'bool_anytrue', prorettype => 'bool', proargtypes => 'internal',
prosrc => 'bool_anytrue' },
{ oid => '2517', descr => 'boolean-and aggregate',
proname => 'bool_and', prokind => 'a', proisstrict => 'f',
prorettype => 'bool', proargtypes => 'bool', prosrc => 'aggregate_dummy' },
# ANY, SOME? These names conflict with subquery operators, so they are not used here.
{ oid => '2518', descr => 'boolean-or aggregate',
proname => 'bool_or', prokind => 'a', proisstrict => 'f',
prorettype => 'bool', proargtypes => 'bool', prosrc => 'aggregate_dummy' },
{ oid => '2519', descr => 'boolean-and aggregate',
proname => 'every', prokind => 'a', proisstrict => 'f', prorettype => 'bool',
proargtypes => 'bool', prosrc => 'aggregate_dummy' },
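# Hedged example of the boolean aggregates above (comment only, not catalog data):
#   SELECT bool_and(b), bool_or(b), every(b)
#   FROM (VALUES (true), (false), (true)) AS t(b);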
# bitwise integer aggregates
{ oid => '2236', descr => 'bitwise-and smallint aggregate',
proname => 'bit_and', prokind => 'a', proisstrict => 'f',
prorettype => 'int2', proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2237', descr => 'bitwise-or smallint aggregate',
proname => 'bit_or', prokind => 'a', proisstrict => 'f', prorettype => 'int2',
proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '6164', descr => 'bitwise-xor smallint aggregate',
proname => 'bit_xor', prokind => 'a', proisstrict => 'f',
prorettype => 'int2', proargtypes => 'int2', prosrc => 'aggregate_dummy' },
{ oid => '2238', descr => 'bitwise-and integer aggregate',
proname => 'bit_and', prokind => 'a', proisstrict => 'f',
prorettype => 'int4', proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2239', descr => 'bitwise-or integer aggregate',
proname => 'bit_or', prokind => 'a', proisstrict => 'f', prorettype => 'int4',
proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '6165', descr => 'bitwise-xor integer aggregate',
proname => 'bit_xor', prokind => 'a', proisstrict => 'f',
prorettype => 'int4', proargtypes => 'int4', prosrc => 'aggregate_dummy' },
{ oid => '2240', descr => 'bitwise-and bigint aggregate',
proname => 'bit_and', prokind => 'a', proisstrict => 'f',
prorettype => 'int8', proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2241', descr => 'bitwise-or bigint aggregate',
proname => 'bit_or', prokind => 'a', proisstrict => 'f', prorettype => 'int8',
proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '6166', descr => 'bitwise-xor bigint aggregate',
proname => 'bit_xor', prokind => 'a', proisstrict => 'f',
prorettype => 'int8', proargtypes => 'int8', prosrc => 'aggregate_dummy' },
{ oid => '2242', descr => 'bitwise-and bit aggregate',
proname => 'bit_and', prokind => 'a', proisstrict => 'f', prorettype => 'bit',
proargtypes => 'bit', prosrc => 'aggregate_dummy' },
{ oid => '2243', descr => 'bitwise-or bit aggregate',
proname => 'bit_or', prokind => 'a', proisstrict => 'f', prorettype => 'bit',
proargtypes => 'bit', prosrc => 'aggregate_dummy' },
{ oid => '6167', descr => 'bitwise-xor bit aggregate',
proname => 'bit_xor', prokind => 'a', proisstrict => 'f', prorettype => 'bit',
proargtypes => 'bit', prosrc => 'aggregate_dummy' },
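# (For the aggregate entries, prosrc => 'aggregate_dummy' is a placeholder that
# is never executed; the executor looks up the real transition and final
# functions through the matching pg_aggregate entries.)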
# formerly-missing interval + datetime operators
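# For these SQL-language entries, prosrc => 'see system_functions.sql' is only
# a placeholder; the real function bodies are installed during initdb by
# src/backend/catalog/system_functions.sql.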
{ oid => '2546',
proname => 'interval_pl_date', prolang => 'sql', prorettype => 'timestamp',
proargtypes => 'interval date', prosrc => 'see system_functions.sql' },
{ oid => '2547',
proname => 'interval_pl_timetz', prolang => 'sql', prorettype => 'timetz',
proargtypes => 'interval timetz', prosrc => 'see system_functions.sql' },
{ oid => '2548',
proname => 'interval_pl_timestamp', prolang => 'sql',
prorettype => 'timestamp', proargtypes => 'interval timestamp',
prosrc => 'see system_functions.sql' },
{ oid => '2549',
proname => 'interval_pl_timestamptz', prolang => 'sql', provolatile => 's',
prorettype => 'timestamptz', proargtypes => 'interval timestamptz',
prosrc => 'see system_functions.sql' },
{ oid => '2550',
proname => 'integer_pl_date', prolang => 'sql', prorettype => 'date',
proargtypes => 'int4 date', prosrc => 'see system_functions.sql' },
{ oid => '2556', descr => 'get OIDs of databases in a tablespace',
proname => 'pg_tablespace_databases', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'oid', proargtypes => 'oid',
prosrc => 'pg_tablespace_databases' },
{ oid => '2557', descr => 'convert int4 to boolean',
proname => 'bool', proleakproof => 't', prorettype => 'bool',
proargtypes => 'int4', prosrc => 'int4_bool' },
{ oid => '2558', descr => 'convert boolean to int4',
proname => 'int4', proleakproof => 't', prorettype => 'int4',
proargtypes => 'bool', prosrc => 'bool_int4' },
{ oid => '2559', descr => 'current value from last used sequence',
proname => 'lastval', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => '', prosrc => 'lastval' },
# start time function
{ oid => '2560', descr => 'postmaster start time',
proname => 'pg_postmaster_start_time', provolatile => 's',
prorettype => 'timestamptz', proargtypes => '',
prosrc => 'pg_postmaster_start_time' },
# config reload time function
{ oid => '2034', descr => 'configuration load time',
proname => 'pg_conf_load_time', provolatile => 's', proparallel => 'r',
prorettype => 'timestamptz', proargtypes => '',
prosrc => 'pg_conf_load_time' },
# new functions for Y-direction rtree opclasses
{ oid => '2562',
proname => 'box_below', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_below' },
{ oid => '2563',
proname => 'box_overbelow', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_overbelow' },
{ oid => '2564',
proname => 'box_overabove', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_overabove' },
{ oid => '2565',
proname => 'box_above', prorettype => 'bool', proargtypes => 'box box',
prosrc => 'box_above' },
{ oid => '2566',
proname => 'poly_below', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_below' },
{ oid => '2567',
proname => 'poly_overbelow', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_overbelow' },
{ oid => '2568',
proname => 'poly_overabove', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_overabove' },
{ oid => '2569',
proname => 'poly_above', prorettype => 'bool',
proargtypes => 'polygon polygon', prosrc => 'poly_above' },
{ oid => '2587',
proname => 'circle_overbelow', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_overbelow' },
{ oid => '2588',
proname => 'circle_overabove', prorettype => 'bool',
proargtypes => 'circle circle', prosrc => 'circle_overabove' },
# support functions for GiST r-tree emulation
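# These fill the GiST support-procedure slots (consistent, union, compress,
# penalty, picksplit, same, distance, fetch, sortsupport); the 'internal'
# arguments stand for C-level pointers such as GISTENTRY values.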
{ oid => '2578', descr => 'GiST support',
proname => 'gist_box_consistent', prorettype => 'bool',
proargtypes => 'internal box int2 oid internal',
prosrc => 'gist_box_consistent' },
{ oid => '2581', descr => 'GiST support',
proname => 'gist_box_penalty', prorettype => 'internal',
proargtypes => 'internal internal internal', prosrc => 'gist_box_penalty' },
{ oid => '2582', descr => 'GiST support',
proname => 'gist_box_picksplit', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'gist_box_picksplit' },
{ oid => '2583', descr => 'GiST support',
proname => 'gist_box_union', prorettype => 'box',
proargtypes => 'internal internal', prosrc => 'gist_box_union' },
{ oid => '2584', descr => 'GiST support',
proname => 'gist_box_same', prorettype => 'internal',
proargtypes => 'box box internal', prosrc => 'gist_box_same' },
{ oid => '3998', descr => 'GiST support',
proname => 'gist_box_distance', prorettype => 'float8',
proargtypes => 'internal box int2 oid internal',
prosrc => 'gist_box_distance' },
{ oid => '2585', descr => 'GiST support',
proname => 'gist_poly_consistent', prorettype => 'bool',
proargtypes => 'internal polygon int2 oid internal',
prosrc => 'gist_poly_consistent' },
{ oid => '2586', descr => 'GiST support',
proname => 'gist_poly_compress', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'gist_poly_compress' },
{ oid => '2591', descr => 'GiST support',
proname => 'gist_circle_consistent', prorettype => 'bool',
proargtypes => 'internal circle int2 oid internal',
prosrc => 'gist_circle_consistent' },
{ oid => '2592', descr => 'GiST support',
proname => 'gist_circle_compress', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'gist_circle_compress' },
{ oid => '1030', descr => 'GiST support',
proname => 'gist_point_compress', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'gist_point_compress' },
{ oid => '3282', descr => 'GiST support',
proname => 'gist_point_fetch', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'gist_point_fetch' },
{ oid => '2179', descr => 'GiST support',
proname => 'gist_point_consistent', prorettype => 'bool',
proargtypes => 'internal point int2 oid internal',
prosrc => 'gist_point_consistent' },
{ oid => '3064', descr => 'GiST support',
proname => 'gist_point_distance', prorettype => 'float8',
proargtypes => 'internal point int2 oid internal',
prosrc => 'gist_point_distance' },
{ oid => '3280', descr => 'GiST support',
proname => 'gist_circle_distance', prorettype => 'float8',
proargtypes => 'internal circle int2 oid internal',
prosrc => 'gist_circle_distance' },
{ oid => '3288', descr => 'GiST support',
proname => 'gist_poly_distance', prorettype => 'float8',
proargtypes => 'internal polygon int2 oid internal',
prosrc => 'gist_poly_distance' },
{ oid => '3435', descr => 'sort support',
proname => 'gist_point_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'gist_point_sortsupport' },
# GIN array support
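# ginarrayextract, ginqueryarrayextract, ginarrayconsistent and
# ginarraytriconsistent provide GIN's extractValue, extractQuery, consistent
# and triConsistent support functions for the array opclasses; the two-argument
# ginarrayextract is kept only for backward compatibility with old opclass
# definitions that still reference that signature.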
{ oid => '2743', descr => 'GIN array support',
proname => 'ginarrayextract', prorettype => 'internal',
proargtypes => 'anyarray internal internal', prosrc => 'ginarrayextract' },
{ oid => '2774', descr => 'GIN array support',
proname => 'ginqueryarrayextract', prorettype => 'internal',
proargtypes => 'anyarray internal int2 internal internal internal internal',
prosrc => 'ginqueryarrayextract' },
{ oid => '2744', descr => 'GIN array support',
proname => 'ginarrayconsistent', prorettype => 'bool',
proargtypes => 'internal int2 anyarray int4 internal internal internal internal',
prosrc => 'ginarrayconsistent' },
{ oid => '3920', descr => 'GIN array support',
proname => 'ginarraytriconsistent', prorettype => 'char',
proargtypes => 'internal int2 anyarray int4 internal internal internal',
prosrc => 'ginarraytriconsistent' },
{ oid => '3076', descr => 'GIN array support (obsolete)',
proname => 'ginarrayextract', prorettype => 'internal',
proargtypes => 'anyarray internal', prosrc => 'ginarrayextract_2args' },
# overlap/contains/contained
{ oid => '2747',
proname => 'arrayoverlap', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'arrayoverlap' },
{ oid => '2748',
proname => 'arraycontains', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'arraycontains' },
{ oid => '2749',
proname => 'arraycontained', prorettype => 'bool',
proargtypes => 'anyarray anyarray', prosrc => 'arraycontained' },
# BRIN minmax
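# brin_minmax_opcinfo, brin_minmax_add_value, brin_minmax_consistent and
# brin_minmax_union are installed as BRIN support procedures 1-4 for the
# minmax opclasses.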
{ oid => '3383', descr => 'BRIN minmax support',
proname => 'brin_minmax_opcinfo', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'brin_minmax_opcinfo' },
{ oid => '3384', descr => 'BRIN minmax support',
proname => 'brin_minmax_add_value', prorettype => 'bool',
proargtypes => 'internal internal internal internal',
prosrc => 'brin_minmax_add_value' },
{ oid => '3385', descr => 'BRIN minmax support',
proname => 'brin_minmax_consistent', prorettype => 'bool',
proargtypes => 'internal internal internal',
prosrc => 'brin_minmax_consistent' },
{ oid => '3386', descr => 'BRIN minmax support',
proname => 'brin_minmax_union', prorettype => 'bool',
proargtypes => 'internal internal internal', prosrc => 'brin_minmax_union' },
# BRIN minmax multi
{ oid => '4616', descr => 'BRIN multi minmax support',
proname => 'brin_minmax_multi_opcinfo', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'brin_minmax_multi_opcinfo' },
{ oid => '4617', descr => 'BRIN multi minmax support',
proname => 'brin_minmax_multi_add_value', prorettype => 'bool',
proargtypes => 'internal internal internal internal',
prosrc => 'brin_minmax_multi_add_value' },
{ oid => '4618', descr => 'BRIN multi minmax support',
proname => 'brin_minmax_multi_consistent', prorettype => 'bool',
proargtypes => 'internal internal internal int4',
prosrc => 'brin_minmax_multi_consistent' },
{ oid => '4619', descr => 'BRIN multi minmax support',
proname => 'brin_minmax_multi_union', prorettype => 'bool',
proargtypes => 'internal internal internal',
prosrc => 'brin_minmax_multi_union' },
{ oid => '4620', descr => 'BRIN multi minmax support',
proname => 'brin_minmax_multi_options', proisstrict => 'f',
prorettype => 'void', proargtypes => 'internal',
prosrc => 'brin_minmax_multi_options' },
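# The per-type distance functions below return the distance between two values
# of the named type; minmax-multi uses them to decide which stored values or
# ranges lie closest together and can be merged when a summary grows too large.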
{ oid => '4621', descr => 'BRIN multi minmax int2 distance',
proname => 'brin_minmax_multi_distance_int2', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_int2' },
{ oid => '4622', descr => 'BRIN multi minmax int4 distance',
proname => 'brin_minmax_multi_distance_int4', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_int4' },
{ oid => '4623', descr => 'BRIN multi minmax int8 distance',
proname => 'brin_minmax_multi_distance_int8', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_int8' },
{ oid => '4624', descr => 'BRIN multi minmax float4 distance',
proname => 'brin_minmax_multi_distance_float4', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_float4' },
{ oid => '4625', descr => 'BRIN multi minmax float8 distance',
proname => 'brin_minmax_multi_distance_float8', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_float8' },
{ oid => '4626', descr => 'BRIN multi minmax numeric distance',
proname => 'brin_minmax_multi_distance_numeric', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_numeric' },
{ oid => '4627', descr => 'BRIN multi minmax tid distance',
proname => 'brin_minmax_multi_distance_tid', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_tid' },
{ oid => '4628', descr => 'BRIN multi minmax uuid distance',
proname => 'brin_minmax_multi_distance_uuid', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_uuid' },
{ oid => '4629', descr => 'BRIN multi minmax date distance',
proname => 'brin_minmax_multi_distance_date', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_date' },
{ oid => '4630', descr => 'BRIN multi minmax time distance',
proname => 'brin_minmax_multi_distance_time', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_time' },
{ oid => '4631', descr => 'BRIN multi minmax interval distance',
proname => 'brin_minmax_multi_distance_interval', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_interval' },
{ oid => '4632', descr => 'BRIN multi minmax timetz distance',
proname => 'brin_minmax_multi_distance_timetz', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_timetz' },
{ oid => '4633', descr => 'BRIN multi minmax pg_lsn distance',
proname => 'brin_minmax_multi_distance_pg_lsn', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_pg_lsn' },
{ oid => '4634', descr => 'BRIN multi minmax macaddr distance',
proname => 'brin_minmax_multi_distance_macaddr', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_macaddr' },
{ oid => '4635', descr => 'BRIN multi minmax macaddr8 distance',
proname => 'brin_minmax_multi_distance_macaddr8', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_macaddr8' },
{ oid => '4636', descr => 'BRIN multi minmax inet distance',
proname => 'brin_minmax_multi_distance_inet', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_inet' },
{ oid => '4637', descr => 'BRIN multi minmax timestamp distance',
proname => 'brin_minmax_multi_distance_timestamp', prorettype => 'float8',
proargtypes => 'internal internal',
prosrc => 'brin_minmax_multi_distance_timestamp' },
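# The distance functions above are per-type support procedures consumed
# internally by the minmax-multi framework when merging summary values; they
# are not meant to be called from SQL.  Illustrative sketch only (not catalog
# data; table and column names are made up, opclass and parameter names as
# documented for the minmax-multi opclasses):
#   CREATE INDEX ON t USING brin (a int8_minmax_multi_ops (values_per_range = 16));
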
# BRIN inclusion
{ oid => '4105', descr => 'BRIN inclusion support',
proname => 'brin_inclusion_opcinfo', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'brin_inclusion_opcinfo' },
{ oid => '4106', descr => 'BRIN inclusion support',
proname => 'brin_inclusion_add_value', prorettype => 'bool',
proargtypes => 'internal internal internal internal',
prosrc => 'brin_inclusion_add_value' },
{ oid => '4107', descr => 'BRIN inclusion support',
proname => 'brin_inclusion_consistent', prorettype => 'bool',
proargtypes => 'internal internal internal',
prosrc => 'brin_inclusion_consistent' },
{ oid => '4108', descr => 'BRIN inclusion support',
proname => 'brin_inclusion_union', prorettype => 'bool',
proargtypes => 'internal internal internal',
prosrc => 'brin_inclusion_union' },
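# The four procedures above implement the generic BRIN inclusion framework
# (opcinfo, add_value, consistent, union).  Illustrative sketch only (not
# catalog data; table and column names are made up, opclass name as
# documented):
#   CREATE INDEX ON t USING brin (addr inet_inclusion_ops);
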
# BRIN bloom
{ oid => '4591', descr => 'BRIN bloom support',
proname => 'brin_bloom_opcinfo', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'brin_bloom_opcinfo' },
{ oid => '4592', descr => 'BRIN bloom support',
proname => 'brin_bloom_add_value', prorettype => 'bool',
proargtypes => 'internal internal internal internal',
prosrc => 'brin_bloom_add_value' },
{ oid => '4593', descr => 'BRIN bloom support',
proname => 'brin_bloom_consistent', prorettype => 'bool',
proargtypes => 'internal internal internal int4',
prosrc => 'brin_bloom_consistent' },
{ oid => '4594', descr => 'BRIN bloom support',
proname => 'brin_bloom_union', prorettype => 'bool',
proargtypes => 'internal internal internal', prosrc => 'brin_bloom_union' },
{ oid => '4595', descr => 'BRIN bloom support',
proname => 'brin_bloom_options', proisstrict => 'f', prorettype => 'void',
proargtypes => 'internal', prosrc => 'brin_bloom_options' },
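# The bloom support procedures above summarize each page range as a Bloom
# filter over hashes of the indexed values, so the resulting opclasses answer
# equality queries only.  Illustrative sketch only (not catalog data; table
# and column names are made up, opclass and parameter names as documented for
# the bloom opclasses):
#   CREATE INDEX ON t USING brin
#     (a int4_bloom_ops (false_positive_rate = 0.05, n_distinct_per_range = 100));
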
# userlock replacements
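# The functions below provide session- and transaction-level advisory locks
# on either a single int8 key or a pair of int4 keys.  Illustrative usage
# sketch only (not catalog data):
#   SELECT pg_advisory_lock(42);      -- wait for, then hold, a session-level lock
#   SELECT pg_try_advisory_lock(42);  -- same lock, but returns false instead of waiting
#   SELECT pg_advisory_lock(1, 2);    -- two-key (int4, int4) variant
#   SELECT pg_advisory_unlock(42);    -- release the session-level lock on key 42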
{ oid => '2880', descr => 'obtain exclusive advisory lock',
proname => 'pg_advisory_lock', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => 'int8',
prosrc => 'pg_advisory_lock_int8' },
{ oid => '3089', descr => 'obtain exclusive advisory lock',
proname => 'pg_advisory_xact_lock', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => 'int8',
prosrc => 'pg_advisory_xact_lock_int8' },
{ oid => '2881', descr => 'obtain shared advisory lock',
proname => 'pg_advisory_lock_shared', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => 'int8',
prosrc => 'pg_advisory_lock_shared_int8' },
{ oid => '3090', descr => 'obtain shared advisory lock',
proname => 'pg_advisory_xact_lock_shared', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'int8',
prosrc => 'pg_advisory_xact_lock_shared_int8' },
{ oid => '2882', descr => 'obtain exclusive advisory lock if available',
proname => 'pg_try_advisory_lock', provolatile => 'v', proparallel => 'r',
prorettype => 'bool', proargtypes => 'int8',
prosrc => 'pg_try_advisory_lock_int8' },
{ oid => '3091', descr => 'obtain exclusive advisory lock if available',
proname => 'pg_try_advisory_xact_lock', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => 'int8',
prosrc => 'pg_try_advisory_xact_lock_int8' },
{ oid => '2883', descr => 'obtain shared advisory lock if available',
proname => 'pg_try_advisory_lock_shared', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => 'int8',
prosrc => 'pg_try_advisory_lock_shared_int8' },
{ oid => '3092', descr => 'obtain shared advisory lock if available',
proname => 'pg_try_advisory_xact_lock_shared', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => 'int8',
prosrc => 'pg_try_advisory_xact_lock_shared_int8' },
{ oid => '2884', descr => 'release exclusive advisory lock',
proname => 'pg_advisory_unlock', provolatile => 'v', proparallel => 'r',
prorettype => 'bool', proargtypes => 'int8',
prosrc => 'pg_advisory_unlock_int8' },
{ oid => '2885', descr => 'release shared advisory lock',
proname => 'pg_advisory_unlock_shared', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => 'int8',
prosrc => 'pg_advisory_unlock_shared_int8' },
{ oid => '2886', descr => 'obtain exclusive advisory lock',
proname => 'pg_advisory_lock', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => 'int4 int4',
prosrc => 'pg_advisory_lock_int4' },
{ oid => '3093', descr => 'obtain exclusive advisory lock',
proname => 'pg_advisory_xact_lock', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => 'int4 int4',
prosrc => 'pg_advisory_xact_lock_int4' },
{ oid => '2887', descr => 'obtain shared advisory lock',
proname => 'pg_advisory_lock_shared', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => 'int4 int4',
prosrc => 'pg_advisory_lock_shared_int4' },
{ oid => '3094', descr => 'obtain shared advisory lock',
proname => 'pg_advisory_xact_lock_shared', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'int4 int4',
prosrc => 'pg_advisory_xact_lock_shared_int4' },
{ oid => '2888', descr => 'obtain exclusive advisory lock if available',
proname => 'pg_try_advisory_lock', provolatile => 'v', proparallel => 'r',
prorettype => 'bool', proargtypes => 'int4 int4',
prosrc => 'pg_try_advisory_lock_int4' },
{ oid => '3095', descr => 'obtain exclusive advisory lock if available',
proname => 'pg_try_advisory_xact_lock', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => 'int4 int4',
prosrc => 'pg_try_advisory_xact_lock_int4' },
{ oid => '2889', descr => 'obtain shared advisory lock if available',
proname => 'pg_try_advisory_lock_shared', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => 'int4 int4',
prosrc => 'pg_try_advisory_lock_shared_int4' },
{ oid => '3096', descr => 'obtain shared advisory lock if available',
proname => 'pg_try_advisory_xact_lock_shared', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => 'int4 int4',
prosrc => 'pg_try_advisory_xact_lock_shared_int4' },
{ oid => '2890', descr => 'release exclusive advisory lock',
proname => 'pg_advisory_unlock', provolatile => 'v', proparallel => 'r',
prorettype => 'bool', proargtypes => 'int4 int4',
prosrc => 'pg_advisory_unlock_int4' },
{ oid => '2891', descr => 'release shared advisory lock',
proname => 'pg_advisory_unlock_shared', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => 'int4 int4',
prosrc => 'pg_advisory_unlock_shared_int4' },
{ oid => '2892', descr => 'release all advisory locks',
proname => 'pg_advisory_unlock_all', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => '', prosrc => 'pg_advisory_unlock_all' },
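# A usage sketch, not part of the catalog data: assuming the two-key int4
# variants defined above, the advisory-lock functions are called from SQL as
#   SELECT pg_advisory_lock(42, 7);       -- waits for the exclusive lock
#   SELECT pg_try_advisory_lock(42, 7);   -- returns true/false immediately
#   SELECT pg_advisory_unlock(42, 7);     -- releases the session-level lock
# The pg_advisory_xact_lock* variants have no unlock counterpart; those locks
# are released automatically at transaction end.
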
# XML support
{ oid => '2893', descr => 'I/O',
proname => 'xml_in', provolatile => 's', prorettype => 'xml',
proargtypes => 'cstring', prosrc => 'xml_in' },
{ oid => '2894', descr => 'I/O',
proname => 'xml_out', prorettype => 'cstring', proargtypes => 'xml',
prosrc => 'xml_out' },
{ oid => '2895', descr => 'generate XML comment',
proname => 'xmlcomment', prorettype => 'xml', proargtypes => 'text',
prosrc => 'xmlcomment' },
{ oid => '2896',
descr => 'perform a non-validating parse of a character string to produce an XML value',
proname => 'xml', provolatile => 's', prorettype => 'xml',
proargtypes => 'text', prosrc => 'texttoxml' },
{ oid => '2897', descr => 'validate an XML value',
proname => 'xmlvalidate', prorettype => 'bool', proargtypes => 'xml text',
prosrc => 'xmlvalidate' },
{ oid => '2898', descr => 'I/O',
proname => 'xml_recv', provolatile => 's', prorettype => 'xml',
proargtypes => 'internal', prosrc => 'xml_recv' },
{ oid => '2899', descr => 'I/O',
proname => 'xml_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'xml', prosrc => 'xml_send' },
{ oid => '2900', descr => 'aggregate transition function',
proname => 'xmlconcat2', proisstrict => 'f', prorettype => 'xml',
proargtypes => 'xml xml', prosrc => 'xmlconcat2' },
{ oid => '2901', descr => 'concatenate XML values',
proname => 'xmlagg', prokind => 'a', proisstrict => 'f', prorettype => 'xml',
proargtypes => 'xml', prosrc => 'aggregate_dummy' },
{ oid => '2922', descr => 'serialize an XML value to a character string',
proname => 'text', prorettype => 'text', proargtypes => 'xml',
prosrc => 'xmltotext' },
{ oid => '3813', descr => 'generate XML text node',
proname => 'xmltext', proisstrict => 't', prorettype => 'xml',
proargtypes => 'text', prosrc => 'xmltext' },
{ oid => '2923', descr => 'map table contents to XML',
proname => 'table_to_xml', procost => '100', provolatile => 's',
proparallel => 'r', prorettype => 'xml',
proargtypes => 'regclass bool bool text',
proargnames => '{tbl,nulls,tableforest,targetns}', prosrc => 'table_to_xml' },
{ oid => '2924', descr => 'map query result to XML',
proname => 'query_to_xml', procost => '100', provolatile => 'v',
proparallel => 'u', prorettype => 'xml', proargtypes => 'text bool bool text',
proargnames => '{query,nulls,tableforest,targetns}',
prosrc => 'query_to_xml' },
{ oid => '2925', descr => 'map rows from cursor to XML',
proname => 'cursor_to_xml', procost => '100', provolatile => 'v',
proparallel => 'u', prorettype => 'xml',
proargtypes => 'refcursor int4 bool bool text',
proargnames => '{cursor,count,nulls,tableforest,targetns}',
prosrc => 'cursor_to_xml' },
{ oid => '2926', descr => 'map table structure to XML Schema',
proname => 'table_to_xmlschema', procost => '100', provolatile => 's',
proparallel => 'r', prorettype => 'xml',
proargtypes => 'regclass bool bool text',
proargnames => '{tbl,nulls,tableforest,targetns}',
prosrc => 'table_to_xmlschema' },
{ oid => '2927', descr => 'map query result structure to XML Schema',
proname => 'query_to_xmlschema', procost => '100', provolatile => 'v',
proparallel => 'u', prorettype => 'xml', proargtypes => 'text bool bool text',
proargnames => '{query,nulls,tableforest,targetns}',
prosrc => 'query_to_xmlschema' },
{ oid => '2928', descr => 'map cursor structure to XML Schema',
proname => 'cursor_to_xmlschema', procost => '100', provolatile => 'v',
proparallel => 'u', prorettype => 'xml',
proargtypes => 'refcursor bool bool text',
proargnames => '{cursor,nulls,tableforest,targetns}',
prosrc => 'cursor_to_xmlschema' },
{ oid => '2929',
descr => 'map table contents and structure to XML and XML Schema',
proname => 'table_to_xml_and_xmlschema', procost => '100', provolatile => 's',
proparallel => 'r', prorettype => 'xml',
proargtypes => 'regclass bool bool text',
proargnames => '{tbl,nulls,tableforest,targetns}',
prosrc => 'table_to_xml_and_xmlschema' },
{ oid => '2930',
descr => 'map query result and structure to XML and XML Schema',
proname => 'query_to_xml_and_xmlschema', procost => '100', provolatile => 'v',
proparallel => 'u', prorettype => 'xml', proargtypes => 'text bool bool text',
proargnames => '{query,nulls,tableforest,targetns}',
prosrc => 'query_to_xml_and_xmlschema' },
{ oid => '2933', descr => 'map schema contents to XML',
proname => 'schema_to_xml', procost => '100', provolatile => 's',
proparallel => 'r', prorettype => 'xml', proargtypes => 'name bool bool text',
proargnames => '{schema,nulls,tableforest,targetns}',
prosrc => 'schema_to_xml' },
{ oid => '2934', descr => 'map schema structure to XML Schema',
proname => 'schema_to_xmlschema', procost => '100', provolatile => 's',
proparallel => 'r', prorettype => 'xml', proargtypes => 'name bool bool text',
proargnames => '{schema,nulls,tableforest,targetns}',
prosrc => 'schema_to_xmlschema' },
{ oid => '2935',
descr => 'map schema contents and structure to XML and XML Schema',
proname => 'schema_to_xml_and_xmlschema', procost => '100',
provolatile => 's', proparallel => 'r', prorettype => 'xml',
proargtypes => 'name bool bool text',
proargnames => '{schema,nulls,tableforest,targetns}',
prosrc => 'schema_to_xml_and_xmlschema' },
{ oid => '2936', descr => 'map database contents to XML',
proname => 'database_to_xml', procost => '100', provolatile => 's',
proparallel => 'r', prorettype => 'xml', proargtypes => 'bool bool text',
proargnames => '{nulls,tableforest,targetns}', prosrc => 'database_to_xml' },
{ oid => '2937', descr => 'map database structure to XML Schema',
proname => 'database_to_xmlschema', procost => '100', provolatile => 's',
proparallel => 'r', prorettype => 'xml', proargtypes => 'bool bool text',
proargnames => '{nulls,tableforest,targetns}',
prosrc => 'database_to_xmlschema' },
{ oid => '2938',
descr => 'map database contents and structure to XML and XML Schema',
proname => 'database_to_xml_and_xmlschema', procost => '100',
provolatile => 's', proparallel => 'r', prorettype => 'xml',
proargtypes => 'bool bool text',
proargnames => '{nulls,tableforest,targetns}',
prosrc => 'database_to_xml_and_xmlschema' },
{ oid => '2931',
descr => 'evaluate XPath expression, with namespaces support',
proname => 'xpath', prorettype => '_xml', proargtypes => 'text xml _text',
prosrc => 'xpath' },
{ oid => '2932', descr => 'evaluate XPath expression',
proname => 'xpath', prolang => 'sql', prorettype => '_xml',
proargtypes => 'text xml', prosrc => 'see system_functions.sql' },
{ oid => '2614', descr => 'test XML value against XPath expression',
proname => 'xmlexists', prorettype => 'bool', proargtypes => 'text xml',
prosrc => 'xmlexists' },
{ oid => '3049',
descr => 'test XML value against XPath expression, with namespace support',
proname => 'xpath_exists', prorettype => 'bool',
proargtypes => 'text xml _text', prosrc => 'xpath_exists' },
{ oid => '3050', descr => 'test XML value against XPath expression',
proname => 'xpath_exists', prolang => 'sql', prorettype => 'bool',
proargtypes => 'text xml', prosrc => 'see system_functions.sql' },
{ oid => '3051', descr => 'determine if a string is well formed XML',
proname => 'xml_is_well_formed', provolatile => 's', prorettype => 'bool',
proargtypes => 'text', prosrc => 'xml_is_well_formed' },
{ oid => '3052', descr => 'determine if a string is well formed XML document',
proname => 'xml_is_well_formed_document', prorettype => 'bool',
proargtypes => 'text', prosrc => 'xml_is_well_formed_document' },
{ oid => '3053', descr => 'determine if a string is well formed XML content',
proname => 'xml_is_well_formed_content', prorettype => 'bool',
proargtypes => 'text', prosrc => 'xml_is_well_formed_content' },
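# A usage sketch, not part of the catalog data: the XPath helpers above are
# ordinary SQL-callable functions, e.g.
#   SELECT xpath('/root/a/text()', '<root><a>1</a></root>'::xml);
#   SELECT xpath_exists('/root/a', '<root><a>1</a></root>'::xml);
#   SELECT xml_is_well_formed('<root><a>1</a></root>');
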
# json
{ oid => '321', descr => 'I/O',
proname => 'json_in', prorettype => 'json', proargtypes => 'cstring',
prosrc => 'json_in' },
{ oid => '322', descr => 'I/O',
proname => 'json_out', prorettype => 'cstring', proargtypes => 'json',
prosrc => 'json_out' },
{ oid => '323', descr => 'I/O',
proname => 'json_recv', prorettype => 'json', proargtypes => 'internal',
prosrc => 'json_recv' },
{ oid => '324', descr => 'I/O',
proname => 'json_send', prorettype => 'bytea', proargtypes => 'json',
prosrc => 'json_send' },
{ oid => '3153', descr => 'map array to json',
proname => 'array_to_json', provolatile => 's', prorettype => 'json',
proargtypes => 'anyarray', prosrc => 'array_to_json' },
{ oid => '3154', descr => 'map array to json with optional pretty printing',
proname => 'array_to_json', provolatile => 's', prorettype => 'json',
proargtypes => 'anyarray bool', prosrc => 'array_to_json_pretty' },
{ oid => '3155', descr => 'map row to json',
proname => 'row_to_json', provolatile => 's', prorettype => 'json',
proargtypes => 'record', prosrc => 'row_to_json' },
{ oid => '3156', descr => 'map row to json with optional pretty printing',
proname => 'row_to_json', provolatile => 's', prorettype => 'json',
proargtypes => 'record bool', prosrc => 'row_to_json_pretty' },
{ oid => '3173', descr => 'json aggregate transition function',
proname => 'json_agg_transfn', proisstrict => 'f', provolatile => 's',
prorettype => 'internal', proargtypes => 'internal anyelement',
prosrc => 'json_agg_transfn' },
{ oid => '6275', descr => 'json aggregate transition function',
proname => 'json_agg_strict_transfn', proisstrict => 'f', provolatile => 's',
prorettype => 'internal', proargtypes => 'internal anyelement',
prosrc => 'json_agg_strict_transfn' },
{ oid => '3174', descr => 'json aggregate final function',
proname => 'json_agg_finalfn', proisstrict => 'f', prorettype => 'json',
proargtypes => 'internal', prosrc => 'json_agg_finalfn' },
{ oid => '3175', descr => 'aggregate input into json',
proname => 'json_agg', prokind => 'a', proisstrict => 'f', provolatile => 's',
prorettype => 'json', proargtypes => 'anyelement',
prosrc => 'aggregate_dummy' },
{ oid => '6276', descr => 'aggregate input into json',
proname => 'json_agg_strict', prokind => 'a', proisstrict => 'f',
provolatile => 's', prorettype => 'json', proargtypes => 'anyelement',
prosrc => 'aggregate_dummy' },
{ oid => '3180', descr => 'json object aggregate transition function',
proname => 'json_object_agg_transfn', proisstrict => 'f', provolatile => 's',
prorettype => 'internal', proargtypes => 'internal any any',
prosrc => 'json_object_agg_transfn' },
{ oid => '6277', descr => 'json object aggregate transition function',
proname => 'json_object_agg_strict_transfn', proisstrict => 'f',
provolatile => 's', prorettype => 'internal',
proargtypes => 'internal any any',
prosrc => 'json_object_agg_strict_transfn' },
{ oid => '6278', descr => 'json object aggregate transition function',
proname => 'json_object_agg_unique_transfn', proisstrict => 'f',
provolatile => 's', prorettype => 'internal',
proargtypes => 'internal any any',
prosrc => 'json_object_agg_unique_transfn' },
{ oid => '6279', descr => 'json object aggregate transition function',
proname => 'json_object_agg_unique_strict_transfn', proisstrict => 'f',
provolatile => 's', prorettype => 'internal',
proargtypes => 'internal any any',
prosrc => 'json_object_agg_unique_strict_transfn' },
{ oid => '3196', descr => 'json object aggregate final function',
proname => 'json_object_agg_finalfn', proisstrict => 'f',
prorettype => 'json', proargtypes => 'internal',
prosrc => 'json_object_agg_finalfn' },
{ oid => '3197', descr => 'aggregate input into a json object',
proname => 'json_object_agg', prokind => 'a', proisstrict => 'f',
provolatile => 's', prorettype => 'json', proargtypes => 'any any',
proargnames => '{key,value}', prosrc => 'aggregate_dummy' },
{ oid => '6280', descr => 'aggregate non-NULL input into a json object',
proname => 'json_object_agg_strict', prokind => 'a', proisstrict => 'f',
provolatile => 's', prorettype => 'json', proargtypes => 'any any',
proargnames => '{key,value}', prosrc => 'aggregate_dummy' },
{ oid => '6281',
descr => 'aggregate input into a json object with unique keys',
proname => 'json_object_agg_unique', prokind => 'a', proisstrict => 'f',
provolatile => 's', prorettype => 'json', proargtypes => 'any any',
proargnames => '{key,value}', prosrc => 'aggregate_dummy' },
{ oid => '6282',
descr => 'aggregate non-NULL input into a json object with unique keys',
proname => 'json_object_agg_unique_strict', prokind => 'a',
proisstrict => 'f', provolatile => 's', prorettype => 'json',
proargtypes => 'any any', proargnames => '{key,value}',
prosrc => 'aggregate_dummy' },
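# Usage sketch (comment only, not catalog data), assuming a value list t(k,v):
#   SELECT json_object_agg(k, v) FROM (VALUES ('a', 1), ('b', NULL)) t(k, v);
#     -- { "a" : 1, "b" : null }
# json_object_agg_strict skips pairs whose value is NULL; the *_unique
# variants raise an error when the same key appears more than once.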
{ oid => '3198', descr => 'build a json array from any inputs',
proname => 'json_build_array', provariadic => 'any', proisstrict => 'f',
provolatile => 's', prorettype => 'json', proargtypes => 'any',
proallargtypes => '{any}', proargmodes => '{v}',
prosrc => 'json_build_array' },
{ oid => '3199', descr => 'build an empty json array',
proname => 'json_build_array', proisstrict => 'f', provolatile => 's',
prorettype => 'json', proargtypes => '',
prosrc => 'json_build_array_noargs' },
{ oid => '3200',
descr => 'build a json object from pairwise key/value inputs',
proname => 'json_build_object', provariadic => 'any', proisstrict => 'f',
provolatile => 's', prorettype => 'json', proargtypes => 'any',
proallargtypes => '{any}', proargmodes => '{v}',
prosrc => 'json_build_object' },
{ oid => '3201', descr => 'build an empty json object',
proname => 'json_build_object', proisstrict => 'f', provolatile => 's',
prorettype => 'json', proargtypes => '',
prosrc => 'json_build_object_noargs' },
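# Usage sketch (comment only, not catalog data) for the variadic builders:
#   SELECT json_build_array(1, 'two', true);       -- [1, "two", true]
#   SELECT json_build_object('a', 1, 'b', 'two');  -- {"a" : 1, "b" : "two"}
# The zero-argument overloads above return '[]' and '{}' respectively.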
{ oid => '3202', descr => 'map text array of key value pairs to json object',
proname => 'json_object', prorettype => 'json', proargtypes => '_text',
prosrc => 'json_object' },
{ oid => '3203', descr => 'map text arrays of keys and values to json object',
proname => 'json_object', prorettype => 'json', proargtypes => '_text _text',
prosrc => 'json_object_two_arg' },
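# Usage sketch (comment only, not catalog data) for json_object:
#   SELECT json_object('{a, 1, b, 2}');      -- {"a" : "1", "b" : "2"}
#   SELECT json_object('{a, b}', '{1, 2}');  -- {"a" : "1", "b" : "2"}
# In this form all values end up as JSON strings.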
{ oid => '3176', descr => 'map input to json',
proname => 'to_json', provolatile => 's', prorettype => 'json',
proargtypes => 'anyelement', prosrc => 'to_json' },
{ oid => '3261', descr => 'remove object fields with null values from json',
proname => 'json_strip_nulls', prorettype => 'json', proargtypes => 'json',
prosrc => 'json_strip_nulls' },
{ oid => '3947',
proname => 'json_object_field', prorettype => 'json',
proargtypes => 'json text', proargnames => '{from_json, field_name}',
prosrc => 'json_object_field' },
{ oid => '3948',
proname => 'json_object_field_text', prorettype => 'text',
proargtypes => 'json text', proargnames => '{from_json, field_name}',
prosrc => 'json_object_field_text' },
{ oid => '3949',
proname => 'json_array_element', prorettype => 'json',
proargtypes => 'json int4', proargnames => '{from_json, element_index}',
prosrc => 'json_array_element' },
{ oid => '3950',
proname => 'json_array_element_text', prorettype => 'text',
proargtypes => 'json int4', proargnames => '{from_json, element_index}',
prosrc => 'json_array_element_text' },
{ oid => '3951', descr => 'get value from json with path elements',
proname => 'json_extract_path', provariadic => 'text', prorettype => 'json',
proargtypes => 'json _text', proallargtypes => '{json,_text}',
proargmodes => '{i,v}', proargnames => '{from_json,path_elems}',
prosrc => 'json_extract_path' },
{ oid => '3953', descr => 'get value from json as text with path elements',
proname => 'json_extract_path_text', provariadic => 'text',
prorettype => 'text', proargtypes => 'json _text',
proallargtypes => '{json,_text}', proargmodes => '{i,v}',
proargnames => '{from_json,path_elems}', prosrc => 'json_extract_path_text' },
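# Usage sketch (comment only, not catalog data): json_object_field(_text) and
# json_array_element(_text) back the -> and ->> operators; the path variants
# can also be called directly, e.g.
#   SELECT json_extract_path('{"a": {"b": [10, 20]}}', 'a', 'b', '1');  -- 20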
{ oid => '3955', descr => 'elements of json array',
proname => 'json_array_elements', prorows => '100', proretset => 't',
prorettype => 'json', proargtypes => 'json', proallargtypes => '{json,json}',
proargmodes => '{i,o}', proargnames => '{from_json,value}',
prosrc => 'json_array_elements' },
{ oid => '3969', descr => 'elements of json array',
proname => 'json_array_elements_text', prorows => '100', proretset => 't',
prorettype => 'text', proargtypes => 'json', proallargtypes => '{json,text}',
proargmodes => '{i,o}', proargnames => '{from_json,value}',
prosrc => 'json_array_elements_text' },
{ oid => '3956', descr => 'length of json array',
proname => 'json_array_length', prorettype => 'int4', proargtypes => 'json',
prosrc => 'json_array_length' },
{ oid => '3957', descr => 'get json object keys',
proname => 'json_object_keys', prorows => '100', proretset => 't',
prorettype => 'text', proargtypes => 'json', prosrc => 'json_object_keys' },
{ oid => '3958', descr => 'key value pairs of a json object',
proname => 'json_each', prorows => '100', proretset => 't',
prorettype => 'record', proargtypes => 'json',
proallargtypes => '{json,text,json}', proargmodes => '{i,o,o}',
proargnames => '{from_json,key,value}', prosrc => 'json_each' },
{ oid => '3959', descr => 'key value pairs of a json object',
proname => 'json_each_text', prorows => '100', proretset => 't',
prorettype => 'record', proargtypes => 'json',
proallargtypes => '{json,text,text}', proargmodes => '{i,o,o}',
proargnames => '{from_json,key,value}', prosrc => 'json_each_text' },
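# Usage sketch (comment only, not catalog data) for the set-returning
# iteration functions:
#   SELECT * FROM json_each('{"a": 1, "b": "x"}');
#     -- rows (key, value): ('a', '1'), ('b', '"x"')
# json_each_text returns the values as plain text rather than json.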
{ oid => '3960', descr => 'get record fields from a json object',
proname => 'json_populate_record', proisstrict => 'f', provolatile => 's',
prorettype => 'anyelement', proargtypes => 'anyelement json bool',
prosrc => 'json_populate_record' },
{ oid => '3961',
descr => 'get set of records with fields from a json array of objects',
proname => 'json_populate_recordset', prorows => '100', proisstrict => 'f',
proretset => 't', provolatile => 's', prorettype => 'anyelement',
proargtypes => 'anyelement json bool', prosrc => 'json_populate_recordset' },
{ oid => '3204', descr => 'get record fields from a json object',
proname => 'json_to_record', provolatile => 's', prorettype => 'record',
proargtypes => 'json', prosrc => 'json_to_record' },
{ oid => '3205',
descr => 'get set of records with fields from a json array of objects',
proname => 'json_to_recordset', prorows => '100', proisstrict => 'f',
proretset => 't', provolatile => 's', prorettype => 'record',
proargtypes => 'json', prosrc => 'json_to_recordset' },
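# Usage sketch (comment only, not catalog data); the composite type name is
# hypothetical:
#   CREATE TYPE twoints AS (a int, b int);
#   SELECT * FROM json_populate_record(NULL::twoints, '{"a": 1, "b": 2}');
#     -- (1, 2)
#   SELECT * FROM json_to_record('{"a": 1, "b": "x"}') AS r(a int, b text);
#     -- (1, x)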
{ oid => '3968', descr => 'get the type of a json value',
proname => 'json_typeof', prorettype => 'text', proargtypes => 'json',
prosrc => 'json_typeof' },
# uuid
{ oid => '2952', descr => 'I/O',
proname => 'uuid_in', prorettype => 'uuid', proargtypes => 'cstring',
prosrc => 'uuid_in' },
{ oid => '2953', descr => 'I/O',
proname => 'uuid_out', prorettype => 'cstring', proargtypes => 'uuid',
prosrc => 'uuid_out' },
{ oid => '2954',
proname => 'uuid_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'uuid uuid', prosrc => 'uuid_lt' },
{ oid => '2955',
proname => 'uuid_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'uuid uuid', prosrc => 'uuid_le' },
{ oid => '2956',
proname => 'uuid_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'uuid uuid', prosrc => 'uuid_eq' },
{ oid => '2957',
proname => 'uuid_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'uuid uuid', prosrc => 'uuid_ge' },
{ oid => '2958',
proname => 'uuid_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'uuid uuid', prosrc => 'uuid_gt' },
{ oid => '2959',
proname => 'uuid_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'uuid uuid', prosrc => 'uuid_ne' },
{ oid => '2960', descr => 'less-equal-greater',
proname => 'uuid_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'uuid uuid', prosrc => 'uuid_cmp' },
{ oid => '3300', descr => 'sort support',
proname => 'uuid_sortsupport', prorettype => 'void',
proargtypes => 'internal', prosrc => 'uuid_sortsupport' },
{ oid => '2961', descr => 'I/O',
proname => 'uuid_recv', prorettype => 'uuid', proargtypes => 'internal',
prosrc => 'uuid_recv' },
{ oid => '2962', descr => 'I/O',
proname => 'uuid_send', prorettype => 'bytea', proargtypes => 'uuid',
prosrc => 'uuid_send' },
{ oid => '2963', descr => 'hash',
proname => 'uuid_hash', prorettype => 'int4', proargtypes => 'uuid',
prosrc => 'uuid_hash' },
{ oid => '3412', descr => 'hash',
proname => 'uuid_hash_extended', prorettype => 'int8',
proargtypes => 'uuid int8', prosrc => 'uuid_hash_extended' },
{ oid => '3432', descr => 'generate random UUID',
proname => 'gen_random_uuid', proleakproof => 't', provolatile => 'v',
prorettype => 'uuid', proargtypes => '', prosrc => 'gen_random_uuid' },
{ oid => '9897', descr => 'extract timestamp from UUID',
proname => 'uuid_extract_timestamp', proleakproof => 't',
prorettype => 'timestamptz', proargtypes => 'uuid',
prosrc => 'uuid_extract_timestamp' },
{ oid => '9898', descr => 'extract version from RFC 4122 UUID',
proname => 'uuid_extract_version', proleakproof => 't', prorettype => 'int2',
proargtypes => 'uuid', prosrc => 'uuid_extract_version' },
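# Usage sketch (comment only, not catalog data):
#   SELECT uuid_extract_version(gen_random_uuid());  -- 4
# uuid_extract_timestamp returns NULL for UUID versions that do not embed
# a timestamp.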
# pg_lsn
{ oid => '3229', descr => 'I/O',
proname => 'pg_lsn_in', prorettype => 'pg_lsn', proargtypes => 'cstring',
prosrc => 'pg_lsn_in' },
{ oid => '3230', descr => 'I/O',
proname => 'pg_lsn_out', prorettype => 'cstring', proargtypes => 'pg_lsn',
prosrc => 'pg_lsn_out' },
{ oid => '3231',
proname => 'pg_lsn_lt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_lt' },
{ oid => '3232',
proname => 'pg_lsn_le', proleakproof => 't', prorettype => 'bool',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_le' },
{ oid => '3233',
proname => 'pg_lsn_eq', proleakproof => 't', prorettype => 'bool',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_eq' },
{ oid => '3234',
proname => 'pg_lsn_ge', proleakproof => 't', prorettype => 'bool',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_ge' },
{ oid => '3235',
proname => 'pg_lsn_gt', proleakproof => 't', prorettype => 'bool',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_gt' },
{ oid => '3236',
proname => 'pg_lsn_ne', proleakproof => 't', prorettype => 'bool',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_ne' },
{ oid => '3237',
proname => 'pg_lsn_mi', prorettype => 'numeric',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_mi' },
{ oid => '3238', descr => 'I/O',
proname => 'pg_lsn_recv', prorettype => 'pg_lsn', proargtypes => 'internal',
prosrc => 'pg_lsn_recv' },
{ oid => '3239', descr => 'I/O',
proname => 'pg_lsn_send', prorettype => 'bytea', proargtypes => 'pg_lsn',
prosrc => 'pg_lsn_send' },
{ oid => '3251', descr => 'less-equal-greater',
proname => 'pg_lsn_cmp', proleakproof => 't', prorettype => 'int4',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_cmp' },
{ oid => '3252', descr => 'hash',
proname => 'pg_lsn_hash', prorettype => 'int4', proargtypes => 'pg_lsn',
prosrc => 'pg_lsn_hash' },
{ oid => '3413', descr => 'hash',
proname => 'pg_lsn_hash_extended', prorettype => 'int8',
proargtypes => 'pg_lsn int8', prosrc => 'pg_lsn_hash_extended' },
{ oid => '4187', descr => 'larger of two',
proname => 'pg_lsn_larger', prorettype => 'pg_lsn',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_larger' },
{ oid => '4188', descr => 'smaller of two',
proname => 'pg_lsn_smaller', prorettype => 'pg_lsn',
proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_smaller' },
{ oid => '5022',
proname => 'pg_lsn_pli', prorettype => 'pg_lsn',
proargtypes => 'pg_lsn numeric', prosrc => 'pg_lsn_pli' },
{ oid => '5023',
proname => 'numeric_pl_pg_lsn', prolang => 'sql', prorettype => 'pg_lsn',
proargtypes => 'numeric pg_lsn', prosrc => 'see system_functions.sql' },
{ oid => '5024',
proname => 'pg_lsn_mii', prorettype => 'pg_lsn',
proargtypes => 'pg_lsn numeric', prosrc => 'pg_lsn_mii' },
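# Usage sketch (comment only, not catalog data): LSN arithmetic goes through
# numeric, e.g.
#   SELECT '0/16B3748'::pg_lsn - '0/16B3740'::pg_lsn;  -- 8
#   SELECT '0/16B3740'::pg_lsn + 16::numeric;          -- 0/16B3750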
# enum related procs
{ oid => '3504', descr => 'I/O',
proname => 'anyenum_in', prorettype => 'anyenum', proargtypes => 'cstring',
prosrc => 'anyenum_in' },
{ oid => '3505', descr => 'I/O',
proname => 'anyenum_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'anyenum', prosrc => 'anyenum_out' },
{ oid => '3506', descr => 'I/O',
proname => 'enum_in', provolatile => 's', prorettype => 'anyenum',
proargtypes => 'cstring oid', prosrc => 'enum_in' },
{ oid => '3507', descr => 'I/O',
proname => 'enum_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'anyenum', prosrc => 'enum_out' },
{ oid => '3508',
proname => 'enum_eq', prorettype => 'bool', proargtypes => 'anyenum anyenum',
prosrc => 'enum_eq' },
{ oid => '3509',
proname => 'enum_ne', prorettype => 'bool', proargtypes => 'anyenum anyenum',
prosrc => 'enum_ne' },
{ oid => '3510',
proname => 'enum_lt', prorettype => 'bool', proargtypes => 'anyenum anyenum',
prosrc => 'enum_lt' },
{ oid => '3511',
proname => 'enum_gt', prorettype => 'bool', proargtypes => 'anyenum anyenum',
prosrc => 'enum_gt' },
{ oid => '3512',
proname => 'enum_le', prorettype => 'bool', proargtypes => 'anyenum anyenum',
prosrc => 'enum_le' },
{ oid => '3513',
proname => 'enum_ge', prorettype => 'bool', proargtypes => 'anyenum anyenum',
prosrc => 'enum_ge' },
{ oid => '3514', descr => 'less-equal-greater',
proname => 'enum_cmp', prorettype => 'int4', proargtypes => 'anyenum anyenum',
prosrc => 'enum_cmp' },
{ oid => '3515', descr => 'hash',
proname => 'hashenum', prorettype => 'int4', proargtypes => 'anyenum',
prosrc => 'hashenum' },
{ oid => '3414', descr => 'hash',
proname => 'hashenumextended', prorettype => 'int8',
proargtypes => 'anyenum int8', prosrc => 'hashenumextended' },
{ oid => '3524', descr => 'smaller of two',
proname => 'enum_smaller', prorettype => 'anyenum',
proargtypes => 'anyenum anyenum', prosrc => 'enum_smaller' },
{ oid => '3525', descr => 'larger of two',
proname => 'enum_larger', prorettype => 'anyenum',
proargtypes => 'anyenum anyenum', prosrc => 'enum_larger' },
{ oid => '3526', descr => 'maximum value of all enum input values',
proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'anyenum',
proargtypes => 'anyenum', prosrc => 'aggregate_dummy' },
{ oid => '3527', descr => 'minimum value of all enum input values',
proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'anyenum',
proargtypes => 'anyenum', prosrc => 'aggregate_dummy' },
{ oid => '3528', descr => 'first value of the input enum type',
proname => 'enum_first', proisstrict => 'f', provolatile => 's',
prorettype => 'anyenum', proargtypes => 'anyenum', prosrc => 'enum_first' },
{ oid => '3529', descr => 'last value of the input enum type',
proname => 'enum_last', proisstrict => 'f', provolatile => 's',
prorettype => 'anyenum', proargtypes => 'anyenum', prosrc => 'enum_last' },
{ oid => '3530',
descr => 'range between the two given enum values, as an ordered array',
proname => 'enum_range', proisstrict => 'f', provolatile => 's',
prorettype => 'anyarray', proargtypes => 'anyenum anyenum',
prosrc => 'enum_range_bounds' },
{ oid => '3531', descr => 'range of the given enum type, as an ordered array',
proname => 'enum_range', proisstrict => 'f', provolatile => 's',
prorettype => 'anyarray', proargtypes => 'anyenum',
prosrc => 'enum_range_all' },
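# Usage sketch (comment only, not catalog data); the enum type is hypothetical:
#   CREATE TYPE rgb AS ENUM ('red', 'green', 'blue');
#   SELECT enum_range(NULL::rgb);           -- {red,green,blue}
#   SELECT enum_range('green'::rgb, NULL);  -- {green,blue}
#   SELECT enum_first(NULL::rgb);           -- red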
{ oid => '3532', descr => 'I/O',
proname => 'enum_recv', provolatile => 's', prorettype => 'anyenum',
proargtypes => 'internal oid', prosrc => 'enum_recv' },
{ oid => '3533', descr => 'I/O',
proname => 'enum_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'anyenum', prosrc => 'enum_send' },
# text search stuff
{ oid => '3610', descr => 'I/O',
proname => 'tsvectorin', prorettype => 'tsvector', proargtypes => 'cstring',
prosrc => 'tsvectorin' },
{ oid => '3639', descr => 'I/O',
proname => 'tsvectorrecv', prorettype => 'tsvector',
proargtypes => 'internal', prosrc => 'tsvectorrecv' },
{ oid => '3611', descr => 'I/O',
proname => 'tsvectorout', prorettype => 'cstring', proargtypes => 'tsvector',
prosrc => 'tsvectorout' },
{ oid => '3638', descr => 'I/O',
proname => 'tsvectorsend', prorettype => 'bytea', proargtypes => 'tsvector',
prosrc => 'tsvectorsend' },
{ oid => '3612', descr => 'I/O',
proname => 'tsqueryin', prorettype => 'tsquery', proargtypes => 'cstring',
prosrc => 'tsqueryin' },
{ oid => '3641', descr => 'I/O',
proname => 'tsqueryrecv', prorettype => 'tsquery', proargtypes => 'internal',
prosrc => 'tsqueryrecv' },
{ oid => '3613', descr => 'I/O',
proname => 'tsqueryout', prorettype => 'cstring', proargtypes => 'tsquery',
prosrc => 'tsqueryout' },
{ oid => '3640', descr => 'I/O',
proname => 'tsquerysend', prorettype => 'bytea', proargtypes => 'tsquery',
prosrc => 'tsquerysend' },
{ oid => '3646', descr => 'I/O',
proname => 'gtsvectorin', prorettype => 'gtsvector', proargtypes => 'cstring',
prosrc => 'gtsvectorin' },
{ oid => '3647', descr => 'I/O',
proname => 'gtsvectorout', prorettype => 'cstring',
proargtypes => 'gtsvector', prosrc => 'gtsvectorout' },
{ oid => '3616',
proname => 'tsvector_lt', prorettype => 'bool',
proargtypes => 'tsvector tsvector', prosrc => 'tsvector_lt' },
{ oid => '3617',
proname => 'tsvector_le', prorettype => 'bool',
proargtypes => 'tsvector tsvector', prosrc => 'tsvector_le' },
{ oid => '3618',
proname => 'tsvector_eq', prorettype => 'bool',
proargtypes => 'tsvector tsvector', prosrc => 'tsvector_eq' },
{ oid => '3619',
proname => 'tsvector_ne', prorettype => 'bool',
proargtypes => 'tsvector tsvector', prosrc => 'tsvector_ne' },
{ oid => '3620',
proname => 'tsvector_ge', prorettype => 'bool',
proargtypes => 'tsvector tsvector', prosrc => 'tsvector_ge' },
{ oid => '3621',
proname => 'tsvector_gt', prorettype => 'bool',
proargtypes => 'tsvector tsvector', prosrc => 'tsvector_gt' },
{ oid => '3622', descr => 'less-equal-greater',
proname => 'tsvector_cmp', prorettype => 'int4',
proargtypes => 'tsvector tsvector', prosrc => 'tsvector_cmp' },
{ oid => '3711', descr => 'number of lexemes',
proname => 'length', prorettype => 'int4', proargtypes => 'tsvector',
prosrc => 'tsvector_length' },
{ oid => '3623', descr => 'strip position information',
proname => 'strip', prorettype => 'tsvector', proargtypes => 'tsvector',
prosrc => 'tsvector_strip' },
{ oid => '3624', descr => 'set given weight for whole tsvector',
proname => 'setweight', prorettype => 'tsvector',
proargtypes => 'tsvector char', prosrc => 'tsvector_setweight' },
{ oid => '3320', descr => 'set given weight for given lexemes',
proname => 'setweight', prorettype => 'tsvector',
proargtypes => 'tsvector char _text',
prosrc => 'tsvector_setweight_by_filter' },
{ oid => '3625',
proname => 'tsvector_concat', prorettype => 'tsvector',
proargtypes => 'tsvector tsvector', prosrc => 'tsvector_concat' },
{ oid => '3321', descr => 'delete lexeme',
proname => 'ts_delete', prorettype => 'tsvector',
proargtypes => 'tsvector text', prosrc => 'tsvector_delete_str' },
{ oid => '3323', descr => 'delete given lexemes',
proname => 'ts_delete', prorettype => 'tsvector',
proargtypes => 'tsvector _text', prosrc => 'tsvector_delete_arr' },
{ oid => '3322', descr => 'expand tsvector to set of rows',
proname => 'unnest', prorows => '10', proretset => 't',
prorettype => 'record', proargtypes => 'tsvector',
proallargtypes => '{tsvector,text,_int2,_text}', proargmodes => '{i,o,o,o}',
proargnames => '{tsvector,lexeme,positions,weights}',
prosrc => 'tsvector_unnest' },
{ oid => '3326', descr => 'convert tsvector to array of lexemes',
proname => 'tsvector_to_array', prorettype => '_text',
proargtypes => 'tsvector', prosrc => 'tsvector_to_array' },
{ oid => '3327', descr => 'build tsvector from array of lexemes',
proname => 'array_to_tsvector', prorettype => 'tsvector',
proargtypes => '_text', prosrc => 'array_to_tsvector' },
{ oid => '3319',
descr => 'delete lexemes that do not have one of the given weights',
proname => 'ts_filter', prorettype => 'tsvector',
proargtypes => 'tsvector _char', prosrc => 'tsvector_filter' },
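# Usage sketch (comment only, not catalog data) for the tsvector editing
# functions:
#   SELECT setweight('fat:2 cat:3'::tsvector, 'A');     -- 'cat':3A 'fat':2A
#   SELECT ts_delete('fat:2 cat:3'::tsvector, 'fat');   -- 'cat':3
#   SELECT ts_filter('fat:2A cat:3B'::tsvector, '{a}'); -- 'fat':2A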
{ oid => '3634',
proname => 'ts_match_vq', prorettype => 'bool',
proargtypes => 'tsvector tsquery', prosrc => 'ts_match_vq' },
{ oid => '3635',
proname => 'ts_match_qv', prorettype => 'bool',
proargtypes => 'tsquery tsvector', prosrc => 'ts_match_qv' },
{ oid => '3760',
proname => 'ts_match_tt', procost => '100', provolatile => 's',
prorettype => 'bool', proargtypes => 'text text', prosrc => 'ts_match_tt' },
{ oid => '3761',
proname => 'ts_match_tq', procost => '100', provolatile => 's',
prorettype => 'bool', proargtypes => 'text tsquery',
prosrc => 'ts_match_tq' },
{ oid => '3648', descr => 'GiST tsvector support',
proname => 'gtsvector_compress', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'gtsvector_compress' },
{ oid => '3649', descr => 'GiST tsvector support',
proname => 'gtsvector_decompress', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'gtsvector_decompress' },
{ oid => '3650', descr => 'GiST tsvector support',
proname => 'gtsvector_picksplit', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'gtsvector_picksplit' },
{ oid => '3651', descr => 'GiST tsvector support',
proname => 'gtsvector_union', prorettype => 'gtsvector',
proargtypes => 'internal internal', prosrc => 'gtsvector_union' },
{ oid => '3652', descr => 'GiST tsvector support',
proname => 'gtsvector_same', prorettype => 'internal',
proargtypes => 'gtsvector gtsvector internal', prosrc => 'gtsvector_same' },
{ oid => '3653', descr => 'GiST tsvector support',
proname => 'gtsvector_penalty', prorettype => 'internal',
proargtypes => 'internal internal internal', prosrc => 'gtsvector_penalty' },
{ oid => '3654', descr => 'GiST tsvector support',
proname => 'gtsvector_consistent', prorettype => 'bool',
proargtypes => 'internal tsvector int2 oid internal',
prosrc => 'gtsvector_consistent' },
{ oid => '3790', descr => 'GiST tsvector support (obsolete)',
proname => 'gtsvector_consistent', prorettype => 'bool',
proargtypes => 'internal gtsvector int4 oid internal',
prosrc => 'gtsvector_consistent_oldsig' },
{ oid => '3434', descr => 'GiST tsvector support',
proname => 'gtsvector_options', proisstrict => 'f', prorettype => 'void',
proargtypes => 'internal', prosrc => 'gtsvector_options' },
{ oid => '3656', descr => 'GIN tsvector support',
proname => 'gin_extract_tsvector', prorettype => 'internal',
proargtypes => 'tsvector internal internal',
prosrc => 'gin_extract_tsvector' },
{ oid => '3657', descr => 'GIN tsvector support',
proname => 'gin_extract_tsquery', prorettype => 'internal',
proargtypes => 'tsvector internal int2 internal internal internal internal',
prosrc => 'gin_extract_tsquery' },
{ oid => '3658', descr => 'GIN tsvector support',
proname => 'gin_tsquery_consistent', prorettype => 'bool',
proargtypes => 'internal int2 tsvector int4 internal internal internal internal',
prosrc => 'gin_tsquery_consistent' },
{ oid => '3921', descr => 'GIN tsvector support',
proname => 'gin_tsquery_triconsistent', prorettype => 'char',
proargtypes => 'internal int2 tsvector int4 internal internal internal',
prosrc => 'gin_tsquery_triconsistent' },
{ oid => '3724', descr => 'GIN tsvector support',
proname => 'gin_cmp_tslexeme', prorettype => 'int4',
proargtypes => 'text text', prosrc => 'gin_cmp_tslexeme' },
{ oid => '2700', descr => 'GIN tsvector support',
proname => 'gin_cmp_prefix', prorettype => 'int4',
proargtypes => 'text text int2 internal', prosrc => 'gin_cmp_prefix' },
{ oid => '3077', descr => 'GIN tsvector support (obsolete)',
proname => 'gin_extract_tsvector', prorettype => 'internal',
proargtypes => 'tsvector internal', prosrc => 'gin_extract_tsvector_2args' },
{ oid => '3087', descr => 'GIN tsvector support (obsolete)',
proname => 'gin_extract_tsquery', prorettype => 'internal',
proargtypes => 'tsquery internal int2 internal internal',
prosrc => 'gin_extract_tsquery_5args' },
{ oid => '3088', descr => 'GIN tsvector support (obsolete)',
proname => 'gin_tsquery_consistent', prorettype => 'bool',
proargtypes => 'internal int2 tsquery int4 internal internal',
prosrc => 'gin_tsquery_consistent_6args' },
{ oid => '3791', descr => 'GIN tsvector support (obsolete)',
proname => 'gin_extract_tsquery', prorettype => 'internal',
proargtypes => 'tsquery internal int2 internal internal internal internal',
prosrc => 'gin_extract_tsquery_oldsig' },
{ oid => '3792', descr => 'GIN tsvector support (obsolete)',
proname => 'gin_tsquery_consistent', prorettype => 'bool',
proargtypes => 'internal int2 tsquery int4 internal internal internal internal',
prosrc => 'gin_tsquery_consistent_oldsig' },
{ oid => '3789', descr => 'clean up GIN pending list',
proname => 'gin_clean_pending_list', provolatile => 'v', proparallel => 'u',
prorettype => 'int8', proargtypes => 'regclass',
prosrc => 'gin_clean_pending_list' },
{ oid => '3662',
proname => 'tsquery_lt', prorettype => 'bool',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_lt' },
{ oid => '3663',
proname => 'tsquery_le', prorettype => 'bool',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_le' },
{ oid => '3664',
proname => 'tsquery_eq', prorettype => 'bool',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_eq' },
{ oid => '3665',
proname => 'tsquery_ne', prorettype => 'bool',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_ne' },
{ oid => '3666',
proname => 'tsquery_ge', prorettype => 'bool',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_ge' },
{ oid => '3667',
proname => 'tsquery_gt', prorettype => 'bool',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_gt' },
{ oid => '3668', descr => 'less-equal-greater',
proname => 'tsquery_cmp', prorettype => 'int4',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_cmp' },
{ oid => '3669',
proname => 'tsquery_and', prorettype => 'tsquery',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_and' },
{ oid => '3670',
proname => 'tsquery_or', prorettype => 'tsquery',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_or' },
{ oid => '5003',
proname => 'tsquery_phrase', prorettype => 'tsquery',
proargtypes => 'tsquery tsquery', prosrc => 'tsquery_phrase' },
{ oid => '5004', descr => 'phrase-concatenate with distance',
proname => 'tsquery_phrase', prorettype => 'tsquery',
proargtypes => 'tsquery tsquery int4', prosrc => 'tsquery_phrase_distance' },
{ oid => '3671',
proname => 'tsquery_not', prorettype => 'tsquery', proargtypes => 'tsquery',
prosrc => 'tsquery_not' },
{ oid => '3691',
proname => 'tsq_mcontains', prorettype => 'bool',
proargtypes => 'tsquery tsquery', prosrc => 'tsq_mcontains' },
{ oid => '3692',
proname => 'tsq_mcontained', prorettype => 'bool',
proargtypes => 'tsquery tsquery', prosrc => 'tsq_mcontained' },
{ oid => '3672', descr => 'number of nodes',
proname => 'numnode', prorettype => 'int4', proargtypes => 'tsquery',
prosrc => 'tsquery_numnode' },
{ oid => '3673', descr => 'show real useful query for GiST index',
proname => 'querytree', prorettype => 'text', proargtypes => 'tsquery',
prosrc => 'tsquerytree' },
{ oid => '3684', descr => 'rewrite tsquery',
proname => 'ts_rewrite', prorettype => 'tsquery',
proargtypes => 'tsquery tsquery tsquery', prosrc => 'tsquery_rewrite' },
{ oid => '3685', descr => 'rewrite tsquery',
proname => 'ts_rewrite', procost => '100', provolatile => 'v',
proparallel => 'u', prorettype => 'tsquery', proargtypes => 'tsquery text',
prosrc => 'tsquery_rewrite_query' },
{ oid => '3695', descr => 'GiST tsquery support',
proname => 'gtsquery_compress', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'gtsquery_compress' },
{ oid => '3697', descr => 'GiST tsquery support',
proname => 'gtsquery_picksplit', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'gtsquery_picksplit' },
{ oid => '3698', descr => 'GiST tsquery support',
proname => 'gtsquery_union', prorettype => 'int8',
proargtypes => 'internal internal', prosrc => 'gtsquery_union' },
{ oid => '3699', descr => 'GiST tsquery support',
proname => 'gtsquery_same', prorettype => 'internal',
proargtypes => 'int8 int8 internal', prosrc => 'gtsquery_same' },
{ oid => '3700', descr => 'GiST tsquery support',
proname => 'gtsquery_penalty', prorettype => 'internal',
proargtypes => 'internal internal internal', prosrc => 'gtsquery_penalty' },
{ oid => '3701', descr => 'GiST tsquery support',
proname => 'gtsquery_consistent', prorettype => 'bool',
proargtypes => 'internal tsquery int2 oid internal',
prosrc => 'gtsquery_consistent' },
{ oid => '3793', descr => 'GiST tsquery support (obsolete)',
proname => 'gtsquery_consistent', prorettype => 'bool',
proargtypes => 'internal internal int4 oid internal',
prosrc => 'gtsquery_consistent_oldsig' },
{ oid => '3686', descr => 'restriction selectivity of tsvector @@ tsquery',
proname => 'tsmatchsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'tsmatchsel' },
{ oid => '3687', descr => 'join selectivity of tsvector @@ tsquery',
proname => 'tsmatchjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'tsmatchjoinsel' },
{ oid => '3688', descr => 'tsvector typanalyze',
proname => 'ts_typanalyze', provolatile => 's', prorettype => 'bool',
proargtypes => 'internal', prosrc => 'ts_typanalyze' },
{ oid => '3689', descr => 'statistics of tsvector column',
proname => 'ts_stat', procost => '10', prorows => '10000', proretset => 't',
provolatile => 'v', proparallel => 'u', prorettype => 'record',
proargtypes => 'text', proallargtypes => '{text,text,int4,int4}',
proargmodes => '{i,o,o,o}', proargnames => '{query,word,ndoc,nentry}',
prosrc => 'ts_stat1' },
{ oid => '3690', descr => 'statistics of tsvector column',
proname => 'ts_stat', procost => '10', prorows => '10000', proretset => 't',
provolatile => 'v', proparallel => 'u', prorettype => 'record',
proargtypes => 'text text', proallargtypes => '{text,text,text,int4,int4}',
proargmodes => '{i,i,o,o,o}',
proargnames => '{query,weights,word,ndoc,nentry}', prosrc => 'ts_stat2' },
{ oid => '3703', descr => 'relevance',
proname => 'ts_rank', prorettype => 'float4',
proargtypes => '_float4 tsvector tsquery int4', prosrc => 'ts_rank_wttf' },
{ oid => '3704', descr => 'relevance',
proname => 'ts_rank', prorettype => 'float4',
proargtypes => '_float4 tsvector tsquery', prosrc => 'ts_rank_wtt' },
{ oid => '3705', descr => 'relevance',
proname => 'ts_rank', prorettype => 'float4',
proargtypes => 'tsvector tsquery int4', prosrc => 'ts_rank_ttf' },
{ oid => '3706', descr => 'relevance',
proname => 'ts_rank', prorettype => 'float4',
proargtypes => 'tsvector tsquery', prosrc => 'ts_rank_tt' },
{ oid => '3707', descr => 'relevance',
proname => 'ts_rank_cd', prorettype => 'float4',
proargtypes => '_float4 tsvector tsquery int4', prosrc => 'ts_rankcd_wttf' },
{ oid => '3708', descr => 'relevance',
proname => 'ts_rank_cd', prorettype => 'float4',
proargtypes => '_float4 tsvector tsquery', prosrc => 'ts_rankcd_wtt' },
{ oid => '3709', descr => 'relevance',
proname => 'ts_rank_cd', prorettype => 'float4',
proargtypes => 'tsvector tsquery int4', prosrc => 'ts_rankcd_ttf' },
{ oid => '3710', descr => 'relevance',
proname => 'ts_rank_cd', prorettype => 'float4',
proargtypes => 'tsvector tsquery', prosrc => 'ts_rankcd_tt' },
{ oid => '3713', descr => 'get parser\'s token types',
proname => 'ts_token_type', prorows => '16', proretset => 't',
prorettype => 'record', proargtypes => 'oid',
proallargtypes => '{oid,int4,text,text}', proargmodes => '{i,o,o,o}',
proargnames => '{parser_oid,tokid,alias,description}',
prosrc => 'ts_token_type_byid' },
{ oid => '3714', descr => 'get parser\'s token types',
proname => 'ts_token_type', prorows => '16', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => 'text',
proallargtypes => '{text,int4,text,text}', proargmodes => '{i,o,o,o}',
proargnames => '{parser_name,tokid,alias,description}',
prosrc => 'ts_token_type_byname' },
{ oid => '3715', descr => 'parse text to tokens',
proname => 'ts_parse', prorows => '1000', proretset => 't',
prorettype => 'record', proargtypes => 'oid text',
proallargtypes => '{oid,text,int4,text}', proargmodes => '{i,i,o,o}',
proargnames => '{parser_oid,txt,tokid,token}', prosrc => 'ts_parse_byid' },
{ oid => '3716', descr => 'parse text to tokens',
proname => 'ts_parse', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => 'text text',
proallargtypes => '{text,text,int4,text}', proargmodes => '{i,i,o,o}',
proargnames => '{parser_name,txt,tokid,token}', prosrc => 'ts_parse_byname' },
{ oid => '3717', descr => '(internal)',
proname => 'prsd_start', prorettype => 'internal',
proargtypes => 'internal int4', prosrc => 'prsd_start' },
{ oid => '3718', descr => '(internal)',
proname => 'prsd_nexttoken', prorettype => 'internal',
proargtypes => 'internal internal internal', prosrc => 'prsd_nexttoken' },
{ oid => '3719', descr => '(internal)',
proname => 'prsd_end', prorettype => 'void', proargtypes => 'internal',
prosrc => 'prsd_end' },
{ oid => '3720', descr => '(internal)',
proname => 'prsd_headline', prorettype => 'internal',
proargtypes => 'internal internal tsquery', prosrc => 'prsd_headline' },
{ oid => '3721', descr => '(internal)',
proname => 'prsd_lextype', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'prsd_lextype' },
{ oid => '3723', descr => 'normalize one word by dictionary',
proname => 'ts_lexize', prorettype => '_text',
proargtypes => 'regdictionary text', prosrc => 'ts_lexize' },
{ oid => '6183', descr => 'debug function for text search configuration',
proname => 'ts_debug', prolang => 'sql', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => 'regconfig text',
proallargtypes => '{regconfig,text,text,text,text,_regdictionary,regdictionary,_text}',
proargmodes => '{i,i,o,o,o,o,o,o}',
proargnames => '{config,document,alias,description,token,dictionaries,dictionary,lexemes}',
prosrc => 'see system_functions.sql' },
{ oid => '6184',
descr => 'debug function for current text search configuration',
proname => 'ts_debug', prolang => 'sql', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'record', proargtypes => 'text',
proallargtypes => '{text,text,text,text,_regdictionary,regdictionary,_text}',
proargmodes => '{i,o,o,o,o,o,o}',
proargnames => '{document,alias,description,token,dictionaries,dictionary,lexemes}',
prosrc => 'see system_functions.sql' },
{ oid => '3725', descr => '(internal)',
proname => 'dsimple_init', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'dsimple_init' },
{ oid => '3726', descr => '(internal)',
proname => 'dsimple_lexize', prorettype => 'internal',
proargtypes => 'internal internal internal internal',
prosrc => 'dsimple_lexize' },
{ oid => '3728', descr => '(internal)',
proname => 'dsynonym_init', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'dsynonym_init' },
{ oid => '3729', descr => '(internal)',
proname => 'dsynonym_lexize', prorettype => 'internal',
proargtypes => 'internal internal internal internal',
prosrc => 'dsynonym_lexize' },
{ oid => '3731', descr => '(internal)',
proname => 'dispell_init', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'dispell_init' },
{ oid => '3732', descr => '(internal)',
proname => 'dispell_lexize', prorettype => 'internal',
proargtypes => 'internal internal internal internal',
prosrc => 'dispell_lexize' },
{ oid => '3740', descr => '(internal)',
proname => 'thesaurus_init', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'thesaurus_init' },
{ oid => '3741', descr => '(internal)',
proname => 'thesaurus_lexize', prorettype => 'internal',
proargtypes => 'internal internal internal internal',
prosrc => 'thesaurus_lexize' },
{ oid => '3743', descr => 'generate headline',
proname => 'ts_headline', procost => '100', prorettype => 'text',
proargtypes => 'regconfig text tsquery text',
prosrc => 'ts_headline_byid_opt' },
{ oid => '3744', descr => 'generate headline',
proname => 'ts_headline', procost => '100', prorettype => 'text',
proargtypes => 'regconfig text tsquery', prosrc => 'ts_headline_byid' },
{ oid => '3754', descr => 'generate headline',
proname => 'ts_headline', procost => '100', provolatile => 's',
prorettype => 'text', proargtypes => 'text tsquery text',
prosrc => 'ts_headline_opt' },
{ oid => '3755', descr => 'generate headline',
proname => 'ts_headline', procost => '100', provolatile => 's',
prorettype => 'text', proargtypes => 'text tsquery',
prosrc => 'ts_headline' },
{ oid => '4201', descr => 'generate headline from jsonb',
proname => 'ts_headline', procost => '100', prorettype => 'jsonb',
proargtypes => 'regconfig jsonb tsquery text',
prosrc => 'ts_headline_jsonb_byid_opt' },
{ oid => '4202', descr => 'generate headline from jsonb',
proname => 'ts_headline', procost => '100', prorettype => 'jsonb',
proargtypes => 'regconfig jsonb tsquery',
prosrc => 'ts_headline_jsonb_byid' },
{ oid => '4203', descr => 'generate headline from jsonb',
proname => 'ts_headline', procost => '100', provolatile => 's',
prorettype => 'jsonb', proargtypes => 'jsonb tsquery text',
prosrc => 'ts_headline_jsonb_opt' },
{ oid => '4204', descr => 'generate headline from jsonb',
proname => 'ts_headline', procost => '100', provolatile => 's',
prorettype => 'jsonb', proargtypes => 'jsonb tsquery',
prosrc => 'ts_headline_jsonb' },
{ oid => '4205', descr => 'generate headline from json',
proname => 'ts_headline', procost => '100', prorettype => 'json',
proargtypes => 'regconfig json tsquery text',
prosrc => 'ts_headline_json_byid_opt' },
{ oid => '4206', descr => 'generate headline from json',
proname => 'ts_headline', procost => '100', prorettype => 'json',
proargtypes => 'regconfig json tsquery', prosrc => 'ts_headline_json_byid' },
{ oid => '4207', descr => 'generate headline from json',
proname => 'ts_headline', procost => '100', provolatile => 's',
prorettype => 'json', proargtypes => 'json tsquery text',
prosrc => 'ts_headline_json_opt' },
{ oid => '4208', descr => 'generate headline from json',
proname => 'ts_headline', procost => '100', provolatile => 's',
prorettype => 'json', proargtypes => 'json tsquery',
prosrc => 'ts_headline_json' },
{ oid => '3745', descr => 'transform to tsvector',
proname => 'to_tsvector', procost => '100', prorettype => 'tsvector',
proargtypes => 'regconfig text', prosrc => 'to_tsvector_byid' },
{ oid => '3746', descr => 'make tsquery',
proname => 'to_tsquery', procost => '100', prorettype => 'tsquery',
proargtypes => 'regconfig text', prosrc => 'to_tsquery_byid' },
{ oid => '3747', descr => 'transform to tsquery',
proname => 'plainto_tsquery', procost => '100', prorettype => 'tsquery',
proargtypes => 'regconfig text', prosrc => 'plainto_tsquery_byid' },
{ oid => '5006', descr => 'transform to tsquery',
proname => 'phraseto_tsquery', procost => '100', prorettype => 'tsquery',
proargtypes => 'regconfig text', prosrc => 'phraseto_tsquery_byid' },
{ oid => '5007', descr => 'transform to tsquery',
proname => 'websearch_to_tsquery', procost => '100', prorettype => 'tsquery',
proargtypes => 'regconfig text', prosrc => 'websearch_to_tsquery_byid' },
{ oid => '3749', descr => 'transform to tsvector',
proname => 'to_tsvector', procost => '100', provolatile => 's',
prorettype => 'tsvector', proargtypes => 'text', prosrc => 'to_tsvector' },
{ oid => '3750', descr => 'make tsquery',
proname => 'to_tsquery', procost => '100', provolatile => 's',
prorettype => 'tsquery', proargtypes => 'text', prosrc => 'to_tsquery' },
{ oid => '3751', descr => 'transform to tsquery',
proname => 'plainto_tsquery', procost => '100', provolatile => 's',
prorettype => 'tsquery', proargtypes => 'text', prosrc => 'plainto_tsquery' },
{ oid => '5001', descr => 'transform to tsquery',
proname => 'phraseto_tsquery', procost => '100', provolatile => 's',
prorettype => 'tsquery', proargtypes => 'text',
prosrc => 'phraseto_tsquery' },
{ oid => '5009', descr => 'transform to tsquery',
proname => 'websearch_to_tsquery', procost => '100', provolatile => 's',
prorettype => 'tsquery', proargtypes => 'text',
prosrc => 'websearch_to_tsquery' },
{ oid => '4209', descr => 'transform string values from jsonb to tsvector',
proname => 'to_tsvector', procost => '100', provolatile => 's',
prorettype => 'tsvector', proargtypes => 'jsonb',
prosrc => 'jsonb_string_to_tsvector' },
{ oid => '4213', descr => 'transform specified values from jsonb to tsvector',
proname => 'jsonb_to_tsvector', procost => '100', provolatile => 's',
prorettype => 'tsvector', proargtypes => 'jsonb jsonb',
prosrc => 'jsonb_to_tsvector' },
{ oid => '4210', descr => 'transform string values from json to tsvector',
proname => 'to_tsvector', procost => '100', provolatile => 's',
prorettype => 'tsvector', proargtypes => 'json',
prosrc => 'json_string_to_tsvector' },
{ oid => '4215', descr => 'transform specified values from json to tsvector',
proname => 'json_to_tsvector', procost => '100', provolatile => 's',
prorettype => 'tsvector', proargtypes => 'json jsonb',
prosrc => 'json_to_tsvector' },
{ oid => '4211', descr => 'transform string values from jsonb to tsvector',
proname => 'to_tsvector', procost => '100', prorettype => 'tsvector',
proargtypes => 'regconfig jsonb', prosrc => 'jsonb_string_to_tsvector_byid' },
{ oid => '4214', descr => 'transform specified values from jsonb to tsvector',
proname => 'jsonb_to_tsvector', procost => '100', prorettype => 'tsvector',
proargtypes => 'regconfig jsonb jsonb', prosrc => 'jsonb_to_tsvector_byid' },
{ oid => '4212', descr => 'transform string values from json to tsvector',
proname => 'to_tsvector', procost => '100', prorettype => 'tsvector',
proargtypes => 'regconfig json', prosrc => 'json_string_to_tsvector_byid' },
{ oid => '4216', descr => 'transform specified values from json to tsvector',
proname => 'json_to_tsvector', procost => '100', prorettype => 'tsvector',
proargtypes => 'regconfig json jsonb', prosrc => 'json_to_tsvector_byid' },
{ oid => '3752', descr => 'trigger for automatic update of tsvector column',
proname => 'tsvector_update_trigger', proisstrict => 'f', provolatile => 'v',
prorettype => 'trigger', proargtypes => '',
prosrc => 'tsvector_update_trigger_byid' },
{ oid => '3753', descr => 'trigger for automatic update of tsvector column',
proname => 'tsvector_update_trigger_column', proisstrict => 'f',
provolatile => 'v', prorettype => 'trigger', proargtypes => '',
prosrc => 'tsvector_update_trigger_bycolumn' },
{ oid => '3759', descr => 'get current tsearch configuration',
proname => 'get_current_ts_config', provolatile => 's',
prorettype => 'regconfig', proargtypes => '',
prosrc => 'get_current_ts_config' },
{ oid => '3736', descr => 'I/O',
proname => 'regconfigin', provolatile => 's', prorettype => 'regconfig',
proargtypes => 'cstring', prosrc => 'regconfigin' },
{ oid => '3737', descr => 'I/O',
proname => 'regconfigout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regconfig', prosrc => 'regconfigout' },
{ oid => '3738', descr => 'I/O',
proname => 'regconfigrecv', prorettype => 'regconfig',
proargtypes => 'internal', prosrc => 'regconfigrecv' },
{ oid => '3739', descr => 'I/O',
proname => 'regconfigsend', prorettype => 'bytea', proargtypes => 'regconfig',
prosrc => 'regconfigsend' },
{ oid => '3771', descr => 'I/O',
proname => 'regdictionaryin', provolatile => 's',
prorettype => 'regdictionary', proargtypes => 'cstring',
prosrc => 'regdictionaryin' },
{ oid => '3772', descr => 'I/O',
proname => 'regdictionaryout', provolatile => 's', prorettype => 'cstring',
proargtypes => 'regdictionary', prosrc => 'regdictionaryout' },
{ oid => '3773', descr => 'I/O',
proname => 'regdictionaryrecv', prorettype => 'regdictionary',
proargtypes => 'internal', prosrc => 'regdictionaryrecv' },
{ oid => '3774', descr => 'I/O',
proname => 'regdictionarysend', prorettype => 'bytea',
proargtypes => 'regdictionary', prosrc => 'regdictionarysend' },
# jsonb
{ oid => '3806', descr => 'I/O',
proname => 'jsonb_in', prorettype => 'jsonb', proargtypes => 'cstring',
prosrc => 'jsonb_in' },
{ oid => '3805', descr => 'I/O',
proname => 'jsonb_recv', prorettype => 'jsonb', proargtypes => 'internal',
prosrc => 'jsonb_recv' },
{ oid => '3804', descr => 'I/O',
proname => 'jsonb_out', prorettype => 'cstring', proargtypes => 'jsonb',
prosrc => 'jsonb_out' },
{ oid => '3803', descr => 'I/O',
proname => 'jsonb_send', prorettype => 'bytea', proargtypes => 'jsonb',
prosrc => 'jsonb_send' },
{ oid => '3263', descr => 'map text array of key value pairs to jsonb object',
proname => 'jsonb_object', prorettype => 'jsonb', proargtypes => '_text',
prosrc => 'jsonb_object' },
{ oid => '3264', descr => 'map text array of key value pairs to jsonb object',
proname => 'jsonb_object', prorettype => 'jsonb',
proargtypes => '_text _text', prosrc => 'jsonb_object_two_arg' },
{ oid => '3787', descr => 'map input to jsonb',
proname => 'to_jsonb', provolatile => 's', prorettype => 'jsonb',
proargtypes => 'anyelement', prosrc => 'to_jsonb' },
{ oid => '3265', descr => 'jsonb aggregate transition function',
proname => 'jsonb_agg_transfn', proisstrict => 'f', provolatile => 's',
prorettype => 'internal', proargtypes => 'internal anyelement',
prosrc => 'jsonb_agg_transfn' },
{ oid => '6283', descr => 'jsonb aggregate transition function',
proname => 'jsonb_agg_strict_transfn', proisstrict => 'f', provolatile => 's',
prorettype => 'internal', proargtypes => 'internal anyelement',
prosrc => 'jsonb_agg_strict_transfn' },
{ oid => '3266', descr => 'jsonb aggregate final function',
proname => 'jsonb_agg_finalfn', proisstrict => 'f', provolatile => 's',
prorettype => 'jsonb', proargtypes => 'internal',
prosrc => 'jsonb_agg_finalfn' },
{ oid => '3267', descr => 'aggregate input into jsonb',
proname => 'jsonb_agg', prokind => 'a', proisstrict => 'f',
provolatile => 's', prorettype => 'jsonb', proargtypes => 'anyelement',
prosrc => 'aggregate_dummy' },
{ oid => '6284', descr => 'aggregate input into jsonb skipping nulls',
proname => 'jsonb_agg_strict', prokind => 'a', proisstrict => 'f',
provolatile => 's', prorettype => 'jsonb', proargtypes => 'anyelement',
prosrc => 'aggregate_dummy' },
{ oid => '3268', descr => 'jsonb object aggregate transition function',
proname => 'jsonb_object_agg_transfn', proisstrict => 'f', provolatile => 's',
prorettype => 'internal', proargtypes => 'internal any any',
prosrc => 'jsonb_object_agg_transfn' },
{ oid => '6285', descr => 'jsonb object aggregate transition function',
proname => 'jsonb_object_agg_strict_transfn', proisstrict => 'f',
provolatile => 's', prorettype => 'internal',
proargtypes => 'internal any any',
prosrc => 'jsonb_object_agg_strict_transfn' },
{ oid => '6286', descr => 'jsonb object aggregate transition function',
proname => 'jsonb_object_agg_unique_transfn', proisstrict => 'f',
provolatile => 's', prorettype => 'internal',
proargtypes => 'internal any any',
prosrc => 'jsonb_object_agg_unique_transfn' },
{ oid => '6287', descr => 'jsonb object aggregate transition function',
proname => 'jsonb_object_agg_unique_strict_transfn', proisstrict => 'f',
provolatile => 's', prorettype => 'internal',
proargtypes => 'internal any any',
prosrc => 'jsonb_object_agg_unique_strict_transfn' },
{ oid => '3269', descr => 'jsonb object aggregate final function',
proname => 'jsonb_object_agg_finalfn', proisstrict => 'f', provolatile => 's',
prorettype => 'jsonb', proargtypes => 'internal',
prosrc => 'jsonb_object_agg_finalfn' },
{ oid => '3270', descr => 'aggregate inputs into jsonb object',
proname => 'jsonb_object_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'jsonb', proargtypes => 'any any', proargnames => '{key,value}',
prosrc => 'aggregate_dummy' },
{ oid => '6288', descr => 'aggregate non-NULL inputs into jsonb object',
proname => 'jsonb_object_agg_strict', prokind => 'a', proisstrict => 'f',
prorettype => 'jsonb', proargtypes => 'any any', proargnames => '{key,value}',
prosrc => 'aggregate_dummy' },
{ oid => '6289',
descr => 'aggregate inputs into jsonb object checking key uniqueness',
proname => 'jsonb_object_agg_unique', prokind => 'a', proisstrict => 'f',
prorettype => 'jsonb', proargtypes => 'any any', proargnames => '{key,value}',
prosrc => 'aggregate_dummy' },
{ oid => '6290',
descr => 'aggregate non-NULL inputs into jsonb object checking key uniqueness',
proname => 'jsonb_object_agg_unique_strict', prokind => 'a',
proisstrict => 'f', prorettype => 'jsonb', proargtypes => 'any any',
proargnames => '{key,value}', prosrc => 'aggregate_dummy' },
{ oid => '3271', descr => 'build a jsonb array from any inputs',
proname => 'jsonb_build_array', provariadic => 'any', proisstrict => 'f',
provolatile => 's', prorettype => 'jsonb', proargtypes => 'any',
proallargtypes => '{any}', proargmodes => '{v}',
prosrc => 'jsonb_build_array' },
{ oid => '3272', descr => 'build an empty jsonb array',
proname => 'jsonb_build_array', proisstrict => 'f', provolatile => 's',
prorettype => 'jsonb', proargtypes => '',
prosrc => 'jsonb_build_array_noargs' },
{ oid => '3273',
descr => 'build a jsonb object from pairwise key/value inputs',
proname => 'jsonb_build_object', provariadic => 'any', proisstrict => 'f',
provolatile => 's', prorettype => 'jsonb', proargtypes => 'any',
proallargtypes => '{any}', proargmodes => '{v}',
prosrc => 'jsonb_build_object' },
{ oid => '3274', descr => 'build an empty jsonb object',
proname => 'jsonb_build_object', proisstrict => 'f', provolatile => 's',
prorettype => 'jsonb', proargtypes => '',
prosrc => 'jsonb_build_object_noargs' },
{ oid => '3262', descr => 'remove object fields with null values from jsonb',
proname => 'jsonb_strip_nulls', prorettype => 'jsonb', proargtypes => 'jsonb',
prosrc => 'jsonb_strip_nulls' },
{ oid => '3478',
proname => 'jsonb_object_field', prorettype => 'jsonb',
proargtypes => 'jsonb text', proargnames => '{from_json, field_name}',
prosrc => 'jsonb_object_field' },
{ oid => '3214',
proname => 'jsonb_object_field_text', prorettype => 'text',
proargtypes => 'jsonb text', proargnames => '{from_json, field_name}',
prosrc => 'jsonb_object_field_text' },
{ oid => '3215',
proname => 'jsonb_array_element', prorettype => 'jsonb',
proargtypes => 'jsonb int4', proargnames => '{from_json, element_index}',
prosrc => 'jsonb_array_element' },
{ oid => '3216',
proname => 'jsonb_array_element_text', prorettype => 'text',
proargtypes => 'jsonb int4', proargnames => '{from_json, element_index}',
prosrc => 'jsonb_array_element_text' },
{ oid => '3217', descr => 'get value from jsonb with path elements',
proname => 'jsonb_extract_path', provariadic => 'text', prorettype => 'jsonb',
proargtypes => 'jsonb _text', proallargtypes => '{jsonb,_text}',
proargmodes => '{i,v}', proargnames => '{from_json,path_elems}',
prosrc => 'jsonb_extract_path' },
{ oid => '3940', descr => 'get value from jsonb as text with path elements',
proname => 'jsonb_extract_path_text', provariadic => 'text',
prorettype => 'text', proargtypes => 'jsonb _text',
proallargtypes => '{jsonb,_text}', proargmodes => '{i,v}',
proargnames => '{from_json,path_elems}',
prosrc => 'jsonb_extract_path_text' },
{ oid => '3219', descr => 'elements of a jsonb array',
proname => 'jsonb_array_elements', prorows => '100', proretset => 't',
prorettype => 'jsonb', proargtypes => 'jsonb',
proallargtypes => '{jsonb,jsonb}', proargmodes => '{i,o}',
proargnames => '{from_json,value}', prosrc => 'jsonb_array_elements' },
{ oid => '3465', descr => 'elements of jsonb array',
proname => 'jsonb_array_elements_text', prorows => '100', proretset => 't',
prorettype => 'text', proargtypes => 'jsonb',
proallargtypes => '{jsonb,text}', proargmodes => '{i,o}',
proargnames => '{from_json,value}', prosrc => 'jsonb_array_elements_text' },
{ oid => '3207', descr => 'length of jsonb array',
proname => 'jsonb_array_length', prorettype => 'int4', proargtypes => 'jsonb',
prosrc => 'jsonb_array_length' },
{ oid => '3931', descr => 'get jsonb object keys',
proname => 'jsonb_object_keys', prorows => '100', proretset => 't',
prorettype => 'text', proargtypes => 'jsonb', prosrc => 'jsonb_object_keys' },
{ oid => '3208', descr => 'key value pairs of a jsonb object',
proname => 'jsonb_each', prorows => '100', proretset => 't',
prorettype => 'record', proargtypes => 'jsonb',
proallargtypes => '{jsonb,text,jsonb}', proargmodes => '{i,o,o}',
proargnames => '{from_json,key,value}', prosrc => 'jsonb_each' },
{ oid => '3932', descr => 'key value pairs of a jsonb object',
proname => 'jsonb_each_text', prorows => '100', proretset => 't',
prorettype => 'record', proargtypes => 'jsonb',
proallargtypes => '{jsonb,text,text}', proargmodes => '{i,o,o}',
proargnames => '{from_json,key,value}', prosrc => 'jsonb_each_text' },
{ oid => '3209', descr => 'get record fields from a jsonb object',
proname => 'jsonb_populate_record', proisstrict => 'f', provolatile => 's',
prorettype => 'anyelement', proargtypes => 'anyelement jsonb',
prosrc => 'jsonb_populate_record' },
{ oid => '9558', descr => 'test get record fields from a jsonb object',
proname => 'jsonb_populate_record_valid', proisstrict => 'f', provolatile => 's',
prorettype => 'bool', proargtypes => 'anyelement jsonb',
prosrc => 'jsonb_populate_record_valid' },
{ oid => '3475',
descr => 'get set of records with fields from a jsonb array of objects',
proname => 'jsonb_populate_recordset', prorows => '100', proisstrict => 'f',
proretset => 't', provolatile => 's', prorettype => 'anyelement',
proargtypes => 'anyelement jsonb', prosrc => 'jsonb_populate_recordset' },
{ oid => '3490', descr => 'get record fields from a jsonb object',
proname => 'jsonb_to_record', provolatile => 's', prorettype => 'record',
proargtypes => 'jsonb', prosrc => 'jsonb_to_record' },
{ oid => '3491',
descr => 'get set of records with fields from a jsonb array of objects',
proname => 'jsonb_to_recordset', prorows => '100', proisstrict => 'f',
proretset => 't', provolatile => 's', prorettype => 'record',
proargtypes => 'jsonb', prosrc => 'jsonb_to_recordset' },
{ oid => '3210', descr => 'get the type of a jsonb value',
proname => 'jsonb_typeof', prorettype => 'text', proargtypes => 'jsonb',
prosrc => 'jsonb_typeof' },
{ oid => '4038',
proname => 'jsonb_ne', prorettype => 'bool', proargtypes => 'jsonb jsonb',
prosrc => 'jsonb_ne' },
{ oid => '4039',
proname => 'jsonb_lt', prorettype => 'bool', proargtypes => 'jsonb jsonb',
prosrc => 'jsonb_lt' },
{ oid => '4040',
proname => 'jsonb_gt', prorettype => 'bool', proargtypes => 'jsonb jsonb',
prosrc => 'jsonb_gt' },
{ oid => '4041',
proname => 'jsonb_le', prorettype => 'bool', proargtypes => 'jsonb jsonb',
prosrc => 'jsonb_le' },
{ oid => '4042',
proname => 'jsonb_ge', prorettype => 'bool', proargtypes => 'jsonb jsonb',
prosrc => 'jsonb_ge' },
{ oid => '4043',
proname => 'jsonb_eq', prorettype => 'bool', proargtypes => 'jsonb jsonb',
prosrc => 'jsonb_eq' },
{ oid => '4044', descr => 'less-equal-greater',
proname => 'jsonb_cmp', prorettype => 'int4', proargtypes => 'jsonb jsonb',
prosrc => 'jsonb_cmp' },
{ oid => '4045', descr => 'hash',
proname => 'jsonb_hash', prorettype => 'int4', proargtypes => 'jsonb',
prosrc => 'jsonb_hash' },
{ oid => '3416', descr => 'hash',
proname => 'jsonb_hash_extended', prorettype => 'int8',
proargtypes => 'jsonb int8', prosrc => 'jsonb_hash_extended' },
{ oid => '4046',
proname => 'jsonb_contains', prorettype => 'bool',
proargtypes => 'jsonb jsonb', prosrc => 'jsonb_contains' },
{ oid => '4047',
proname => 'jsonb_exists', prorettype => 'bool', proargtypes => 'jsonb text',
prosrc => 'jsonb_exists' },
{ oid => '4048',
proname => 'jsonb_exists_any', prorettype => 'bool',
proargtypes => 'jsonb _text', prosrc => 'jsonb_exists_any' },
{ oid => '4049',
proname => 'jsonb_exists_all', prorettype => 'bool',
proargtypes => 'jsonb _text', prosrc => 'jsonb_exists_all' },
{ oid => '4050',
proname => 'jsonb_contained', prorettype => 'bool',
proargtypes => 'jsonb jsonb', prosrc => 'jsonb_contained' },
{ oid => '3480', descr => 'GIN support',
proname => 'gin_compare_jsonb', prorettype => 'int4',
proargtypes => 'text text', prosrc => 'gin_compare_jsonb' },
{ oid => '3482', descr => 'GIN support',
proname => 'gin_extract_jsonb', prorettype => 'internal',
proargtypes => 'jsonb internal internal', prosrc => 'gin_extract_jsonb' },
{ oid => '3483', descr => 'GIN support',
proname => 'gin_extract_jsonb_query', prorettype => 'internal',
proargtypes => 'jsonb internal int2 internal internal internal internal',
prosrc => 'gin_extract_jsonb_query' },
{ oid => '3484', descr => 'GIN support',
proname => 'gin_consistent_jsonb', prorettype => 'bool',
proargtypes => 'internal int2 jsonb int4 internal internal internal internal',
prosrc => 'gin_consistent_jsonb' },
{ oid => '3488', descr => 'GIN support',
proname => 'gin_triconsistent_jsonb', prorettype => 'char',
proargtypes => 'internal int2 jsonb int4 internal internal internal',
prosrc => 'gin_triconsistent_jsonb' },
{ oid => '3485', descr => 'GIN support',
proname => 'gin_extract_jsonb_path', prorettype => 'internal',
proargtypes => 'jsonb internal internal',
prosrc => 'gin_extract_jsonb_path' },
{ oid => '3486', descr => 'GIN support',
proname => 'gin_extract_jsonb_query_path', prorettype => 'internal',
proargtypes => 'jsonb internal int2 internal internal internal internal',
prosrc => 'gin_extract_jsonb_query_path' },
{ oid => '3487', descr => 'GIN support',
proname => 'gin_consistent_jsonb_path', prorettype => 'bool',
proargtypes => 'internal int2 jsonb int4 internal internal internal internal',
prosrc => 'gin_consistent_jsonb_path' },
{ oid => '3489', descr => 'GIN support',
proname => 'gin_triconsistent_jsonb_path', prorettype => 'char',
proargtypes => 'internal int2 jsonb int4 internal internal internal',
prosrc => 'gin_triconsistent_jsonb_path' },
{ oid => '3301',
proname => 'jsonb_concat', prorettype => 'jsonb',
proargtypes => 'jsonb jsonb', prosrc => 'jsonb_concat' },
{ oid => '3302',
proname => 'jsonb_delete', prorettype => 'jsonb', proargtypes => 'jsonb text',
prosrc => 'jsonb_delete' },
{ oid => '3303',
proname => 'jsonb_delete', prorettype => 'jsonb', proargtypes => 'jsonb int4',
prosrc => 'jsonb_delete_idx' },
{ oid => '3343',
proname => 'jsonb_delete', provariadic => 'text', prorettype => 'jsonb',
proargtypes => 'jsonb _text', proallargtypes => '{jsonb,_text}',
proargmodes => '{i,v}', proargnames => '{from_json,path_elems}',
prosrc => 'jsonb_delete_array' },
{ oid => '3304',
proname => 'jsonb_delete_path', prorettype => 'jsonb',
proargtypes => 'jsonb _text', prosrc => 'jsonb_delete_path' },
{ oid => '5054', descr => 'Set part of a jsonb, handle NULL value',
proname => 'jsonb_set_lax', proisstrict => 'f', prorettype => 'jsonb',
proargtypes => 'jsonb _text jsonb bool text', prosrc => 'jsonb_set_lax' },
{ oid => '3305', descr => 'Set part of a jsonb',
proname => 'jsonb_set', prorettype => 'jsonb',
proargtypes => 'jsonb _text jsonb bool', prosrc => 'jsonb_set' },
{ oid => '3306', descr => 'Indented text from jsonb',
proname => 'jsonb_pretty', prorettype => 'text', proargtypes => 'jsonb',
prosrc => 'jsonb_pretty' },
{ oid => '3579', descr => 'Insert value into a jsonb',
proname => 'jsonb_insert', prorettype => 'jsonb',
proargtypes => 'jsonb _text jsonb bool', prosrc => 'jsonb_insert' },
# jsonpath
{ oid => '4001', descr => 'I/O',
proname => 'jsonpath_in', prorettype => 'jsonpath', proargtypes => 'cstring',
prosrc => 'jsonpath_in' },
{ oid => '4002', descr => 'I/O',
proname => 'jsonpath_recv', prorettype => 'jsonpath',
proargtypes => 'internal', prosrc => 'jsonpath_recv' },
{ oid => '4003', descr => 'I/O',
proname => 'jsonpath_out', prorettype => 'cstring', proargtypes => 'jsonpath',
prosrc => 'jsonpath_out' },
{ oid => '4004', descr => 'I/O',
proname => 'jsonpath_send', prorettype => 'bytea', proargtypes => 'jsonpath',
prosrc => 'jsonpath_send' },
{ oid => '4005', descr => 'jsonpath exists test',
proname => 'jsonb_path_exists', prorettype => 'bool',
proargtypes => 'jsonb jsonpath jsonb bool', prosrc => 'jsonb_path_exists' },
{ oid => '4006', descr => 'jsonpath query',
proname => 'jsonb_path_query', prorows => '1000', proretset => 't',
prorettype => 'jsonb', proargtypes => 'jsonb jsonpath jsonb bool',
prosrc => 'jsonb_path_query' },
{ oid => '4007', descr => 'jsonpath query wrapped into array',
proname => 'jsonb_path_query_array', prorettype => 'jsonb',
proargtypes => 'jsonb jsonpath jsonb bool',
prosrc => 'jsonb_path_query_array' },
{ oid => '4008', descr => 'jsonpath query first item',
proname => 'jsonb_path_query_first', prorettype => 'jsonb',
proargtypes => 'jsonb jsonpath jsonb bool',
prosrc => 'jsonb_path_query_first' },
{ oid => '4009', descr => 'jsonpath match',
proname => 'jsonb_path_match', prorettype => 'bool',
proargtypes => 'jsonb jsonpath jsonb bool', prosrc => 'jsonb_path_match' },
{ oid => '1177', descr => 'jsonpath exists test with timezone',
proname => 'jsonb_path_exists_tz', provolatile => 's', prorettype => 'bool',
proargtypes => 'jsonb jsonpath jsonb bool',
prosrc => 'jsonb_path_exists_tz' },
{ oid => '1179', descr => 'jsonpath query with timezone',
proname => 'jsonb_path_query_tz', prorows => '1000', proretset => 't',
provolatile => 's', prorettype => 'jsonb',
proargtypes => 'jsonb jsonpath jsonb bool', prosrc => 'jsonb_path_query_tz' },
{ oid => '1180', descr => 'jsonpath query wrapped into array with timezone',
proname => 'jsonb_path_query_array_tz', provolatile => 's',
prorettype => 'jsonb', proargtypes => 'jsonb jsonpath jsonb bool',
prosrc => 'jsonb_path_query_array_tz' },
{ oid => '2023', descr => 'jsonpath query first item with timezone',
proname => 'jsonb_path_query_first_tz', provolatile => 's',
prorettype => 'jsonb', proargtypes => 'jsonb jsonpath jsonb bool',
prosrc => 'jsonb_path_query_first_tz' },
{ oid => '2030', descr => 'jsonpath match with timezone',
proname => 'jsonb_path_match_tz', provolatile => 's', prorettype => 'bool',
proargtypes => 'jsonb jsonpath jsonb bool', prosrc => 'jsonb_path_match_tz' },
{ oid => '4010', descr => 'implementation of @? operator',
proname => 'jsonb_path_exists_opr', prorettype => 'bool',
proargtypes => 'jsonb jsonpath', prosrc => 'jsonb_path_exists_opr' },
{ oid => '4011', descr => 'implementation of @@ operator',
proname => 'jsonb_path_match_opr', prorettype => 'bool',
proargtypes => 'jsonb jsonpath', prosrc => 'jsonb_path_match_opr' },
# historical int8/txid_snapshot variants of xid8 functions
{ oid => '2939', descr => 'I/O',
proname => 'txid_snapshot_in', prorettype => 'txid_snapshot',
proargtypes => 'cstring', prosrc => 'pg_snapshot_in' },
{ oid => '2940', descr => 'I/O',
proname => 'txid_snapshot_out', prorettype => 'cstring',
proargtypes => 'txid_snapshot', prosrc => 'pg_snapshot_out' },
{ oid => '2941', descr => 'I/O',
proname => 'txid_snapshot_recv', prorettype => 'txid_snapshot',
proargtypes => 'internal', prosrc => 'pg_snapshot_recv' },
{ oid => '2942', descr => 'I/O',
proname => 'txid_snapshot_send', prorettype => 'bytea',
proargtypes => 'txid_snapshot', prosrc => 'pg_snapshot_send' },
{ oid => '2943', descr => 'get current transaction ID',
proname => 'txid_current', provolatile => 's', proparallel => 'u',
prorettype => 'int8', proargtypes => '', prosrc => 'pg_current_xact_id' },
{ oid => '3348', descr => 'get current transaction ID',
proname => 'txid_current_if_assigned', provolatile => 's', proparallel => 'u',
prorettype => 'int8', proargtypes => '',
prosrc => 'pg_current_xact_id_if_assigned' },
{ oid => '2944', descr => 'get current snapshot',
proname => 'txid_current_snapshot', provolatile => 's',
prorettype => 'txid_snapshot', proargtypes => '',
prosrc => 'pg_current_snapshot' },
{ oid => '2945', descr => 'get xmin of snapshot',
proname => 'txid_snapshot_xmin', prorettype => 'int8',
proargtypes => 'txid_snapshot', prosrc => 'pg_snapshot_xmin' },
{ oid => '2946', descr => 'get xmax of snapshot',
proname => 'txid_snapshot_xmax', prorettype => 'int8',
proargtypes => 'txid_snapshot', prosrc => 'pg_snapshot_xmax' },
{ oid => '2947', descr => 'get set of in-progress txids in snapshot',
proname => 'txid_snapshot_xip', prorows => '50', proretset => 't',
prorettype => 'int8', proargtypes => 'txid_snapshot',
prosrc => 'pg_snapshot_xip' },
{ oid => '2948', descr => 'is txid visible in snapshot?',
proname => 'txid_visible_in_snapshot', prorettype => 'bool',
proargtypes => 'int8 txid_snapshot', prosrc => 'pg_visible_in_snapshot' },
{ oid => '3360', descr => 'commit status of transaction',
proname => 'txid_status', provolatile => 'v', prorettype => 'text',
proargtypes => 'int8', prosrc => 'pg_xact_status' },
# pg_snapshot functions
{ oid => '5055', descr => 'I/O',
proname => 'pg_snapshot_in', prorettype => 'pg_snapshot',
proargtypes => 'cstring', prosrc => 'pg_snapshot_in' },
{ oid => '5056', descr => 'I/O',
proname => 'pg_snapshot_out', prorettype => 'cstring',
proargtypes => 'pg_snapshot', prosrc => 'pg_snapshot_out' },
{ oid => '5057', descr => 'I/O',
proname => 'pg_snapshot_recv', prorettype => 'pg_snapshot',
proargtypes => 'internal', prosrc => 'pg_snapshot_recv' },
{ oid => '5058', descr => 'I/O',
proname => 'pg_snapshot_send', prorettype => 'bytea',
proargtypes => 'pg_snapshot', prosrc => 'pg_snapshot_send' },
{ oid => '5061', descr => 'get current snapshot',
proname => 'pg_current_snapshot', provolatile => 's',
prorettype => 'pg_snapshot', proargtypes => '',
prosrc => 'pg_current_snapshot' },
{ oid => '5062', descr => 'get xmin of snapshot',
proname => 'pg_snapshot_xmin', prorettype => 'xid8',
proargtypes => 'pg_snapshot', prosrc => 'pg_snapshot_xmin' },
{ oid => '5063', descr => 'get xmax of snapshot',
proname => 'pg_snapshot_xmax', prorettype => 'xid8',
proargtypes => 'pg_snapshot', prosrc => 'pg_snapshot_xmax' },
{ oid => '5064', descr => 'get set of in-progress transactions in snapshot',
proname => 'pg_snapshot_xip', prorows => '50', proretset => 't',
prorettype => 'xid8', proargtypes => 'pg_snapshot',
prosrc => 'pg_snapshot_xip' },
{ oid => '5065', descr => 'is xid8 visible in snapshot?',
proname => 'pg_visible_in_snapshot', prorettype => 'bool',
proargtypes => 'xid8 pg_snapshot', prosrc => 'pg_visible_in_snapshot' },
# transaction ID and status functions
{ oid => '5059', descr => 'get current transaction ID',
proname => 'pg_current_xact_id', provolatile => 's', proparallel => 'u',
prorettype => 'xid8', proargtypes => '', prosrc => 'pg_current_xact_id' },
{ oid => '5060', descr => 'get current transaction ID',
proname => 'pg_current_xact_id_if_assigned', provolatile => 's',
proparallel => 'u', prorettype => 'xid8', proargtypes => '',
prosrc => 'pg_current_xact_id_if_assigned' },
{ oid => '5066', descr => 'commit status of transaction',
proname => 'pg_xact_status', provolatile => 'v', prorettype => 'text',
proargtypes => 'xid8', prosrc => 'pg_xact_status' },
# record comparison using normal comparison rules
{ oid => '2981',
proname => 'record_eq', prorettype => 'bool', proargtypes => 'record record',
prosrc => 'record_eq' },
{ oid => '2982',
proname => 'record_ne', prorettype => 'bool', proargtypes => 'record record',
prosrc => 'record_ne' },
{ oid => '2983',
proname => 'record_lt', prorettype => 'bool', proargtypes => 'record record',
prosrc => 'record_lt' },
{ oid => '2984',
proname => 'record_gt', prorettype => 'bool', proargtypes => 'record record',
prosrc => 'record_gt' },
{ oid => '2985',
proname => 'record_le', prorettype => 'bool', proargtypes => 'record record',
prosrc => 'record_le' },
{ oid => '2986',
proname => 'record_ge', prorettype => 'bool', proargtypes => 'record record',
prosrc => 'record_ge' },
{ oid => '2987', descr => 'less-equal-greater',
proname => 'btrecordcmp', prorettype => 'int4',
proargtypes => 'record record', prosrc => 'btrecordcmp' },
{ oid => '6192', descr => 'hash',
proname => 'hash_record', prorettype => 'int4', proargtypes => 'record',
prosrc => 'hash_record' },
{ oid => '6193', descr => 'hash',
proname => 'hash_record_extended', prorettype => 'int8',
proargtypes => 'record int8', prosrc => 'hash_record_extended' },
# record comparison using raw byte images
{ oid => '3181',
proname => 'record_image_eq', prorettype => 'bool',
proargtypes => 'record record', prosrc => 'record_image_eq' },
{ oid => '3182',
proname => 'record_image_ne', prorettype => 'bool',
proargtypes => 'record record', prosrc => 'record_image_ne' },
{ oid => '3183',
proname => 'record_image_lt', prorettype => 'bool',
proargtypes => 'record record', prosrc => 'record_image_lt' },
{ oid => '3184',
proname => 'record_image_gt', prorettype => 'bool',
proargtypes => 'record record', prosrc => 'record_image_gt' },
{ oid => '3185',
proname => 'record_image_le', prorettype => 'bool',
proargtypes => 'record record', prosrc => 'record_image_le' },
{ oid => '3186',
proname => 'record_image_ge', prorettype => 'bool',
proargtypes => 'record record', prosrc => 'record_image_ge' },
{ oid => '3187', descr => 'less-equal-greater based on byte images',
proname => 'btrecordimagecmp', prorettype => 'int4',
proargtypes => 'record record', prosrc => 'btrecordimagecmp' },
{ oid => '5051', descr => 'equal image',
proname => 'btequalimage', prorettype => 'bool', proargtypes => 'oid',
prosrc => 'btequalimage' },
# Extensions
{ oid => '3082', descr => 'list available extensions',
proname => 'pg_available_extensions', procost => '10', prorows => '100',
proretset => 't', provolatile => 's', prorettype => 'record',
proargtypes => '', proallargtypes => '{name,text,text}',
proargmodes => '{o,o,o}', proargnames => '{name,default_version,comment}',
prosrc => 'pg_available_extensions' },
{ oid => '3083', descr => 'list available extension versions',
proname => 'pg_available_extension_versions', procost => '10',
prorows => '100', proretset => 't', provolatile => 's',
prorettype => 'record', proargtypes => '',
Invent "trusted" extensions, and remove the pg_pltemplate catalog. This patch creates a new extension property, "trusted". An extension that's marked that way in its control file can be installed by a non-superuser who has the CREATE privilege on the current database, even if the extension contains objects that normally would have to be created by a superuser. The objects within the extension will (by default) be owned by the bootstrap superuser, but the extension itself will be owned by the calling user. This allows replicating the old behavior around trusted procedural languages, without all the special-case logic in CREATE LANGUAGE. We have, however, chosen to loosen the rules slightly: formerly, only a database owner could take advantage of the special case that allowed installation of a trusted language, but now anyone who has CREATE privilege can do so. Having done that, we can delete the pg_pltemplate catalog, moving the knowledge it contained into the extension script files for the various PLs. This ends up being no change at all for the in-core PLs, but it is a large step forward for external PLs: they can now have the same ease of installation as core PLs do. The old "trusted PL" behavior was only available to PLs that had entries in pg_pltemplate, but now any extension can be marked trusted if appropriate. This also removes one of the stumbling blocks for our Python 2 -> 3 migration, since the association of "plpythonu" with Python 2 is no longer hard-wired into pg_pltemplate's initial contents. Exactly where we go from here on that front remains to be settled, but one problem is fixed. Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others. Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
2020-01-30 00:42:43 +01:00
proallargtypes => '{name,text,bool,bool,bool,name,_name,text}',
proargmodes => '{o,o,o,o,o,o,o,o}',
proargnames => '{name,version,superuser,trusted,relocatable,schema,requires,comment}',
prosrc => 'pg_available_extension_versions' },
{ oid => '3084', descr => 'list an extension\'s version update paths',
proname => 'pg_extension_update_paths', procost => '10', prorows => '100',
proretset => 't', provolatile => 's', prorettype => 'record',
proargtypes => 'name', proallargtypes => '{name,text,text,text}',
proargmodes => '{i,o,o,o}', proargnames => '{name,source,target,path}',
prosrc => 'pg_extension_update_paths' },
{ oid => '3086',
descr => 'flag an extension\'s table contents to be emitted by pg_dump',
proname => 'pg_extension_config_dump', provolatile => 'v', proparallel => 'u',
prorettype => 'void', proargtypes => 'regclass text',
prosrc => 'pg_extension_config_dump' },
# SQL-spec window functions
{ oid => '3100', descr => 'row number within partition',
proname => 'row_number', prosupport => 'window_row_number_support',
prokind => 'w', proisstrict => 'f', prorettype => 'int8', proargtypes => '',
prosrc => 'window_row_number' },
{ oid => '6233', descr => 'planner support for row_number',
proname => 'window_row_number_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'window_row_number_support' },
{ oid => '3101', descr => 'integer rank with gaps',
proname => 'rank', prosupport => 'window_rank_support', prokind => 'w',
proisstrict => 'f', prorettype => 'int8', proargtypes => '',
prosrc => 'window_rank' },
{ oid => '6234', descr => 'planner support for rank',
proname => 'window_rank_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'window_rank_support' },
{ oid => '3102', descr => 'integer rank without gaps',
proname => 'dense_rank', prosupport => 'window_dense_rank_support',
prokind => 'w', proisstrict => 'f', prorettype => 'int8', proargtypes => '',
prosrc => 'window_dense_rank' },
{ oid => '6235', descr => 'planner support for dense_rank',
proname => 'window_dense_rank_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'window_dense_rank_support' },
{ oid => '3103', descr => 'fractional rank within partition',
proname => 'percent_rank', prosupport => 'window_percent_rank_support',
prokind => 'w', proisstrict => 'f', prorettype => 'float8', proargtypes => '',
prosrc => 'window_percent_rank' },
{ oid => '6306', descr => 'planner support for percent_rank',
proname => 'window_percent_rank_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'window_percent_rank_support' },
{ oid => '3104', descr => 'fractional row number within partition',
proname => 'cume_dist', prosupport => 'window_cume_dist_support',
prokind => 'w', proisstrict => 'f', prorettype => 'float8', proargtypes => '',
prosrc => 'window_cume_dist' },
{ oid => '6307', descr => 'planner support for cume_dist',
proname => 'window_cume_dist_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'window_cume_dist_support' },
{ oid => '3105', descr => 'split rows into N groups',
proname => 'ntile', prosupport => 'window_ntile_support', prokind => 'w',
prorettype => 'int4', proargtypes => 'int4', prosrc => 'window_ntile' },
{ oid => '6308', descr => 'planner support for ntile',
proname => 'window_ntile_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'window_ntile_support' },
{ oid => '3106', descr => 'fetch the preceding row value',
proname => 'lag', prokind => 'w', prorettype => 'anyelement',
proargtypes => 'anyelement', prosrc => 'window_lag' },
{ oid => '3107', descr => 'fetch the Nth preceding row value',
proname => 'lag', prokind => 'w', prorettype => 'anyelement',
proargtypes => 'anyelement int4', prosrc => 'window_lag_with_offset' },
{ oid => '3108', descr => 'fetch the Nth preceding row value with default',
proname => 'lag', prokind => 'w', prorettype => 'anycompatible',
proargtypes => 'anycompatible int4 anycompatible',
prosrc => 'window_lag_with_offset_and_default' },
{ oid => '3109', descr => 'fetch the following row value',
proname => 'lead', prokind => 'w', prorettype => 'anyelement',
proargtypes => 'anyelement', prosrc => 'window_lead' },
{ oid => '3110', descr => 'fetch the Nth following row value',
proname => 'lead', prokind => 'w', prorettype => 'anyelement',
proargtypes => 'anyelement int4', prosrc => 'window_lead_with_offset' },
{ oid => '3111', descr => 'fetch the Nth following row value with default',
proname => 'lead', prokind => 'w', prorettype => 'anycompatible',
proargtypes => 'anycompatible int4 anycompatible',
prosrc => 'window_lead_with_offset_and_default' },
{ oid => '3112', descr => 'fetch the first row value',
proname => 'first_value', prokind => 'w', prorettype => 'anyelement',
proargtypes => 'anyelement', prosrc => 'window_first_value' },
{ oid => '3113', descr => 'fetch the last row value',
proname => 'last_value', prokind => 'w', prorettype => 'anyelement',
proargtypes => 'anyelement', prosrc => 'window_last_value' },
{ oid => '3114', descr => 'fetch the Nth row value',
proname => 'nth_value', prokind => 'w', prorettype => 'anyelement',
proargtypes => 'anyelement int4', prosrc => 'window_nth_value' },
# functions for range types
{ oid => '3832', descr => 'I/O',
proname => 'anyrange_in', provolatile => 's', prorettype => 'anyrange',
proargtypes => 'cstring oid int4', prosrc => 'anyrange_in' },
{ oid => '3833', descr => 'I/O',
proname => 'anyrange_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'anyrange', prosrc => 'anyrange_out' },
{ oid => '3834', descr => 'I/O',
proname => 'range_in', provolatile => 's', prorettype => 'anyrange',
proargtypes => 'cstring oid int4', prosrc => 'range_in' },
{ oid => '3835', descr => 'I/O',
proname => 'range_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'anyrange', prosrc => 'range_out' },
{ oid => '3836', descr => 'I/O',
proname => 'range_recv', provolatile => 's', prorettype => 'anyrange',
proargtypes => 'internal oid int4', prosrc => 'range_recv' },
{ oid => '3837', descr => 'I/O',
proname => 'range_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'anyrange', prosrc => 'range_send' },
{ oid => '3848', descr => 'lower bound of range',
proname => 'lower', prorettype => 'anyelement', proargtypes => 'anyrange',
prosrc => 'range_lower' },
{ oid => '3849', descr => 'upper bound of range',
proname => 'upper', prorettype => 'anyelement', proargtypes => 'anyrange',
prosrc => 'range_upper' },
{ oid => '3850', descr => 'is the range empty?',
proname => 'isempty', prorettype => 'bool', proargtypes => 'anyrange',
prosrc => 'range_empty' },
{ oid => '3851', descr => 'is the range\'s lower bound inclusive?',
proname => 'lower_inc', prorettype => 'bool', proargtypes => 'anyrange',
prosrc => 'range_lower_inc' },
{ oid => '3852', descr => 'is the range\'s upper bound inclusive?',
proname => 'upper_inc', prorettype => 'bool', proargtypes => 'anyrange',
prosrc => 'range_upper_inc' },
{ oid => '3853', descr => 'is the range\'s lower bound infinite?',
proname => 'lower_inf', prorettype => 'bool', proargtypes => 'anyrange',
prosrc => 'range_lower_inf' },
{ oid => '3854', descr => 'is the range\'s upper bound infinite?',
proname => 'upper_inf', prorettype => 'bool', proargtypes => 'anyrange',
prosrc => 'range_upper_inf' },
{ oid => '3855',
proname => 'range_eq', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_eq' },
{ oid => '3856',
proname => 'range_ne', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_ne' },
{ oid => '3857',
proname => 'range_overlaps', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_overlaps' },
{ oid => '3858',
proname => 'range_contains_elem', prosupport => 'range_contains_elem_support',
prorettype => 'bool', proargtypes => 'anyrange anyelement',
prosrc => 'range_contains_elem' },
{ oid => '3859',
proname => 'range_contains', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_contains' },
{ oid => '3860',
proname => 'elem_contained_by_range',
prosupport => 'elem_contained_by_range_support', prorettype => 'bool',
proargtypes => 'anyelement anyrange', prosrc => 'elem_contained_by_range' },
{ oid => '3861',
proname => 'range_contained_by', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_contained_by' },
{ oid => '3862',
proname => 'range_adjacent', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_adjacent' },
{ oid => '3863',
proname => 'range_before', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_before' },
{ oid => '3864',
proname => 'range_after', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_after' },
{ oid => '3865',
proname => 'range_overleft', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_overleft' },
{ oid => '3866',
proname => 'range_overright', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_overright' },
{ oid => '3867',
proname => 'range_union', prorettype => 'anyrange',
proargtypes => 'anyrange anyrange', prosrc => 'range_union' },
{ oid => '9998', descr => 'planner support for range_contains_elem',
proname => 'range_contains_elem_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'range_contains_elem_support' },
{ oid => '9999', descr => 'planner support for elem_contained_by_range',
proname => 'elem_contained_by_range_support', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'elem_contained_by_range_support' },
{ oid => '4057',
descr => 'the smallest range which includes both of the given ranges',
proname => 'range_merge', prorettype => 'anyrange',
proargtypes => 'anyrange anyrange', prosrc => 'range_merge' },
{ oid => '4228',
descr => 'the smallest range which includes the whole multirange',
proname => 'range_merge', prorettype => 'anyrange',
proargtypes => 'anymultirange', prosrc => 'range_merge_from_multirange' },
{ oid => '3868',
proname => 'range_intersect', prorettype => 'anyrange',
proargtypes => 'anyrange anyrange', prosrc => 'range_intersect' },
{ oid => '3869',
proname => 'range_minus', prorettype => 'anyrange',
proargtypes => 'anyrange anyrange', prosrc => 'range_minus' },
{ oid => '3870', descr => 'less-equal-greater',
proname => 'range_cmp', prorettype => 'int4',
proargtypes => 'anyrange anyrange', prosrc => 'range_cmp' },
{ oid => '3871',
proname => 'range_lt', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_lt' },
{ oid => '3872',
proname => 'range_le', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_le' },
{ oid => '3873',
proname => 'range_ge', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_ge' },
{ oid => '3874',
proname => 'range_gt', prorettype => 'bool',
proargtypes => 'anyrange anyrange', prosrc => 'range_gt' },
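
# GiST support functions for ranges and multiranges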
{ oid => '3875', descr => 'GiST support',
proname => 'range_gist_consistent', prorettype => 'bool',
proargtypes => 'internal anyrange int2 oid internal',
prosrc => 'range_gist_consistent' },
{ oid => '3876', descr => 'GiST support',
proname => 'range_gist_union', prorettype => 'anyrange',
proargtypes => 'internal internal', prosrc => 'range_gist_union' },
{ oid => '3879', descr => 'GiST support',
proname => 'range_gist_penalty', prorettype => 'internal',
proargtypes => 'internal internal internal', prosrc => 'range_gist_penalty' },
{ oid => '3880', descr => 'GiST support',
proname => 'range_gist_picksplit', prorettype => 'internal',
proargtypes => 'internal internal', prosrc => 'range_gist_picksplit' },
{ oid => '3881', descr => 'GiST support',
proname => 'range_gist_same', prorettype => 'internal',
proargtypes => 'anyrange anyrange internal', prosrc => 'range_gist_same' },
{ oid => '6154', descr => 'GiST support',
proname => 'multirange_gist_consistent', prorettype => 'bool',
proargtypes => 'internal anymultirange int2 oid internal',
prosrc => 'multirange_gist_consistent' },
{ oid => '6156', descr => 'GiST support',
proname => 'multirange_gist_compress', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'multirange_gist_compress' },
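
# hashing, statistics, and selectivity support for ranges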
{ oid => '3902', descr => 'hash a range',
proname => 'hash_range', prorettype => 'int4', proargtypes => 'anyrange',
prosrc => 'hash_range' },
{ oid => '3417', descr => 'hash a range',
proname => 'hash_range_extended', prorettype => 'int8',
proargtypes => 'anyrange int8', prosrc => 'hash_range_extended' },
{ oid => '3916', descr => 'range typanalyze',
proname => 'range_typanalyze', provolatile => 's', prorettype => 'bool',
proargtypes => 'internal', prosrc => 'range_typanalyze' },
{ oid => '3169', descr => 'restriction selectivity for range operators',
proname => 'rangesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'rangesel' },
{ oid => '4401', descr => 'range aggregate by intersecting',
proname => 'range_intersect_agg_transfn', prorettype => 'anyrange',
proargtypes => 'anyrange anyrange', prosrc => 'range_intersect_agg_transfn' },
{ oid => '4450', descr => 'range aggregate by intersecting',
proname => 'range_intersect_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'anyrange', proargtypes => 'anyrange',
prosrc => 'aggregate_dummy' },
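
# canonical and subtype-difference support functions for the built-in range types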
{ oid => '3914', descr => 'convert an int4 range to canonical form',
proname => 'int4range_canonical', prorettype => 'int4range',
proargtypes => 'int4range', prosrc => 'int4range_canonical' },
{ oid => '3928', descr => 'convert an int8 range to canonical form',
proname => 'int8range_canonical', prorettype => 'int8range',
proargtypes => 'int8range', prosrc => 'int8range_canonical' },
{ oid => '3915', descr => 'convert a date range to canonical form',
proname => 'daterange_canonical', prorettype => 'daterange',
proargtypes => 'daterange', prosrc => 'daterange_canonical' },
{ oid => '3922', descr => 'float8 difference of two int4 values',
proname => 'int4range_subdiff', prorettype => 'float8',
proargtypes => 'int4 int4', prosrc => 'int4range_subdiff' },
{ oid => '3923', descr => 'float8 difference of two int8 values',
proname => 'int8range_subdiff', prorettype => 'float8',
proargtypes => 'int8 int8', prosrc => 'int8range_subdiff' },
{ oid => '3924', descr => 'float8 difference of two numeric values',
proname => 'numrange_subdiff', prorettype => 'float8',
proargtypes => 'numeric numeric', prosrc => 'numrange_subdiff' },
{ oid => '3925', descr => 'float8 difference of two date values',
proname => 'daterange_subdiff', prorettype => 'float8',
proargtypes => 'date date', prosrc => 'daterange_subdiff' },
{ oid => '3929', descr => 'float8 difference of two timestamp values',
proname => 'tsrange_subdiff', prorettype => 'float8',
proargtypes => 'timestamp timestamp', prosrc => 'tsrange_subdiff' },
{ oid => '3930',
descr => 'float8 difference of two timestamp with time zone values',
proname => 'tstzrange_subdiff', prorettype => 'float8',
proargtypes => 'timestamptz timestamptz', prosrc => 'tstzrange_subdiff' },
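
# range constructors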
{ oid => '3840', descr => 'int4range constructor',
proname => 'int4range', proisstrict => 'f', prorettype => 'int4range',
proargtypes => 'int4 int4', prosrc => 'range_constructor2' },
{ oid => '3841', descr => 'int4range constructor',
proname => 'int4range', proisstrict => 'f', prorettype => 'int4range',
proargtypes => 'int4 int4 text', prosrc => 'range_constructor3' },
{ oid => '3844', descr => 'numrange constructor',
proname => 'numrange', proisstrict => 'f', prorettype => 'numrange',
proargtypes => 'numeric numeric', prosrc => 'range_constructor2' },
{ oid => '3845', descr => 'numrange constructor',
proname => 'numrange', proisstrict => 'f', prorettype => 'numrange',
proargtypes => 'numeric numeric text', prosrc => 'range_constructor3' },
{ oid => '3933', descr => 'tsrange constructor',
proname => 'tsrange', proisstrict => 'f', prorettype => 'tsrange',
proargtypes => 'timestamp timestamp', prosrc => 'range_constructor2' },
{ oid => '3934', descr => 'tsrange constructor',
proname => 'tsrange', proisstrict => 'f', prorettype => 'tsrange',
proargtypes => 'timestamp timestamp text', prosrc => 'range_constructor3' },
{ oid => '3937', descr => 'tstzrange constructor',
proname => 'tstzrange', proisstrict => 'f', prorettype => 'tstzrange',
proargtypes => 'timestamptz timestamptz', prosrc => 'range_constructor2' },
{ oid => '3938', descr => 'tstzrange constructor',
proname => 'tstzrange', proisstrict => 'f', prorettype => 'tstzrange',
proargtypes => 'timestamptz timestamptz text',
prosrc => 'range_constructor3' },
{ oid => '3941', descr => 'daterange constructor',
proname => 'daterange', proisstrict => 'f', prorettype => 'daterange',
proargtypes => 'date date', prosrc => 'range_constructor2' },
{ oid => '3942', descr => 'daterange constructor',
proname => 'daterange', proisstrict => 'f', prorettype => 'daterange',
proargtypes => 'date date text', prosrc => 'range_constructor3' },
{ oid => '3945', descr => 'int8range constructor',
proname => 'int8range', proisstrict => 'f', prorettype => 'int8range',
proargtypes => 'int8 int8', prosrc => 'range_constructor2' },
{ oid => '3946', descr => 'int8range constructor',
proname => 'int8range', proisstrict => 'f', prorettype => 'int8range',
proargtypes => 'int8 int8 text', prosrc => 'range_constructor3' },
# functions for multiranges
{ oid => '4229', descr => 'I/O',
proname => 'anymultirange_in', provolatile => 's',
prorettype => 'anymultirange', proargtypes => 'cstring oid int4',
prosrc => 'anymultirange_in' },
{ oid => '4230', descr => 'I/O',
proname => 'anymultirange_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'anymultirange', prosrc => 'anymultirange_out' },
{ oid => '4231', descr => 'I/O',
proname => 'multirange_in', provolatile => 's', prorettype => 'anymultirange',
proargtypes => 'cstring oid int4', prosrc => 'multirange_in' },
{ oid => '4232', descr => 'I/O',
proname => 'multirange_out', provolatile => 's', prorettype => 'cstring',
proargtypes => 'anymultirange', prosrc => 'multirange_out' },
{ oid => '4233', descr => 'I/O',
proname => 'multirange_recv', provolatile => 's',
prorettype => 'anymultirange', proargtypes => 'internal oid int4',
prosrc => 'multirange_recv' },
{ oid => '4234', descr => 'I/O',
proname => 'multirange_send', provolatile => 's', prorettype => 'bytea',
proargtypes => 'anymultirange', prosrc => 'multirange_send' },
{ oid => '4235', descr => 'lower bound of multirange',
proname => 'lower', prorettype => 'anyelement',
proargtypes => 'anymultirange', prosrc => 'multirange_lower' },
{ oid => '4236', descr => 'upper bound of multirange',
proname => 'upper', prorettype => 'anyelement',
proargtypes => 'anymultirange', prosrc => 'multirange_upper' },
{ oid => '4237', descr => 'is the multirange empty?',
proname => 'isempty', prorettype => 'bool', proargtypes => 'anymultirange',
prosrc => 'multirange_empty' },
{ oid => '4238', descr => 'is the multirange\'s lower bound inclusive?',
proname => 'lower_inc', prorettype => 'bool', proargtypes => 'anymultirange',
prosrc => 'multirange_lower_inc' },
{ oid => '4239', descr => 'is the multirange\'s upper bound inclusive?',
proname => 'upper_inc', prorettype => 'bool', proargtypes => 'anymultirange',
prosrc => 'multirange_upper_inc' },
{ oid => '4240', descr => 'is the multirange\'s lower bound infinite?',
proname => 'lower_inf', prorettype => 'bool', proargtypes => 'anymultirange',
prosrc => 'multirange_lower_inf' },
{ oid => '4241', descr => 'is the multirange\'s upper bound infinite?',
proname => 'upper_inf', prorettype => 'bool', proargtypes => 'anymultirange',
prosrc => 'multirange_upper_inf' },
{ oid => '4242', descr => 'multirange typanalyze',
proname => 'multirange_typanalyze', provolatile => 's', prorettype => 'bool',
proargtypes => 'internal', prosrc => 'multirange_typanalyze' },
{ oid => '4243', descr => 'restriction selectivity for multirange operators',
proname => 'multirangesel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'multirangesel' },
{ oid => '4244',
proname => 'multirange_eq', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_eq' },
{ oid => '4245',
proname => 'multirange_ne', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_ne' },
{ oid => '4246',
proname => 'range_overlaps_multirange', prorettype => 'bool',
proargtypes => 'anyrange anymultirange',
prosrc => 'range_overlaps_multirange' },
{ oid => '4247',
proname => 'multirange_overlaps_range', prorettype => 'bool',
proargtypes => 'anymultirange anyrange',
prosrc => 'multirange_overlaps_range' },
{ oid => '4248',
proname => 'multirange_overlaps_multirange', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_overlaps_multirange' },
{ oid => '4249',
proname => 'multirange_contains_elem', prorettype => 'bool',
proargtypes => 'anymultirange anyelement',
prosrc => 'multirange_contains_elem' },
{ oid => '4250',
proname => 'multirange_contains_range', prorettype => 'bool',
proargtypes => 'anymultirange anyrange',
prosrc => 'multirange_contains_range' },
{ oid => '4251',
proname => 'multirange_contains_multirange', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_contains_multirange' },
{ oid => '4252',
proname => 'elem_contained_by_multirange', prorettype => 'bool',
proargtypes => 'anyelement anymultirange',
prosrc => 'elem_contained_by_multirange' },
{ oid => '4253',
proname => 'range_contained_by_multirange', prorettype => 'bool',
proargtypes => 'anyrange anymultirange',
prosrc => 'range_contained_by_multirange' },
{ oid => '4541',
proname => 'range_contains_multirange', prorettype => 'bool',
proargtypes => 'anyrange anymultirange',
prosrc => 'range_contains_multirange' },
{ oid => '4542',
proname => 'multirange_contained_by_range', prorettype => 'bool',
proargtypes => 'anymultirange anyrange',
prosrc => 'multirange_contained_by_range' },
{ oid => '4254',
proname => 'multirange_contained_by_multirange', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_contained_by_multirange' },
{ oid => '4255',
proname => 'range_adjacent_multirange', prorettype => 'bool',
proargtypes => 'anyrange anymultirange',
prosrc => 'range_adjacent_multirange' },
{ oid => '4256',
proname => 'multirange_adjacent_multirange', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_adjacent_multirange' },
{ oid => '4257',
proname => 'multirange_adjacent_range', prorettype => 'bool',
proargtypes => 'anymultirange anyrange',
prosrc => 'multirange_adjacent_range' },
{ oid => '4258',
proname => 'range_before_multirange', prorettype => 'bool',
proargtypes => 'anyrange anymultirange',
prosrc => 'range_before_multirange' },
{ oid => '4259',
proname => 'multirange_before_range', prorettype => 'bool',
proargtypes => 'anymultirange anyrange',
prosrc => 'multirange_before_range' },
{ oid => '4260',
proname => 'multirange_before_multirange', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_before_multirange' },
{ oid => '4261',
proname => 'range_after_multirange', prorettype => 'bool',
proargtypes => 'anyrange anymultirange', prosrc => 'range_after_multirange' },
{ oid => '4262',
proname => 'multirange_after_range', prorettype => 'bool',
proargtypes => 'anymultirange anyrange', prosrc => 'multirange_after_range' },
{ oid => '4263',
proname => 'multirange_after_multirange', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_after_multirange' },
{ oid => '4264',
proname => 'range_overleft_multirange', prorettype => 'bool',
proargtypes => 'anyrange anymultirange',
prosrc => 'range_overleft_multirange' },
{ oid => '4265',
proname => 'multirange_overleft_range', prorettype => 'bool',
proargtypes => 'anymultirange anyrange',
prosrc => 'multirange_overleft_range' },
{ oid => '4266',
proname => 'multirange_overleft_multirange', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_overleft_multirange' },
{ oid => '4267',
proname => 'range_overright_multirange', prorettype => 'bool',
proargtypes => 'anyrange anymultirange',
prosrc => 'range_overright_multirange' },
{ oid => '4268',
proname => 'multirange_overright_range', prorettype => 'bool',
proargtypes => 'anymultirange anyrange',
prosrc => 'multirange_overright_range' },
{ oid => '4269',
proname => 'multirange_overright_multirange', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_overright_multirange' },
{ oid => '4270',
proname => 'multirange_union', prorettype => 'anymultirange',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_union' },
{ oid => '4271',
proname => 'multirange_minus', prorettype => 'anymultirange',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_minus' },
{ oid => '4272',
proname => 'multirange_intersect', prorettype => 'anymultirange',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_intersect' },
{ oid => '4273', descr => 'less-equal-greater',
proname => 'multirange_cmp', prorettype => 'int4',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_cmp' },
{ oid => '4274',
proname => 'multirange_lt', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_lt' },
{ oid => '4275',
proname => 'multirange_le', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_le' },
{ oid => '4276',
proname => 'multirange_ge', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_ge' },
{ oid => '4277',
proname => 'multirange_gt', prorettype => 'bool',
proargtypes => 'anymultirange anymultirange', prosrc => 'multirange_gt' },
{ oid => '4278', descr => 'hash a multirange',
proname => 'hash_multirange', prorettype => 'int4',
proargtypes => 'anymultirange', prosrc => 'hash_multirange' },
{ oid => '4279', descr => 'hash a multirange',
proname => 'hash_multirange_extended', prorettype => 'int8',
proargtypes => 'anymultirange int8', prosrc => 'hash_multirange_extended' },
{ oid => '4280', descr => 'int4multirange constructor',
proname => 'int4multirange', prorettype => 'int4multirange',
proargtypes => '', prosrc => 'multirange_constructor0' },
{ oid => '4281', descr => 'int4multirange constructor',
proname => 'int4multirange', prorettype => 'int4multirange',
proargtypes => 'int4range', prosrc => 'multirange_constructor1' },
{ oid => '4282', descr => 'int4multirange constructor',
proname => 'int4multirange', provariadic => 'int4range',
prorettype => 'int4multirange', proargtypes => '_int4range',
proallargtypes => '{_int4range}', proargmodes => '{v}',
prosrc => 'multirange_constructor2' },
{ oid => '4283', descr => 'nummultirange constructor',
proname => 'nummultirange', prorettype => 'nummultirange', proargtypes => '',
prosrc => 'multirange_constructor0' },
{ oid => '4284', descr => 'nummultirange constructor',
proname => 'nummultirange', prorettype => 'nummultirange',
proargtypes => 'numrange', prosrc => 'multirange_constructor1' },
{ oid => '4285', descr => 'nummultirange constructor',
proname => 'nummultirange', provariadic => 'numrange',
prorettype => 'nummultirange', proargtypes => '_numrange',
proallargtypes => '{_numrange}', proargmodes => '{v}',
prosrc => 'multirange_constructor2' },
{ oid => '4286', descr => 'tsmultirange constructor',
proname => 'tsmultirange', prorettype => 'tsmultirange', proargtypes => '',
prosrc => 'multirange_constructor0' },
{ oid => '4287', descr => 'tsmultirange constructor',
proname => 'tsmultirange', prorettype => 'tsmultirange',
proargtypes => 'tsrange', prosrc => 'multirange_constructor1' },
{ oid => '4288', descr => 'tsmultirange constructor',
proname => 'tsmultirange', provariadic => 'tsrange',
prorettype => 'tsmultirange', proargtypes => '_tsrange',
proallargtypes => '{_tsrange}', proargmodes => '{v}',
prosrc => 'multirange_constructor2' },
{ oid => '4289', descr => 'tstzmultirange constructor',
proname => 'tstzmultirange', prorettype => 'tstzmultirange',
proargtypes => '', prosrc => 'multirange_constructor0' },
{ oid => '4290', descr => 'tstzmultirange constructor',
proname => 'tstzmultirange', prorettype => 'tstzmultirange',
proargtypes => 'tstzrange', prosrc => 'multirange_constructor1' },
{ oid => '4291', descr => 'tstzmultirange constructor',
proname => 'tstzmultirange', provariadic => 'tstzrange',
prorettype => 'tstzmultirange', proargtypes => '_tstzrange',
proallargtypes => '{_tstzrange}', proargmodes => '{v}',
prosrc => 'multirange_constructor2' },
{ oid => '4292', descr => 'datemultirange constructor',
proname => 'datemultirange', prorettype => 'datemultirange',
proargtypes => '', prosrc => 'multirange_constructor0' },
{ oid => '4293', descr => 'datemultirange constructor',
proname => 'datemultirange', prorettype => 'datemultirange',
proargtypes => 'daterange', prosrc => 'multirange_constructor1' },
{ oid => '4294', descr => 'datemultirange constructor',
proname => 'datemultirange', provariadic => 'daterange',
prorettype => 'datemultirange', proargtypes => '_daterange',
proallargtypes => '{_daterange}', proargmodes => '{v}',
prosrc => 'multirange_constructor2' },
{ oid => '4295', descr => 'int8multirange constructor',
proname => 'int8multirange', prorettype => 'int8multirange',
proargtypes => '', prosrc => 'multirange_constructor0' },
{ oid => '4296', descr => 'int8multirange constructor',
proname => 'int8multirange', prorettype => 'int8multirange',
proargtypes => 'int8range', prosrc => 'multirange_constructor1' },
{ oid => '4297', descr => 'int8multirange constructor',
proname => 'int8multirange', provariadic => 'int8range',
prorettype => 'int8multirange', proargtypes => '_int8range',
proallargtypes => '{_int8range}', proargmodes => '{v}',
prosrc => 'multirange_constructor2' },
{ oid => '4298', descr => 'anymultirange cast',
proname => 'multirange', prorettype => 'anymultirange',
proargtypes => 'anyrange', prosrc => 'multirange_constructor1' },
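
# aggregate and unnest support for ranges and multiranges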
{ oid => '4299', descr => 'aggregate transition function',
proname => 'range_agg_transfn', proisstrict => 'f', prorettype => 'internal',
proargtypes => 'internal anyrange', prosrc => 'range_agg_transfn' },
{ oid => '4300', descr => 'aggregate final function',
proname => 'range_agg_finalfn', proisstrict => 'f',
prorettype => 'anymultirange', proargtypes => 'internal anyrange',
prosrc => 'range_agg_finalfn' },
{ oid => '4301', descr => 'combine aggregate input into a multirange',
proname => 'range_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'anymultirange', proargtypes => 'anyrange',
prosrc => 'aggregate_dummy' },
{ oid => '6225', descr => 'aggregate transition function',
proname => 'multirange_agg_transfn', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal anymultirange',
prosrc => 'multirange_agg_transfn' },
{ oid => '6226', descr => 'aggregate final function',
proname => 'multirange_agg_finalfn', proisstrict => 'f',
prorettype => 'anymultirange', proargtypes => 'internal anymultirange',
prosrc => 'range_agg_finalfn' },
{ oid => '6227', descr => 'combine aggregate input into a multirange',
proname => 'range_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'anymultirange', proargtypes => 'anymultirange',
prosrc => 'aggregate_dummy' },
{ oid => '4388', descr => 'range aggregate by intersecting',
proname => 'multirange_intersect_agg_transfn', prorettype => 'anymultirange',
proargtypes => 'anymultirange anymultirange',
prosrc => 'multirange_intersect_agg_transfn' },
{ oid => '4389', descr => 'range aggregate by intersecting',
proname => 'range_intersect_agg', prokind => 'a', proisstrict => 'f',
prorettype => 'anymultirange', proargtypes => 'anymultirange',
prosrc => 'aggregate_dummy' },
{ oid => '1293', descr => 'expand multirange to set of ranges',
proname => 'unnest', prorows => '100', proretset => 't',
prorettype => 'anyrange', proargtypes => 'anymultirange',
prosrc => 'multirange_unnest' },
# date, time, timestamp constructors
{ oid => '3846', descr => 'construct date',
proname => 'make_date', prorettype => 'date', proargtypes => 'int4 int4 int4',
proargnames => '{year,month,day}', prosrc => 'make_date' },
{ oid => '3847', descr => 'construct time',
proname => 'make_time', prorettype => 'time',
proargtypes => 'int4 int4 float8', proargnames => '{hour,min,sec}',
prosrc => 'make_time' },
{ oid => '3461', descr => 'construct timestamp',
proname => 'make_timestamp', prorettype => 'timestamp',
proargtypes => 'int4 int4 int4 int4 int4 float8',
proargnames => '{year,month,mday,hour,min,sec}', prosrc => 'make_timestamp' },
{ oid => '3462', descr => 'construct timestamp with time zone',
proname => 'make_timestamptz', provolatile => 's',
prorettype => 'timestamptz', proargtypes => 'int4 int4 int4 int4 int4 float8',
proargnames => '{year,month,mday,hour,min,sec}',
prosrc => 'make_timestamptz' },
{ oid => '3463', descr => 'construct timestamp with time zone',
proname => 'make_timestamptz', provolatile => 's',
prorettype => 'timestamptz',
proargtypes => 'int4 int4 int4 int4 int4 float8 text',
proargnames => '{year,month,mday,hour,min,sec,timezone}',
prosrc => 'make_timestamptz_at_timezone' },
{ oid => '3464', descr => 'construct interval',
proname => 'make_interval', prorettype => 'interval',
proargtypes => 'int4 int4 int4 int4 int4 int4 float8',
proargnames => '{years,months,weeks,days,hours,mins,secs}',
prosrc => 'make_interval' },
# spgist opclasses
{ oid => '4018', descr => 'SP-GiST support for quad tree over point',
proname => 'spg_quad_config', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_quad_config' },
{ oid => '4019', descr => 'SP-GiST support for quad tree over point',
proname => 'spg_quad_choose', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_quad_choose' },
{ oid => '4020', descr => 'SP-GiST support for quad tree over point',
proname => 'spg_quad_picksplit', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_quad_picksplit' },
{ oid => '4021', descr => 'SP-GiST support for quad tree over point',
proname => 'spg_quad_inner_consistent', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_quad_inner_consistent' },
{ oid => '4022',
descr => 'SP-GiST support for quad tree and k-d tree over point',
proname => 'spg_quad_leaf_consistent', prorettype => 'bool',
proargtypes => 'internal internal', prosrc => 'spg_quad_leaf_consistent' },
{ oid => '4023', descr => 'SP-GiST support for k-d tree over point',
proname => 'spg_kd_config', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_kd_config' },
{ oid => '4024', descr => 'SP-GiST support for k-d tree over point',
proname => 'spg_kd_choose', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_kd_choose' },
{ oid => '4025', descr => 'SP-GiST support for k-d tree over point',
proname => 'spg_kd_picksplit', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_kd_picksplit' },
{ oid => '4026', descr => 'SP-GiST support for k-d tree over point',
proname => 'spg_kd_inner_consistent', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_kd_inner_consistent' },
{ oid => '4027', descr => 'SP-GiST support for radix tree over text',
proname => 'spg_text_config', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_text_config' },
{ oid => '4028', descr => 'SP-GiST support for radix tree over text',
proname => 'spg_text_choose', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_text_choose' },
{ oid => '4029', descr => 'SP-GiST support for radix tree over text',
proname => 'spg_text_picksplit', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_text_picksplit' },
{ oid => '4030', descr => 'SP-GiST support for radix tree over text',
proname => 'spg_text_inner_consistent', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_text_inner_consistent' },
{ oid => '4031', descr => 'SP-GiST support for radix tree over text',
proname => 'spg_text_leaf_consistent', prorettype => 'bool',
proargtypes => 'internal internal', prosrc => 'spg_text_leaf_consistent' },
{ oid => '3469', descr => 'SP-GiST support for quad tree over range',
proname => 'spg_range_quad_config', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_range_quad_config' },
{ oid => '3470', descr => 'SP-GiST support for quad tree over range',
proname => 'spg_range_quad_choose', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_range_quad_choose' },
{ oid => '3471', descr => 'SP-GiST support for quad tree over range',
proname => 'spg_range_quad_picksplit', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_range_quad_picksplit' },
{ oid => '3472', descr => 'SP-GiST support for quad tree over range',
proname => 'spg_range_quad_inner_consistent', prorettype => 'void',
proargtypes => 'internal internal',
prosrc => 'spg_range_quad_inner_consistent' },
{ oid => '3473', descr => 'SP-GiST support for quad tree over range',
proname => 'spg_range_quad_leaf_consistent', prorettype => 'bool',
proargtypes => 'internal internal',
prosrc => 'spg_range_quad_leaf_consistent' },
{ oid => '5012', descr => 'SP-GiST support for quad tree over box',
proname => 'spg_box_quad_config', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_box_quad_config' },
{ oid => '5013', descr => 'SP-GiST support for quad tree over box',
proname => 'spg_box_quad_choose', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_box_quad_choose' },
{ oid => '5014', descr => 'SP-GiST support for quad tree over box',
proname => 'spg_box_quad_picksplit', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_box_quad_picksplit' },
{ oid => '5015', descr => 'SP-GiST support for quad tree over box',
proname => 'spg_box_quad_inner_consistent', prorettype => 'void',
proargtypes => 'internal internal',
prosrc => 'spg_box_quad_inner_consistent' },
{ oid => '5016', descr => 'SP-GiST support for quad tree over box',
proname => 'spg_box_quad_leaf_consistent', prorettype => 'bool',
proargtypes => 'internal internal',
prosrc => 'spg_box_quad_leaf_consistent' },
{ oid => '5010',
descr => 'SP-GiST support for quad tree over 2-D types represented by their bounding boxes',
proname => 'spg_bbox_quad_config', prorettype => 'void',
proargtypes => 'internal internal', prosrc => 'spg_bbox_quad_config' },
{ oid => '5011', descr => 'SP-GiST support for quad tree over polygons',
proname => 'spg_poly_quad_compress', prorettype => 'box',
proargtypes => 'polygon', prosrc => 'spg_poly_quad_compress' },
# replication slots
{ oid => '3779', descr => 'create a physical replication slot',
proname => 'pg_create_physical_replication_slot', provolatile => 'v',
proparallel => 'u', prorettype => 'record', proargtypes => 'name bool bool',
proallargtypes => '{name,bool,bool,name,pg_lsn}',
proargmodes => '{i,i,i,o,o}',
proargnames => '{slot_name,immediately_reserve,temporary,slot_name,lsn}',
prosrc => 'pg_create_physical_replication_slot' },
{ oid => '4220',
descr => 'copy a physical replication slot, changing temporality',
proname => 'pg_copy_physical_replication_slot', provolatile => 'v',
proparallel => 'u', prorettype => 'record', proargtypes => 'name name bool',
proallargtypes => '{name,name,bool,name,pg_lsn}',
proargmodes => '{i,i,i,o,o}',
proargnames => '{src_slot_name,dst_slot_name,temporary,slot_name,lsn}',
prosrc => 'pg_copy_physical_replication_slot_a' },
{ oid => '4221', descr => 'copy a physical replication slot',
proname => 'pg_copy_physical_replication_slot', provolatile => 'v',
proparallel => 'u', prorettype => 'record', proargtypes => 'name name',
proallargtypes => '{name,name,name,pg_lsn}', proargmodes => '{i,i,o,o}',
proargnames => '{src_slot_name,dst_slot_name,slot_name,lsn}',
prosrc => 'pg_copy_physical_replication_slot_b' },
{ oid => '3780', descr => 'drop a replication slot',
proname => 'pg_drop_replication_slot', provolatile => 'v', proparallel => 'u',
prorettype => 'void', proargtypes => 'name',
prosrc => 'pg_drop_replication_slot' },
{ oid => '3781',
descr => 'information about replication slots currently in use',
proname => 'pg_get_replication_slots', prorows => '10', proisstrict => 'f',
proretset => 't', provolatile => 's', prorettype => 'record',
proargtypes => '',
proallargtypes => '{name,name,text,oid,bool,bool,int4,xid,xid,pg_lsn,pg_lsn,text,int8,bool,bool,text,bool,bool}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{slot_name,plugin,slot_type,datoid,temporary,active,active_pid,xmin,catalog_xmin,restart_lsn,confirmed_flush_lsn,wal_status,safe_wal_size,two_phase,conflicting,invalidation_reason,failover,synced}',
prosrc => 'pg_get_replication_slots' },
{ oid => '3786', descr => 'set up a logical replication slot',
proname => 'pg_create_logical_replication_slot', provolatile => 'v',
proparallel => 'u', prorettype => 'record',
proargtypes => 'name name bool bool bool',
proallargtypes => '{name,name,bool,bool,bool,name,pg_lsn}',
proargmodes => '{i,i,i,i,i,o,o}',
proargnames => '{slot_name,plugin,temporary,twophase,failover,slot_name,lsn}',
prosrc => 'pg_create_logical_replication_slot' },
{ oid => '4222',
descr => 'copy a logical replication slot, changing temporality and plugin',
proname => 'pg_copy_logical_replication_slot', provolatile => 'v',
proparallel => 'u', prorettype => 'record',
proargtypes => 'name name bool name',
proallargtypes => '{name,name,bool,name,name,pg_lsn}',
proargmodes => '{i,i,i,i,o,o}',
proargnames => '{src_slot_name,dst_slot_name,temporary,plugin,slot_name,lsn}',
prosrc => 'pg_copy_logical_replication_slot_a' },
{ oid => '4223',
descr => 'copy a logical replication slot, changing temporality',
proname => 'pg_copy_logical_replication_slot', provolatile => 'v',
proparallel => 'u', prorettype => 'record', proargtypes => 'name name bool',
proallargtypes => '{name,name,bool,name,pg_lsn}',
proargmodes => '{i,i,i,o,o}',
proargnames => '{src_slot_name,dst_slot_name,temporary,slot_name,lsn}',
prosrc => 'pg_copy_logical_replication_slot_b' },
{ oid => '4224', descr => 'copy a logical replication slot',
proname => 'pg_copy_logical_replication_slot', provolatile => 'v',
proparallel => 'u', prorettype => 'record', proargtypes => 'name name',
proallargtypes => '{name,name,name,pg_lsn}', proargmodes => '{i,i,o,o}',
proargnames => '{src_slot_name,dst_slot_name,slot_name,lsn}',
prosrc => 'pg_copy_logical_replication_slot_c' },
{ oid => '3782', descr => 'get changes from replication slot',
proname => 'pg_logical_slot_get_changes', procost => '1000',
prorows => '1000', provariadic => 'text', proisstrict => 'f',
proretset => 't', provolatile => 'v', proparallel => 'u',
prorettype => 'record', proargtypes => 'name pg_lsn int4 _text',
proallargtypes => '{name,pg_lsn,int4,_text,pg_lsn,xid,text}',
proargmodes => '{i,i,i,v,o,o,o}',
proargnames => '{slot_name,upto_lsn,upto_nchanges,options,lsn,xid,data}',
prosrc => 'pg_logical_slot_get_changes' },
{ oid => '3783', descr => 'get binary changes from replication slot',
proname => 'pg_logical_slot_get_binary_changes', procost => '1000',
prorows => '1000', provariadic => 'text', proisstrict => 'f',
proretset => 't', provolatile => 'v', proparallel => 'u',
prorettype => 'record', proargtypes => 'name pg_lsn int4 _text',
proallargtypes => '{name,pg_lsn,int4,_text,pg_lsn,xid,bytea}',
proargmodes => '{i,i,i,v,o,o,o}',
proargnames => '{slot_name,upto_lsn,upto_nchanges,options,lsn,xid,data}',
prosrc => 'pg_logical_slot_get_binary_changes' },
{ oid => '3784', descr => 'peek at changes from replication slot',
proname => 'pg_logical_slot_peek_changes', procost => '1000',
prorows => '1000', provariadic => 'text', proisstrict => 'f',
proretset => 't', provolatile => 'v', proparallel => 'u',
prorettype => 'record', proargtypes => 'name pg_lsn int4 _text',
proallargtypes => '{name,pg_lsn,int4,_text,pg_lsn,xid,text}',
proargmodes => '{i,i,i,v,o,o,o}',
proargnames => '{slot_name,upto_lsn,upto_nchanges,options,lsn,xid,data}',
prosrc => 'pg_logical_slot_peek_changes' },
{ oid => '3785', descr => 'peek at binary changes from replication slot',
proname => 'pg_logical_slot_peek_binary_changes', procost => '1000',
prorows => '1000', provariadic => 'text', proisstrict => 'f',
proretset => 't', provolatile => 'v', proparallel => 'u',
prorettype => 'record', proargtypes => 'name pg_lsn int4 _text',
proallargtypes => '{name,pg_lsn,int4,_text,pg_lsn,xid,bytea}',
proargmodes => '{i,i,i,v,o,o,o}',
proargnames => '{slot_name,upto_lsn,upto_nchanges,options,lsn,xid,data}',
prosrc => 'pg_logical_slot_peek_binary_changes' },
{ oid => '3878', descr => 'advance logical replication slot',
proname => 'pg_replication_slot_advance', provolatile => 'v',
proparallel => 'u', prorettype => 'record', proargtypes => 'name pg_lsn',
proallargtypes => '{name,pg_lsn,name,pg_lsn}', proargmodes => '{i,i,o,o}',
proargnames => '{slot_name,upto_lsn,slot_name,end_lsn}',
prosrc => 'pg_replication_slot_advance' },
{ oid => '3577', descr => 'emit a textual logical decoding message',
proname => 'pg_logical_emit_message', provolatile => 'v', proparallel => 'u',
prorettype => 'pg_lsn', proargtypes => 'bool text text bool',
prosrc => 'pg_logical_emit_message_text' },
{ oid => '3578', descr => 'emit a binary logical decoding message',
proname => 'pg_logical_emit_message', provolatile => 'v', proparallel => 'u',
prorettype => 'pg_lsn', proargtypes => 'bool text bytea bool',
prosrc => 'pg_logical_emit_message_bytea' },
{ oid => '9929',
  descr => 'sync replication slots from the primary to the standby',
proname => 'pg_sync_replication_slots', provolatile => 'v', proparallel => 'u',
prorettype => 'void', proargtypes => '',
prosrc => 'pg_sync_replication_slots' },
# event triggers
{ oid => '3566', descr => 'list objects dropped by the current command',
proname => 'pg_event_trigger_dropped_objects', procost => '10',
prorows => '100', proretset => 't', provolatile => 's', proparallel => 'r',
prorettype => 'record', proargtypes => '',
proallargtypes => '{oid,oid,int4,bool,bool,bool,text,text,text,text,_text,_text}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{classid, objid, objsubid, original, normal, is_temporary, object_type, schema_name, object_name, object_identity, address_names, address_args}',
prosrc => 'pg_event_trigger_dropped_objects' },
{ oid => '4566', descr => 'return Oid of the table getting rewritten',
proname => 'pg_event_trigger_table_rewrite_oid', provolatile => 's',
proparallel => 'r', prorettype => 'oid', proargtypes => '',
proallargtypes => '{oid}', proargmodes => '{o}', proargnames => '{oid}',
prosrc => 'pg_event_trigger_table_rewrite_oid' },
{ oid => '4567', descr => 'return reason code for table getting rewritten',
proname => 'pg_event_trigger_table_rewrite_reason', provolatile => 's',
proparallel => 'r', prorettype => 'int4', proargtypes => '',
prosrc => 'pg_event_trigger_table_rewrite_reason' },
{ oid => '4568',
descr => 'list DDL actions being executed by the current command',
proname => 'pg_event_trigger_ddl_commands', procost => '10', prorows => '100',
proretset => 't', provolatile => 's', proparallel => 'r',
prorettype => 'record', proargtypes => '',
proallargtypes => '{oid,oid,int4,text,text,text,text,bool,pg_ddl_command}',
proargmodes => '{o,o,o,o,o,o,o,o,o}',
proargnames => '{classid, objid, objsubid, command_tag, object_type, schema_name, object_identity, in_extension, command}',
prosrc => 'pg_event_trigger_ddl_commands' },
# generic transition functions for ordered-set aggregates
{ oid => '3970', descr => 'aggregate transition function',
proname => 'ordered_set_transition', proisstrict => 'f',
prorettype => 'internal', proargtypes => 'internal any',
prosrc => 'ordered_set_transition' },
{ oid => '3971', descr => 'aggregate transition function',
proname => 'ordered_set_transition_multi', provariadic => 'any',
proisstrict => 'f', prorettype => 'internal', proargtypes => 'internal any',
proallargtypes => '{internal,any}', proargmodes => '{i,v}',
prosrc => 'ordered_set_transition_multi' },
# inverse distribution aggregates (and their support functions)
{ oid => '3972', descr => 'discrete percentile',
proname => 'percentile_disc', prokind => 'a', proisstrict => 'f',
prorettype => 'anyelement', proargtypes => 'float8 anyelement',
prosrc => 'aggregate_dummy' },
{ oid => '3973', descr => 'aggregate final function',
proname => 'percentile_disc_final', proisstrict => 'f',
prorettype => 'anyelement', proargtypes => 'internal float8 anyelement',
prosrc => 'percentile_disc_final' },
{ oid => '3974', descr => 'continuous distribution percentile',
proname => 'percentile_cont', prokind => 'a', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '3975', descr => 'aggregate final function',
proname => 'percentile_cont_float8_final', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'internal float8',
prosrc => 'percentile_cont_float8_final' },
{ oid => '3976', descr => 'continuous distribution percentile',
proname => 'percentile_cont', prokind => 'a', proisstrict => 'f',
prorettype => 'interval', proargtypes => 'float8 interval',
prosrc => 'aggregate_dummy' },
{ oid => '3977', descr => 'aggregate final function',
proname => 'percentile_cont_interval_final', proisstrict => 'f',
prorettype => 'interval', proargtypes => 'internal float8',
prosrc => 'percentile_cont_interval_final' },
{ oid => '3978', descr => 'multiple discrete percentiles',
proname => 'percentile_disc', prokind => 'a', proisstrict => 'f',
prorettype => 'anyarray', proargtypes => '_float8 anyelement',
prosrc => 'aggregate_dummy' },
{ oid => '3979', descr => 'aggregate final function',
proname => 'percentile_disc_multi_final', proisstrict => 'f',
prorettype => 'anyarray', proargtypes => 'internal _float8 anyelement',
prosrc => 'percentile_disc_multi_final' },
{ oid => '3980', descr => 'multiple continuous percentiles',
proname => 'percentile_cont', prokind => 'a', proisstrict => 'f',
prorettype => '_float8', proargtypes => '_float8 float8',
prosrc => 'aggregate_dummy' },
{ oid => '3981', descr => 'aggregate final function',
proname => 'percentile_cont_float8_multi_final', proisstrict => 'f',
prorettype => '_float8', proargtypes => 'internal _float8',
prosrc => 'percentile_cont_float8_multi_final' },
{ oid => '3982', descr => 'multiple continuous percentiles',
proname => 'percentile_cont', prokind => 'a', proisstrict => 'f',
prorettype => '_interval', proargtypes => '_float8 interval',
prosrc => 'aggregate_dummy' },
{ oid => '3983', descr => 'aggregate final function',
proname => 'percentile_cont_interval_multi_final', proisstrict => 'f',
prorettype => '_interval', proargtypes => 'internal _float8',
prosrc => 'percentile_cont_interval_multi_final' },
{ oid => '3984', descr => 'most common value',
proname => 'mode', prokind => 'a', proisstrict => 'f',
prorettype => 'anyelement', proargtypes => 'anyelement',
prosrc => 'aggregate_dummy' },
{ oid => '3985', descr => 'aggregate final function',
proname => 'mode_final', proisstrict => 'f', prorettype => 'anyelement',
proargtypes => 'internal anyelement', prosrc => 'mode_final' },
# hypothetical-set aggregates (and their support functions)
{ oid => '3986', descr => 'rank of hypothetical row',
proname => 'rank', provariadic => 'any', prokind => 'a', proisstrict => 'f',
prorettype => 'int8', proargtypes => 'any', proallargtypes => '{any}',
proargmodes => '{v}', prosrc => 'aggregate_dummy' },
{ oid => '3987', descr => 'aggregate final function',
proname => 'rank_final', provariadic => 'any', proisstrict => 'f',
prorettype => 'int8', proargtypes => 'internal any',
proallargtypes => '{internal,any}', proargmodes => '{i,v}',
prosrc => 'hypothetical_rank_final' },
{ oid => '3988', descr => 'fractional rank of hypothetical row',
proname => 'percent_rank', provariadic => 'any', prokind => 'a',
proisstrict => 'f', prorettype => 'float8', proargtypes => 'any',
proallargtypes => '{any}', proargmodes => '{v}',
prosrc => 'aggregate_dummy' },
{ oid => '3989', descr => 'aggregate final function',
proname => 'percent_rank_final', provariadic => 'any', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'internal any',
proallargtypes => '{internal,any}', proargmodes => '{i,v}',
prosrc => 'hypothetical_percent_rank_final' },
{ oid => '3990', descr => 'cumulative distribution of hypothetical row',
proname => 'cume_dist', provariadic => 'any', prokind => 'a',
proisstrict => 'f', prorettype => 'float8', proargtypes => 'any',
proallargtypes => '{any}', proargmodes => '{v}',
prosrc => 'aggregate_dummy' },
{ oid => '3991', descr => 'aggregate final function',
proname => 'cume_dist_final', provariadic => 'any', proisstrict => 'f',
prorettype => 'float8', proargtypes => 'internal any',
proallargtypes => '{internal,any}', proargmodes => '{i,v}',
prosrc => 'hypothetical_cume_dist_final' },
{ oid => '3992', descr => 'rank of hypothetical row without gaps',
proname => 'dense_rank', provariadic => 'any', prokind => 'a',
proisstrict => 'f', prorettype => 'int8', proargtypes => 'any',
proallargtypes => '{any}', proargmodes => '{v}',
prosrc => 'aggregate_dummy' },
{ oid => '3993', descr => 'aggregate final function',
proname => 'dense_rank_final', provariadic => 'any', proisstrict => 'f',
prorettype => 'int8', proargtypes => 'internal any',
proallargtypes => '{internal,any}', proargmodes => '{i,v}',
prosrc => 'hypothetical_dense_rank_final' },
# pg_upgrade support
{ oid => '3582', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_pg_type_oid', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_pg_type_oid' },
{ oid => '3584', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_array_pg_type_oid', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_array_pg_type_oid' },
{ oid => '4390', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_multirange_pg_type_oid',
provolatile => 'v', proparallel => 'r', prorettype => 'void',
proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_multirange_pg_type_oid' },
{ oid => '4391', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_multirange_array_pg_type_oid',
provolatile => 'v', proparallel => 'r', prorettype => 'void',
proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_multirange_array_pg_type_oid' },
{ oid => '3586', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_heap_pg_class_oid', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_heap_pg_class_oid' },
{ oid => '3587', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_index_pg_class_oid', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_index_pg_class_oid' },
{ oid => '3588', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_toast_pg_class_oid', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_toast_pg_class_oid' },
{ oid => '3589', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_pg_enum_oid', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_pg_enum_oid' },
{ oid => '3590', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_pg_authid_oid', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_pg_authid_oid' },
{ oid => '3591', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_create_empty_extension', proisstrict => 'f',
provolatile => 'v', proparallel => 'u', prorettype => 'void',
proargtypes => 'text text bool text _oid _text _text',
prosrc => 'binary_upgrade_create_empty_extension' },
{ oid => '4083', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_record_init_privs', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'bool',
prosrc => 'binary_upgrade_set_record_init_privs' },
{ oid => '4101', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_missing_value', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => 'oid text text',
prosrc => 'binary_upgrade_set_missing_value' },
{ oid => '4545', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_heap_relfilenode', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_heap_relfilenode' },
{ oid => '4546', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_index_relfilenode', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_index_relfilenode' },
{ oid => '4547', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_toast_relfilenode', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_toast_relfilenode' },
{ oid => '4548', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_set_next_pg_tablespace_oid', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => 'oid',
prosrc => 'binary_upgrade_set_next_pg_tablespace_oid' },
{ oid => '8046', descr => 'for use by pg_upgrade',
proname => 'binary_upgrade_logical_slot_has_caught_up', provolatile => 'v',
proparallel => 'u', prorettype => 'bool', proargtypes => 'name',
prosrc => 'binary_upgrade_logical_slot_has_caught_up' },
{ oid => '8404',
  descr => 'for use by pg_upgrade (relation for pg_subscription_rel)',
proname => 'binary_upgrade_add_sub_rel_state', proisstrict => 'f',
provolatile => 'v', proparallel => 'u', prorettype => 'void',
proargtypes => 'text oid char pg_lsn',
prosrc => 'binary_upgrade_add_sub_rel_state' },
{ oid => '8405', descr => 'for use by pg_upgrade (remote_lsn for origin)',
proname => 'binary_upgrade_replorigin_advance', proisstrict => 'f',
provolatile => 'v', proparallel => 'u', prorettype => 'void',
proargtypes => 'text pg_lsn',
prosrc => 'binary_upgrade_replorigin_advance' },
# conversion functions
{ oid => '4302',
descr => 'internal conversion function for KOI8R to MULE_INTERNAL',
proname => 'koi8r_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'koi8r_to_mic', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4303',
descr => 'internal conversion function for MULE_INTERNAL to KOI8R',
proname => 'mic_to_koi8r', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_koi8r', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4304',
descr => 'internal conversion function for ISO-8859-5 to MULE_INTERNAL',
proname => 'iso_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool', prosrc => 'iso_to_mic',
probin => '$libdir/cyrillic_and_mic' },
{ oid => '4305',
descr => 'internal conversion function for MULE_INTERNAL to ISO-8859-5',
proname => 'mic_to_iso', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool', prosrc => 'mic_to_iso',
probin => '$libdir/cyrillic_and_mic' },
{ oid => '4306',
descr => 'internal conversion function for WIN1251 to MULE_INTERNAL',
proname => 'win1251_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win1251_to_mic', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4307',
descr => 'internal conversion function for MULE_INTERNAL to WIN1251',
proname => 'mic_to_win1251', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_win1251', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4308',
descr => 'internal conversion function for WIN866 to MULE_INTERNAL',
proname => 'win866_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win866_to_mic', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4309',
descr => 'internal conversion function for MULE_INTERNAL to WIN866',
proname => 'mic_to_win866', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_win866', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4310', descr => 'internal conversion function for KOI8R to WIN1251',
proname => 'koi8r_to_win1251', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'koi8r_to_win1251', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4311', descr => 'internal conversion function for WIN1251 to KOI8R',
proname => 'win1251_to_koi8r', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win1251_to_koi8r', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4312', descr => 'internal conversion function for KOI8R to WIN866',
proname => 'koi8r_to_win866', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'koi8r_to_win866', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4313', descr => 'internal conversion function for WIN866 to KOI8R',
proname => 'win866_to_koi8r', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win866_to_koi8r', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4314',
descr => 'internal conversion function for WIN866 to WIN1251',
proname => 'win866_to_win1251', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win866_to_win1251', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4315',
descr => 'internal conversion function for WIN1251 to WIN866',
proname => 'win1251_to_win866', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win1251_to_win866', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4316',
descr => 'internal conversion function for ISO-8859-5 to KOI8R',
proname => 'iso_to_koi8r', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'iso_to_koi8r', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4317',
descr => 'internal conversion function for KOI8R to ISO-8859-5',
proname => 'koi8r_to_iso', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'koi8r_to_iso', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4318',
descr => 'internal conversion function for ISO-8859-5 to WIN1251',
proname => 'iso_to_win1251', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'iso_to_win1251', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4319',
descr => 'internal conversion function for WIN1251 to ISO-8859-5',
proname => 'win1251_to_iso', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win1251_to_iso', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4320',
descr => 'internal conversion function for ISO-8859-5 to WIN866',
proname => 'iso_to_win866', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'iso_to_win866', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4321',
descr => 'internal conversion function for WIN866 to ISO-8859-5',
proname => 'win866_to_iso', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win866_to_iso', probin => '$libdir/cyrillic_and_mic' },
{ oid => '4322',
descr => 'internal conversion function for EUC_CN to MULE_INTERNAL',
proname => 'euc_cn_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_cn_to_mic', probin => '$libdir/euc_cn_and_mic' },
{ oid => '4323',
descr => 'internal conversion function for MULE_INTERNAL to EUC_CN',
proname => 'mic_to_euc_cn', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_euc_cn', probin => '$libdir/euc_cn_and_mic' },
{ oid => '4324', descr => 'internal conversion function for EUC_JP to SJIS',
proname => 'euc_jp_to_sjis', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_jp_to_sjis', probin => '$libdir/euc_jp_and_sjis' },
{ oid => '4325', descr => 'internal conversion function for SJIS to EUC_JP',
proname => 'sjis_to_euc_jp', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'sjis_to_euc_jp', probin => '$libdir/euc_jp_and_sjis' },
{ oid => '4326',
descr => 'internal conversion function for EUC_JP to MULE_INTERNAL',
proname => 'euc_jp_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_jp_to_mic', probin => '$libdir/euc_jp_and_sjis' },
{ oid => '4327',
descr => 'internal conversion function for SJIS to MULE_INTERNAL',
proname => 'sjis_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'sjis_to_mic', probin => '$libdir/euc_jp_and_sjis' },
{ oid => '4328',
descr => 'internal conversion function for MULE_INTERNAL to EUC_JP',
proname => 'mic_to_euc_jp', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_euc_jp', probin => '$libdir/euc_jp_and_sjis' },
{ oid => '4329',
descr => 'internal conversion function for MULE_INTERNAL to SJIS',
proname => 'mic_to_sjis', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_sjis', probin => '$libdir/euc_jp_and_sjis' },
{ oid => '4330',
descr => 'internal conversion function for EUC_KR to MULE_INTERNAL',
proname => 'euc_kr_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_kr_to_mic', probin => '$libdir/euc_kr_and_mic' },
{ oid => '4331',
descr => 'internal conversion function for MULE_INTERNAL to EUC_KR',
proname => 'mic_to_euc_kr', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_euc_kr', probin => '$libdir/euc_kr_and_mic' },
{ oid => '4332', descr => 'internal conversion function for EUC_TW to BIG5',
proname => 'euc_tw_to_big5', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_tw_to_big5', probin => '$libdir/euc_tw_and_big5' },
{ oid => '4333', descr => 'internal conversion function for BIG5 to EUC_TW',
proname => 'big5_to_euc_tw', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'big5_to_euc_tw', probin => '$libdir/euc_tw_and_big5' },
{ oid => '4334',
descr => 'internal conversion function for EUC_TW to MULE_INTERNAL',
proname => 'euc_tw_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_tw_to_mic', probin => '$libdir/euc_tw_and_big5' },
{ oid => '4335',
descr => 'internal conversion function for BIG5 to MULE_INTERNAL',
proname => 'big5_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'big5_to_mic', probin => '$libdir/euc_tw_and_big5' },
{ oid => '4336',
descr => 'internal conversion function for MULE_INTERNAL to EUC_TW',
proname => 'mic_to_euc_tw', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_euc_tw', probin => '$libdir/euc_tw_and_big5' },
{ oid => '4337',
descr => 'internal conversion function for MULE_INTERNAL to BIG5',
proname => 'mic_to_big5', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_big5', probin => '$libdir/euc_tw_and_big5' },
{ oid => '4338',
descr => 'internal conversion function for LATIN2 to MULE_INTERNAL',
proname => 'latin2_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'latin2_to_mic', probin => '$libdir/latin2_and_win1250' },
{ oid => '4339',
descr => 'internal conversion function for MULE_INTERNAL to LATIN2',
proname => 'mic_to_latin2', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_latin2', probin => '$libdir/latin2_and_win1250' },
{ oid => '4340',
descr => 'internal conversion function for WIN1250 to MULE_INTERNAL',
proname => 'win1250_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win1250_to_mic', probin => '$libdir/latin2_and_win1250' },
{ oid => '4341',
descr => 'internal conversion function for MULE_INTERNAL to WIN1250',
proname => 'mic_to_win1250', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_win1250', probin => '$libdir/latin2_and_win1250' },
{ oid => '4342',
descr => 'internal conversion function for LATIN2 to WIN1250',
proname => 'latin2_to_win1250', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'latin2_to_win1250', probin => '$libdir/latin2_and_win1250' },
{ oid => '4343',
descr => 'internal conversion function for WIN1250 to LATIN2',
proname => 'win1250_to_latin2', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win1250_to_latin2', probin => '$libdir/latin2_and_win1250' },
{ oid => '4344',
descr => 'internal conversion function for LATIN1 to MULE_INTERNAL',
proname => 'latin1_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'latin1_to_mic', probin => '$libdir/latin_and_mic' },
{ oid => '4345',
descr => 'internal conversion function for MULE_INTERNAL to LATIN1',
proname => 'mic_to_latin1', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_latin1', probin => '$libdir/latin_and_mic' },
{ oid => '4346',
descr => 'internal conversion function for LATIN3 to MULE_INTERNAL',
proname => 'latin3_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'latin3_to_mic', probin => '$libdir/latin_and_mic' },
{ oid => '4347',
descr => 'internal conversion function for MULE_INTERNAL to LATIN3',
proname => 'mic_to_latin3', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_latin3', probin => '$libdir/latin_and_mic' },
{ oid => '4348',
descr => 'internal conversion function for LATIN4 to MULE_INTERNAL',
proname => 'latin4_to_mic', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'latin4_to_mic', probin => '$libdir/latin_and_mic' },
{ oid => '4349',
descr => 'internal conversion function for MULE_INTERNAL to LATIN4',
proname => 'mic_to_latin4', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'mic_to_latin4', probin => '$libdir/latin_and_mic' },
{ oid => '4352', descr => 'internal conversion function for BIG5 to UTF8',
proname => 'big5_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'big5_to_utf8', probin => '$libdir/utf8_and_big5' },
{ oid => '4353', descr => 'internal conversion function for UTF8 to BIG5',
proname => 'utf8_to_big5', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_big5', probin => '$libdir/utf8_and_big5' },
{ oid => '4354', descr => 'internal conversion function for UTF8 to KOI8R',
proname => 'utf8_to_koi8r', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_koi8r', probin => '$libdir/utf8_and_cyrillic' },
{ oid => '4355', descr => 'internal conversion function for KOI8R to UTF8',
proname => 'koi8r_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'koi8r_to_utf8', probin => '$libdir/utf8_and_cyrillic' },
{ oid => '4356', descr => 'internal conversion function for UTF8 to KOI8U',
proname => 'utf8_to_koi8u', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_koi8u', probin => '$libdir/utf8_and_cyrillic' },
{ oid => '4357', descr => 'internal conversion function for KOI8U to UTF8',
proname => 'koi8u_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'koi8u_to_utf8', probin => '$libdir/utf8_and_cyrillic' },
{ oid => '4358', descr => 'internal conversion function for UTF8 to WIN',
proname => 'utf8_to_win', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_win', probin => '$libdir/utf8_and_win' },
{ oid => '4359', descr => 'internal conversion function for WIN to UTF8',
proname => 'win_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'win_to_utf8', probin => '$libdir/utf8_and_win' },
{ oid => '4360', descr => 'internal conversion function for EUC_CN to UTF8',
proname => 'euc_cn_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_cn_to_utf8', probin => '$libdir/utf8_and_euc_cn' },
{ oid => '4361', descr => 'internal conversion function for UTF8 to EUC_CN',
proname => 'utf8_to_euc_cn', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_euc_cn', probin => '$libdir/utf8_and_euc_cn' },
{ oid => '4362', descr => 'internal conversion function for EUC_JP to UTF8',
proname => 'euc_jp_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_jp_to_utf8', probin => '$libdir/utf8_and_euc_jp' },
{ oid => '4363', descr => 'internal conversion function for UTF8 to EUC_JP',
proname => 'utf8_to_euc_jp', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_euc_jp', probin => '$libdir/utf8_and_euc_jp' },
{ oid => '4364', descr => 'internal conversion function for EUC_KR to UTF8',
proname => 'euc_kr_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_kr_to_utf8', probin => '$libdir/utf8_and_euc_kr' },
{ oid => '4365', descr => 'internal conversion function for UTF8 to EUC_KR',
proname => 'utf8_to_euc_kr', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_euc_kr', probin => '$libdir/utf8_and_euc_kr' },
{ oid => '4366', descr => 'internal conversion function for EUC_TW to UTF8',
proname => 'euc_tw_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_tw_to_utf8', probin => '$libdir/utf8_and_euc_tw' },
{ oid => '4367', descr => 'internal conversion function for UTF8 to EUC_TW',
proname => 'utf8_to_euc_tw', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_euc_tw', probin => '$libdir/utf8_and_euc_tw' },
{ oid => '4368', descr => 'internal conversion function for GB18030 to UTF8',
proname => 'gb18030_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'gb18030_to_utf8', probin => '$libdir/utf8_and_gb18030' },
{ oid => '4369', descr => 'internal conversion function for UTF8 to GB18030',
proname => 'utf8_to_gb18030', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_gb18030', probin => '$libdir/utf8_and_gb18030' },
{ oid => '4370', descr => 'internal conversion function for GBK to UTF8',
proname => 'gbk_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'gbk_to_utf8', probin => '$libdir/utf8_and_gbk' },
{ oid => '4371', descr => 'internal conversion function for UTF8 to GBK',
proname => 'utf8_to_gbk', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_gbk', probin => '$libdir/utf8_and_gbk' },
{ oid => '4372',
descr => 'internal conversion function for UTF8 to ISO-8859 2-16',
proname => 'utf8_to_iso8859', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_iso8859', probin => '$libdir/utf8_and_iso8859' },
{ oid => '4373',
descr => 'internal conversion function for ISO-8859 2-16 to UTF8',
proname => 'iso8859_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'iso8859_to_utf8', probin => '$libdir/utf8_and_iso8859' },
{ oid => '4374', descr => 'internal conversion function for LATIN1 to UTF8',
proname => 'iso8859_1_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'iso8859_1_to_utf8', probin => '$libdir/utf8_and_iso8859_1' },
{ oid => '4375', descr => 'internal conversion function for UTF8 to LATIN1',
proname => 'utf8_to_iso8859_1', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_iso8859_1', probin => '$libdir/utf8_and_iso8859_1' },
{ oid => '4376', descr => 'internal conversion function for JOHAB to UTF8',
proname => 'johab_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'johab_to_utf8', probin => '$libdir/utf8_and_johab' },
{ oid => '4377', descr => 'internal conversion function for UTF8 to JOHAB',
proname => 'utf8_to_johab', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_johab', probin => '$libdir/utf8_and_johab' },
{ oid => '4378', descr => 'internal conversion function for SJIS to UTF8',
proname => 'sjis_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'sjis_to_utf8', probin => '$libdir/utf8_and_sjis' },
{ oid => '4379', descr => 'internal conversion function for UTF8 to SJIS',
proname => 'utf8_to_sjis', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_sjis', probin => '$libdir/utf8_and_sjis' },
{ oid => '4380', descr => 'internal conversion function for UHC to UTF8',
proname => 'uhc_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'uhc_to_utf8', probin => '$libdir/utf8_and_uhc' },
{ oid => '4381', descr => 'internal conversion function for UTF8 to UHC',
proname => 'utf8_to_uhc', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_uhc', probin => '$libdir/utf8_and_uhc' },
{ oid => '4382',
descr => 'internal conversion function for EUC_JIS_2004 to UTF8',
proname => 'euc_jis_2004_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_jis_2004_to_utf8', probin => '$libdir/utf8_and_euc2004' },
{ oid => '4383',
descr => 'internal conversion function for UTF8 to EUC_JIS_2004',
proname => 'utf8_to_euc_jis_2004', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_euc_jis_2004', probin => '$libdir/utf8_and_euc2004' },
{ oid => '4384',
descr => 'internal conversion function for SHIFT_JIS_2004 to UTF8',
proname => 'shift_jis_2004_to_utf8', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'shift_jis_2004_to_utf8', probin => '$libdir/utf8_and_sjis2004' },
{ oid => '4385',
descr => 'internal conversion function for UTF8 to SHIFT_JIS_2004',
proname => 'utf8_to_shift_jis_2004', prolang => 'c', prorettype => 'int4',
proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'utf8_to_shift_jis_2004', probin => '$libdir/utf8_and_sjis2004' },
{ oid => '4386',
descr => 'internal conversion function for EUC_JIS_2004 to SHIFT_JIS_2004',
proname => 'euc_jis_2004_to_shift_jis_2004', prolang => 'c',
prorettype => 'int4', proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'euc_jis_2004_to_shift_jis_2004',
probin => '$libdir/euc2004_sjis2004' },
{ oid => '4387',
descr => 'internal conversion function for SHIFT_JIS_2004 to EUC_JIS_2004',
proname => 'shift_jis_2004_to_euc_jis_2004', prolang => 'c',
prorettype => 'int4', proargtypes => 'int4 int4 cstring internal int4 bool',
prosrc => 'shift_jis_2004_to_euc_jis_2004',
probin => '$libdir/euc2004_sjis2004' },
{ oid => '5040',
descr => 'restriction selectivity for generic matching operators',
proname => 'matchingsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int4', prosrc => 'matchingsel' },
{ oid => '5041', descr => 'join selectivity for generic matching operators',
proname => 'matchingjoinsel', provolatile => 's', prorettype => 'float8',
proargtypes => 'internal oid internal int2 internal',
prosrc => 'matchingjoinsel' },
# replication/origin.h
{ oid => '6003', descr => 'create a replication origin',
proname => 'pg_replication_origin_create', provolatile => 'v',
proparallel => 'u', prorettype => 'oid', proargtypes => 'text',
prosrc => 'pg_replication_origin_create' },
{ oid => '6004', descr => 'drop replication origin identified by its name',
proname => 'pg_replication_origin_drop', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => 'text',
prosrc => 'pg_replication_origin_drop' },
{ oid => '6005',
descr => 'translate the replication origin\'s name to its id',
proname => 'pg_replication_origin_oid', provolatile => 's',
prorettype => 'oid', proargtypes => 'text',
prosrc => 'pg_replication_origin_oid' },
{ oid => '6006',
descr => 'configure session to maintain replication progress tracking for the passed in origin',
proname => 'pg_replication_origin_session_setup', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => 'text',
prosrc => 'pg_replication_origin_session_setup' },
{ oid => '6007', descr => 'teardown configured replication progress tracking',
proname => 'pg_replication_origin_session_reset', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => '',
prosrc => 'pg_replication_origin_session_reset' },
{ oid => '6008',
descr => 'is a replication origin configured in this session',
proname => 'pg_replication_origin_session_is_setup', provolatile => 'v',
proparallel => 'r', prorettype => 'bool', proargtypes => '',
prosrc => 'pg_replication_origin_session_is_setup' },
{ oid => '6009',
descr => 'get the replication progress of the current session',
proname => 'pg_replication_origin_session_progress', provolatile => 'v',
proparallel => 'u', prorettype => 'pg_lsn', proargtypes => 'bool',
prosrc => 'pg_replication_origin_session_progress' },
{ oid => '6010', descr => 'setup the transaction\'s origin lsn and timestamp',
proname => 'pg_replication_origin_xact_setup', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => 'pg_lsn timestamptz',
prosrc => 'pg_replication_origin_xact_setup' },
{ oid => '6011', descr => 'reset the transaction\'s origin lsn and timestamp',
proname => 'pg_replication_origin_xact_reset', provolatile => 'v',
proparallel => 'r', prorettype => 'void', proargtypes => '',
prosrc => 'pg_replication_origin_xact_reset' },
{ oid => '6012', descr => 'advance replication origin to specific location',
proname => 'pg_replication_origin_advance', provolatile => 'v',
proparallel => 'u', prorettype => 'void', proargtypes => 'text pg_lsn',
prosrc => 'pg_replication_origin_advance' },
{ oid => '6013',
descr => 'get an individual replication origin\'s replication progress',
proname => 'pg_replication_origin_progress', provolatile => 'v',
proparallel => 'u', prorettype => 'pg_lsn', proargtypes => 'text bool',
prosrc => 'pg_replication_origin_progress' },
{ oid => '6014', descr => 'get progress for all replication origins',
proname => 'pg_show_replication_origin_status', prorows => '100',
proisstrict => 'f', proretset => 't', provolatile => 'v', proparallel => 'r',
prorettype => 'record', proargtypes => '',
proallargtypes => '{oid,text,pg_lsn,pg_lsn}', proargmodes => '{o,o,o,o}',
proargnames => '{local_id, external_id, remote_lsn, local_lsn}',
prosrc => 'pg_show_replication_origin_status' },
# publications
{ oid => '6119',
descr => 'get information of the tables that are part of the specified publications',
proname => 'pg_get_publication_tables', prorows => '1000',
provariadic => 'text', proretset => 't', provolatile => 's',
prorettype => 'record', proargtypes => '_text',
proallargtypes => '{_text,oid,oid,int2vector,pg_node_tree}',
proargmodes => '{v,o,o,o,o}',
proargnames => '{pubname,pubid,relid,attrs,qual}',
prosrc => 'pg_get_publication_tables' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
prorettype => 'bool', proargtypes => 'regclass',
prosrc => 'pg_relation_is_publishable' },
# rls
{ oid => '3298',
descr => 'row security for current context active on table by table oid',
proname => 'row_security_active', provolatile => 's', prorettype => 'bool',
proargtypes => 'oid', prosrc => 'row_security_active' },
{ oid => '3299',
descr => 'row security for current context active on table by table name',
proname => 'row_security_active', provolatile => 's', prorettype => 'bool',
proargtypes => 'text', prosrc => 'row_security_active_name' },
# pg_config
{ oid => '3400', descr => 'pg_config binary as a function',
proname => 'pg_config', prorows => '23', proretset => 't', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,text}', proargmodes => '{o,o}',
proargnames => '{name,setting}', prosrc => 'pg_config' },
# pg_controldata related functions
{ oid => '3441',
descr => 'pg_controldata general state information as a function',
proname => 'pg_control_system', provolatile => 'v', prorettype => 'record',
proargtypes => '', proallargtypes => '{int4,int4,int8,timestamptz}',
proargmodes => '{o,o,o,o}',
proargnames => '{pg_control_version,catalog_version_no,system_identifier,pg_control_last_modified}',
prosrc => 'pg_control_system' },
{ oid => '3442',
descr => 'pg_controldata checkpoint state information as a function',
proname => 'pg_control_checkpoint', provolatile => 'v',
prorettype => 'record', proargtypes => '',
proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{checkpoint_lsn,redo_lsn,redo_wal_file,timeline_id,prev_timeline_id,full_page_writes,next_xid,next_oid,next_multixact_id,next_multi_offset,oldest_xid,oldest_xid_dbid,oldest_active_xid,oldest_multi_xid,oldest_multi_dbid,oldest_commit_ts_xid,newest_commit_ts_xid,checkpoint_time}',
prosrc => 'pg_control_checkpoint' },
{ oid => '3443',
descr => 'pg_controldata recovery state information as a function',
proname => 'pg_control_recovery', provolatile => 'v', prorettype => 'record',
proargtypes => '', proallargtypes => '{pg_lsn,int4,pg_lsn,pg_lsn,bool}',
proargmodes => '{o,o,o,o,o}',
proargnames => '{min_recovery_end_lsn,min_recovery_end_timeline,backup_start_lsn,backup_end_lsn,end_of_backup_record_required}',
prosrc => 'pg_control_recovery' },
{ oid => '3444',
descr => 'pg_controldata init state information as a function',
proname => 'pg_control_init', provolatile => 'v', prorettype => 'record',
proargtypes => '',
proallargtypes => '{int4,int4,int4,int4,int4,int4,int4,int4,int4,bool,int4}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o,o}',
proargnames => '{max_data_alignment,database_block_size,blocks_per_segment,wal_block_size,bytes_per_wal_segment,max_identifier_length,max_index_columns,max_toast_chunk_size,large_object_chunk_size,float8_pass_by_value,data_page_checksum_version}',
prosrc => 'pg_control_init' },
# subscripting support for built-in types
{ oid => '6179', descr => 'standard array subscripting support',
proname => 'array_subscript_handler', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'array_subscript_handler' },
{ oid => '6180', descr => 'raw array subscripting support',
proname => 'raw_array_subscript_handler', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'raw_array_subscript_handler' },
# type subscripting support
{ oid => '6098', descr => 'jsonb subscripting logic',
proname => 'jsonb_subscript_handler', prorettype => 'internal',
proargtypes => 'internal', prosrc => 'jsonb_subscript_handler' },
# collation management functions
{ oid => '3445', descr => 'import collations from operating system',
proname => 'pg_import_system_collations', procost => '100',
provolatile => 'v', proparallel => 'u', prorettype => 'int4',
proargtypes => 'regnamespace', prosrc => 'pg_import_system_collations' },
{ oid => '3448',
descr => 'get actual version of collation from operating system',
proname => 'pg_collation_actual_version', procost => '100',
provolatile => 'v', prorettype => 'text', proargtypes => 'oid',
prosrc => 'pg_collation_actual_version' },
{ oid => '6249',
descr => 'get actual version of database collation from operating system',
proname => 'pg_database_collation_actual_version', procost => '100',
provolatile => 'v', prorettype => 'text', proargtypes => 'oid',
prosrc => 'pg_database_collation_actual_version' },
# system management/monitoring related functions
{ oid => '3353', descr => 'list files in the log directory',
proname => 'pg_ls_logdir', procost => '10', prorows => '20', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,int8,timestamptz}', proargmodes => '{o,o,o}',
proargnames => '{name,size,modification}', prosrc => 'pg_ls_logdir' },
{ oid => '3354', descr => 'list of files in the WAL directory',
proname => 'pg_ls_waldir', procost => '10', prorows => '20', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,int8,timestamptz}', proargmodes => '{o,o,o}',
proargnames => '{name,size,modification}', prosrc => 'pg_ls_waldir' },
{ oid => '5031', descr => 'list of files in the archive_status directory',
proname => 'pg_ls_archive_statusdir', procost => '10', prorows => '20',
proretset => 't', provolatile => 'v', prorettype => 'record',
proargtypes => '', proallargtypes => '{text,int8,timestamptz}',
proargmodes => '{o,o,o}', proargnames => '{name,size,modification}',
prosrc => 'pg_ls_archive_statusdir' },
{ oid => '5029', descr => 'list files in the pgsql_tmp directory',
proname => 'pg_ls_tmpdir', procost => '10', prorows => '20', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => '',
proallargtypes => '{text,int8,timestamptz}', proargmodes => '{o,o,o}',
proargnames => '{name,size,modification}', prosrc => 'pg_ls_tmpdir_noargs' },
{ oid => '5030', descr => 'list files in the pgsql_tmp directory',
proname => 'pg_ls_tmpdir', procost => '10', prorows => '20', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => 'oid',
proallargtypes => '{oid,text,int8,timestamptz}', proargmodes => '{i,o,o,o}',
proargnames => '{tablespace,name,size,modification}',
prosrc => 'pg_ls_tmpdir_1arg' },
{ oid => '6270',
descr => 'list of files in the pg_logical/snapshots directory',
proname => 'pg_ls_logicalsnapdir', procost => '10', prorows => '20',
proretset => 't', provolatile => 'v', prorettype => 'record',
proargtypes => '', proallargtypes => '{text,int8,timestamptz}',
proargmodes => '{o,o,o}', proargnames => '{name,size,modification}',
prosrc => 'pg_ls_logicalsnapdir' },
{ oid => '6271',
descr => 'list of files in the pg_logical/mappings directory',
proname => 'pg_ls_logicalmapdir', procost => '10', prorows => '20',
proretset => 't', provolatile => 'v', prorettype => 'record',
proargtypes => '', proallargtypes => '{text,int8,timestamptz}',
proargmodes => '{o,o,o}', proargnames => '{name,size,modification}',
prosrc => 'pg_ls_logicalmapdir' },
{ oid => '6272',
descr => 'list of files in the pg_replslot/slot_name directory',
proname => 'pg_ls_replslotdir', procost => '10', prorows => '20',
proretset => 't', provolatile => 'v', prorettype => 'record',
proargtypes => 'text', proallargtypes => '{text,text,int8,timestamptz}',
proargmodes => '{i,o,o,o}',
proargnames => '{slot_name,name,size,modification}',
prosrc => 'pg_ls_replslotdir' },
# hash partitioning constraint function
{ oid => '5028', descr => 'hash partition CHECK constraint',
proname => 'satisfies_hash_partition', provariadic => 'any',
proisstrict => 'f', prorettype => 'bool', proargtypes => 'oid int4 int4 any',
proargmodes => '{i,i,i,v}', prosrc => 'satisfies_hash_partition' },
# information about a partition tree
{ oid => '3423', descr => 'view partition tree tables',
proname => 'pg_partition_tree', prorows => '1000', proretset => 't',
provolatile => 'v', prorettype => 'record', proargtypes => 'regclass',
proallargtypes => '{regclass,regclass,regclass,bool,int4}',
proargmodes => '{i,o,o,o,o}',
proargnames => '{rootrelid,relid,parentrelid,isleaf,level}',
prosrc => 'pg_partition_tree' },
{ oid => '3425', descr => 'view ancestors of the partition',
proname => 'pg_partition_ancestors', prorows => '10', proretset => 't',
provolatile => 'v', prorettype => 'regclass', proargtypes => 'regclass',
proallargtypes => '{regclass,regclass}', proargmodes => '{i,o}',
proargnames => '{partitionid,relid}', prosrc => 'pg_partition_ancestors' },
# function to get the top-most partition root parent
{ oid => '3424', descr => 'get top-most partition root parent',
proname => 'pg_partition_root', prorettype => 'regclass',
proargtypes => 'regclass', prosrc => 'pg_partition_root' },
{ oid => '4549', descr => 'Unicode version used by Postgres',
proname => 'unicode_version', prorettype => 'text', proargtypes => '',
prosrc => 'unicode_version' },
{ oid => '6099', descr => 'Unicode version used by ICU, if enabled',
proname => 'icu_unicode_version', prorettype => 'text', proargtypes => '',
prosrc => 'icu_unicode_version' },
{ oid => '6105', descr => 'check valid Unicode',
proname => 'unicode_assigned', prorettype => 'bool', proargtypes => 'text',
prosrc => 'unicode_assigned' },
{ oid => '4350', descr => 'Unicode normalization',
proname => 'normalize', prorettype => 'text', proargtypes => 'text text',
prosrc => 'unicode_normalize_func' },
{ oid => '4351', descr => 'check Unicode normalization',
proname => 'is_normalized', prorettype => 'bool', proargtypes => 'text text',
prosrc => 'unicode_is_normalized' },
{ oid => '6198', descr => 'unescape Unicode characters',
proname => 'unistr', prorettype => 'text', proargtypes => 'text',
prosrc => 'unistr' },
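# Illustrative usage of the Unicode helper functions above (comment only, not
# catalog data); the string literals are arbitrary examples:
#   SELECT unicode_version(), icu_unicode_version();
#   SELECT unicode_assigned('data');
#   SELECT normalize(U&'\0061\0308', NFC);
#   SELECT U&'\0061\0308' IS NFC NORMALIZED;
#   SELECT unistr('d\0061t\+000061');   -- 'data'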
{ oid => '4596', descr => 'I/O',
proname => 'brin_bloom_summary_in', prorettype => 'pg_brin_bloom_summary',
proargtypes => 'cstring', prosrc => 'brin_bloom_summary_in' },
{ oid => '4597', descr => 'I/O',
proname => 'brin_bloom_summary_out', prorettype => 'cstring',
proargtypes => 'pg_brin_bloom_summary', prosrc => 'brin_bloom_summary_out' },
{ oid => '4598', descr => 'I/O',
proname => 'brin_bloom_summary_recv', provolatile => 's',
prorettype => 'pg_brin_bloom_summary', proargtypes => 'internal',
prosrc => 'brin_bloom_summary_recv' },
{ oid => '4599', descr => 'I/O',
proname => 'brin_bloom_summary_send', provolatile => 's',
prorettype => 'bytea', proargtypes => 'pg_brin_bloom_summary',
prosrc => 'brin_bloom_summary_send' },
{ oid => '4638', descr => 'I/O',
proname => 'brin_minmax_multi_summary_in',
prorettype => 'pg_brin_minmax_multi_summary', proargtypes => 'cstring',
prosrc => 'brin_minmax_multi_summary_in' },
{ oid => '4639', descr => 'I/O',
proname => 'brin_minmax_multi_summary_out', prorettype => 'cstring',
proargtypes => 'pg_brin_minmax_multi_summary',
prosrc => 'brin_minmax_multi_summary_out' },
{ oid => '4640', descr => 'I/O',
proname => 'brin_minmax_multi_summary_recv', provolatile => 's',
prorettype => 'pg_brin_minmax_multi_summary', proargtypes => 'internal',
prosrc => 'brin_minmax_multi_summary_recv' },
{ oid => '4641', descr => 'I/O',
proname => 'brin_minmax_multi_summary_send', provolatile => 's',
prorettype => 'bytea', proargtypes => 'pg_brin_minmax_multi_summary',
prosrc => 'brin_minmax_multi_summary_send' },
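# The *_in/_out/_recv/_send functions above are the I/O routines for the
# pg_brin_bloom_summary and pg_brin_minmax_multi_summary types and are not
# meant to be called directly.  The summaries are stored by the BRIN bloom
# and minmax-multi opclasses; illustrative index definitions (comment only;
# the table name and parameter values are hypothetical):
#   CREATE INDEX ON t USING brin
#     (a int4_bloom_ops(false_positive_rate = 0.05, n_distinct_per_range = 100));
#   CREATE INDEX ON t USING brin
#     (a int4_minmax_multi_ops(values_per_range = 32));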
{ oid => '6291', descr => 'arbitrary value from among input values',
proname => 'any_value', prokind => 'a', proisstrict => 'f',
prorettype => 'anyelement', proargtypes => 'anyelement',
prosrc => 'aggregate_dummy' },
{ oid => '6292', descr => 'aggregate transition function',
proname => 'any_value_transfn', prorettype => 'anyelement',
proargtypes => 'anyelement anyelement', prosrc => 'any_value_transfn' },
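# Illustrative usage (comment only, not catalog data): any_value() picks one
# arbitrary value from the aggregated input, e.g.
#   SELECT any_value(v) FROM (VALUES (1), (2), (3)) AS t(v);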
{ oid => '8436', descr => 'list of available WAL summary files',
proname => 'pg_available_wal_summaries', prorows => '100', proretset => 't',
provolatile => 'v', proparallel => 's', prorettype => 'record',
proargtypes => '', proallargtypes => '{int8,pg_lsn,pg_lsn}',
proargmodes => '{o,o,o}', proargnames => '{tli,start_lsn,end_lsn}',
prosrc => 'pg_available_wal_summaries' },
{ oid => '8437', descr => 'contents of a WAL summary file',
proname => 'pg_wal_summary_contents', prorows => '100', proretset => 't',
provolatile => 'v', proparallel => 's', prorettype => 'record',
proargtypes => 'int8 pg_lsn pg_lsn',
proallargtypes => '{int8,pg_lsn,pg_lsn,oid,oid,oid,int2,int8,bool}',
proargmodes => '{i,i,i,o,o,o,o,o,o}',
proargnames => '{tli,start_lsn,end_lsn,relfilenode,reltablespace,reldatabase,relforknumber,relblocknumber,is_limit_block}',
prosrc => 'pg_wal_summary_contents' },
{ oid => '8438', descr => 'WAL summarizer state',
proname => 'pg_get_wal_summarizer_state', provolatile => 'v',
proparallel => 's', prorettype => 'record', proargtypes => '',
proallargtypes => '{int8,pg_lsn,pg_lsn,int4}', proargmodes => '{o,o,o,o}',
proargnames => '{summarized_tli,summarized_lsn,pending_lsn,summarizer_pid}',
prosrc => 'pg_get_wal_summarizer_state' },
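# Illustrative usage of the WAL summarizer functions above (comment only, not
# catalog data):
#   SELECT * FROM pg_available_wal_summaries() ORDER BY tli, end_lsn;
#   SELECT c.*
#   FROM pg_available_wal_summaries() AS s,
#        LATERAL pg_wal_summary_contents(s.tli, s.start_lsn, s.end_lsn) AS c;
#   SELECT * FROM pg_get_wal_summarizer_state();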
# GiST stratnum implementations
{ oid => '8047', descr => 'GiST support',
proname => 'gist_stratnum_identity', prorettype => 'int2',
proargtypes => 'int2', prosrc => 'gist_stratnum_identity' },
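# gist_stratnum_identity is a "stratnum" GiST support function that returns
# its argument unchanged; illustrative call (comment only, not catalog data):
#   SELECT gist_stratnum_identity(3::int2);   -- returns 3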
]