Commit Graph

13786 Commits

Author SHA1 Message Date
Tom Lane f34f0e4c58 Last-minute updates for release notes.
The set of functions that need parallel-safety adjustments isn't the
same in 9.6 as 10, so I shouldn't have blindly back-patched that list.
Adjust as needed.  Also, provide examples of the commands to issue.
2018-05-07 13:13:27 -04:00
Tom Lane b56d5f230f Last-minute updates for release notes.
Security: CVE-2018-1115
2018-05-07 11:50:15 -04:00
Peter Eisentraut a43a4509f8 doc: Improve spelling and wording a bit 2018-05-07 11:05:19 -04:00
Peter Eisentraut baf21b922a doc: Fix minor markup issue
There shouldn't be a line break between two adjacent tags, because that
will appear as whitespace in the output.  (The rendering engine might in
turn collapse that whitespace away, so it might not actually make a
difference, but it's more correct this way.)
2018-05-07 10:21:47 -04:00
Robert Haas f955d7ee16 Documentation updates for partitioning.
Takayuki Tsunakawa

Discussion: http://postgr.es/m/0A3221C70F24FB45833433255569204D1F965627@G01JPEXMBYT05
2018-05-07 09:48:47 -04:00
Tom Lane 2667e019c6 Release notes for 10.4, 9.6.9, 9.5.13, 9.4.18, 9.3.23. 2018-05-06 15:30:44 -04:00
Tom Lane d160882a17 Fix bootstrap parser so that its keywords are unreserved words.
Mark Dilger pointed out that the bootstrap parser does not allow
any of its keywords to appear as column values unless they're quoted,
and proposed dealing with that by quoting such values in genbki.pl.
Looking closer, though, we also have that problem with respect to table,
column, and type names appearing in the .bki file: the parser would fail
if any of those matched any of its keywords.  While so far there have
been no conflicts (that I've heard of), this seems like a booby trap
waiting to catch somebody.  Rather than clutter genbki.pl with enough
quoting logic to handle all that, let's make the bootstrap parser grow
up a little bit and treat its keywords as unreserved.

Experimentation shows that it's fairly easy to do so with the exception
of _null_, which I don't have a big problem with keeping as a reserved
word.  The only change needed is that we can't have the "close" command
take an optional table name: it has to either require or forbid the
table name to avoid shift/reduce conflicts.  genbki.pl has historically
always included the table name, so I took that option.

The implementation has bootscanner.l passing forward the string value
of each keyword, in case bootparse.y needs that.  This avoids needing to
know the precise spelling of each keyword in bootparse.y, which is good
because that's not always obvious from the token name.

Discussion: https://postgr.es/m/3024FC91-DB6D-4732-B31C-DF772DF039A0@gmail.com
2018-05-05 16:23:07 -04:00
Tom Lane 488ccfe40a First-draft release notes for 10.4.
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.
2018-05-04 18:56:50 -04:00
Peter Eisentraut bcded2609a doc: Correct update on limitations of partitions
Amit Langote
2018-05-02 12:06:25 -04:00
Heikki Linnakangas f66912b0a0 Remove remaining references to version-0 calling convention in docs.
Support for version-0 calling convention was removed in PostgreSQL v10.
Change the SPI example to use version 1 convention, so that it actually
works.

Author: John Naylor
Discussion: https://www.postgresql.org/message-id/CAJVSVGVydmhLBdm80Rw3G8Oq5TnA7eCxUv065yoZfNfLbF1tzA@mail.gmail.com
2018-05-02 17:51:11 +03:00
Bruce Momjian 7f6570b3a8 docs: Remove tabs recently introduced by me. 2018-05-02 08:33:36 -04:00
Bruce Momjian 81ff9ec8f8 doc comments: rendering engines are another UTF8 restriction 2018-05-01 10:17:55 -04:00
Bruce Momjian 3960fa5f63 docs comments: clarify why not to use UTF8 still in docs
Back branches still are SGML.
2018-05-01 09:26:11 -04:00
Peter Eisentraut 5a6ab0a1b1 doc: Update limitations of partitions
David Rowley, Amit Langote
2018-05-01 07:48:51 -04:00
Tom Lane 84549ebd4c Tweak reformat_dat_file.pl to make it more easily hand-invokable.
Use the same code we already applied in duplicate_oids and unused_oids
to let this script find Catalog.pm without help.  This removes the need
to supply a -I switch in most cases.

Also, mark the script executable, again to follow the precedent of
duplicate_oids and unused_oids.  Now you can just do
"./reformat_dat_file.pl pg_proc.dat"
if you want to reformat only one or a few .dat files rather than all.

It'd be possible to remove the -I switches in the Makefile's convenience
targets, but I chose to leave them: they don't hurt anything, and it's
possible that in weird VPATH situations they might be of value.
2018-04-28 16:09:03 -04:00
Tom Lane 4094031dd3 Assorted minor doc/comment fixes.
Identify pg_replication_origin as a shared catalog in catalogs.sgml,
using the same boilerplate wording used for most other shared catalogs
(and tweak another place where someone had randomly deviated from
that boilerplate).

Make an example in mmgr/README more consistent with surrounding text.

Update an obsolete cross-reference in a comment in storage/block.h.

Zhuo Ql

Discussion: https://postgr.es/m/44296255.1819230.1524889719001@mail.yahoo.com
2018-04-28 11:46:15 -04:00
Tom Lane 2e83e6bd74 Adjust hints and docs to suggest CREATE EXTENSION not CREATE LANGUAGE.
The core PLs have been extension-ified for seven years now, and we can
reasonably hope that all out-of-core PLs have been too.  So adjust a few
places that were still recommending CREATE LANGUAGE as the user-level
way to install a PL.

Discussion: https://postgr.es/m/CA+TgmoaJTUDMSuSCg4k08Dv8vhbrJq9nP3ZfPbmysVz_616qxw@mail.gmail.com
2018-04-27 13:42:03 -04:00
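A minimal SQL sketch of the recommendation in the commit above; the language name plperl is only an example:

    -- Preferred, user-level way to install a procedural language:
    CREATE EXTENSION plperl;
    -- rather than the lower-level command it wraps:
    -- CREATE LANGUAGE plperl;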
Tom Lane a0854f1072 Avoid parsing catalog data twice during BKI file construction.
In the wake of commit 5602265f7, we were doing duplicate-OID detection
quite inefficiently, by invoking duplicate_oids which does all the same
parsing of catalog headers and .dat files as genbki.pl does.  That adds
under half a second on modern machines, but quite a bit more on slow
buildfarm critters, so it seems worth avoiding.  Let's just extend
genbki.pl a little so it can also detect duplicate OIDs, and remove
the duplicate_oids call from the build process.

(This also means that duplicate OID detection will happen during
Windows builds, which AFAICS it didn't before.)

This makes the use-case for duplicate_oids a bit dubious, but it's
possible that people will still want to run that check without doing
a whole build run, so let's keep that script.

In passing, move down genbki.pl's creation of its temp output files
so that it doesn't happen until after we've done parsing and validation
of the input.  This avoids leaving a lot of clutter around after a
failure.

John Naylor and Tom Lane

Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
2018-04-26 13:22:27 -04:00
Bruce Momjian 1900365c1e docs: remove "III" version text from pgAdmin link
Reported-by: vodevsh@gmail.com

Discussion: https://postgr.es/m/152404286919.19366.7988650271505173666@wrigleys.postgresql.org

Backpatch-through: 9.3
2018-04-26 11:10:43 -04:00
Tom Lane f04d4ac919 Reindent Perl files with perltidy version 20170521.
Discussion: https://postgr.es/m/CABUevEzK3cNiHZQ18f5tK0guoT+cN_jWeVzhYYxY=r+1Q3SmoA@mail.gmail.com
2018-04-25 14:00:19 -04:00
Teodor Sigaev 4eaf7eaccb Add missing and dangling downlink checks to amcheck
When bt_index_parent_check() is called with the heapallindexed option,
allocate a second Bloom filter to fingerprint block numbers that appear
in the downlinks of internal pages.  Use Bloom filter probes when
walking the B-Tree to detect missing downlinks.  This can detect subtle
problems with page deletion/VACUUM, such as corruption caused by the bug
just fixed in commit 6db4b499.

The downlink Bloom filter is bound in size by work_mem.  Its optimal
size is typically far smaller than that of the regular heapallindexed
Bloom filter, especially when the index has high fan-out.

Author: Peter Geoghegan
Reviewer: Teodor Sigaev
Discussion: https://postgr.es/m/CAH2-WznUzY4fWTjm1tBB3JpVz8cCfz7k_qVp5BhuPyhivmWJFg@mail.gmail.com
2018-04-25 18:02:55 +03:00
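A hedged sketch of running the check described above; the index name is hypothetical, and the second argument enables the heapallindexed (and thus downlink-fingerprinting) checks:

    -- Load amcheck, then verify parent/child relationships in a B-tree index.
    CREATE EXTENSION IF NOT EXISTS amcheck;
    -- Passing true for heapallindexed also builds the Bloom filters used to
    -- detect missing downlinks.
    SELECT bt_index_parent_check('some_btree_index'::regclass, true);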
Magnus Hagander 7f58f666cd Fix typo
Author: Michael Paquier
2018-04-25 09:29:50 +02:00
Alvaro Herrera 055fb8d33d Add GUC enable_partition_pruning
This controls both plan-time and execution-time new-style partition
pruning.  While finer-grain control is possible (maybe using an enum GUC
instead of boolean), there doesn't seem to be much need for that.

This new parameter controls partition pruning for all queries:
trivially, SELECT queries that affect partitioned tables are naturally
under its control since they are using the new technology.  However,
while UPDATE/DELETE queries do not use the new code, we make the new GUC
control their behavior as well (taking that control away from
constraint_exclusion), because it is more natural and leads to a smoother
transition to a future in which those queries will also use the new
pruning code.

Constraint exclusion still controls pruning for regular inheritance
situations (those not involving partitioned tables).

Author: David Rowley
Review: Amit Langote, Ashutosh Bapat, Justin Pryzby, David G. Johnston
Discussion: https://postgr.es/m/CAKJS1f_0HwsxJG9m+nzU+CizxSdGtfe6iF_ykPYBiYft302DCw@mail.gmail.com
2018-04-23 17:57:43 -03:00
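A quick SQL sketch of the new GUC; the partitioned table measurement and its logdate column are hypothetical:

    SHOW enable_partition_pruning;          -- on by default
    SET enable_partition_pruning = off;     -- every partition gets scanned
    EXPLAIN SELECT * FROM measurement WHERE logdate = DATE '2018-04-23';
    SET enable_partition_pruning = on;      -- pruning resumes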
Tom Lane 4df58f7ed7 Fix handling of partition bounds for boolean partitioning columns.
Previously, you could partition by a boolean column as long as you
spelled the bound values as string literals, for instance FOR VALUES
IN ('t').  The trouble with this is that ruleutils.c printed that as
FOR VALUES IN (TRUE), which is reasonable syntax but wasn't accepted by
the grammar.  That results in dump-and-reload failures for such cases.

Apply a minimal fix that just causes TRUE and FALSE to be converted to
strings 'true' and 'false'.  This is pretty grotty, but it's too late for
a more principled fix in v11 (to say nothing of v10).  We should revisit
the whole issue of how partition bound values are parsed for v12.

Amit Langote

Discussion: https://postgr.es/m/e05c5162-1103-7e37-d1ab-6de3e0afaf70@lab.ntt.co.jp
2018-04-23 15:29:11 -04:00
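A minimal sketch of boolean list partitioning as described above (table and partition names are hypothetical); the bounds are written as string literals, which is also the form the fix emits when dumping:

    CREATE TABLE flags (active boolean NOT NULL) PARTITION BY LIST (active);
    CREATE TABLE flags_true  PARTITION OF flags FOR VALUES IN ('true');
    CREATE TABLE flags_false PARTITION OF flags FOR VALUES IN ('false');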
Teodor Sigaev 9975c128a1 Update trigram example in docs to correct state
Author: Liudmila Mantrova
2018-04-23 16:55:13 +03:00
Magnus Hagander 9cad926eb8 Add missing documentation for BGWORKER_BYPASS_ALLOWCONN
This was missed in eed1ce72e1.

Reported by Michael Paquier
2018-04-22 14:03:36 +02:00
Peter Eisentraut 56811e5732 doc: Restructure authentication methods sections
Move the authentication methods sections up to sect1, so they are easier
to navigate in HTML.
2018-04-21 10:17:23 -04:00
Heikki Linnakangas fe7fc52645 Improve docs for the new INCLUDE directive in CREATE/ALTER TABLE.
Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/20180411082020.GD19732%40paquier.xyz
2018-04-18 05:45:32 -04:00
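A short hedged example of the INCLUDE syntax in ALTER TABLE; the table, constraint, and column names are hypothetical:

    CREATE TABLE orders (id bigint, customer_id bigint, total numeric);
    -- The unique index enforcing the constraint also carries total as a
    -- non-key (INCLUDE) column, enabling index-only scans on (id, total).
    ALTER TABLE orders
        ADD CONSTRAINT orders_id_unique UNIQUE (id) INCLUDE (total);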
Tom Lane 55d26ff638 Rationalize handling of single and double quotes in bootstrap data.
Change things around so that proper quoting of values interpolated into
the BKI data by initdb is the responsibility of initdb, not something
we half-heartedly handle by putting double quotes into the raw BKI data.
(Note: experimentation shows that it still doesn't work to put a double
quote into the initial superuser username, but that's the fault of
inadequate quoting while interpolating the name into SQL scripts;
the BKI aspect of it works fine now.)

Having done that, we can remove the special-case handling of values
that look like "something" from genbki.pl, and instead teach it to
escape double --- and single --- quotes properly.  This removes the
nowhere-documented need to treat those specially in the BKI source
data; whatever you write will be passed through unchanged into the
inserted data value, modulo Perl's rules about single-quoted strings.

Add documentation explaining the (pre-existing) handling of backslashes
in the BKI data.

Per an earlier discussion with John Naylor.

Discussion: https://postgr.es/m/CAJVSVGUNao=-Q2-vAN3PYcdF5tnL5JAHwGwzZGuYHtq+Mk_9ng@mail.gmail.com
2018-04-17 19:53:50 -04:00
Tatsuo Ishii 03030512d1 Add more infinite recursion detection while locking a view.
Also add regression test cases for detecting infinite recursion to the
view locking tests.  Some documentation enhancements.  Patch by Yugo Nagata.
2018-04-17 16:59:17 +09:00
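A hedged sketch of the view locking being hardened here; some_view is hypothetical, and locking it also locks its underlying base tables recursively:

    BEGIN;
    LOCK TABLE some_view IN ACCESS EXCLUSIVE MODE;
    -- ... work while the view's base relations are locked ...
    COMMIT;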
Magnus Hagander 90372729f4 Fix build of pg_verify_checksum docs
They were accidentally excluded when reverting the backend online
checksum functionality, and since they weren't built, the incorrect
reference to a removed section also did not trigger a problem.

Author: Christoph Berg
2018-04-15 13:57:02 +02:00
Magnus Hagander 645387927f Clarify pg_verify_checksum documentation
Make it clear that a cluster has to be shut down cleanly before
pg_verify_checksum can be run against it.

Author: Michael Paquier
Review: Daniel Gustafsson
2018-04-15 13:52:57 +02:00
Magnus Hagander 44e2df461f Remove -f option from pg_verify_checksums
This option makes no sense when the cluster checksum state cannot be
changed, and should have been removed in the revert.

Author: Daniel Gustafsson
Review: Michael Paquier
2018-04-15 13:52:48 +02:00
Simon Riggs 08ea7a2291 Revert MERGE patch
This reverts commits d204ef6377,
83454e3c2b and a few more commits thereafter
(complete list at the end) related to MERGE feature.

While the feature was fully functional, with sufficient test coverage and
the necessary documentation, it was felt that some parts of the executor
and parse-analyzer could use a different design, and it wasn't possible to
do that in the available time.  So it was decided to revert the patch for
PG11 and retry in the future.

Thanks again to all reviewers and bug reporters.

List of commits reverted, in reverse chronological order:

 f1464c5380 Improve parse representation for MERGE
 ddb4158579 MERGE syntax diagram correction
 530e69e59b Allow cpluspluscheck to pass by renaming variable
 01b88b4df5 MERGE minor errata
 3af7b2b0d4 MERGE fix variable warning in non-assert builds
 a5d86181ec MERGE INSERT allows only one VALUES clause
 4b2d44031f MERGE post-commit review
 4923550c20 Tab completion for MERGE
 aa3faa3c7a WITH support in MERGE
 83454e3c2b New files for MERGE
 d204ef6377 MERGE SQL Command following SQL:2016

Author: Pavan Deolasee
Reviewed-by: Michael Paquier
2018-04-12 11:22:56 +01:00
Peter Eisentraut f1f537cb46 doc: Add more information about logical replication privileges
In particular, the requirement to have SELECT privilege for the initial
table copy was previously not documented.

Author: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>
2018-04-11 09:01:57 -04:00
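A hedged sketch of the privilege requirement documented above; the role and schema names are hypothetical:

    -- On the publisher: the role used by the subscription's connection needs
    -- SELECT on the published tables for the initial table copy.
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO subscription_role;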
Peter Eisentraut 036ca6f7bb doc: Fix typos in pgbench documentation
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Edmund Horner <ejrh00@gmail.com>
2018-04-11 08:34:30 -04:00
Tom Lane 3b8f6e75f3 Fix partial-build problems introduced by having more generated headers.
Commit 372728b0d created some problems for usages like building a
subdirectory without having first done "make all" at the top level,
or for proceeding directly to "make install" without "make all".
The only reasonably clean way to fix this seems to be to force the
submake-generated-headers rule to fire in *any* "make all" or "make
install" command anywhere in the tree.  To avoid lots of redundant work,
as well as parallel make jobs possibly clobbering each others' output, we
still need to be sure that the rule fires only once in a recursive build.
For that, adopt the same MAKELEVEL hack previously used for "temp-install".
But try to document it a bit better.

The submake-errcodes mechanism previously used in src/port/ and src/common/
is subsumed by this, so we can get rid of those special cases.  It was
inadequate for src/common/ anyway after the aforesaid commit, and it always
risked parallel attempts to build errcodes.h.

Discussion: https://postgr.es/m/E1f5FAB-0006LU-MB@gemulon.postgresql.org
2018-04-09 16:42:10 -04:00
Tom Lane 2cdf359fc4 Make reformat_dat_file.pl preserve all blank lines.
In its original form, reformat_dat_file.pl smashed consecutive blank
lines to a single blank line, which was helpful for mopping up excess
whitespace during the bootstrap data format conversion.  But going
forward, there seems little reason to do that; if developers want to
put in multiple blank lines, let 'em.  This makes it conform to the
documentation I (tgl) wrote, too.

In passing, clean up some sloppy markup choices in bki.sgml.

John Naylor

Discussion: https://postgr.es/m/28827.1523039259@sss.pgh.pa.us
2018-04-09 14:58:39 -04:00
Magnus Hagander a228cc13ae Revert "Allow on-line enabling and disabling of data checksums"
This reverts the backend sides of commit 1fde38beaa.
I have, at least for now, left the pg_verify_checksums tool in place, as
this tool can be very valuable without the rest of the patch as well,
and since it's a read-only tool that only runs when the cluster is down
it should be a lot safer.
2018-04-09 19:03:42 +02:00
Teodor Sigaev 03c11796a9 Improve covering index documentation
Add missing description of pg_constraint.conincluding

Shinoda, Noriyoshi and Alexander Korotkov
2018-04-09 17:53:42 +03:00
Tom Lane 893e9e6540 Doc: clarify explanation of pg_dump usage.
This section confusingly used both "infile" and "outfile" to refer
to the same file, i.e. the textual output of pg_dump.  Use "dumpfile"
for both cases, per suggestion from Jonathan Katz.

Discussion: https://postgr.es/m/152311295239.31235.6487236091906987117@wrigleys.postgresql.org
2018-04-08 16:35:42 -04:00
Tom Lane 372728b0d4 Replace our traditional initial-catalog-data format with a better design.
Historically, the initial catalog data to be installed during bootstrap
has been written in DATA() lines in the catalog header files.  This had
lots of disadvantages: the format was badly underdocumented, it was
very difficult to edit the data in any mechanized way, and due to the
lack of any abstraction the data was verbose, hard to read/understand,
and easy to get wrong.

Hence, move this data into separate ".dat" files and represent it in a way
that can easily be read and rewritten by Perl scripts.  The new format is
essentially "key => value" for each column; while it's a bit repetitive,
explicit labeling of each value makes the data far more readable and less
error-prone.  Provide a way to abbreviate entries by omitting field values
that match a specified default value for their column.  This allows removal
of a large amount of repetitive boilerplate and also lowers the barrier to
adding new columns.

Also teach genbki.pl how to translate symbolic OID references into
numeric OIDs for more cases than just "regproc"-like pg_proc references.
It can now do that for regprocedure-like references (thus solving the
problem that regproc is ambiguous for overloaded functions), operators,
types, opfamilies, opclasses, and access methods.  Use this to turn
nearly all OID cross-references in the initial data into symbolic form.
This represents a very large step forward in readability and error
resistance of the initial catalog data.  It should also reduce the
difficulty of renumbering OID assignments in uncommitted patches.

Also, solve the longstanding problem that frontend code that would like to
use OID macros and other information from the catalog headers often had
difficulty with backend-only code in the headers.  To do this, arrange for
all generated macros, plus such other declarations as we deem fit, to be
placed in "derived" header files that are safe for frontend inclusion.
(Once clients migrate to using these pg_*_d.h headers, it will be possible
to get rid of the pg_*_fn.h headers, which only exist to quarantine code
away from clients.  That is left for follow-on patches, however.)

The now-automatically-generated macros include the Anum_xxx and Natts_xxx
constants that we used to have to update by hand when adding or removing
catalog columns.

Replace the former manual method of generating OID macros for pg_type
entries with an automatic method, ensuring that all built-in types have
OID macros.  (But note that this patch does not change the way that
OID macros for pg_proc entries are built and used.  It's not clear that
making that match the other catalogs would be worth extra code churn.)

Add SGML documentation explaining what the new data format is and how to
work with it.

Despite being a very large change in the catalog headers, there is no
catversion bump here, because postgres.bki and related output files
haven't changed at all.

John Naylor, based on ideas from various people; review and minor
additional coding by me; previous review by Alvaro Herrera

Discussion: https://postgr.es/m/CAJVSVGWO48JbbwXkJz_yBFyGYW-M9YWxnPdxJBUosDC9ou_F0Q@mail.gmail.com
2018-04-08 13:17:27 -04:00
Andrew Gierth 49b0e300f7 Support index INCLUDE in the AM properties interface.
This rectifies an oversight in commit 8224de4f4, by adding a new
property 'can_include' for pg_indexam_has_property, and adjusting the
results of pg_index_column_has_property to give more appropriate
results for INCLUDEd columns.
2018-04-08 06:02:05 +01:00
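A small sketch of querying the new property added here:

    -- Does the btree access method support INCLUDE columns?
    SELECT a.amname, pg_indexam_has_property(a.oid, 'can_include')
    FROM pg_am a
    WHERE a.amname = 'btree';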
Stephen Frost c37b3d08ca Allow group access on PGDATA
Allow the cluster to be optionally init'd with read access for the
group.

This means a relatively non-privileged user can perform a backup of the
cluster without requiring write privileges, which enhances security.

The mode of PGDATA is used to determine whether group permissions are
enabled for directory and file creates.  This method was chosen as it's
simple and works well for the various utilities that write into PGDATA.

Changing the mode of PGDATA manually will not automatically change the
mode of all the files contained therein.  If the user would like to
enable group access on an existing cluster then changing the mode of all
the existing files will be required.  Note that pg_upgrade will
automatically change the mode of all migrated files if the new cluster
is init'd with the -g option.

Tests are included for the backend and all the utilities which operate
on the PG data directory to ensure that the correct mode is set based on
the data directory permissions.

Author: David Steele <david@pgmasters.net>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
2018-04-07 17:45:39 -04:00
Alvaro Herrera 499be013de Support partition pruning at execution time
Existing partition pruning is only able to work at plan time, for query
quals that appear in the parsed query.  This is good but limiting, as
there can be parameters that appear later that can be usefully used to
further prune partitions.

This commit adds support for pruning subnodes of Append which cannot
possibly contain any matching tuples, during execution, by evaluating
Params to determine the minimum set of subnodes that can possibly match.
We support more than just simple Params in WHERE clauses. Support
additionally includes:

1. Parameterized Nested Loop Joins: The parameter from the outer side of the
   join can be used to determine the minimum set of inner side partitions to
   scan.

2. Initplans: Once an initplan has been executed we can then determine which
   partitions match the value from the initplan.

Partition pruning is performed in two ways.  When Params external to the plan
are found to match the partition key we attempt to prune away unneeded Append
subplans during the initialization of the executor.  This allows us to bypass
the initialization of non-matching subplans meaning they won't appear in the
EXPLAIN or EXPLAIN ANALYZE output.

For parameters whose values are only known during actual execution, the
pruning of these subplans must wait until execution time.  Subplans which
are eliminated during this stage of pruning are still visible in the
EXPLAIN output, so EXPLAIN ANALYZE must be consulted to determine whether
pruning has actually taken place.  If a certain Append subplan was never
executed because its partition was eliminated, the execution timing area
will state "(never executed)".  In the case of parameterized nested loops,
the number of loops reported for certain subplans may instead appear lower
than for others, because those subplans were scanned fewer times.  This is
because the list of matching subnodes is re-evaluated whenever a parameter
that was found to match the partition key changes.

This commit required some additional infrastructure that permits the
building of a data structure which is able to perform the translation of
the matching partition IDs, as returned by get_matching_partitions, into
the list index of a subpaths list, as exist in node types such as
Append, MergeAppend and ModifyTable.  This allows us to translate a list
of clauses into a Bitmapset of all the subpath indexes which must be
included to satisfy the clause list.

Author: David Rowley, based on an earlier effort by Beena Emerson
Reviewers: Amit Langote, Robert Haas, Amul Sul, Rajkumar Raghuwanshi,
Jesper Pedersen
Discussion: https://postgr.es/m/CAOG9ApE16ac-_VVZVvv0gePSgkg_BwYEV1NBqZFqDR2bBE0X0A@mail.gmail.com
2018-04-07 17:54:39 -03:00
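A hedged sketch of run-time pruning with a parameterized query; the partitioned table measurement is hypothetical, and pruning at executor startup depends on a generic plan being chosen:

    PREPARE q(date) AS SELECT count(*) FROM measurement WHERE logdate = $1;
    -- With a generic plan, partitions that cannot match $1 are either not
    -- initialized at all (and absent from the plan) or reported as
    -- "(never executed)" in EXPLAIN ANALYZE.
    EXPLAIN (ANALYZE) EXECUTE q('2018-04-07');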
Teodor Sigaev 8224de4f42 Indexes with INCLUDE columns and their support in B-tree
This patch introduces the INCLUDE clause to the index definition.  This
clause specifies a list of columns which will be included as a non-key part
of the index.  The INCLUDE columns exist solely to allow more queries to
benefit from index-only scans.  Also, such columns don't need to have
appropriate operator classes.  Expressions are not supported as INCLUDE
columns since they cannot be used in index-only scans.

Index access methods supporting INCLUDE are indicated by the amcaninclude
flag in IndexAmRoutine.  For now, only B-tree indexes support the INCLUDE
clause.

In B-tree indexes INCLUDE columns are truncated from pivot index tuples
(tuples located in non-leaf pages and high keys).  Therefore, B-tree
indexes now might have a variable number of attributes.  This patch also
provides a generic facility to support that: pivot tuples store their
number of attributes in t_tid.ip_posid, and the free 13th bit of t_info
indicates that.  This facility will simplify further support of index
suffix truncation.  The changes above are backward-compatible; pg_upgrade
doesn't need any special handling of B-tree indexes for this.

Bump catalog version

Author: Anastasia Lubennikova with contribution by Alexander Korotkov and me
Reviewed by: Peter Geoghegan, Tomas Vondra, Antonin Houska, Jeff Janes,
			 David Rowley, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/flat/56168952.4010101@postgrespro.ru
2018-04-07 23:00:39 +03:00
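A minimal covering-index sketch per the feature above; table and index names are hypothetical:

    CREATE TABLE items (id bigint, payload text);
    -- id is the key; payload is stored as a non-key column so the query
    -- below can be answered by an index-only scan.
    CREATE UNIQUE INDEX items_id_covering ON items (id) INCLUDE (payload);
    SELECT id, payload FROM items WHERE id = 42;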
Teodor Sigaev 1c1791e000 Add json(b)_to_tsvector function
Jsonb has a complex nature, so there is no single best way to convert it to
a tsvector for full text search.  The existing to_tsvector(json(b)) converts
only string values, but it can also be useful to index keys, numeric values
and even boolean values.  To allow that, json(b)_to_tsvector takes a second
required argument containing a list of the desired types of json fields.
The second argument is currently a jsonb scalar or array, with the
possibility of adding new options in the future.

Bump catalog version

Author: Dmitry Dolgov with some editorization by me
Reviewed by: Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CA+q6zcXJQbS1b4kJ_HeAOoOc=unfnOrUEL=KGgE32QKDww7d8g@mail.gmail.com
2018-04-07 20:58:03 +03:00
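A quick sketch of the new function; the document and the filter list are only examples:

    -- Index string and numeric values; 'key', 'boolean', or 'all' could be
    -- listed in the filter as well.
    SELECT jsonb_to_tsvector('english',
                             '{"title": "fat cats", "pages": 42}'::jsonb,
                             '["string", "numeric"]');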
Peter Eisentraut 039eb6e92f Logical replication support for TRUNCATE
Update the built-in logical replication system to make use of the
previously added logical decoding for TRUNCATE support.  Add the
required truncate callback to pgoutput and a new logical replication
protocol message.

Publications get a new attribute to determine whether to replicate
truncate actions.  When updating a publication via pg_dump from an older
version, this is not set, thus preserving the previous behavior.

Author: Simon Riggs <simon@2ndquadrant.com>
Author: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
2018-04-07 11:34:11 -04:00
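A hedged sketch of enabling TRUNCATE replication for a publication; the publication and table names are hypothetical:

    CREATE PUBLICATION pub_with_truncate FOR TABLE inventory
        WITH (publish = 'insert, update, delete, truncate');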
Peter Eisentraut 5dfd1e5a66 Logical decoding of TRUNCATE
Add a new WAL record type for TRUNCATE, which is only used when
wal_level >= logical.  (For physical replication, TRUNCATE is already
replicated via SMGR records.)  Add new callback for logical decoding
output plugins to receive TRUNCATE actions.

Author: Simon Riggs <simon@2ndquadrant.com>
Author: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
2018-04-07 11:34:10 -04:00
Tom Lane eb2a0e00b1 Doc: fix broken markup.
Commit 3d956d956 was apparently not checked against HEAD's doc toolchain.
Per buildfarm.
2018-04-06 20:54:52 -04:00