Commit Graph

6204 Commits

Author SHA1 Message Date
Peter Eisentraut feb270d100 pg_dump: Fix dump of generated columns in partitions
The previous fix for dumping of inherited generated columns
(0bf83648a5) must not be applied to
partitions, since, unlike normal inherited tables, they are always
dumped separately and reattached.

Reported-by: Santosh Udupi <email@hitha.net>
Discussion: https://www.postgresql.org/message-id/flat/CACLRvHZ4a-%2BSM_159%2BtcrHdEqxFrG%3DW4gwTRnwf7Oj0UNj5R2A%40mail.gmail.com
2021-05-04 14:18:16 +02:00
Tom Lane c9c37ae03f Improve wording of some pg_upgrade failure reports.
Don't advocate dropping a whole table when dropping a column would
serve.  While at it, try to make the layout of these messages a
bit cleaner and more consistent.

Per suggestion from Daniel Gustafsson.  No back-patch, as we generally
don't like to churn translatable messages in released branches.

Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
2021-04-29 15:40:34 -04:00
Tom Lane 57c081de0a Fix some more omissions in pg_upgrade's tests for non-upgradable types.
Commits 29aeda6e4 et al closed up some oversights involving not checking
for non-upgradable types within container types, such as arrays and
ranges.  However, I only looked at version.c, failing to notice that
there were substantially-equivalent tests in check.c.  (The division
of responsibility between those files is less than clear...)

In addition, because genbki.pl does not guarantee that auto-generated
rowtype OIDs will hold still across versions, we need to consider that
the composite type associated with a system catalog or view is
non-upgradable.  It seems unlikely that someone would have a user
column declared that way, but if they did, trying to read it in another
PG version would likely draw "no such pg_type OID" failures, thanks
to the type OID embedded in composite Datums.

To support the composite and reg*-type cases, extend the recursive
query that does the search to allow any base query that returns
a column of pg_type OIDs, rather than limiting it to exactly one
starting type.

As before, back-patch to all supported branches.

Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
2021-04-29 15:24:37 -04:00
Alvaro Herrera 6dd1042eda psql: tab-complete ALTER ... DETACH CONCURRENTLY / FINALIZE
New keywords per 71f4c8c6f7.

Discussion: https://postgr.es/m/20210422204035.GA25929@alvherre.pgsql
2021-04-26 16:14:10 -04:00
Peter Eisentraut 38c9a5938a Fix pg_upgrade test on Cygwin
The verification of permissions doesn't succeed on Cygwin, because the
required feature is not implemented for Cygwin at the moment.  So skip
this part of the test, like MinGW already does.
2021-04-26 12:10:46 +02:00
Peter Eisentraut 3cbea581c7 Remove unused function argument
This was already unused in the initial commit
257836a755.  Apparently, it was used in
an earlier proposed patch version.
2021-04-26 07:05:21 +02:00
Andrew Dunstan b859d94c63 Provide pg_amcheck with an --install-missing option
This will install amcheck in the database if not present. The default
schema for the extension is pg_catalog, but this can be overridden by
providing a value for the option.

Mark Dilger, slightly editorialized by me.

Discussion (rather divergent): https://postgr.es/m/bdc0f7c2-09e3-ee57-8471-569dfb509234@dunslane.net
2021-04-24 10:13:07 -04:00
Peter Eisentraut 82c3cd9741 Factor out system call names from error messages
Instead, put them in via a format placeholder.  This reduces the
number of distinct translatable messages and also reduces the chances
of typos during translation.  We already did this for the system call
arguments in a number of cases, so this is just the same thing taken a
bit further.

Discussion: https://www.postgresql.org/message-id/flat/92d6f545-5102-65d8-3c87-489f71ea0a37%40enterprisedb.com
2021-04-23 14:21:37 +02:00
Peter Eisentraut add5fad78a pg_amcheck: Use logging functions
This was already mostly done, but some error messages were printed the
long way.
2021-04-23 09:55:23 +02:00
Michael Paquier 62aa2bb293 Remove use of [U]INT64_FORMAT in some translatable strings
%lld with (long long) or %llu with (unsigned long long) is better
suited here.  This is similar to 3286065.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210421.200000.1462448394029407895.horikyota.ntt@gmail.com
2021-04-23 13:25:49 +09:00
Peter Eisentraut 39d0928a0e Use correct format placeholder for timeline IDs
Should be %u rather than %d.
2021-04-21 08:26:18 +02:00
Peter Eisentraut 3286065651 Don't use INT64_FORMAT inside message strings
Use %lld and cast to long long int instead.
2021-04-21 08:07:37 +02:00
Michael Paquier 22b2dec31b Add CURRENT_ROLE to list of roles for tab completion of GRANT in psql
This compatibility was added in 45b9805, but psql's tab completion
was not updated to include it.
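
For reference, a hedged sketch of the statement form that now
tab-completes (the table name is hypothetical):

    GRANT SELECT ON accounts TO CURRENT_ROLE;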

Author: Wei Wang
Reviewed-by: Aleksander Alekseev
Discussion: https://postgr.es/m/OS3PR01MB6275935F62E161BCD393D6559E489@OS3PR01MB6275.jpnprd01.prod.outlook.com
2021-04-21 10:34:43 +09:00
Michael Paquier 7ef8b52cf0 Fix typos and grammar in comments and docs
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210416070310.GG3315@telsasoft.com
2021-04-19 11:32:30 +09:00
Michael Paquier c731f9187b Replace magic constants for seek() calls in perl scripts
A couple of tests have been using 0 as a magic constant where
SEEK_SET can be used instead.  This makes the code easier to
understand, and more
consistent with the changes done in 3c5b068.

Per discussion with Andrew Dunstan.

Discussion: https://postgr.es/m/YHrc24AgJQ6tQ1q0@paquier.xyz
2021-04-19 10:15:35 +09:00
Peter Eisentraut 4ed7f0599a Add missing source files to nls.mk 2021-04-18 16:11:58 +02:00
Tom Lane e809493725 Split function definitions out of system_views.sql into a new file.
Invent system_functions.sql to carry the function definitions that
were formerly in system_views.sql.  The function definitions were
already a quarter of the file and are about to be more, so it seems
appropriate to give them their own home.

In passing, fix an oversight in dfb75e478: it neglected to call
check_input() for system_constraints.sql.

Discussion: https://postgr.es/m/3956760.1618529139@sss.pgh.pa.us
2021-04-16 18:37:02 -04:00
Peter Eisentraut 25593d7d33 psql: Small fixes for better translatability 2021-04-16 11:05:58 +02:00
Tom Lane 1111b2668d Undo decision to allow pg_proc.prosrc to be NULL.
Commit e717a9a18 changed the longstanding rule that prosrc is NOT NULL
because when a SQL-language function is written in SQL-standard style,
we don't currently have anything useful to put there.  This seems a poor
decision though, as it could easily have negative impacts on external
PLs (opening them to crashes they didn't use to have, for instance).
SQL-function-related code can just as easily test "is prosqlbody not
null" as "is prosrc null", so there's no real gain there either.
Hence, revert the NOT NULL marking removal and adjust related logic.

For now, we just put an empty string into prosrc for SQL-standard
functions.  Maybe we'll have a better idea later, although the
history of things like pg_attrdef.adsrc suggests that it's not
easy to maintain a string equivalent of a node tree.

This also adds an assertion that queryDesc->sourceText != NULL
to standard_ExecutorStart.  We'd been silently relying on that
for awhile, so let's make it less silent.

Also fix some overlooked documentation and test cases.

Discussion: https://postgr.es/m/2197698.1617984583@sss.pgh.pa.us
2021-04-15 17:17:20 -04:00
Peter Eisentraut fae65629ce Revert "psql: Show all query results by default"
This reverts commit 3a51306722.

Per discussion, this patch had too many issues to resolve at this
point of the development cycle.  We'll try again in the future.

Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2021-04-15 19:42:55 +02:00
Peter Eisentraut cbae8774eb pg_upgrade: Small fix for better translatability of help output 2021-04-15 09:08:18 +02:00
Michael Paquier 344487e2db Tweak behavior of pg_dump --extension with configuration tables
6568cef, which introduced the option, behaved inconsistently when it
came to configuration tables set up by pg_extension_config_dump, as
the data of all configuration tables would be included in a dump even
for extensions not listed by a set of --extension switches.

The contents dumped changed depending on the schema where an extension
was installed when an extension was not listed.  For example, an
extension installed under the public schema would have its configuration
data not dumped even when not listed with --extension, which was
inconsistent with the case of an extension installed on a non-public
schema, where the configuration would be dumped.

Per discussion with Noah, we have settled on the simple rule of
dumping configuration data of an extension only if it is listed in
--extension (the default is unchanged and backward-compatible: dump
everything in sight if no extensions are directly listed).  This
avoids some weird cases where the dump contents depended on --schema.

More tests are added to cover the gap, where we cross-check more
behaviors depending on --schema when an extension is not listed.

Reported-by: Noah Misch
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/20210404220802.GA728316@rfd.leadboat.com
2021-04-15 10:03:46 +09:00
Robert Haas 9acaf1a621 amcheck: Reword some messages and fix an alignment problem.
We don't need to mention the attribute number in these messages, because
there's a dedicated column for that, but we should mention the toast
value ID, because that's really useful for any follow-up troubleshooting
the user wants to do. This also rewords some of the messages to hopefully
read a little better.

Also, use VARATT_EXTERNAL_GET_POINTER in case we're accessing a TOAST
pointer that isn't aligned on a platform that's fussy about alignment,
so that we don't crash while corruption-checking the user's data.

Mark Dilger, reviewed by me.

Discussion: http://postgr.es/m/7D3B9BF6-50D0-4C30-8506-1C1851C7F96F@enterprisedb.com
2021-04-14 12:46:31 -04:00
Peter Eisentraut 6787e53fe5 pg_upgrade: Print OID using %u instead of %d
This could write wrong output into the cluster deletion script if a
database OID exceeds the signed 32-bit range.
2021-04-12 20:29:24 +02:00
Peter Eisentraut fc0fefbfe0 pg_amcheck: Add basic NLS support 2021-04-12 19:04:33 +02:00
Peter Eisentraut 44c8a3d759 Fix files references in nls.mk
broken by 37d2ff3803
2021-04-12 15:44:38 +02:00
Fujii Masao 81e094bdfd Support tab-complete for TRUNCATE on foreign tables.
Commit 8ff1c94649 extended the TRUNCATE command so that it can also
truncate foreign tables.  But it forgot to update tab completion for
TRUNCATE accordingly.  That is, previously tab completion for
TRUNCATE displayed only the names of regular tables.

This commit improves tab completion for TRUNCATE so that it also
displays the names of foreign tables.

Author: Fujii Masao
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com
2021-04-12 21:34:23 +09:00
Michael Paquier 609b0652af Fix typos and grammar in documentation and code comments
Comment fixes are applied on HEAD, and documentation improvements are
applied on back-branches where needed.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20210408164008.GJ6592@telsasoft.com
Backpatch-through: 9.6
2021-04-09 13:53:07 +09:00
Tom Lane d1fcbde579 Add support for tab-completion of type arguments in \df, \do.
Oversight in commit a3027e1e7.
2021-04-08 15:38:26 -04:00
Thomas Munro f003d9f872 Add circular WAL decoding buffer.
Teach xlogreader.c to decode its output into a circular buffer, to
support optimizations based on looking ahead.

 * XLogReadRecord() works as before, consuming records one by one, and
   allowing them to be examined via the traditional XLogRecGetXXX()
   macros.

 * An alternative new interface XLogNextRecord() is added that returns
   pointers to DecodedXLogRecord structs that can be examined directly.

 * XLogReadAhead() provides a second cursor that lets you see
   further ahead, as long as data is available and there is enough space
   in the decoding buffer.  This returns DecodedXLogRecord pointers to the
   caller, but also adds them to a queue of records that will later be
   consumed by XLogNextRecord()/XLogReadRecord().

The buffer's size is controlled with wal_decode_buffer_size.  The buffer
could potentially be placed into shared memory, for future projects.
Large records that don't fit in the circular buffer are called
"oversized" and allocated separately with palloc().

Discussion: https://postgr.es/m/CA+hUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq=AovOddfHpA@mail.gmail.com
2021-04-08 23:20:42 +12:00
Thomas Munro 323cbe7c7d Remove read_page callback from XLogReader.
Previously, the XLogReader module would fetch new input data using a
callback function.  Redesign the interface so that it tells the caller
to insert more data with a special return value instead.  This API suits
later patches for prefetching, encryption and maybe other future
projects that would otherwise require continually extending the callback
interface.

As incidental cleanup work, move global variables readOff, readLen and
readSegNo inside XlogReaderState.

Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Author: Heikki Linnakangas <hlinnaka@iki.fi> (parts of earlier version)
Reviewed-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Takashi Menjo <takashi.menjo@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20190418.210257.43726183.horiguchi.kyotaro%40lab.ntt.co.jp
2021-04-08 23:20:42 +12:00
Tom Lane a3027e1e7f Allow psql's \df and \do commands to specify argument types.
When dealing with overloaded function or operator names, having
to look through a long list of matches is tedious.  Let's extend
these commands to allow specification of (input) argument types
to let such results be trimmed down.  Each additional argument
is treated the same as the pattern argument of \dT and matched
against the appropriate argument's type name.
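
A hedged illustration of the new argument-type filters (the specific
function and operator are only examples):

    \df generate_series timestamptz
    \do + integer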

While at it, fix \dT (and these new options) to recognize the
usual notation of "foo[]" for "the array type over foo", and
to handle the special abbreviations allowed by the backend
grammar, such as "int" for "integer".

Greg Sabino Mullane, revised rather significantly by me

Discussion: https://postgr.es/m/CAKAnmmLF9Hhu02N+s7uAyLc5J1xZReg72HQUoiKhNiJV3_jACQ@mail.gmail.com
2021-04-07 23:02:21 -04:00
Peter Eisentraut e717a9a18b SQL-standard function body
This adds support for writing CREATE FUNCTION and CREATE PROCEDURE
statements for language SQL with a function body that conforms to the
SQL standard and is portable to other implementations.

Instead of the PostgreSQL-specific AS $$ string literal $$ syntax,
this allows writing out the SQL statements making up the body
unquoted, either as a single statement:

    CREATE FUNCTION add(a integer, b integer) RETURNS integer
        LANGUAGE SQL
        RETURN a + b;

or as a block

    CREATE PROCEDURE insert_data(a integer, b integer)
    LANGUAGE SQL
    BEGIN ATOMIC
      INSERT INTO tbl VALUES (a);
      INSERT INTO tbl VALUES (b);
    END;

The function body is parsed at function definition time and stored as
expression nodes in a new pg_proc column prosqlbody.  So at run time,
no further parsing is required.

However, this form does not support polymorphic arguments, because
there is no more parse analysis done at call time.

Dependencies between the function and the objects it uses are fully
tracked.

A new RETURN statement is introduced.  This can only be used inside
function bodies.  Internally, it is treated much like a SELECT
statement.

psql needs some new intelligence to keep track of function body
boundaries so that it doesn't send off statements when it sees
semicolons that are inside a function body.

Tested-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
2021-04-07 21:47:55 +02:00
Peter Eisentraut dd13ad9d39 Fix use of cursor sensitivity terminology
Documentation and comments in code and tests have been using the terms
sensitive/insensitive cursor incorrectly relative to the SQL standard.
(Cursor sensitivity is only relevant for changes made in the same
transaction as the cursor, not for concurrent changes in other
sessions.)  Moreover, some of the behavior of PostgreSQL is incorrect
according to the SQL standard, confusing the issue further.  (WHERE
CURRENT OF changes are not visible in insensitive cursors, but they
should be.)

This change corrects the terminology and removes the claim that
sensitive cursors are supported.  It also adds a test case that checks
the insensitive behavior in a "correct" way, using a change command
not using WHERE CURRENT OF.  Finally, it adds the ASENSITIVE cursor
option to select the default asensitive behavior, per SQL standard.
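
A minimal sketch of the new option (table and cursor names are
hypothetical):

    BEGIN;
    DECLARE c ASENSITIVE CURSOR FOR SELECT * FROM tbl;
    FETCH ALL FROM c;
    COMMIT;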

There are no changes to cursor behavior in this patch.

Discussion: https://www.postgresql.org/message-id/flat/96ee8b30-9889-9e1b-b053-90e10c050e85%40enterprisedb.com
2021-04-07 08:05:55 +02:00
Peter Eisentraut 3a51306722 psql: Show all query results by default
Previously, psql printed only the last result if a command string
returned multiple result sets.  Now it prints all of them.  The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.

Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2021-04-06 17:10:24 +02:00
Dean Rasheed 6b258e3d68 pgbench: Function to generate random permutations.
This adds a new function, permute(), that generates pseudorandom
permutations of arbitrary sizes. This can be used to randomly shuffle
a set of values to remove unwanted correlations. For example,
permuting the output from a non-uniform random distribution so that
all the most common values aren't collocated, allowing more realistic
tests to be performed.

Formerly, hash() was recommended for this purpose, but that suffers
from collisions that might alter the distribution, so recommend
permute() for this purpose instead.
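
A hedged pgbench script sketch along those lines (the distribution
and its parameters are only illustrative):

    \set size 100000 * :scale
    \set r random_exponential(1, :size, 2.0)
    \set aid 1 + permute(:r, :size)
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;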

Fabien Coelho and Hironobu Suzuki, with additional hacking by me.
Reviewed by Thomas Munro, Alvaro Herrera and Muhammad Usama.

Discussion: https://postgr.es/m/alpine.DEB.2.21.1807280944370.5142@lancre
2021-04-06 11:50:42 +01:00
Peter Eisentraut 82ed7748b7 ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION
At present, if we want to update publications in a subscription, we
can use SET PUBLICATION.  However, it requires supplying all existing
publications as well as the new ones, which is inconvenient when we
just want to add new publications.  The new syntax only requires
supplying the new publications.  When refresh is true, only the new
publications are refreshed.
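
A hedged sketch of the new forms (subscription and publication names
are hypothetical):

    ALTER SUBSCRIPTION mysub ADD PUBLICATION pub2 WITH (refresh = true);
    ALTER SUBSCRIPTION mysub DROP PUBLICATION pub2 WITH (refresh = false);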

Author: Japin Li <japinli@hotmail.com>
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/MEYP282MB166939D0D6C480B7FBE7EFFBB6BC0@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2021-04-06 11:49:51 +02:00
Tom Lane 55873a00e3 Improve psql's behavior when the editor is exited without saving.
When editing the previous query buffer, if the editor is exited
without modifying the temp file then clear the query buffer,
rather than re-loading (and probably re-executing) the previous
query buffer.  This reduces the probability of accidentally
re-executing something you didn't intend to.

Similarly, in "\e file", if the file isn't actually modified
then don't load it into the query buffer.  And in "\ef" and
"\ev", if no changes are made then clear the query buffer
instead of loading the function or view definition into it.

Cases where we fail to invoke the editor at all, or it returns
a nonzero status, are treated like the no-file-modification case.

Laurenz Albe, reviewed by Jacob Champion

Discussion: https://postgr.es/m/0ba3f2a658bac6546d9934ab6ba63a805d46a49b.camel@cybertec.at
2021-04-03 17:38:31 -04:00
Fujii Masao 2eb1fc8b1a pg_checksums: Fix progress reporting.
pg_checksums uses two counters, total size and current size,
to calculate the progress. Previously the progress that
pg_checksums reported could not reach 100% at the end.
The cause of this issue was that only pages other than new ones in
each file were counted toward the current size, while the full size
of each file was counted toward the total size.  That is, the
progress could fall short of 100% by the total size of all new pages.

This commit fixes this issue by making pg_checksums count
the sizes of all pages including new ones in each file as
the current size.

Back-patch to v12 where progress reporting was added to pg_checksums.

Reported-by: Shinya Kato
Author: Shinya Kato
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/TYAPR01MB289656B1ACA0A5E7CAD07BE3C47A9@TYAPR01MB2896.jpnprd01.prod.outlook.com
2021-04-03 00:07:00 +09:00
Tom Lane ec03f2df17 Fix pg_restore's misdesigned code for detecting archive file format.
Despite the clear comments pointing out that the duplicative code
segments in ReadHead() and _discoverArchiveFormat() needed to be
in sync, they were not: the latter did not bother to apply any of
the sanity checks in the former.  We'd missed noticing this partly
because none of those checks would fail in scenarios we customarily
test, and partly because the oversight would be masked if both
segments execute, which they would in cases other than needing to
autodetect the format of a non-seekable stdin source.  However,
in a case meeting all these requirements --- for example, trying
to read a newer-than-supported archive format from non-seekable
stdin --- pg_restore missed applying the version check and would
likely dump core or otherwise misbehave.

The whole thing is silly anyway, because there seems little reason
to duplicate the logic beyond the one-line verification that the
file starts with "PGDMP".  There seems to have been an undocumented
assumption that multiple major formats (major enough to require
separate reader modules) would nonetheless share the first half-dozen
fields of the custom-format header.  This seems unlikely, so let's
fix it by just nuking the duplicate logic in _discoverArchiveFormat().

Also get rid of the pointless attempt to seek back to the start of
the file after successful autodetection.  That wastes cycles and
it means we have four behaviors to verify not two.

Per bug #16951 from Sergey Koposov.  This has been broken for
decades, so back-patch to all supported versions.

Discussion: https://postgr.es/m/16951-a4dd68cf0de23048@postgresql.org
2021-04-01 13:34:16 -04:00
Heikki Linnakangas ea1b99a661 Add 'noError' argument to encoding conversion functions.
With the 'noError' argument, you can try to convert a buffer without
knowing the character boundaries beforehand. The functions now need to
return the number of input bytes successfully converted.

This is a backwards-incompatible change if you have created a custom
encoding conversion with CREATE CONVERSION. This adds a check to
pg_upgrade for that, refusing the upgrade if there are any user-defined
encoding conversions. Custom conversions are very rare; there are no
commonly used extensions that I know of that use that feature. No other
objects can depend on conversions, so if you do have one, you can fairly
easily drop it before upgrading, and recreate it after the upgrade with
an updated version.

Add regression tests for built-in encoding conversions. This doesn't cover
every conversion, but it covers all the internal functions in conv.c that
are used to implement the conversions.

Reviewed-by: John Naylor
Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01%40iki.fi
2021-04-01 11:45:22 +03:00
Michael Paquier 6568cef26e Add support for --extension in pg_dump
When specified, only extensions matching the given pattern are included
in dumps.  Similarly to --table and --schema, when --strict-names is
used,  a perfect match is required.  Also, like the two other options,
this new option offers no guarantee that dependent objects have been
dumped, so a restore may fail on a clean database.

Tests are added in test_pg_dump/, checking a set of positive and
negative cases, with and without an extension's contents added to
the generated dump.

Author: Guillaume Lelarge
Reviewed-by: David Fetter, Tom Lane, Michael Paquier, Asif Rehman,
Julien Rouhaud
Discussion: https://postgr.es/m/CAECtzeXOt4cnMU5+XMZzxBPJ_wu76pNy6HZKPRBL-j7yj1E4+g@mail.gmail.com
2021-03-31 09:12:34 +09:00
Tom Lane 54bb91c30e Further tweaking of pg_dump's handling of default_toast_compression.
As committed in bbe0a81db, pg_dump from a pre-v14 server effectively
acts as though you'd said --no-toast-compression.  I think the right
thing is for it to act as though default_toast_compression is set to
"pglz", instead, so that the tables' toast compression behavior is
preserved.  You can always get the other behavior, if you want that,
by giving the switch.

Discussion: https://postgr.es/m/1112852.1616609702@sss.pgh.pa.us
2021-03-30 10:57:57 -04:00
Peter Eisentraut 8df2f37114 Improve consistency of SQL code capitalization 2021-03-27 10:17:12 +01:00
Tomas Vondra a4d75c86bf Extended statistics on expressions
Allow defining extended statistics on expressions, not just on
simple column references.  With this commit, expressions are supported
by all existing extended statistics kinds, improving the same types of
estimates. A simple example may look like this:

  CREATE TABLE t (a int);
  CREATE STATISTICS s ON mod(a,10), mod(a,20) FROM t;
  ANALYZE t;

The collected statistics are useful e.g. to estimate queries with those
expressions in WHERE or GROUP BY clauses:

  SELECT * FROM t WHERE mod(a,10) = 0 AND mod(a,20) = 0;

  SELECT 1 FROM t GROUP BY mod(a,10), mod(a,20);

This introduces a new internal statistics kind 'e' (expressions),
which is built automatically when the statistics object definition
includes any expressions. This represents single-expression
statistics, as if there were an expression index (but without the
index maintenance overhead). The statistics are stored in
pg_statistics_ext_data as an array of composite types, which is
possible thanks to 79f6a942bd.

CREATE STATISTICS allows building statistics on a single expression,
in which case it's not possible to specify statistics kinds.

A new system view pg_stats_ext_exprs can be used to display expression
statistics, similarly to pg_stats and pg_stats_ext views.

ALTER TABLE ... ALTER COLUMN ... TYPE now treats extended statistics
the same way it treats indexes, i.e. it drops and recreates them.
This means
all statistics are reset, and we no longer try to preserve at least the
functional dependencies. This should not be a major issue in practice,
as the functional dependencies actually rely on per-column statistics,
which were always reset anyway.

Author: Tomas Vondra
Reviewed-by: Justin Pryzby, Dean Rasheed, Zhihong Yu
Discussion: https://postgr.es/m/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com
2021-03-27 00:01:11 +01:00
Noah Misch a14a0118a1 Add "pg_database_owner" default role.
Membership consists, implicitly, of the current database owner.  Expect
use in template databases.  Once pg_database_owner has rights within a
template, each owner of a database instantiated from that template will
exercise those rights.
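
A hedged sketch of the intended pattern, run by a superuser in a
template database:

    -- in template1; the owner of any database later created from this
    -- template can then create objects in the public schema
    GRANT CREATE ON SCHEMA public TO pg_database_owner;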

Reviewed by John Naylor.

Discussion: https://postgr.es/m/20201228043148.GA1053024@rfd.leadboat.com
2021-03-26 10:42:17 -07:00
Alvaro Herrera 71f4c8c6f7 ALTER TABLE ... DETACH PARTITION ... CONCURRENTLY
Allow a partition to be detached from its partitioned table without
blocking concurrent queries, by running in two transactions and only
requiring ShareUpdateExclusive on the partitioned table.

Because it runs in two transactions, it cannot be used in a transaction
block.  This is the main reason to use dedicated syntax: so that users
can choose to use the original mode if they need it.  But also, it
doesn't work when a default partition exists (because an exclusive lock
would still need to be obtained on it, in order to change its partition
constraint.)

In case the second transaction is cancelled or a crash occurs, there's
ALTER TABLE .. DETACH PARTITION .. FINALIZE, which executes the final
steps.
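
A hedged sketch of both forms (table and partition names are
hypothetical):

    ALTER TABLE meas DETACH PARTITION meas_2020 CONCURRENTLY;
    -- if the second transaction was cancelled or a crash occurred:
    ALTER TABLE meas DETACH PARTITION meas_2020 FINALIZE;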

The main trick to make this work is the addition of column
pg_inherits.inhdetachpending, initially false; can only be set true in
the first part of this command.  Once that is committed, concurrent
transactions that use a PartitionDirectory will include or ignore
partitions so marked: in optimizer they are ignored if the row is marked
committed for the snapshot; in executor they are always included.  As a
result, and because of the way PartitionDirectory caches partition
descriptors, queries that were planned before the detach will see the
rows in the detached partition and queries that are planned after the
detach, won't.

A CHECK constraint is created that duplicates the partition constraint.
This is probably not strictly necessary, and some users will prefer to
remove it afterwards, but if the partition is re-attached to a
partitioned table, the constraint needn't be rechecked.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200803234854.GA24158@alvherre.pgsql
2021-03-25 18:00:28 -03:00
Amit Kapila 26acb54a13 Revert "Enable parallel SELECT for "INSERT INTO ... SELECT ..."."
To allow inserts in parallel mode, this feature has to ensure that
all the constraints, triggers, etc. are parallel-safe for the
partition hierarchy, which is costly, and we need to find a better
way to do that.  Additionally, we could have used existing cached
information in some cases, such as indexes and domains, to determine
parallel safety.

List of commits reverted, in reverse chronological order:

ed62d3737c Doc: Update description for parallel insert reloption.
c8f78b6161 Add a new GUC and a reloption to enable inserts in parallel-mode.
c5be48f092 Improve FK trigger parallel-safety check added by 05c8482f7f.
e2cda3c20a Fix use of relcache TriggerDesc field introduced by commit 05c8482f7f.
e4e87a32cc Fix valgrind issue in commit 05c8482f7f.
05c8482f7f Enable parallel SELECT for "INSERT INTO ... SELECT ...".

Discussion: https://postgr.es/m/E1lMiB9-0001c3-SY@gemulon.postgresql.org
2021-03-24 11:29:15 +05:30
Robert Haas 87d90ac61f Improve pg_amcheck's TAP test 003_check.pl.
Disable autovacuum, because we don't want it to run against
intentionally corrupted tables. Also, before corrupting the tables,
run pg_amcheck and ensure that it passes. Otherwise, if something
unexpected happens when we check the corrupted tables, it's not so
clear whether it would have also happened before we corrupted
them.

Mark Dilger

Discussion: http://postgr.es/m/AA5506CE-7D2A-42E4-A51D-358635E3722D@enterprisedb.com
2021-03-23 14:57:45 -04:00
Tom Lane ea80138545 Fix psql's \connect command some more.
Jasen Betts reported yet another unintended side effect of commit
85c54287a: reconnecting with "\c service=whatever" did not have the
expected results.  The reason is that starting from the output of
PQconndefaults() effectively allows environment variables (such
as PGPORT) to override entries in the service file, whereas the
normal priority is the other way around.

Not using PQconndefaults at all would require yet a third main code
path in do_connect's parameter setup, so I don't really want to fix
it that way.  But we can have the logic effectively ignore all the
default values for just a couple more lines of code.

This patch doesn't change the behavior for "\c -reuse-previous=on
service=whatever".  That remains significantly different from before
85c54287a, because many more parameters will be re-used, and thus it
will not be possible for service entries to replace them.  But I
think this
is (mostly?) intentional.  In any case, since libpq does not report
where it got parameter values from, it's hard to do differently.

Per bug #16936 from Jasen Betts.  As with the previous patches,
back-patch to all supported branches.  (9.5 is unfortunately now
out of support, so this won't get fixed there.)

Discussion: https://postgr.es/m/16936-3f524322a53a29f0@postgresql.org
2021-03-23 14:27:50 -04:00
Fujii Masao 51893c8463 pg_waldump: Fix bug in per-record statistics.
pg_waldump --stats=record identifies a record by a combination
of the RmgrId and the four bits of the xl_info field of the record.
But XACT records use the first bit of those four bits for an optional
flag variable, and the following three bits for the opcode to
identify a record. So previously the same type of XACT record could
appear with two different sets of four bits (the three opcode bits
the same but the flag bit different), which could cause
pg_waldump --stats=record to show two lines of per-record statistics
for the same XACT record type. This is a bug.

This commit changes pg_waldump --stats=record so that it handles
only XACT records differently, i.e., it masks the flag bit out of
xl_info and uses the combination of the RmgrId and the three opcode
bits as the identifier of a record, for XACT records only. For other
records, the four bits of the xl_info field are still used.

Back-patch to all supported branches.

Author: Kyotaro Horiguchi
Reviewed-by: Shinya Kato, Fujii Masao
Discussion: https://postgr.es/m/2020100913412132258847@highgo.ca
2021-03-23 09:53:08 +09:00
Fujii Masao 8c6eda2d1c pgbench: Improve error-handling in \sleep command.
This commit improves the pgbench \sleep command so that it handles
the following three cases more properly.

(1) When only one argument was specified in \sleep command and
    it's not a number, previously pgbench reported a confusing error
    message like "unrecognized time unit, must be us, ms or s".
    This commit fixes this so that more proper error message like
    "invalid sleep time, must be an integer" is reported.

(2) When two arguments were specified in \sleep command and
    the first argument was not a number, previously pgbench treated
    that argument as the sleep time 0. No error was reported in this
    case. This commit fixes this so that an error is thrown in this
    case.

(3) When a variable was specified as the first argument in \sleep
    command and the variable stored non-digit value, previously
    pgbench treated that argument as the sleep time 0. No error
    was reported in this case. This commit fixes this so that
    an error is thrown in this case.
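
For reference, a hedged sketch of valid \sleep usage after this
change (the variable is hypothetical):

    \set pause random(100, 500)
    \sleep :pause ms
    \sleep 2 s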

Author: Kota Miyake
Reviewed-by: Hayato Kuroda, Alvaro Herrera, Fujii Masao
Discussion: https://postgr.es/m/23b254daf20cec4332a2d9168505dbc9@oss.nttdata.com
2021-03-22 12:04:48 +09:00
Tom Lane aa25d1089a Fix up pg_dump's handling of per-attribute compression options.
The approach used in commit bbe0a81db would've been disastrous for
portability of dumps.  Instead handle non-default compression options
in separate ALTER TABLE commands.  This reduces chatter for the
common case where most columns are compressed the same way, and it
makes it possible to restore the dump to a server that lacks any
knowledge of per-attribute compression options (so long as you're
willing to ignore syntax errors from the ALTER TABLE commands).

There's a whole lot left to do to mop up after bbe0a81db, but
I'm fast-tracking this part because we need to see if it's
enough to make the buildfarm's cross-version-upgrade tests happy.

Justin Pryzby and Tom Lane

Discussion: https://postgr.es/m/20210119190720.GL8560@telsasoft.com
2021-03-20 15:01:10 -04:00
Robert Haas bbe0a81db6 Allow configurable LZ4 TOAST compression.
There is now a per-column COMPRESSION option which can be set to pglz
(the default, and the only option up until now) or lz4. Or, if you
like, you can set the new default_toast_compression GUC to lz4, and
then that will be the default for new table columns for which no value
is specified. We don't have lz4 support in the PostgreSQL code, so
to use lz4 compression, PostgreSQL must be built --with-lz4.
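
A hedged sketch of the new knobs (assumes a build configured with
--with-lz4; table and column names are hypothetical):

    SET default_toast_compression = 'lz4';
    CREATE TABLE blobs (payload text COMPRESSION lz4);
    ALTER TABLE blobs ALTER COLUMN payload SET COMPRESSION pglz;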

In general, TOAST compression means compression of individual column
values, not the whole tuple, and those values can either be compressed
inline within the tuple or compressed and then stored externally in
the TOAST table, so those properties also apply to this feature.

Prior to this commit, a TOAST pointer has two unused bits as part of
the va_extsize field, and a compressed datum has two unused bits as
part of the va_rawsize field. These bits are unused because the length
of a varlena is limited to 1GB; we now use them to indicate the
compression type that was used. This means we only have bit space for
2 more built-in compression types, but we could work around that
problem, if necessary, by introducing a new vartag_external value for
any further types we end up wanting to add. Hopefully, it won't be
too important to offer a wide selection of algorithms here, since
each one we add not only takes more coding but also adds a build
dependency for every packager. Nevertheless, it seems worth doing
at least this much, because LZ4 gets better compression than PGLZ
with less CPU usage.

It's possible for LZ4-compressed datums to leak into composite type
values stored on disk, just as it is for PGLZ. It's also possible for
LZ4-compressed attributes to be copied into a different table via SQL
commands such as CREATE TABLE AS or INSERT .. SELECT.  It would be
expensive to force such values to be decompressed, so PostgreSQL has
never done so. For the same reasons, we also don't force recompression
of already-compressed values even if the target table prefers a
different compression method than was used for the source data.  These
architectural decisions are perhaps arguable but revisiting them is
well beyond the scope of what seemed possible to do as part of this
project.  However, it's relatively cheap to recompress as part of
VACUUM FULL or CLUSTER, so this commit adjusts those commands to do
so, if the configured compression method of the table happens not to
match what was used for some column value stored therein.

Dilip Kumar. The original patches on which this work was based were
written by Ildus Kurbangaliev, and those patches were based on
even earlier work by Nikita Glukhov, but the design has since changed
very substantially, since allowing a potentially large number of
compression methods that could be added and dropped on a running
system proved too problematic given some of the architectural issues
mentioned above; the choice of which specific compression method to
add first is now different; and a lot of the code has been heavily
refactored.  More recently, Justin Pryzby helped quite a bit with
testing and reviewing and this version also includes some code
contributions from him. Other design input and review from Tomas
Vondra, Álvaro Herrera, Andres Freund, Oleg Bartunov, Alexander
Korotkov, and me.

Discussion: http://postgr.es/m/20170907194236.4cefce96%40wp.localdomain
Discussion: http://postgr.es/m/CAFiTN-uUpX3ck%3DK0mLEk-G_kUQY%3DSNOTeqdaNRR9FMdQrHKebw%40mail.gmail.com
2021-03-19 15:10:38 -04:00
Tom Lane 27ab1981e7 Blindly try to fix test script's tar invocation for MSYS.
Buildfarm member fairywren doesn't like the test case I added
in commit 081876d75.  I'm guessing the reason is that I shouldn't
be using a perl2host-ified path in the tar command line.
2021-03-18 22:43:11 -04:00
Michael Paquier 5b2266e33f Improve tab completion of IMPORT FOREIGN SCHEMA with \h in psql
Only "IMPORT" was showing as result of the completion, while IMPORT
FOREIGN SCHEMA is the only command using this keyword in first
position.  This changes the completion to show the full command name
instead of just "IMPORT".

Reviewed-by: Georgios Kokolatos, Julien Rouhaud
Discussion: https://postgr.es/m/YFL6JneBiuMWYyoh@paquier.xyz
2021-03-19 09:18:41 +09:00
Amit Kapila c8f78b6161 Add a new GUC and a reloption to enable inserts in parallel-mode.
Commit 05c8482f7f added the implementation of parallel SELECT for
"INSERT INTO ... SELECT ..." which may incur non-negligible overhead in
the additional parallel-safety checks that it performs, even when, in the
end, those checks determine that parallelism can't be used. This is
normally only ever a problem when the target table has a large
number of partitions.

A new GUC option "enable_parallel_insert" is added, to allow inserts in
parallel-mode. The default is on.

In addition to the GUC option, the user may want a mechanism to allow
inserts in parallel-mode with finer granularity at table level. The new
table option "parallel_insert_enabled" allows this. The default is true.
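
A hedged sketch of how these settings were meant to be used (the
table is hypothetical; note that the feature was later reverted, see
26acb54a13 above):

    SET enable_parallel_insert = on;
    CREATE TABLE target (a int) WITH (parallel_insert_enabled = true);
    INSERT INTO target SELECT i FROM generate_series(1, 1000000) AS s(i);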

Author: "Hou, Zhijie"
Reviewed-by: Greg Nancarrow, Amit Langote, Takayuki Tsunakawa, Amit Kapila
Discussion: https://postgr.es/m/CAA4eK1K-cW7svLC2D7DHoGHxdAdg3P37BLgebqBOC2ZLc9a6QQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAJcOf-cXnB5cnMKqWEp2E2z7Mvcd04iLVmV=qpFJrR3AcrTS3g@mail.gmail.com
2021-03-18 07:25:27 +05:30
Tom Lane 081876d75e Add end-to-end testing of pg_basebackup's tar-format output.
The existing test script does run pg_basebackup with the -Ft option,
but it makes no real attempt to verify the sanity of the results.
We wouldn't know if the output is incompatible with standard "tar"
programs, nor if the server fails to start from the restored output.
Notably, this means that xlog.c's read_tablespace_map() is not being
meaningfully tested, since that code is used only in the tar-format
case.  (We do have reasonable coverage of restoring from plain-format
output, though it's over in src/test/recovery not here.)

Hence, attempt to untar the output and start a server from it,
rather than just hoping it's OK.

This test assumes that the local "tar" has the "-C directory"
switch.  Although that's not promised by POSIX, my research
suggests that all non-extinct tar implementations have it.
Should the buildfarm's opinion differ, we can complicate the
test a bit to avoid requiring that.

Possibly this should be back-patched, but I'm unsure about
whether it could work on Windows before d66b23b03.
2021-03-17 14:52:55 -04:00
Robert Haas 4078ce65a0 Fix a confusing amcheck corruption message.
Don't complain about the last TOAST chunk number being different
from what we expected if there are no TOAST chunks at all.
In such a case, saying that the final chunk number is 0 is not
really accurate, and the fact the value is missing from the
TOAST table is reported separately anyway.

Mark Dilger

Discussion: http://postgr.es/m/AA5506CE-7D2A-42E4-A51D-358635E3722D@enterprisedb.com
2021-03-16 15:42:50 -04:00
Tom Lane 1ea396362b Improve logging of bad parameter values in BIND messages.
Since commit ba79cb5dc, values of bind parameters have been logged
during errors in extended query mode.  However, we only did that after
we'd collected and converted all the parameter values, thus failing to
offer any useful localization of invalid-parameter problems.  Add a
separate callback that's used during parameter collection, and have it
print the parameter number, along with the input string if text input
format is used.

Justin Pryzby and Tom Lane

Discussion: https://postgr.es/m/20210104170939.GH9712@telsasoft.com
Discussion: https://postgr.es/m/CANfkH5k-6nNt-4cSv1vPB80nq2BZCzhFVR5O4VznYbsX0wZmow@mail.gmail.com
2021-03-16 11:16:41 -04:00
Alvaro Herrera 9aa491abbf Add libpq pipeline mode support to pgbench
New metacommands \startpipeline and \endpipeline allow the user to run
queries in libpq pipeline mode.
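
A hedged sketch of a pgbench script using the new metacommands
(variables and statements are only illustrative):

    \set aid random(1, 100000 * :scale)
    \set delta random(-5000, 5000)
    \startpipeline
    UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
    \endpipeline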

Author: Daniel Vérité <daniel@manitou-mail.org>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/b4e34135-2bd9-4b8a-94ca-27d760da26d7@manitou-mail.org
2021-03-15 18:33:03 -03:00
Alvaro Herrera acb7e4eb6b Implement pipeline mode in libpq
Pipeline mode in libpq lets an application avoid the Sync messages in
the FE/BE protocol that are implicit in the old libpq API after each
query.  The application can then insert Sync at its leisure with a new
libpq function PQpipelineSync.  This can lead to substantial reductions
in query latency.

Co-authored-by: Craig Ringer <craig.ringer@enterprisedb.com>
Co-authored-by: Matthieu Garrigues <matthieu.garrigues@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Aya Iwata <iwata.aya@jp.fujitsu.com>
Reviewed-by: Daniel Vérité <daniel@manitou-mail.org>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kirk Jamison <k.jamison@fujitsu.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Nikhil Sontakke <nikhils@2ndquadrant.com>
Reviewed-by: Vaishnavi Prabakaran <VaishnaviP@fast.au.fujitsu.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>

Discussion: https://postgr.es/m/CAMsr+YFUjJytRyV4J-16bEoiZyH=4nj+sQ7JP9ajwz=B4dMMZw@mail.gmail.com
Discussion: https://postgr.es/m/CAJkzx4T5E-2cQe3dtv2R78dYFvz+in8PY7A8MArvLhs_pg75gg@mail.gmail.com
2021-03-15 18:13:42 -03:00
Tom Lane 58f57490fa Doc: add note about how to run the pg_amcheck regression tests.
It's not immediately obvious what you have to do to get "make
installcheck" to work here, so document that along the same lines
as we've used elsewhere.
2021-03-13 11:10:30 -05:00
Robert Haas 945d2cb7d0 In pg_amcheck tests, don't depend on perl's Q/q pack code.
It does not work on all versions of perl across all platforms.

To avoid endian-ness issues, pick a new value for column a
that has the same upper 4 bytes as lower 4 bytes. Try to
make it something that isn't likely to occur anywhere nearby
in the page.

Discussion: http://postgr.es/m/29DA079B-0658-4E66-BDAA-0EFD7B64D9C6@enterprisedb.com
2021-03-13 10:57:01 -05:00
Tom Lane 9e294d0f34 pg_amcheck: Keep trying to fix the tests.
Fix another example of non-portable option ordering in the tests.
Oversight in 24189277f.

Mark Dilger

Discussion: https://postgr.es/m/C37D28BA-3BA3-4776-B812-17F05F3472D8@enterprisedb.com
2021-03-13 00:06:56 -05:00
Robert Haas b9164eab20 pg_amcheck: Keep trying to fix the tests.
Commit 24189277f6 managed to remove
one of the two places where we were checking for a "no such user"
error while leaving the other one right next to it. So remove that
too. In fact, remove the entire test, because the whole point of
this test was to see which message we got on a failure.
2021-03-12 21:59:56 -05:00
Robert Haas 24189277f6 pg_amcheck: Try to fix still more test failures.
Avoid use of non-portable option ordering in command_checks_all().
The use of bare command line arguments before switches doesn't work
everywhere.  Per buildfarm members drongo and hoverfly.

Avoid testing for the message "role \"%s\" does not exist", because
some buildfarm machines report a different error. fairywren complains
about "SSPI authentication failed for user \"%s\"", for example.

Mark Dilger

Discussion: http://postgr.es/m/9E76E46A-48B2-4869-BD0C-422204C1F767@enterprisedb.com
Discussion: http://postgr.es/m/F0A1FD70-A2F4-4528-8A03-8650CAEC0554%40enterprisedb.com
2021-03-12 20:11:47 -05:00
Robert Haas f371a4cdba Try to avoid apparent platform-dependency in IPC::Run
It's hard to believe, but buildfarm results from the new pg_amcheck
suggest that command_checks_all() performs shell expansion on some
machines but not others, apparently due to an underlying behavior
difference in IPC::Run. Let's see if we can work around that - and
confirm that it is the real problem - by passing '-S*' as a single
argument rather than '-S' and '*' as two separate ones.

Failures were observed on jacana and hoverfly.

Mark Dilger

Discussion: http://postgr.es/m/9E76E46A-48B2-4869-BD0C-422204C1F767@enterprisedb.com
2021-03-12 19:00:41 -05:00
Robert Haas 6611256127 Fix portability issues in pg_amcheck's 004_verify_heapam.pl.
Test #12 overwrote a 1-byte varlena header to make it look like the
initial byte of a 4-byte varlena header, but the results were
endian-dependent. Also, the bytes "abc" that followed the overwritten
byte would be interpreted differently depending on endian-ness.
Overwrite 4 bytes instead, in an endian-aware manner.

Test #13 accidentally managed to depend on TOAST_MAX_CHUNK_SIZE,
which varies slightly depending on MAXIMUM_ALIGNOF. That's not
the point anyway, so make the regexp insensitive to the expected
number of chunks.

Mark Dilger

Discussion: http://postgr.es/m/A80D68F6-E38F-482D-9522-E2FB6AAFE8A1@enterprisedb.com
2021-03-12 17:34:32 -05:00
Robert Haas ac44595585 Move PG_USED_FOR_ASSERTS_ONLY before initializer.
Erik Rijkers reported a compile failure, and I think this is probably
the reason.
2021-03-12 15:04:10 -05:00
Robert Haas 7a1527c02c Adjust perl style.
Per buildfarm member crake.
2021-03-12 14:55:40 -05:00
Robert Haas d60e61de4f Try to fix compiler warnings.
Per report from Peter Geoghegan.

Discussion: http://postgr.es/m/CAH2-WznpwULZ3uJ1_6WXvNMXYbOy8k8tYs3r=qSdGmZeRd6tDw@mail.gmail.com
2021-03-12 14:35:10 -05:00
Robert Haas 9706092839 Add pg_amcheck, a CLI for contrib/amcheck.
This makes it a lot easier to run the corruption checks that are
implemented by contrib/amcheck against lots of relations and get
the result in an easily understandable format. It has a wide variety
of options for choosing which relations to check and which checks
to perform, and it can run checks in parallel if you want.

Mark Dilger, reviewed by Peter Geoghegan and by me.

Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
Discussion: http://postgr.es/m/BA592F2D-F928-46FF-9516-2B827F067F57@enterprisedb.com
2021-03-12 13:00:01 -05:00
Tom Lane 48d67fd897 Fix race condition in psql \e's detection of file modification.
psql's editing commands decide whether the user has edited the file
by checking for change of modification timestamp.  This is probably
fine for a pre-existing file, but with a temporary file that is
created within the command, it's possible for a fast typist to
save-and-exit in less than the one-second granularity of stat(2)
timestamps.  On Windows FAT filesystems the granularity is even
worse, 2 seconds, making the race a bit easier to hit.

To fix, try to set the temp file's mod time to be two seconds ago.
It's unlikely this would fail, but then again the race condition
itself is unlikely, so just ignore any error.

Also, we might as well check the file size as well as its mod time.

While this is a difficult bug to hit, it still seems worth
back-patching, to ensure that users' edits aren't lost.

Laurenz Albe, per gripe from Jacob Champion; based on fix suggestions
from Jacob and myself

Discussion: https://postgr.es/m/0ba3f2a658bac6546d9934ab6ba63a805d46a49b.camel@cybertec.at
2021-03-12 12:20:15 -05:00
Robert Haas f71519e545 Refactor and generalize the ParallelSlot machinery.
Create a wrapper object, ParallelSlotArray, to encapsulate the
number of slots and the slot array itself, plus some other relevant
bits of information. This reduces the number of parameters we have
to pass around all over the place.

Allow for a ParallelSlotArray to contain slots connected to
different databases within a single cluster. The current clients
of this mechanism don't need this, but it is expected to be used
by future patches.

Defer connecting to databases until we actually need the connection
for something. This is a slight behavior change for vacuumdb and
reindexdb. If you specify a number of jobs that is larger than the
number of objects, the extra connections will now not be used.
But, on the other hand, if you specify a number of jobs that is
so large that it's going to fail, the failure would previously have
happened before any operations were actually started, and now it
won't.

Mark Dilger, reviewed by me.

Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
Discussion: http://postgr.es/m/BA592F2D-F928-46FF-9516-2B827F067F57@enterprisedb.com
2021-03-11 13:17:46 -05:00
Peter Geoghegan 9f3665fbfc Don't consider newly inserted tuples in nbtree VACUUM.
Remove the entire idea of "stale stats" within nbtree VACUUM (stop
caring about stats involving the number of inserted tuples).  Also
remove the vacuum_cleanup_index_scale_factor GUC/param on the master
branch (though just disable them on postgres 13).

The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM
partially responsible for deciding when pg_class.reltuples stats needed
to be updated.  This seems contrary to the spirit of the index AM API,
though -- it is not actually necessary for an index AM's bulk delete and
cleanup callbacks to provide accurate stats when it happens to be
inconvenient.  The core code owns that.  (Index AMs have the authority
to perform or not perform certain kinds of deferred cleanup based on
their own considerations, such as page deletion and recycling, but that
has little to do with pg_class.reltuples/num_index_tuples.)

This issue was fairly harmless until the introduction of the
autovacuum_vacuum_insert_threshold feature by commit b07642db, which had
an undesirable interaction with the vacuum_cleanup_index_scale_factor
mechanism: it made insert-driven autovacuums perform full index scans,
even though there is no real benefit to doing so.  This has been tied to
a regression with an append-only insert benchmark [1].

Also have remaining cases that perform a full scan of an index during a
cleanup-only nbtree VACUUM indicate that the final tuple count is only
an estimate.  This prevents vacuumlazy.c from setting the index's
pg_class.reltuples in those cases (it will now only update pg_class when
vacuumlazy.c had TIDs for nbtree to bulk delete).  This arguably fixes
an oversight in deduplication-related bugfix commit 48e12913.

[1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com
Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.
2021-03-10 16:27:01 -08:00
Thomas Munro c427de427a Fix another portability bug in recent pgbench commit.
Commit 547f04e7 produced errors on AIX/xlc while building plpython.  The
new code appears to be incompatible with the hack installed by commit
a11cf433.  Without access to an AIX system to check, my guess is that
_POSIX_C_SOURCE may be required for <time.h> to declare the things the
header needs to see, but plpython.h undefines it.

For now, to unbreak build farm animal hoverfly, just move the new
pg_time_usec_t support into pgbench.c.  Perhaps later we could figure
out what to rearrange to put it back into a header for wider use.

Discussion: https://postgr.es/m/CA%2BhUKG%2BP%2BjcD%3Dx9%2BagyTdWtjpOT64MYiGic%2Bcbu_TD8CV%3D6A3w%40mail.gmail.com
2021-03-10 23:20:41 +13:00
Thomas Munro 68b34b2338 Try to fix portability bugs in recent pgbench commits.
1.  pg_time_usec_t needs to be printed with INT64_FORMAT, not %ld, or 32
bit systems complain, per lapwing.

2.  Some Windows compilers didn't like a thread function not marked with
__stdcall, per whelk; let's see if this fixes the problem.
2021-03-10 21:12:11 +13:00
Michael Paquier 6c788d9f6a Move tablespace path re-creation from the makefiles to pg_regress
Moving this logic into pg_regress fixes a potential failure with
parallel tests when pg_upgrade and the main regression test suite both
trigger the makefile rule that cleaned up testtablespace/ under
src/test/regress.  Even if pg_upgrade was triggering this rule, it has
no need to do so as it uses a different tablespace path.  So if
pg_upgrade triggered the makefile rule for the tablespace setup while
the main regression test suite ran the tablespace cases, it would fail.

61be85a was a similar attempt at achieving that, but that broke cases
where the regression tests need to run under an Administrator
account, like with Appveyor.

Reported-by: Andres Freund, Kyotaro Horiguchi
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20201209012911.uk4d6nxcnkp7ehrx@alap3.anarazel.de
2021-03-10 14:50:00 +09:00
Thomas Munro aeb57af8e6 pgbench: Synchronize client threads.
Wait until all pgbench threads are connected before benchmarking begins.
This fixes a problem where some connections could take a very long time
to be established because of lock contention from earlier connections,
making results unstable and bogus with high connection counts.

Author: Andres Freund <andres@anarazel.de>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Marina Polyakova <m.polyakova@postgrespro.ru>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de
2021-03-10 17:44:04 +13:00
Thomas Munro 547f04e734 pgbench: Improve time logic.
Instead of instr_time (struct timespec) and the INSTR_XXX macros,
introduce pg_time_usec_t and use integer arithmetic.  Don't include the
connection time in TPS unless using -C mode, but report it separately.

Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de
2021-03-10 17:44:04 +13:00
Thomas Munro b1d6a8f868 pgbench: Refactor thread portability support.
Instead of maintaining an incomplete emulation of POSIX threads for
Windows, let's use an extremely minimalist macro-based abstraction for
now.  A later patch will extend this, without the need to supply more
complicated pthread emulation code.  (There may be a need for a more
serious portable thread abstraction in later projects, but this is not
it.)

Minor incidental problems fixed: it wasn't OK to use (pthread_t) 0 as a
special value, it wasn't OK to compare thread_t values with ==, and we
incorrectly assumed that pthread functions set errno.

Discussion: https://postgr.es/m/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de
2021-03-10 17:44:04 +13:00
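
As a rough illustration of the macro-based approach described in the commit
above, here is a standalone sketch that maps a few thread primitives onto
pthreads; the macro names, and the Windows mapping mentioned in the
comments, are illustrative only and not necessarily what pgbench uses.

    /* Standalone sketch of a minimal macro-based thread abstraction. */
    #include <pthread.h>
    #include <stdio.h>

    /*
     * Hypothetical names; a Windows build could map these onto
     * CreateThread() and WaitForSingleObject() instead of supplying a
     * pthread emulation layer.
     */
    #define THREAD_T                    pthread_t
    #define THREAD_CREATE(t, f, arg)    pthread_create((t), NULL, (f), (arg))
    #define THREAD_JOIN(t)              pthread_join((t), NULL)

    static void *
    worker(void *arg)
    {
        printf("worker %d running\n", *(int *) arg);
        return NULL;
    }

    int
    main(void)
    {
        THREAD_T    th;
        int         id = 1;

        /* Note: pthread functions report errors via their return value,
         * not errno, one of the incidental problems mentioned above. */
        if (THREAD_CREATE(&th, worker, &id) != 0)
            return 1;
        THREAD_JOIN(th);
        return 0;
    }
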
Michael Paquier 0ba71107ef Revert changes for SSL compression in libpq
This partially reverts 096bbf7 and 9d2d457, undoing the libpq changes, as
they could cause breakage in distributions that share a single libpq
version across multiple major versions of Postgres for extensions and
applications linking to it.

Note that the backend is unchanged here; it still disables SSL
compression while simplifying the underlying catalogs that tracked whether
compression was enabled for an SSL connection.

Per discussion with Tom Lane and Daniel Gustafsson.

Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
2021-03-10 09:35:42 +09:00
Michael Paquier f9264d1524 Remove support for SSL compression
PostgreSQL disabled compression as of e3bdb2d, and the documentation has
recommended against using it since then.  Additionally, SSL compression has
been disabled in OpenSSL since version 1.1.0, and was disabled in many
distributions long before that.  The most recent TLS version, TLSv1.3,
disallows compression at the protocol level.

This commit removes the feature itself: support for the libpq parameter
sslcompression is dropped (the parameter is still listed, for compatibility
with existing connection strings, but now ignored), as is the equivalent
field in pg_stat_ssl and, de facto, in PgBackendSSLStatus.

Note that, on top of removing the ability to activate compression by
configuration, compression is actively disabled in both frontend and
backend to avoid overrides from local configurations.

A TAP test is added for deprecated SSL parameters to check backwards
compatibility.

Bump catalog version.

Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
Discussion:  https://postgr.es/m/7E384D48-11C5-441B-9EC3-F7DB1F8518F6@yesql.se
2021-03-09 11:16:47 +09:00
Michael Paquier f1516ad7b3 pgbench: Simplify some port, host, user and dbname assignments
Using pgbench in an environment with both PGPORT and PGUSER set would
have caused the generation of a debug log with an incorrect database
name due to an oversight in 412893b.  Not specifying user, port and/or
database using the option switches, without their respective environment
variables, generated a log entry with empty strings, which was
rather useless.

This commit fixes this set of issues by simplifying the logic grabbing
the connection information, removing a set of getenv() calls that
emulated what libpq already does.  The faulty debug log now directly
uses the information from the libpq connection, and it gets generated
after the connection to the backend is completed, not before it (in the
event of a failure libpq would complain with more information about the
connection attempt so the log is not really useful before anyway).

Author: Kota Miyake
Reviewed-by: Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/026b3ae6fc339a18394d053c32a4463d@oss.nttdata.com
2021-03-06 21:26:34 +09:00
Peter Eisentraut 040af77938 pg_upgrade: Fix oversight in version checking
Mistake in f06b1c598254f8adb2b7f51d6a7685618a7fb121: We should only
check the version of the binaries in the target installation.  The
source installation can of course be of a different version.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/E1lHNKN-0005IC-V6%40gemulon.postgresql.org
2021-03-04 10:34:17 +01:00
Fujii Masao 4a4241e15b Remove redundant getenv() for PGUSER, in psql help.
Previously psql obtained the value of PGUSER twice to display
a default user in its help message.

Author: Kota Miyake
Reviewed-by: Nitin Jadhav, Fujii Masao
Discussion: https://postgr.es/m/2a3c612babdd6ed63a9d877bb575d793@oss.nttdata.com
2021-03-04 18:23:22 +09:00
Heikki Linnakangas 3174d69fb9 Remove server and libpq support for old FE/BE protocol version 2.
Protocol version 3 was introduced in PostgreSQL 7.4. There shouldn't be
many clients or servers left out there without version 3 support. But as
a courtesy, I kept just enough of the old protocol support that we can
still send the "unsupported protocol version" error in v2 format, so that
old clients can display the message properly. Likewise, libpq still
understands v2 ErrorResponse messages when establishing a connection.

The impetus to do this now is that I'm working on a patch to COPY
FROM, to always prefetch some data. We cannot do that safely with the
old protocol, because it requires parsing the input one byte at a time
to detect the end-of-copy marker.

Reviewed-by: Tom Lane, Alvaro Herrera, John Naylor
Discussion: https://www.postgresql.org/message-id/9ec25819-0a8a-d51a-17dc-4150bb3cca3b%40iki.fi
2021-03-04 10:45:55 +02:00
Peter Eisentraut f06b1c5982 pg_upgrade: Check version of target cluster binaries
This expands the binary validation in pg_upgrade with a version
check per binary to ensure that the target cluster installation
only contains binaries from the target version.

In order to reduce duplication, validate_exec is exported from
port.h and the local copy in pg_upgrade is removed.

Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/9328.1552952117@sss.pgh.pa.us
2021-03-03 09:45:56 +01:00
Michael Paquier 57e6db706e Add --tablespace option to reindexdb
This option provides REINDEX (TABLESPACE) for reindexdb, applying the
tablespace value given by the caller to all the REINDEX queries
generated.

While at it, this commit adds some tests for REINDEX TABLESPACE, with
and without CONCURRENTLY, when run on toast indexes and tables.  Such
operations are not allowed, and toast relation names are not stable
enough to be part of the main regression test suite (even when using a PL
function with TRY/CATCH logic, CONCURRENTLY could not be tested).

Author: Michael Paquier
Reviewed-by: Mark Dilger, Daniel Gustafsson
Discussion: https://postgr.es/m/YDiaDMnzLICqeukl@paquier.xyz
2021-03-03 10:14:21 +09:00
Alvaro Herrera 75dbfe4ca7
Use native path separators to pg_ctl in initdb
On Windows, CMD.EXE allegedly does not run a command that uses forward slashes,
so let's convert the path to use backslashes instead.

Backpatch to 10.

Author: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
Reviewed-by: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>
Discussion: https://postgr.es/m/CAMm1aWaNDuaPYFYMAqDeJrZmPtNvLcJRS++CcZWY8LT6KcoBZw@mail.gmail.com
2021-03-02 15:39:34 -03:00
Michael Paquier c5530d8474 Fix duplicated test case in TAP tests of reindexdb
The same test for REINDEX (VERBOSE) was done twice, when the second test
was clearly meant to use --concurrently.  The issue was introduced in
5dc92b8, in what looks like a copy-paste mistake.

Reviewed-by: Mark Dilger
Discussion: https://postgr.es/m/A7AE97EA-F4B0-4CAB-8FFF-3FECD31F9D63@enterprisedb.com
Backpatch-through: 12
2021-03-02 13:18:06 +09:00
Michael Paquier 943eb47880 pgbench: Remove now-dead CState->ecnt
The last use of ecnt was in 12788ae.  It was getting incremented after a
backend error without any purpose since then, so let's get rid of it.

Author: Kota Miyake
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/786c3d9fbe067763d899e78c296f9f0f@oss.nttdata.com
2021-02-28 07:50:26 +09:00
Fujii Masao 6b40d9bdbd Improve tab-completion for TRUNCATE.
Author: Kota Miyake
Reviewed-by: Muhammad Usama
Discussion: https://postgr.es/m/f5d30053d00dcafda3280c9e267ecb0f@oss.nttdata.com
2021-02-25 18:20:57 +09:00
Michael Paquier bcf2667bf6 Fix some typos, grammar and style in docs and comments
The portions fixing the documentation are backpatched where needed.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20210210235557.GQ20012@telsasoft.com
backpatch-through: 9.6
2021-02-24 16:13:17 +09:00
Peter Eisentraut 6f6f284c7e Simplify printing of LSNs
Add a macro LSN_FORMAT_ARGS for use in printf-style printing of LSNs.
Convert all applicable code to use it.

Reviewed-by: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/CAExHW5ub5NaTELZ3hJUCE6amuvqAtsSxc7O+uK7y4t9Rrk23cw@mail.gmail.com
2021-02-23 10:27:02 +01:00
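
A small standalone sketch of the printing convention behind the new macro:
an LSN is conventionally displayed as two 32-bit halves in hex, separated
by a slash.  The macro below is a simplified local mimic; the real
LSN_FORMAT_ARGS lives in the PostgreSQL headers and also type-checks its
argument.

    /* Standalone mimic of printf-style LSN formatting. */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t XLogRecPtr;    /* stand-in for PostgreSQL's LSN type */

    /* Simplified mimic: split a 64-bit LSN into high/low 32-bit halves. */
    #define LSN_FORMAT_ARGS(lsn) ((uint32_t) ((lsn) >> 32)), ((uint32_t) (lsn))

    int
    main(void)
    {
        XLogRecPtr  lsn = UINT64_C(0x16B3748);

        /* Prints the familiar "%X/%X" form, e.g. 0/16B3748. */
        printf("%X/%X\n", LSN_FORMAT_ARGS(lsn));
        return 0;
    }
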
Thomas Munro 5bc09a7471 Tab-complete CREATE COLLATION.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20210117215940.GE8560%40telsasoft.com
2021-02-23 00:16:16 +13:00
Fujii Masao fe06819f10 Fix psql's ON_ERROR_ROLLBACK so that it handles COMMIT AND CHAIN.
When ON_ERROR_ROLLBACK is enabled, psql releases a temporary savepoint
if it's idle in a valid transaction block after executing a query. But psql
doesn't do that after RELEASE or ROLLBACK is executed because a temporary
savepoint has already been destroyed in that case.

This commit changes psql's ON_ERROR_ROLLBACK so that it also doesn't release
a temporary savepoint when COMMIT AND CHAIN is executed. A temporary
savepoint doesn't need to be released in that case because
COMMIT AND CHAIN also destroys any savepoints defined within the transaction
to commit. Otherwise psql tries to release the savepoint that
COMMIT AND CHAIN has already destroyed, causing the error
"ERROR:  savepoint "pg_psql_temporary_savepoint" does not exist".

Back-patch to v12 where transaction chaining was added.

Reported-by: Arthur Nascimento
Author: Arthur Nascimento
Reviewed-by: Fujii Masao, Vik Fearing
Discussion: https://postgr.es/m/16867-3475744069228158@postgresql.org
2021-02-19 22:01:25 +09:00
Peter Eisentraut 678d0e239b Update snowball
Update to snowball tag v2.1.0.  Major changes are new stemmers for
Armenian, Serbian, and Yiddish.
2021-02-19 08:10:15 +01:00
Michael Paquier e6b8e83b9f Add psql completion for [ NO ] DEPENDS ON EXTENSION
ALTER INDEX was able to handle that already.  This adds tab completion
for all the remaining commands that support this grammar:
- ALTER FUNCTION
- ALTER PROCEDURE
- ALTER ROUTINE
- ALTER TRIGGER
- ALTER MATERIALIZED VIEW

Author: Ian Lawrence Barwick
Discussion: https://postgr.es/m/CAB8KJ=iypYudXuMOAMOP4BpkaYbXxk=a2cdJppX0e9mJXWtuig@mail.gmail.com
2021-02-17 11:50:58 +09:00
Thomas Munro 2c8b42b50d Use pg_pwrite() in pg_test_fsync.
For consistency with the PostgreSQL behavior this test program is
intended to simulate, use pwrite() instead of lseek() + write().

Also fix the final "non-sync" test, which was opening and closing the
file for every write.

Discussion: https://postgr.es/m/CA%2BhUKGJjjid2BJsvjMALBTduo1ogdx2SPYaTQL3wAy8y2hc4nw%40mail.gmail.com
2021-02-15 15:23:12 +13:00
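
A standalone sketch of the I/O pattern change described above, using POSIX
pwrite() directly; pg_pwrite() is PostgreSQL's portability wrapper with a
similar shape.  Writing at an explicit offset replaces the lseek() + write()
pair, and the file is opened once rather than per write.

    /* Standalone sketch: positioned writes instead of lseek() + write(). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char  buf[8192] = {0};
        int         fd = open("pwrite_demo.tmp", O_CREAT | O_WRONLY, 0600);

        if (fd < 0)
        {
            perror("open");
            return 1;
        }

        /* Open the file once, then write each block at its own offset. */
        for (off_t off = 0; off < 10 * (off_t) sizeof(buf); off += sizeof(buf))
        {
            if (pwrite(fd, buf, sizeof(buf), off) != (ssize_t) sizeof(buf))
            {
                perror("pwrite");
                close(fd);
                return 1;
            }
        }

        close(fd);
        unlink("pwrite_demo.tmp");
        return 0;
    }
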
Michael Paquier b83dcf7928 Add result size as argument of pg_cryptohash_final() for overflow checks
With its current design, a careless use of pg_cryptohash_final() could
result in an out-of-bounds write in memory, as the size of the
destination buffer used to store the result digest is not known to the
cryptohash internals, and the caller may not even be aware of that.  This
commit adds a new argument to pg_cryptohash_final() to allow such sanity
checks, and implements such defenses.

The internals of SCRAM for HMAC could be tightened a bit more, but since
everything there is based on SCRAM_KEY_LEN, with uses particular to that
code, there is no need to complicate its interface more than necessary;
that can wait for the refactoring of HMAC in core.  Beyond that, this
minimizes the use of the existing DIGEST_LENGTH variables, relying
instead on sizeof() for the result sizes.  In ossp-uuid, this also makes
the code more defensive, as it already relied on dce_uuid_t being at
least the size of an MD5 digest.

This is in philosophy similar to cfc40d3 for base64.c and aef8948 for
hex.c.

Reported-by: Ranier Vilela
Author: Michael Paquier, Ranier Vilela
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAEudQAoqEGmcff3J4sTSV-R_16Monuz-UpJFbf_dnVH=APr02Q@mail.gmail.com
2021-02-15 10:18:34 +09:00
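
The defensive pattern the commit describes, passing the destination buffer
size to the finalization step so that it can refuse to overflow, can be
sketched in standalone C as below.  This is a conceptual mimic only, not
the actual pg_cryptohash_final() signature.

    /* Standalone sketch of a size-checked "final" step for a digest. */
    #include <stdio.h>
    #include <string.h>

    #define DIGEST_LENGTH 32        /* e.g. a SHA-256 digest */

    /* Returns 0 on success, -1 if the caller's buffer is too small. */
    static int
    digest_final(const unsigned char *digest, unsigned char *dest, size_t destlen)
    {
        if (destlen < DIGEST_LENGTH)
            return -1;          /* refuse to write past the caller's buffer */
        memcpy(dest, digest, DIGEST_LENGTH);
        return 0;
    }

    int
    main(void)
    {
        unsigned char digest[DIGEST_LENGTH] = {0};  /* pretend hash state */
        unsigned char small[16];
        unsigned char right[DIGEST_LENGTH];

        printf("small buffer: %d\n", digest_final(digest, small, sizeof(small)));
        printf("right-sized:  %d\n", digest_final(digest, right, sizeof(right)));
        return 0;
    }
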
Magnus Hagander e7f4291485 Remove extra Success message at the end of initdb
This was accidentally included in e09155bd62 and is redundant with the
lines right above it.

Reported-By: Peter Eisentraut
Discussion: https://postgr.es/m/455845d1-441d-cc40-d2a7-b47f4e422489@2ndquadrant.com
2021-02-10 18:21:55 +01:00
Peter Eisentraut 6499008150 pg_dump: Add const decorations
Add const decorations to the *info arguments of the dump* functions,
to clarify that they don't modify that argument.  Many other nearby
functions modify their arguments, so this can help clarify these
different APIs a bit.

Discussion: https://www.postgresql.org/message-id/flat/012d3030-9a2c-99a1-ed2d-988978b5632f%40enterprisedb.com
2021-02-10 13:21:47 +01:00
Michael Paquier 7cb3048f38 Add option PROCESS_TOAST to VACUUM
This option controls if toast tables associated with a relation are
vacuumed or not when running a manual VACUUM.  It was already possible
to trigger a manual VACUUM on a toast relation without processing its
main relation, but a manual vacuum on a main relation always forced a
vacuum on its toast table.  This is useful in scenarios where the level
of bloat or transaction age of the main and toast relations differs a
lot.

This option is an extension of the existing VACOPT_SKIPTOAST that was
used by autovacuum to control if toast relations should be skipped or
not.  This internal flag is renamed to VACOPT_PROCESS_TOAST for
consistency with the new option.

A new option switch, called --no-process-toast, is added to vacuumdb.

Author: Nathan Bossart
Reviewed-by: Kirk Jamison, Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/BA8951E9-1524-48C5-94AF-73B1F0D7857F@amazon.com
2021-02-09 14:13:57 +09:00
Robert Haas 418611c84d Generalize parallel slot result handling.
Instead of having a hard-coded behavior that we ignore missing
tables and report all other errors, let the caller decide what
to do by setting a callback.

Mark Dilger, reviewed and somewhat revised by me. The larger patch
series of which this is a part has also had review from Peter
Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, and Amul
Sul, but I don't know whether any of them have reviewed this bit
specifically.

Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
Discussion: http://postgr.es/m/5F743835-3399-419C-8324-2D424237E999@enterprisedb.com
Discussion: http://postgr.es/m/70655DF3-33CE-4527-9A4D-DDEB582B6BA0@enterprisedb.com
2021-02-05 16:08:45 -05:00
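
The shape of the refactoring, replacing a hard-coded error policy with a
caller-supplied callback, can be shown with a standalone sketch; the type
and function names below are invented for illustration and do not reflect
the actual fe_utils API.

    /* Standalone sketch: let the caller decide how failed results are treated. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct FakeResult
    {
        bool        ok;
        const char *sqlstate;
    } FakeResult;

    /* Hypothetical handler type: return false to treat the result as fatal. */
    typedef bool (*ResultHandler) (const FakeResult *res, void *context);

    static bool
    process_result(const FakeResult *res, ResultHandler handler, void *context)
    {
        if (res->ok)
            return true;
        /* The policy is delegated to the caller instead of being hard-coded. */
        return handler(res, context);
    }

    /* Example policy: ignore "undefined table" (42P01), report anything else. */
    static bool
    ignore_missing_tables(const FakeResult *res, void *context)
    {
        (void) context;
        if (strcmp(res->sqlstate, "42P01") == 0)
            return true;
        fprintf(stderr, "query failed (SQLSTATE %s)\n", res->sqlstate);
        return false;
    }

    int
    main(void)
    {
        FakeResult  missing = {false, "42P01"};
        FakeResult  broken = {false, "XX000"};

        printf("%d\n", process_result(&missing, ignore_missing_tables, NULL));
        printf("%d\n", process_result(&broken, ignore_missing_tables, NULL));
        return 0;
    }
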
Robert Haas e955bd4b6c Move some code from src/bin/scripts to src/fe_utils to permit reuse.
The parallel slots infrastructure (which implements client-side
multiplexing of server connections doing similar things, not
threading or multiple processes or anything like that) is moved from
src/bin/scripts/scripts_parallel.c to src/fe_utils/parallel_slot.c.

The functions consumeQueryResult() and processQueryResult() which were
previously part of src/bin/scripts/common.c are now moved into that
file as well, becoming static helper functions. This might need to be
changed in the future, but currently they're not used for anything
else.

Some other functions from src/bin/scripts/common.c are moved to
src/fe_utils and are split up among several files.  connectDatabase(),
connectMaintenanceDatabase(), and disconnectDatabase() are moved to
connect_utils.c.  executeQuery(), executeCommand(), and
executeMaintenanceCommand() are moved to query_utils.c.
handle_help_version_opts() is moved to option_utils.c.

Mark Dilger, reviewed by me. The larger patch series of which this is
a part has also had review from Peter Geoghegan, Andres Freund, Álvaro
Herrera, Michael Paquier, and Amul Sul, but I don't know whether any
of them have reviewed this bit specifically.

Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
Discussion: http://postgr.es/m/5F743835-3399-419C-8324-2D424237E999@enterprisedb.com
Discussion: http://postgr.es/m/70655DF3-33CE-4527-9A4D-DDEB582B6BA0@enterprisedb.com
2021-02-05 13:33:38 -05:00
Thomas Munro e1c02d92ae Tab-complete CREATE DATABASE ... LOCALE.
Author: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://postgr.es/m/CAB8KJ%3Dh0XO2CB4QbLBc1Tm9Bg5wzSGQtT-eunaCmrghJp4nqdA%40mail.gmail.com
2021-02-05 15:30:56 +13:00
Michael Paquier c5b286047c Add TABLESPACE option to REINDEX
This patch adds the possibility to move indexes to a new tablespace
while rebuilding them.  Both the concurrent and the non-concurrent cases
are supported, and the following set of restrictions apply:
- When using TABLESPACE with a REINDEX command that targets a
partitioned table or index, all the indexes of the leaf partitions are
moved to the new tablespace.  The tablespace references of the non-leaf,
partitioned tables in pg_class.reltablespace are not changed. This
requires an extra ALTER TABLE SET TABLESPACE.
- Any index on a toast table rebuilt as part of a parent table is kept
in its original tablespace.
- The operation is forbidden on system catalogs, including trying to
directly move a toast relation with REINDEX.  This results in an error
if doing REINDEX on a single object.  REINDEX SCHEMA, DATABASE and
SYSTEM skip system relations when TABLESPACE is used.

Author: Alexey Kondratov, Michael Paquier, Justin Pryzby
Reviewed-by: Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/8a8f5f73-00d3-55f8-7583-1375ca8f6a91@postgrespro.ru
2021-02-04 14:34:20 +09:00
Peter Eisentraut 0bf83648a5 pg_dump: Fix dumping of inherited generated columns
Generation expressions of generated columns are always inherited, so
there is no need to set them separately in child tables, and there is
no syntax to do so either.  The code previously used the code paths
for the handling of default values, for which different rules apply;
in particular it might want to set a default value explicitly for an
inherited column.  This resulted in unrestorable dumps.  For generated
columns, just skip them in inherited tables.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
2021-02-03 11:27:13 +01:00
Peter Eisentraut dfb75e478c Add primary keys and unique constraints to system catalogs
For those system catalogs that have unique indexes, make a primary
key and unique constraint, using ALTER TABLE ... PRIMARY KEY/UNIQUE
USING INDEX.

This can be helpful for GUI tools that look for a primary key, and it
might in the future allow declaring foreign keys, for making schema
diagrams.

The constraint creation statements are automatically created by
genbki.pl from DECLARE_UNIQUE_INDEX directives.  To specify which one
of the available unique indexes is the primary key, use the new
directive DECLARE_UNIQUE_INDEX_PKEY instead.  By convention, we
usually make a catalog's OID column its primary key, if it has one.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/dc5f44d9-5ec1-a596-0251-dadadcdede98@2ndquadrant.com
2021-01-30 19:44:29 +01:00
Peter Eisentraut 2592be8be5 Fix typo 2021-01-29 09:43:21 +01:00
Alvaro Herrera 6819b9042f
pgbench: Remove dead code
doConnect() never returns connections in state CONNECTION_BAD, so
checking for that is pointless.  Remove the code that does.

This code has been dead since ba708ea3dc, 20 years ago.

Discussion: https://postgr.es/m/20210126195224.GA20361@alvherre.pgsql
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
2021-01-28 12:50:40 -03:00
Peter Eisentraut b034ef9b37 Remove gratuitous uses of deprecated SELECT INTO
CREATE TABLE AS has been preferred over SELECT INTO (outside of ecpg
and PL/pgSQL) for a long time.  There were still a few uses of SELECT
INTO in tests and documentation, some old, some more recent.  This
changes them to CREATE TABLE AS.  Some occurrences in the tests remain
where they are specifically testing SELECT INTO parsing or similar.

Discussion: https://www.postgresql.org/message-id/flat/96dc0df3-e13a-a85d-d045-d6e2c85218da%40enterprisedb.com
2021-01-28 14:28:41 +01:00
Tom Lane f76a85000b Code review for psql's helpSQL() function.
The loops to identify word boundaries could access past the end of
the input string.  Likely that would never result in an actual
crash, but it makes valgrind unhappy.

The logic to try different numbers of words didn't work when the
input has two words but we only have a match to the first, eg
"\h with select".  (We must "continue" the pass loop, not "break".)

The logic to compute nl_count was bizarrely managed, and in at
least two code paths could end up calling PageOutput with
nl_count = 0, resulting in failing to paginate output that should
have been fed to the pager.  Also, in v12 and up, the nl_count
calculation hadn't been updated to account for the addition of a URL.

The PQExpBuffer holding the command syntax details wasn't freed,
resulting in a session-lifespan memory leak.

While here, improve some comments, choose a more descriptive name
for a variable, fix inconsistent datatype choice for another variable.

Per bug #16837 from Alexander Lakhin.  This code is very old,
so back-patch to all supported branches.

Kyotaro Horiguchi and Tom Lane

Discussion: https://postgr.es/m/16837-479bcd56040c71b3@postgresql.org
2021-01-26 13:04:52 -05:00
Tom Lane 58cd8dca3d Avoid redundantly prefixing PQerrorMessage for a connection failure.
libpq's error messages for connection failures pretty well stand on
their own, especially since commits 52a10224e/27a48e5a1.  Prefixing
them with 'could not connect to database "foo"' or the like is just
redundant, and perhaps even misleading if the specific database name
isn't relevant to the failure.  (When it is, we trust that the
backend's error message will include the DB name.)  Indeed, psql
hasn't used any such prefix in a long time.  So, make all our other
programs and documentation examples agree with psql's practice.

Discussion: https://postgr.es/m/1094524.1611266589@sss.pgh.pa.us
2021-01-22 16:52:31 -05:00
Tom Lane 27a48e5a16 Improve new wording of libpq's connection failure messages.
"connection to server so-and-so failed:" seems clearer than the
previous wording "could not connect to so-and-so:" (introduced by
52a10224e), because the latter suggests a network-level connection
failure.  We're now prefixing this string to all types of connection
failures, for instance authentication failures; so we need wording
that doesn't imply a low-level error.

Per discussion with Robert Haas.

Discussion: https://postgr.es/m/CA+TgmobssJ6rS22dspWnu-oDxXevGmhMD8VcRBjmj-b9UDqRjw@mail.gmail.com
2021-01-21 16:10:18 -05:00
Tomas Vondra ad600bba04 psql \dX: list extended statistics objects
The new command lists extended statistics objects. All past releases
with extended statistics are supported.

This is a simplified version of commit 891a1d0bca, which had to be
reverted because it did not consider that pg_statistic_ext_data is not
accessible to regular users. Fields requiring access to this catalog were removed.
It's possible to add them, but it'll require changes to core.

Author: Tatsuro Yamada
Reviewed-by: Julien Rouhaud, Alvaro Herrera, Tomas Vondra, Noriyoshi Shinoda
Discussion: https://postgr.es/m/c027a541-5856-75a5-0868-341301e1624b%40nttcom.co.jp_1
2021-01-20 22:57:21 +01:00
Tomas Vondra 1db0d173a2 Revert "psql \dX: list extended statistics objects"
Reverts 891a1d0bca, because the new psql command \dX only worked for
users who can read the pg_statistic_ext_data catalog, and most regular
users lack that privilege (the catalog may contain sensitive user data).

Reported-by: Noriyoshi Shinoda
Discussion: https://postgr.es/m/c027a541-5856-75a5-0868-341301e1624b%40nttcom.co.jp_1
2021-01-17 15:11:14 +01:00
Magnus Hagander e09155bd62 Add --no-instructions parameter to initdb
Specifying this parameter removes the informational messages about how
to start the server. This is intended for use by wrappers in different
packaging systems, where those instructions would most likely be wrong
anyway, but the other output from initdb would still be useful (and thus
just redirecting everything to /dev/null would be bad).

Author: Magnus Hagander
Reviewed-By: Peter Eisentraut
Discussion: https://postgr.es/m/CABUevEzo4t5bmTXF0_B9WzmuWpVbMpkNZZiGvzV8NZa-=fPqeQ@mail.gmail.com
2021-01-17 14:34:41 +01:00
Tomas Vondra 891a1d0bca psql \dX: list extended statistics objects
The new command lists extended statistics objects, possibly with their
sizes. All past releases with extended statistics are supported.

Author: Tatsuro Yamada
Reviewed-by: Julien Rouhaud, Alvaro Herrera, Tomas Vondra
Discussion: https://postgr.es/m/c027a541-5856-75a5-0868-341301e1624b%40nttcom.co.jp_1
2021-01-17 00:16:45 +01:00
Noah Misch f713ff7c64 Fix pg_dump for GRANT OPTION among initial privileges.
The context is an object that no longer bears some aclitem that it bore
initially.  (A user issued REVOKE or GRANT statements upon the object.)
pg_dump is forming SQL to reproduce the object ACL.  Since initdb
creates no ACL bearing GRANT OPTION, reaching this bug requires an
extension where the creation script establishes such an ACL.  No PGXN
extension does that.  If an installation did reach the bug, pg_dump
would have omitted a semicolon, causing a REVOKE and the next SQL
statement to fail.  Separately, since the affected code exists to
eliminate an entire aclitem, it wants plain REVOKE, not REVOKE GRANT
OPTION FOR.  Back-patch to 9.6, where commit
23f34fa4ba first appeared.

Discussion: https://postgr.es/m/20210109102423.GA160022@rfd.leadboat.com
2021-01-16 12:21:35 -08:00
Tom Lane 8e396a773b pg_dump: label PUBLICATION TABLE ArchiveEntries with an owner.
This is the same fix as commit 9eabfe300 applied to INDEX ATTACH
entries, but for table-to-publication attachments.  As in that
case, even though the backend doesn't record "ownership" of the
attachment, we still ought to label it in the dump archive with
the role name that should run the ALTER PUBLICATION command.
The existing behavior causes the ALTER to be done by the original
role that started the restore; that will usually work fine, but
there may be corner cases where it fails.

The bulk of the patch is concerned with changing struct
PublicationRelInfo to include a pointer to the associated
PublicationInfo object, so that we can get the owner's name
out of that when the time comes.  While at it, I rewrote
getPublicationTables() to do just one query of pg_publication_rel,
not one per table.

Back-patch to v10 where this code was introduced.

Discussion: https://postgr.es/m/1165710.1610473242@sss.pgh.pa.us
2021-01-14 16:19:38 -05:00
Fujii Masao 3f238b882c Improve tab-completion for CLOSE, DECLARE, FETCH and MOVE.
This commit makes CLOSE, FETCH and MOVE commands tab-complete the list of
cursors. Also this commit makes DECLARE command tab-complete the options.

Author: Shinya Kato, Sawada Masahiko, tweaked by Fujii Masao
Reviewed-by: Shinya Kato, Sawada Masahiko, Fujii Masao
Discussion: https://postgr.es/m/b0e4c5c53ef84c5395524f5056fc71f0@MP-MSGSS-MBX001.msg.nttdata.co.jp
2021-01-14 15:41:22 +09:00
Tom Lane c21ea4d53e Disallow a digit as the first character of a variable name in pgbench.
The point of this restriction is to avoid trying to substitute variables
into timestamp literal values, which may contain strings like '12:34'.

There is a good deal more that should be done to reduce pgbench's
tendency to substitute where it shouldn't.  But this is sufficient to
solve the case complained of by Jaime Soler, and it's simple enough
to back-patch.

Back-patch to v11; before commit 9d36a3866, pgbench had a slightly
different definition of what a variable name is, and anyway it seems
unwise to change long-stable branches for this.

Fabien Coelho

Discussion: https://postgr.es/m/alpine.DEB.2.22.394.2006291740420.805678@pseudo
2021-01-13 14:52:59 -05:00
Tom Lane 9eabfe300a pg_dump: label INDEX ATTACH ArchiveEntries with an owner.
Although a partitioned index's attachment to its parent doesn't
have separate ownership, the ArchiveEntry for it needs to be
marked with an owner anyway, to ensure that the ALTER command
is run by the appropriate role when restoring with
--use-set-session-authorization.  Without this, the ALTER will
be run by the role that started the restore session, which will
usually work but it's formally the wrong thing.

Back-patch to v11 where this type of ArchiveEntry was added.
In HEAD, add equivalent commentary to the just-added TABLE ATTACH
case, which I'd made do the right thing already.

Discussion: https://postgr.es/m/1094034.1610418498@sss.pgh.pa.us
2021-01-12 13:37:38 -05:00
Tom Lane 9a4c0e36fb Dump ALTER TABLE ... ATTACH PARTITION as a separate ArchiveEntry.
Previously, we emitted the ATTACH PARTITION command as part of
the child table's ArchiveEntry.  This was a poor choice since it
complicates restoring the partition as a standalone table; you have
to ignore the error from the ATTACH, which isn't even an option when
restoring direct-to-database with pg_restore.  (pg_restore will issue
the whole ArchiveEntry as one PQexec, so that any error rolls back
the table creation as well.)  Hence, separate it out as its own
ArchiveEntry, as indeed we already did for index ATTACH PARTITION
commands.

Justin Pryzby

Discussion: https://postgr.es/m/20201023052940.GE9241@telsasoft.com
2021-01-11 21:09:18 -05:00
Tom Lane d5ab79d815 Make pg_dump's table of object-type priorities more maintainable.
Wedging a new object type into this table has historically required
manually renumbering a lot of existing entries.  (Although it appears
that some people got lazy and re-used the priority level of an
existing object type, even if it wasn't particularly related.)
We can let the compiler do the counting by inventing an enum type that
lists the desired priority levels in order.  Now, if you want to add
or remove a priority level, that's a one-liner.

This patch is not purely cosmetic, because I split apart the priorities
of DO_COLLATION and DO_TRANSFORM, as well as those of DO_ACCESS_METHOD
and DO_OPERATOR, which look to me to have been merged out of expediency
rather than because it was a good idea.  Shell types continue to be
sorted interchangeably with full types, and opclasses interchangeably
with opfamilies.
2021-01-11 21:09:18 -05:00
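
The maintainability trick described above, letting the compiler assign
consecutive priority values through an enum so that inserting a level is a
one-line change, looks roughly like the standalone sketch below; the
identifiers are illustrative, not the actual pg_dump names.

    /* Standalone sketch: an enum instead of hand-numbered priority constants. */
    #include <stdio.h>

    /*
     * Inserting a new level anywhere in this list automatically renumbers
     * everything after it; no manual renumbering of later entries needed.
     */
    typedef enum DumpPriority
    {
        PRIO_NAMESPACE = 1,
        PRIO_EXTENSION,
        PRIO_TYPE,              /* shell types sort together with full types */
        PRIO_FUNC,
        PRIO_TABLE,
        PRIO_INDEX
    } DumpPriority;

    int
    main(void)
    {
        printf("PRIO_TABLE = %d\n", (int) PRIO_TABLE);
        return 0;
    }
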
Tom Lane 52a10224e3 Uniformly identify the target host in libpq connection failure reports.
Prefix "could not connect to host-or-socket-path:" to all connection
failure cases that occur after the socket() call, and remove the
ad-hoc server identity data that was appended to a few of these
messages.  This should produce much more intelligible error reports
in multiple-target-host situations, especially for error cases that
are off the beaten track to any degree (because none of those provided
any server identity info).

As an example of the change, formerly a connection attempt with a bad
port number such as "psql -p 12345 -h localhost,/tmp" might produce

psql: error: could not connect to server: Connection refused
        Is the server running on host "localhost" (::1) and accepting
        TCP/IP connections on port 12345?
could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting
        TCP/IP connections on port 12345?
could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.12345"?

Now it looks like

psql: error: could not connect to host "localhost" (::1), port 12345: Connection refused
        Is the server running on that host and accepting TCP/IP connections?
could not connect to host "localhost" (127.0.0.1), port 12345: Connection refused
        Is the server running on that host and accepting TCP/IP connections?
could not connect to socket "/tmp/.s.PGSQL.12345": No such file or directory
        Is the server running locally and accepting connections on that socket?

This requires adjusting a couple of regression tests to allow for
variation in the contents of a connection failure message.

Discussion: https://postgr.es/m/BN6PR05MB3492948E4FD76C156E747E8BC9160@BN6PR05MB3492.namprd05.prod.outlook.com
2021-01-11 14:03:39 -05:00
Tom Lane 9ffe227837 Adjust createdb TAP tests to work on recent OpenBSD.
We found last February that the error-case tests added by commit
008cf0409 failed on OpenBSD, because that platform doesn't really
check locale names.  At the time it seemed that that was only an issue
for LC_CTYPE, but testing on a more recent version of OpenBSD shows
that it's now equally lax about LC_COLLATE.

Rather than dropping the LC_COLLATE test too, put back LC_CTYPE
(reverting c4b0edb07), and adjust these tests to accept the different
error message that we get if setlocale() doesn't reject a bogus locale
name.  The point of these tests is not really what the backend does
with the locale name, but to show that createdb quotes funny locale
names safely; so we're not losing test reliability this way.

Back-patch as appropriate.

Discussion: https://postgr.es/m/231373.1610058324@sss.pgh.pa.us
2021-01-07 20:36:09 -05:00
Michael Paquier bc08f7971c Promote --data-checksums to the common set of options in initdb --help
This was previously part of the section dedicated to less common
options, but it is an option commonly used these days.

Author: Michael Banck
Reviewed-by: Stephen Frost, Michael Paquier
Discussion: https://postgr.es/m/d7938aca4d4ea8e8c72c33bd75efe9f8218fe390.camel@credativ.de
2021-01-06 10:52:26 +09:00
Tom Lane 7d80441d2c Allow psql's \dt and \di to show TOAST tables and their indexes.
Formerly, TOAST objects were unconditionally suppressed, but since
\d is able to print them it's not very clear why these variants
should not.  Instead, use the same rules as for system catalogs:
they can be seen if you write the 'S' modifier or a table name
pattern.  (In practice, since hardly anybody would keep pg_toast
in their search_path, it's really down to whether you use a pattern
that can match pg_toast.*.)

No docs change seems necessary because the docs already say that
this happens for "system objects"; we're just classifying TOAST
tables as being that.

Justin Pryzby, reviewed by Laurenz Albe

Discussion: https://postgr.es/m/20201130165436.GX24052@telsasoft.com
2021-01-05 18:41:50 -05:00
Bruce Momjian ca3b37487b Update copyright for 2021
Backpatch-through: 9.5
2021-01-02 13:06:25 -05:00
Tom Lane 091866724c More fixups for pg_upgrade cross-version tests.
Commit 7ca37fb04 removed regress_putenv from the regress.so library,
so reloading a SQL function dependent on that would not work.
Fix similarly to 52202bb39.

Per buildfarm.
2020-12-30 14:15:48 -05:00
Tom Lane 7ca37fb040 Use setenv() in preference to putenv().
Since at least 2001 we've used putenv() and avoided setenv(), on the
grounds that the latter was unportable and not in POSIX.  However,
POSIX added it that same year, and by now the situation has reversed:
setenv() is probably more portable than putenv(), since POSIX now
treats the latter as not being a core function.  And setenv() has
cleaner semantics too.  So, let's reverse that old policy.

This commit adds a simple src/port/ implementation of setenv() for
any stragglers (we have one in the buildfarm, but I'd not be surprised
if that code is never used in the field).  More importantly, extend
win32env.c to also support setenv().  Then, replace usages of putenv()
with setenv(), and get rid of some ad-hoc implementations of setenv()
wannabees.

Also, adjust our src/port/ implementation of unsetenv() to follow the
POSIX spec that it returns an error indicator, rather than returning
void as per the ancient BSD convention.  I don't feel a need to make
all the call sites check for errors, but the portability stub ought
to match real-world practice.

Discussion: https://postgr.es/m/2065122.1609212051@sss.pgh.pa.us
2020-12-30 12:56:06 -05:00
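
A standalone sketch contrasting the two interfaces discussed above:
setenv() copies its arguments and takes an explicit overwrite flag, while
putenv() keeps a pointer to the caller's string, which is the main reason
setenv() has the cleaner semantics.

    /* Standalone sketch: setenv() vs. putenv() semantics. */
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        /* setenv() copies "5433"; the argument need not stay alive. */
        if (setenv("PGPORT", "5433", 1) != 0)   /* 1 = overwrite existing */
            perror("setenv");

        /*
         * putenv() keeps a pointer to this buffer, so it must remain valid
         * for as long as the variable is set.
         */
        static char fixed[] = "PGDATABASE=postgres";
        putenv(fixed);

        printf("PGPORT=%s\n", getenv("PGPORT"));
        printf("PGDATABASE=%s\n", getenv("PGDATABASE"));

        /* POSIX unsetenv() returns an error indicator, matching the
         * adjusted src/port/ stub mentioned above. */
        if (unsetenv("PGPORT") != 0)
            perror("unsetenv");
        return 0;
    }
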
Noah Misch fa744697c7 In pg_upgrade cross-version test, handle postfix operators.
Commit 1ed6b89563 eliminated support for
them, so drop them from regression databases before upgrading.  This is
necessary but not sufficient for testing v13 -> v14 upgrades.

Discussion: https://postgr.es/m/449144.1600439950@sss.pgh.pa.us
2020-12-30 01:43:43 -08:00
Noah Misch 52202bb396 In pg_upgrade cross-version test, handle lack of oldstyle_length().
This suffices for testing v12 -> v13; some other version pairs need more
changes.  Back-patch to v10, which removed the function.
2020-12-30 01:43:43 -08:00
Michael Paquier 1b3433e25f doc: Improve description of min_dynamic_shared_memory
While at it, fix one oversight in 90fbf7c, which introduced a reference
to an incorrect value for the compression level of pg_dump.

Author: Justin Pryzby
Reviewed-by: Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/CA+hUKGJRTLWWPcQfjm_xaOk98M8aROK903X92O0x-4vLJPWrrA@mail.gmail.com
2020-12-29 16:49:14 +09:00
Bruce Momjian 3187ef7c46 Revert "Add key management system" (978f869b99) & later commits
The patch needs test cases, reorganization, and cfbot testing.
Technically reverts commits 5c31afc49d..e35b2bad1a (exclusive/inclusive)
and 08db7c63f3..ccbe34139b.

Reported-by: Tom Lane, Michael Paquier

Discussion: https://postgr.es/m/E1ktAAG-0002V2-VB@gemulon.postgresql.org
2020-12-27 21:37:42 -05:00
Bruce Momjian ccbe34139b initdb: document that -K requires an argument
Reported-by: "Shinoda, Noriyoshi"

Discussion: https://postgr.es/m/TU4PR8401MB1152E92B4D44C81E496D6032EEDB0@TU4PR8401MB1152.NAMPRD84.PROD.OUTLOOK.COM

Author: "Shinoda, Noriyoshi"

Backpatch-through: master
2020-12-26 10:00:05 -05:00
Bruce Momjian e174a6f193 pg_alterckey: remove TAP check rules from Makefile
Reported-by: Pavel Stehule, Michael Paquier

Discussion: https://postgr.es/m/CAFj8pRBRNo4co5bqCx4BLx1ZZ45Z_T-opPxA+u7SLp7gAtBpNA@mail.gmail.com

Backpatch-through: master
2020-12-26 09:03:12 -05:00
Bruce Momjian 82f8c45be5 pg_alterckey: adjust doc build and Win32 sleep/open build fails
Fix for commit 62afb42a7f.

Reported-by: Tom Lane

Discussion: https://postgr.es/m/1252111.1608953815@sss.pgh.pa.us

Backpatch-through: master
2020-12-25 22:47:16 -05:00
Bruce Momjian 62afb42a7f Add pg_alterckey utility to change the cluster key
This can change the key that encrypts the data encryption keys used for
cluster file encryption.

Discussion: https://postgr.es/m/20201202213814.GG20285@momjian.us

Backpatch-through: master
2020-12-25 20:24:53 -05:00
Bruce Momjian 26d60f2a6c fixes docs and missing initdb help option for commit 978f869b99
Reported-by: Erik Rijkers

Discussion: https://postgr.es/m/a27e7bb60fc4c4a1fe960f7b055ba822@xs4all.nl

Backpatch-through: master
2020-12-25 14:00:22 -05:00
Bruce Momjian 978f869b99 Add key management system
This adds a key management system that stores (currently) two data
encryption keys of length 128, 192, or 256 bits.  The data keys are
AES256 encrypted using a key encryption key, and validated via GCM
cipher mode.  A command to obtain the key encryption key must be
specified at initdb time, and will be run at every database server
start.  New parameters allow a file descriptor open to the terminal to
be passed.  pg_upgrade support has also been added.

Discussion: https://postgr.es/m/CA+fd4k7q5o6Nc_AaX6BcYM9yqTbC6_pnH-6nSD=54Zp6NBQTCQ@mail.gmail.com
Discussion: https://postgr.es/m/20201202213814.GG20285@momjian.us

Author: Masahiko Sawada, me, Stephen Frost
2020-12-25 10:19:44 -05:00
Tom Lane 5c31afc49d Avoid time-of-day-dependent failure in log rotation test.
Buildfarm members pogona and petalura have shown a failure when
pg_ctl/t/004_logrotate.pl starts just before local midnight.
The default rotate-at-midnight behavior occurs just before the
Perl script examines current_logfiles, so it figures that the
rotation it's already requested has occurred ... but in reality,
that rotation happens just after it looks, so the expected new
log data goes into a different file than the one it's examining.

In HEAD, src/test/kerberos/t/001_auth.pl has acquired similar code
that evidently has a related failure mode.  Besides being quite new,
few buildfarm critters run that test, so it's unsurprising that
we've not yet seen a failure there.

Fix both cases by setting log_rotation_age = 0 so that no time-based
rotation can occur.  Also absorb 004_logrotate.pl's decision to
set lc_messages = 'C' into the kerberos test, in hopes that it will
work in non-English prevailing locales.

Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-12-24%2022%3A10%3A04
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-02-01%2022%3A20%3A04
2020-12-24 21:37:46 -05:00
Michael Paquier 90fbf7c57d Fix typos and grammar in docs and comments
This fixes several areas of the documentation and some comments in
matters of style, grammar, or even format.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20201222041153.GK30237@telsasoft.com
2020-12-24 17:05:49 +09:00
Alexander Korotkov 8344d72ccc Fixes for pg_dump.c regarding multiranges
This commit fixes two wrong version number checks and one wrong check for null.
2020-12-20 08:14:35 +03:00
Alexander Korotkov 6df7a9698b Multirange datatypes
Multiranges are basically sorted arrays of non-overlapping ranges with
set-theoretic operations defined over them.

Since v14, each range type automatically gets a corresponding multirange
datatype.  There are both manual and automatic mechanisms for naming multirange
types.  One can specify a multirange type name using the multirange_type_name
attribute in CREATE TYPE.  Otherwise, a multirange type name is generated
automatically.  If the range type name contains "range" then we change that to
"multirange".  Otherwise, we add "_multirange" to the end.

The implementation of multiranges comes with a space-efficient internal
representation format, which avoids extra padding and duplicated storage of
OIDs.  Altogether this format allows fetching a particular range by its index
in O(n).

Statistics gathering and selectivity estimation are implemented for multiranges.
For this purpose, a stored multirange is approximated as a union range without gaps.
This area will likely need improvements in the future.

Catversion is bumped.

Discussion: https://postgr.es/m/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com
Discussion: https://postgr.es/m/a0b8026459d1e6167933be2104a6174e7d40d0ab.camel%40j-davis.com#fe7218c83b08068bfffb0c5293eceda0
Author: Paul Jungwirth, revised by me
Reviewed-by: David Fetter, Corey Huinker, Jeff Davis, Pavel Stehule
Reviewed-by: Alvaro Herrera, Tom Lane, Isaac Morland, David G. Johnston
Reviewed-by: Zhihong Yu, Alexander Korotkov
2020-12-20 07:20:33 +03:00
Bruce Momjian d6abfdf84e initdb: complete getopt_long alphabetization
Backpatch-through: 9.5
2020-12-12 12:59:09 -05:00
Bruce Momjian 39f3a9d2ff initdb: properly alphabetize getopt_long options in C string
Backpatch-through: 9.5
2020-12-12 12:51:16 -05:00
Peter Eisentraut d2a2808eb4 pg_dump: Don't use enums for defining bit mask values
This usage would mean that values of the enum type are potentially not
one of the enum values.  Use macros instead, like everywhere else.

Discussion: https://www.postgresql.org/message-id/14dde730-1d34-260e-fa9d-7664df2d6313@enterprisedb.com
2020-12-11 19:15:30 +01:00
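
The problem and the fix can be sketched in a few standalone lines: OR-ing
enum members produces a value that is not itself one of the enumerators, so
plain macros over an integer type (as used elsewhere in pg_dump) are the
more honest representation.  The names below are illustrative.

    /* Standalone sketch: bit-mask values as macros rather than enum members. */
    #include <stdio.h>

    /*
     * With an enum, (DUMP_SCHEMA | DUMP_DATA) is not one of the declared
     * enumerators, so a variable of the enum type could hold a "non-value".
     * Macros over a plain integer type avoid that fiction.
     */
    #define DUMP_SCHEMA     (1 << 0)
    #define DUMP_DATA       (1 << 1)
    #define DUMP_STATS      (1 << 2)
    #define DUMP_ALL        (DUMP_SCHEMA | DUMP_DATA | DUMP_STATS)

    int
    main(void)
    {
        unsigned int components = DUMP_SCHEMA | DUMP_DATA;

        if (components & DUMP_DATA)
            printf("data requested (mask = 0x%x)\n", components);
        return 0;
    }
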
Tom Lane c7aba7c14e Support subscripting of arbitrary types, not only arrays.
This patch generalizes the subscripting infrastructure so that any
data type can be subscripted, if it provides a handler function to
define what that means.  Traditional variable-length (varlena) arrays
all use array_subscript_handler(), while the existing fixed-length
types that support subscripting use raw_array_subscript_handler().
It's expected that other types that want to use subscripting notation
will define their own handlers.  (This patch provides no such new
features, though; it only lays the foundation for them.)

To do this, move the parser's semantic processing of subscripts
(including coercion to whatever data type is required) into a
method callback supplied by the handler.  On the execution side,
replace the ExecEvalSubscriptingRef* layer of functions with direct
calls to callback-supplied execution routines.  (Thus, essentially
no new run-time overhead should be caused by this patch.  Indeed,
there is room to remove some overhead by supplying specialized
execution routines.  This patch does a little bit in that line,
but more could be done.)

Additional work is required here and there to remove formerly
hard-wired assumptions about the result type, collation, etc
of a SubscriptingRef expression node; and to remove assumptions
that the subscript values must be integers.

One useful side-effect of this is that we now have a less squishy
mechanism for identifying whether a data type is a "true" array:
instead of wiring in weird rules about typlen, we can look to see
if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER.  For this
to be bulletproof, we have to forbid user-defined types from using
that handler directly; but there seems no good reason for them to
do so.

This patch also removes assumptions that the number of subscripts
is limited to MAXDIM (6), or indeed has any hard-wired limit.
That limit still applies to types handled by array_subscript_handler
or raw_array_subscript_handler, but to discourage other dependencies
on this constant, I've moved it from c.h to utils/array.h.

Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov,
Peter Eisentraut, Pavel Stehule

Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
2020-12-09 12:40:37 -05:00
Heikki Linnakangas 6ba581cf11 Fix more race conditions in the newly-added pg_rewind test.
pg_rewind looks at the control file to check what timeline a server is on.
But promotion doesn't immediately write a checkpoint, it merely writes
an end-of-recovery WAL record. If pg_rewind runs immediately after
promotion, before the checkpoint has completed, it will think think that
the server is still on the earlier timeline. We ran into this issue a long
time ago already, see commit 484a848a73.

It's a bit bogus that pg_rewind doesn't determine the timeline correctly
until the end-of-recovery checkpoint has completed. We probably should
fix that. But for now work around it by waiting for the checkpoint
to complete before running pg_rewind, like we did in commit 484a848a73.

In passing, tidy up the new test a little bit. Reorder the INSERTs so
that the comments make more sense, remove a spurious CHECKPOINT call after
pg_rewind has already run, and add the --debug option, so that if this fails
again, we'll have more data.

Per buildfarm failure at https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=rorqual&dt=2020-12-06%2018%3A32%3A19&stg=pg_rewind-check.
Backpatch to all supported versions.

Discussion: https://www.postgresql.org/message-id/1713707e-e318-761c-d287-5b6a4aa807e8@iki.fi
2020-12-07 14:50:20 +02:00
Tom Lane 0473296246 pg_dump: Reorganize dumpBaseType()
Along the same lines as ed2c7f65b and daa9fe8a5, reduce code duplication
by having just one copy of the parts of the query that are the same
across all server versions; and make the conditionals control the
smallest possible amount of code.  This is in preparation for adding
another dumpable field to pg_type.
2020-12-06 22:37:40 -05:00
Michael Paquier 51c3889877 Fix fd leak in pg_verifybackup
An error code path newly-introduced by 87ae969 forgot to close a file
descriptor when verifying a file's checksum.

Per report from Coverity, via Tom Lane.
2020-12-07 09:30:36 +09:00
Heikki Linnakangas 36a4ac20fc Fix race conditions in newly-added test.
The buildfarm has been failing sporadically on the new test.  I was able to
reproduce this by adding a random 0-10 s delay in the walreceiver, just
before it connects to the primary. There's a race condition where node_3
is promoted before it has fully caught up with node_1, leading to diverged
timelines. When node_1 is later reconfigured as standby following node_3,
it fails to catch up:

LOG:  primary server contains no more WAL on requested timeline 1
LOG:  new timeline 2 forked off current database system timeline 1 before current recovery point 0/30000A0

That's the situation where you'd need to use pg_rewind, but in this case
it happens already when we are just setting up the actual pg_rewind
scenario we want to test, so change the test so that it waits until
node_3 is connected and fully caught up before promoting it, so that you
get a clean, controlled failover.

Also rewrite some of the comments, for clarity. The existing comments
detailed what each step in the test did, but didn't give a good overview
of the situation the steps were trying to create.

For reasons I don't understand, the test setup had to be written slightly
differently in 9.6 and 9.5 than in later versions. The 9.5/9.6 version
needed node 1 to be reinitialized from backup, whereas in later versions
it could be shut down and reconfigured to be a standby. But even 9.5 should
support "clean switchover", where primary makes sure that pending WAL is
replicated to standby on shutdown. It would be nice to figure out what's
going on there, but that's independent of pg_rewind and the scenario that
this test tests.

Discussion: https://www.postgresql.org/message-id/b0a3b95b-82d2-6089-6892-40570f8c5e60%40iki.fi
2020-12-04 18:26:46 +02:00
Heikki Linnakangas 2b4f313038 Fix pg_rewind bugs when rewinding a standby server.
If the target is a standby server, its WAL doesn't end at the last
checkpoint record, but at minRecoveryPoint. We must scan all the
WAL from the last common checkpoint all the way up to minRecoveryPoint
for modified pages, and also consider that portion when determining
whether the server needs rewinding.

Backpatch to all supported versions.

Author: Ian Barwick and me
Discussion: https://www.postgresql.org/message-id/CABvVfJU-LDWvoz4-Yow3Ay5LZYTuPD7eSjjE4kGyNZpXC6FrVQ%40mail.gmail.com
2020-12-03 15:57:48 +02:00
Michael Paquier b5913f6120 Refactor CLUSTER and REINDEX grammar to use DefElem for option lists
This changes CLUSTER and REINDEX so that a parenthesized grammar becomes
possible for options, while unifying the grammar parsing rules for
option lists with the existing ones.

This is a follow-up of the work done in 873ea9e for VACUUM, ANALYZE and
EXPLAIN.  This benefits REINDEX, with potential backend-side filtering
of collation-sensitive indexes and with TABLESPACE, while CLUSTER would
benefit from the latter.

Author: Alexey Kondratov, Justin Pryzby
Discussion: https://postgr.es/m/8a8f5f73-00d3-55f8-7583-1375ca8f6a91@postgrespro.ru
2020-12-03 10:13:21 +09:00
Michael Paquier 87ae9691d2 Move SHA2 routines to a new generic API layer for crypto hashes
Two new routines to allocate a hash context and to free it are created,
as these become necessary for the goal behind this refactoring: switching
all the cryptohash implementations for OpenSSL to use EVP (for FIPS, and
also because upstream has not recommended the use of low-level cryptohash
functions for 20 years).  Note that OpenSSL has hidden the internals of
cryptohash contexts since 1.1.0, so it is necessary to leave the
allocation to OpenSSL itself, explaining the need for those two new
routines.  This part is going to require more work to properly track
hash contexts with resource owners, but that is not introduced here.
Still, this refactoring makes the move possible.

This reduces the number of routines for all SHA2 implementations from
twelve (SHA{224,256,384,512} with init, update and final calls) to five
(create, free, init, update and final calls) by incorporating the hash
type directly into the hash context data.

The new cryptohash routines are moved to a new file, called cryptohash.c
for the fallback implementations, with SHA2 specifics becoming a part
internal to src/common/.  OpenSSL specifics are part of
cryptohash_openssl.c.  This infrastructure is usable for more hash
types, like MD5 or HMAC.

Any code paths using the internal SHA2 routines are adapted to report
correctly errors, which are most of the changes of this commit.  The
zones mostly impacted are checksum manifests, libpq and SCRAM.

Note that e21cbb4 was a first attempt to switch SHA2 to EVP, but it
lacked the refactoring needed for libpq, as done here.

This patch has been tested on Linux and Windows, with and without
OpenSSL, and down to 1.0.1, the oldest version supported on HEAD.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200924025314.GE7405@paquier.xyz
2020-12-02 10:37:20 +09:00
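
For context on why allocation has to be left to OpenSSL, here is a
standalone sketch of the EVP digest pattern the commit moves toward: since
1.1.0 the context is opaque and can only be created and destroyed through
EVP_MD_CTX_new()/EVP_MD_CTX_free().  This calls OpenSSL directly and is not
the src/common/ cryptohash API itself.

    /* Standalone sketch of OpenSSL's EVP digest interface (link with -lcrypto). */
    #include <openssl/evp.h>
    #include <stdio.h>

    int
    main(void)
    {
        const char      msg[] = "hello";
        unsigned char   digest[EVP_MAX_MD_SIZE];
        unsigned int    len = 0;

        /* The context is opaque; OpenSSL allocates and frees it for us. */
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();

        if (ctx == NULL ||
            EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) != 1 ||
            EVP_DigestUpdate(ctx, msg, sizeof(msg) - 1) != 1 ||
            EVP_DigestFinal_ex(ctx, digest, &len) != 1)
        {
            fprintf(stderr, "digest computation failed\n");
            EVP_MD_CTX_free(ctx);
            return 1;
        }

        for (unsigned int i = 0; i < len; i++)
            printf("%02x", digest[i]);
        printf("\n");

        EVP_MD_CTX_free(ctx);
        return 0;
    }
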
Bruce Momjian 888671a8cd pg_checksums: data_checksum_version is unsigned so use %u not %d
While the previous behavior didn't generate a warning, we might as well
use an accurate *printf specification.

Backpatch-through: 12
2020-12-01 20:27:06 -05:00
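
A short standalone demonstration of why the specifier matters: an unsigned
value above INT_MAX, such as a hypothetical data_checksum_version-style
counter, prints as a negative number with %d.

    /* Standalone sketch: %u vs. %d for an unsigned value. */
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t    version = 3000000000u;      /* illustrative value only */

        printf("with %%d: %d\n", (int) version);    /* typically negative */
        printf("with %%u: %u\n", version);          /* prints 3000000000 */
        return 0;
    }
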
Tom Lane 7e5e1bba03 Fix recently-introduced breakage in psql's \connect command.
Through my misreading of what the existing code actually did,
commits 85c54287a et al. broke psql's behavior for the case where
"\c connstring" provides a password in the connstring.  We should
use that password in such a case, but as of 85c54287a we ignored it
(and instead, prompted for a password).

Commit 94929f1cf fixed that in HEAD, but since I thought it was
cleaning up a longstanding misbehavior and not one I'd just created,
I didn't back-patch it.

Hence, back-patch the portions of 94929f1cf having to do with
password management.  In addition to fixing the introduced bug,
this means that "\c -reuse-previous=on connstring" will allow
re-use of an existing connection's password if the connstring
doesn't change user/host/port.  That didn't happen before, but
it seems like a bug fix, and anyway I'm loath to have significant
differences in this code across versions.

Also fix an error with the same root cause about whether or not to
override a connstring's setting of client_encoding.  As of 85c54287a
we always did so; restore the previous behavior of overriding only
when stdin/stdout are a terminal and there's no environment setting
of PGCLIENTENCODING.  (I find that definition a bit surprising, but
right now doesn't seem like the time to revisit it.)

Per bug #16746 from Krzysztof Gradek.  As with the previous patch,
back-patch to all supported branches.

Discussion: https://postgr.es/m/16746-44b30e2edf4335d4@postgresql.org
2020-11-29 15:22:04 -05:00
Noah Misch 0f89ca083b Retry initial slurp_file("current_logfiles"), in test 004_logrotate.pl.
Buildfarm member topminnow failed when the test script attempted this
before the syslogger would have created the file.  Back-patch to v12,
which introduced the test.
2020-11-28 21:52:27 -08:00
Tom Lane 314fb9baea In psql's \d commands, don't truncate attribute default values.
Historically, psql has truncated the text of a column's default
expression at 128 characters.  This is unlike any other behavior
in describe.c, and it's become particularly confusing now that
the limit is only applied to the expression proper and not to
the "generated always as (...) stored" text that may get wrapped
around it.

Excavation in our git history suggests that the original motivation
for this limit was not really to limit the display width (as I'd long
supposed), but to make it safe to use a fixed-width output buffer to
store the result.  That implementation restriction is long gone of
course, but the limit remained.  Let's just get rid of it.

While here, rearrange the logic about when to free the output string
so that it's not so dependent on unstated assumptions about the
possible values of attidentity and attgenerated.

Per bug #16743 from David Turon.  Back-patch to v12 where GENERATED
came in.  (Arguably we could take it back further, but I'm hesitant
to change the behavior of long-stable branches for this.)

Discussion: https://postgr.es/m/16743-7b1bacc4af76e7ad@postgresql.org
2020-11-25 16:19:25 -05:00
Peter Eisentraut c9f0624bc2 Add support for abstract Unix-domain sockets
This is a variant of the normal Unix-domain sockets that don't use the
file system but a separate "abstract" namespace.  At the user
interface, such sockets are represented by names starting with "@".
Supported on Linux and Windows right now.
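
A minimal libpq sketch of connecting to such a socket; the abstract name
"@pgtest" is made up for illustration and would have to match a name the
server lists in unix_socket_directories:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* "@pgtest" is a hypothetical abstract-namespace socket name */
        PGconn *conn = PQconnectdb("host=@pgtest dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        printf("connected via abstract Unix-domain socket\n");
        PQfinish(conn);
        return 0;
    }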

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/6dee8574-b0ad-fc49-9c8c-2edc796f0033@2ndquadrant.com
2020-11-25 08:33:57 +01:00
Heikki Linnakangas c71f9a094b Make pg_rewind test case more stable.
If replication is exceptionally slow for some reason, pg_rewind might run
before the test row has been replicated. Add an explicit wait for it.

Reported-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/20201120003811.iknhqwatitw2vvxf%40alap3.anarazel.de
2020-11-20 16:11:52 +02:00
Michael Paquier 13b58f8934 Improve failure detection with array parsing in pg_dump
Similarly to 3636efa, the checks done in pg_dump when parsing array
values from catalogs have been too lax.  Under memory pressure, it would
have been possible, though very unlikely, to end up with dumps missing
some data, such as:
- Statistics for indexes
- Run-time configuration of functions
- Configuration of extensions
- Publication list for a subscription

No backpatch is done as this is not going to be a problem in practice.
For example, if an OOM causes an array parsing to fail, a follow-up code
path of pg_dump would most likely complain with an allocation failure
due to the memory pressure.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20201111061319.GE2276@paquier.xyz
2020-11-19 10:36:08 +09:00
Michael Paquier bf0aa7c4b8 Add tab completion for CREATE [OR REPLACE] TRIGGER in psql
92bf7e2 has added support for this grammar.

Author: Noriyoshi Shinoda
Discussion: https://postgr.es/m/TU4PR8401MB115244623CF4724DCA0D507FEEE30@TU4PR8401MB1152.NAMPRD84.PROD.OUTLOOK.COM
2020-11-18 14:01:53 +09:00
Heikki Linnakangas 39f9f04b57 Fix timing issue in pg_rewind test.
The test inserts a row in primary server, waits until the insertion has
been replicated to a cascaded standby, and checks that it's visible there
by querying the cascaded standby. In order for that to work reliably, the
test needs to wait until the insertion WAL record has been fully replayed.

This should fix the occasional buildfarm failures. Diagnosis by Tom Lane.

Discussion: https://www.postgresql.org/message-id/606796.1605424022@sss.pgh.pa.us
2020-11-15 17:09:31 +02:00
Heikki Linnakangas 3bf44303b9 Remove another test that doesn't work on Windows.
Apparently double-quotes are not allowed in filenames on Windows, either.

Per buildfarm.
2020-11-13 13:41:58 +02:00
Heikki Linnakangas 951dfa34f4 Remove tests that don't work on Windows.
On Windows, a filename cannot contain backslashes, because a backslash
is used as the directory separator. Remove tests I added in commit 9c4f5192f
that tried to do that. We could perhaps use a SKIP block to only skip
them on Windows, but I'm not sure how exactly to formulate that, so just
remove the tests to make the buildfarm green again.

Per buildfarm.
2020-11-12 19:18:34 +02:00
Heikki Linnakangas 9c4f5192f6 Allow pg_rewind to use a standby server as the source system.
Using a hot standby server as the source has not been possible, because
pg_rewind creates a temporary table in the source system, to hold the
list of file ranges that need to be fetched. Refactor it to queue up the
file fetch requests in pg_rewind's memory, so that the temporary table
is no longer needed.

Also update the logic to compute 'minRecoveryPoint' correctly, when the
source is a standby server.

Reviewed-by: Kyotaro Horiguchi, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/0c5b3783-af52-3ee5-f8fa-6e794061f70d%40iki.fi
2020-11-12 14:52:24 +02:00
Magnus Hagander 1e12a495b4 Remove vacuumdb --analyze-in-stages from pg_upgrade tests
This step was only there to test the scripts when we generated them, but
commit 8f113698b6 removed those scripts, so it's not needed anymore.

Reported-By: Peter Eisentraut
Discussion: https://postgr.es/m/ea403f46-2b33-a7de-618e-9cab35a698c8@enterprisedb.com
2020-11-11 16:47:13 +01:00
Heikki Linnakangas 72d172743e pg_rewind: Fix thinko in parsing target WAL.
It's entirely possible to see WAL for a relation that doesn't exist in
the target anymore. That happens when the relation was dropped later.
The refactoring in commit eb00f1d4b broke that case, by sanity-checking
the file type in the target before checking the flag for whether it
exists there at all.

I noticed this during manual testing. Modify the 001_basic.pl test so
that it covers this case.
2020-11-10 19:25:46 +02:00
Noah Misch 098fb00799 Ignore attempts to \gset into specially treated variables.
If an interactive psql session used \gset when querying a compromised
server, the attacker could execute arbitrary code as the operating
system account running psql.  Using a prefix not found among specially
treated variables, e.g. every lowercase string, precluded the attack.
Fix by issuing a warning and setting no variable for the column in
question.  Users wanting the old behavior can use a prefix and then a
meta-command like "\set HISTSIZE :prefix_HISTSIZE".  Back-patch to 9.5
(all supported versions).

Reviewed by Robert Haas.  Reported by Nick Cleaton.

Security: CVE-2020-25696
2020-11-09 07:32:09 -08:00
Magnus Hagander 8f113698b6 Remove analyze_new_cluster script from pg_upgrade
Since this script just runs vacuumdb anyway, remove the script and
replace the instructions to run it with instructions to run vacuumdb
directly.

Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/CABUevEwg5LDFzthhxzSj7sZGMiVsZe0VVNbzzwTQOHJ=rN7+5A@mail.gmail.com
2020-11-09 12:15:48 +01:00
Thomas Munro 3636efa119 Fix parsePGArray() error checking in pg_dump.
Coverity complained about a defect in commit 257836a7:

  Calling "parsePGArray" without checking return value (as is
  done elsewhere 11 out of 13 times).

Fix, and also check for empty strings explicitly (NULL as represented by
PQgetvalue()).  That worked correctly before only because parsePGArray()
happens to set *nitems = 0 when it fails on an empty string.  Also
convert a sanity check assertion to an error to be more paranoid, and
pgindent a nearby line.
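
A hedged sketch of the checking pattern, assuming the frontend signature
bool parsePGArray(const char *atext, char ***itemarray, int *nitems); the
result handle and column index, and the generic error handling, are made
up for illustration rather than taken from pg_dump's own reporting code:

    char      **items = NULL;
    int         nitems = 0;
    const char *value = PQgetvalue(res, 0, i_attr);   /* assumed result/column */

    /*
     * An empty string is how PQgetvalue() represents NULL, so reject it
     * explicitly instead of relying on parsePGArray() leaving nitems at 0.
     */
    if (value == NULL || value[0] == '\0' ||
        !parsePGArray(value, &items, &nitems))
    {
        fprintf(stderr, "could not parse array value\n");
        exit(1);
    }
    /* ... use items[0 .. nitems - 1] ... */
    free(items);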

Reported-by: Michael Paquier <michael@paquier.xyz>
2020-11-09 16:26:47 +13:00
Peter Eisentraut 6be725e701 Fix redundant error messages in client tools
A few client tools duplicate error messages already provided by libpq.

Discussion: https://www.postgresql.org/message-id/flat/3e937641-88a1-e697-612e-99bba4b8e5e4%40enterprisedb.com
2020-11-07 23:03:54 +01:00
Michael Paquier a05dbf477b Add GUC_LIST_INPUT and GUC_LIST_QUOTE to unix_socket_directories
This should have been done in the initial commit that made
unix_socket_directories a list as of c9b0cbe.  This change makes it
possible to handle correctly the case of ALTER SYSTEM, where multiple
paths can be specified as a list, with flattening applied to each item,
as in the following pattern:
ALTER SYSTEM SET unix_socket_directories = '/path1', '/path2';

Any parameters specified in postgresql.conf are parsed the same way, so
there is no compatibility change.  pg_dump has a hardcoded list of
parameters marked with GUC_LIST_QUOTE, which gets its routine update
here.  These entries are reordered alphabetically for clarity.

Author: Ian Lawrence Barwick
Reviewed-by: Peter Eisentraut, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CAB8KJ=iMOtNY6_sUwV=LQVCJ2zgYHBDyNzVfvE5GN3WQ3v9kQg@mail.gmail.com
2020-11-07 10:30:22 +09:00
Tom Lane d3adaabaf7 Revert "pg_dump: Lock all relations, not just plain tables".
Revert 403a3d91c, as well as the followup fix 7f4235032, in all
branches.  We need to think a bit harder about what the behavior
of LOCK TABLE on views should be, and there's no time for that
before next week's releases.  We'll take another crack at this
later.

Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
2020-11-06 15:48:04 -05:00
Heikki Linnakangas 37d2ff3803 pg_rewind: Refactor the abstraction to fetch from local/libpq source.
This makes the abstraction of a "source" server clearer by introducing
a common abstract class (borrowing the object-oriented programming term)
that represents all the operations that can be done on the source server.
There are two implementations of it, one for fetching via libpq, and
another to fetch from a local directory. This adds some code, but makes it
easier to understand what's going on.

The copy_executeFileMap() and libpq_executeFileMap() functions contained
basically the same logic, just calling different functions to fetch the
source files. Refactor so that the common logic is in one place, in a new
function called perform_rewind().

Reviewed-by: Kyotaro Horiguchi, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/0c5b3783-af52-3ee5-f8fa-6e794061f70d%40iki.fi
2020-11-04 11:21:18 +02:00
Heikki Linnakangas f81e97d047 pg_rewind: Replace the hybrid list+array data structure with simplehash.
Now that simplehash can be used in frontend code, let's make use of it.

Reviewed-by: Kyotaro Horiguchi, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/0c5b3783-af52-3ee5-f8fa-6e794061f70d%40iki.fi
2020-11-04 11:21:14 +02:00
Heikki Linnakangas eb00f1d4bf Refactor pg_rewind for more clear decision making.
Deciding what to do with each file is now a separate step after all the
necessary information has been gathered. It is more clear that way.
Previously, the decision-making was divided between process_source_file()
and process_target_file(), and it was a bit hard to piece together what
the overall rules were.

Reviewed-by: Kyotaro Horiguchi, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/0c5b3783-af52-3ee5-f8fa-6e794061f70d%40iki.fi
2020-11-04 11:21:09 +02:00
Heikki Linnakangas ffb4e27e9c pg_rewind: Move syncTargetDirectory() to file_ops.c
For consistency. All the other low-level functions that operate on the
target directory are in file_ops.c.

Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/0c5b3783-af52-3ee5-f8fa-6e794061f70d%40iki.fi
2020-11-04 10:38:39 +02:00
Thomas Munro 257836a755 Track collation versions for indexes.
Record the current version of dependent collations in pg_depend when
creating or rebuilding an index.  When accessing the index later, warn
that the index may be corrupted if the current version doesn't match.

Thanks to Douglas Doole, Peter Eisentraut, Christoph Berg, Laurenz Albe,
Michael Paquier, Robert Haas, Tom Lane and others for very helpful
discussion.

Author: Thomas Munro <thomas.munro@gmail.com>
Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com> (earlier versions)
Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
2020-11-03 01:19:50 +13:00
Thomas Munro 7d1297df08 Remove pg_collation.collversion.
This model couldn't be extended to cover the default collation, and
didn't have any information about the affected database objects when the
version changed.  Remove, in preparation for a follow-up commit that
will add a new mechanism.

Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
2020-11-03 00:44:59 +13:00
Michael Paquier 8a15e735be Fix some grammar and typos in comments and docs
The documentation fixes are backpatched down to where they apply.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20201031020801.GD3080@telsasoft.com
Backpatch-through: 9.6
2020-11-02 15:14:41 +09:00
Tom Lane 7f4235032f Avoid null pointer dereference if error result lacks SQLSTATE.
Although error results received from the backend should always have
a SQLSTATE field, ones generated by libpq won't, making this code
vulnerable to a crash after, say, untimely loss of connection.
Noted by Coverity.

Oversight in commit 403a3d91c.  Back-patch to 9.5, as that was.
2020-11-01 11:26:16 -05:00
Tom Lane 66f8687a8f Use mode "r" for popen() in psql's evaluate_backtick().
In almost all other places, we use plain "r" or "w" mode in popen()
calls (the exceptions being for COPY data).  This one has been
overlooked (possibly because it's buried in a ".l" flex file?),
but it's using PG_BINARY_R.

Kensuke Okamura complained in bug #16688 that we fail to strip \r
when stripping the trailing newline from a backtick result string.
That's true enough, but we'd also fail to convert embedded \r\n
cleanly, which also seems undesirable.  Fixing the popen() mode
seems like the best way to deal with this.
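
A self-contained sketch of the intended pattern: open the pipe in plain
text mode and strip any trailing carriage return and newline from the
captured line (the command and buffer size are arbitrary):

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char    buf[1024];
        FILE   *fp = popen("echo hello", "r");   /* plain "r", not PG_BINARY_R */

        if (fp == NULL)
            return 1;
        if (fgets(buf, sizeof(buf), fp) != NULL)
        {
            buf[strcspn(buf, "\r\n")] = '\0';    /* strip trailing \r and \n */
            printf("backtick result: \"%s\"\n", buf);
        }
        pclose(fp);
        return 0;
    }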

It's been like this for a long time, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/16688-c649c7b69cd7e6f8@postgresql.org
2020-10-28 14:35:53 -04:00
Alvaro Herrera 403a3d91c8
pg_dump: Lock all relations, not just plain tables
Now that LOCK TABLE can take any relation type, acquire lock on all
relations that are to be dumped.  This prevents schema changes or
deadlock errors that could cause a dump to fail after expending much
effort.  The server is tested for the capability, and the feature is
disabled if it is not present, so that a patched pg_dump doesn't fail
when connecting to an unpatched server.

Backpatch to 9.5.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Wells Oliver <wells.oliver@gmail.com>
Discussion: https://postgr.es/m/20201021200659.GA32358@alvherre.pgsql
2020-10-27 14:31:37 -03:00
Michael Paquier 0b46e82c06 Add tab completion for ALTER TABLE .. FORCE ROW LEVEL SECURITY in psql
This completes both the FORCE and NO FORCE options, with NO INHERIT
needing a small adjustment.

Author: Li Japin
Discussion: https://postgr.es/m/15B10F9F-5847-4F5E-BD66-8E25AA473C95@hotmail.com
2020-10-24 10:29:55 +09:00
Tom Lane 1b62d0fb3e Allow psql to re-use connection parameters after a connection loss.
Instead of immediately PQfinish'ing a dead connection, save it aside
so that we can still extract its parameters for \connect attempts.
(This works because PQconninfo doesn't care whether the PGconn is in
CONNECTION_BAD state.)  This allows developers to reconnect with
just \c after a database crash and restart.

It's tempting to use the same approach instead of closing the old
connection after a failed non-interactive \connect command.  However,
that would not be very safe: consider a script containing
	\c db1 user1 live_server
	\c db2 user2 dead_server
	\c db3
The script would be expecting to connect to db3 at dead_server, but
if we re-use parameters from the first connection then it might
successfully connect to db3 at live_server.  This'd defeat the goal
of not letting a script accidentally execute commands against the
wrong database.

Discussion: https://postgr.es/m/38464.1603394584@sss.pgh.pa.us
2020-10-23 17:07:15 -04:00
Tom Lane 94929f1cf6 Clean up some unpleasant behaviors in psql's \connect command.
The check for whether to complain about not having an old connection
to get parameters from was seriously out of date: it had not been
rethought when we invented connstrings, nor when we invented the
-reuse-previous option.  Replace it with a check that throws an
error if reuse-previous is active and we lack an old connection to
reuse.  While that doesn't move the goalposts very far in terms of
easing reconnection after a server crash, at least it's consistent.

If the user specifies a connstring plus additional parameters
(which is invalid per the documentation), the extra parameters were
silently ignored.  That seems like it could be really confusing,
so let's throw a syntax error instead.

Teach the connstring code path to re-use the old connection's password
in the same cases as the old-style-syntax code path would, ie if we
are reusing parameters and the values of username, host/hostaddr, and
port are not being changed.  Document this behavior, too, since it was
unmentioned before.  Also simplify the implementation a bit, giving
rise to two new and useful properties: if there's a "password=xxx" in
the connstring, we'll use it not ignore it, and by default (i.e.,
except with --no-password) we will prompt for a password if the
re-used password or connstring password doesn't work.  The previous
code just failed if the re-used password didn't work.

Given the paucity of field complaints about these issues, I don't
think that they rise to the level of back-patchable bug fixes,
and in any case they might represent undesirable behavior changes
in minor releases.  So no back-patch.

Discussion: https://postgr.es/m/235210.1603321144@sss.pgh.pa.us
2020-10-22 14:04:28 -04:00
Tom Lane 85c54287af Fix connection string handling in psql's \connect command.
psql's \connect claims to be able to re-use previous connection
parameters, but in fact it only re-uses the database name, user name,
host name (and possibly hostaddr, depending on version), and port.
This is problematic for assorted use cases.  Notably, pg_dump[all]
emits "\connect databasename" commands which we would like to have
re-use all other parameters.  If such a script is loaded in a psql run
that initially had "-d connstring" with some non-default parameters,
those other parameters would be lost, potentially causing connection
failure.  (Thus, this is the same kind of bug addressed in commits
a45bc8a4f and 8e5793ab6, although the details are much different.)

To fix, redesign do_connect() so that it pulls out all properties
of the old PGconn using PQconninfo(), and then replaces individual
properties in that array.  In the case where we don't wish to re-use
anything, get libpq's default settings using PQconndefaults() and
replace entries in that, so that we don't need different code paths
for the two cases.
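
A rough sketch of the libpq calls involved: start from PQconninfo() when
re-using an old connection, or from PQconndefaults() otherwise, and walk
the resulting keyword/value array (these functions and the
PQconninfoOption fields used here are standard libpq API; the helper
function itself is made up for illustration):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Dump the parameters that a \connect-style reconnection could start from. */
    static void
    show_connection_parameters(PGconn *old_conn)
    {
        PQconninfoOption *opts;
        PQconninfoOption *opt;

        opts = (old_conn != NULL) ? PQconninfo(old_conn) : PQconndefaults();
        if (opts == NULL)
            return;
        for (opt = opts; opt->keyword != NULL; opt++)
        {
            if (opt->val != NULL)
                printf("%s = %s\n", opt->keyword, opt->val);
        }
        PQconninfoFree(opts);
    }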

This does result in an additional behavioral change for cases where
the original connection parameters allowed multiple hosts, say
"psql -h host1,host2", and the \connect request allows re-use of the
host setting.  Because the previous coding relied on PQhost(), it
would only permit reconnection to the same host originally selected.
Although one can think of scenarios where that's a good thing, there
are others where it is not.  Moreover, that behavior doesn't seem to
meet the principle of least surprise, nor was it documented; nor is
it even clear it was intended, since that coding long pre-dates the
addition of multi-host support to libpq.  Hence, this patch is content
to drop it and re-use the host list as given.

Per Peter Eisentraut's comments on bug #16604.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
2020-10-21 16:19:00 -04:00
Peter Eisentraut 8a58347a3c Fix -Wcast-function-type warnings on Windows/MinGW
After de8feb1f3a, some warnings remained
that were only visible when using GCC on Windows.  Fix those as well.

Note that the ecpg test source files don't use the full pg_config.h,
so we can't use pg_funcptr_t there but have to do it the long way.
2020-10-21 08:17:51 +02:00
Tom Lane 8e5793ab60 Fix connection string handling in src/bin/scripts/ programs.
When told to process all databases, clusterdb, reindexdb, and vacuumdb
would reconnect by replacing their --maintenance-db parameter with the
name of the target database.  If that parameter is a connstring (which
has been allowed for a long time, though we failed to document that
before this patch), we'd lose any other options it might specify, for
example SSL or GSS parameters, possibly resulting in failure to connect.
Thus, this is the same bug as commit a45bc8a4f fixed in pg_dump and
pg_restore.  We can fix it in the same way, by using libpq's rules for
handling multiple "dbname" parameters to add the target database name
separately.  I chose to apply the same refactoring approach as in that
patch, with a struct to handle the command line parameters that need to
be passed through to connectDatabase.  (Maybe someday we can unify the
very similar functions here and in pg_dump/pg_restore.)

Per Peter Eisentraut's comments on bug #16604.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
2020-10-19 19:03:46 -04:00
Tom Lane 929c69aa19 In pg_restore's dump_lo_buf(), work a little harder on error handling.
Failure to write data to a large object during restore led to an ugly
and uninformative error message.  To add insult to injury, it then
fatal'd out, where other SQL-level errors usually result in pressing on.

Report the underlying error condition, rather than just giving not-very-
useful byte counts, and use warn_or_exit_horribly() so as to adhere to
pg_restore's general policy about whether to continue or not.

Also recognize that lo_write() returns int not size_t.
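
A hedged sketch of the corrected pattern using the documented libpq
large-object calls (lo_creat, lo_open, lo_write, lo_close); the helper
is made up for illustration, and the surrounding transaction that large
objects require is assumed to be in place:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>      /* INV_READ / INV_WRITE */

    /* Write "data" into a new large object; lo_write() returns an int. */
    static int
    write_blob(PGconn *conn, const char *data)
    {
        int     len = (int) strlen(data);
        Oid     loid = lo_creat(conn, INV_READ | INV_WRITE);
        int     fd;
        int     written;

        if (loid == InvalidOid)
            return -1;
        fd = lo_open(conn, loid, INV_WRITE);
        if (fd < 0)
            return -1;
        written = lo_write(conn, fd, data, (size_t) len);
        if (written != len)
        {
            fprintf(stderr, "could not write large object: %s",
                    PQerrorMessage(conn));
            lo_close(conn, fd);
            return -1;
        }
        lo_close(conn, fd);
        return 0;
    }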

Per report from Justin Pryzby, though I didn't use his patch.
Given the lack of comparable complaints, I'm not sure this is
worth back-patching.

Discussion: https://postgr.es/m/20201018010232.GF9241@telsasoft.com
2020-10-18 12:26:02 -04:00
Tom Lane 7d00a6b2de In libpq for Windows, call WSAStartup once and WSACleanup not at all.
The Windows documentation insists that every WSAStartup call should
have a matching WSACleanup call.  However, if that ever had actual
relevance, it wasn't in this century.  Every remotely-modern Windows
kernel is capable of cleaning up when a process exits without doing
that, and must be so to avoid resource leaks in case of a process
crash.  Moreover, Postgres backends have done WSAStartup without
WSACleanup since commit 4cdf51e64 in 2004, and we've never seen any
indication of a problem with that.

libpq's habit of doing WSAStartup during connection start and
WSACleanup during shutdown is also rather inefficient, since a
series of non-overlapping connection requests leads to repeated,
quite expensive DLL unload/reload cycles.  We document a workaround
for that (having the application call WSAStartup for itself), but
that's just a kluge.  It's also worth noting that it's far from
uncommon for applications to exit without doing PQfinish, and
we've not heard reports of trouble from that either.

However, the real reason for acting on this is that recent
experiments by Alexander Lakhin suggest that calling WSACleanup
during PQfinish might be triggering the symptom we occasionally see
that a process using libpq fails to emit expected stdio output.

Therefore, let's change libpq so that it calls WSAStartup only
once per process, during the first connection attempt, and never
calls WSACleanup at all.
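
A minimal sketch of the once-per-process pattern, using the standard
Winsock calls WSAStartup() and MAKEWORD(2, 2); the wrapper function name
is made up for illustration:

    #ifdef WIN32
    #include <winsock2.h>

    /*
     * Initialize Winsock at most once per process and never call
     * WSACleanup(); the OS reclaims the resources at process exit.
     */
    static int
    ensure_wsa_started(void)
    {
        static int  wsastartup_done = 0;
        WSADATA     wsadata;

        if (!wsastartup_done)
        {
            if (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0)
                return -1;
            wsastartup_done = 1;
        }
        return 0;
    }
    #endif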

While at it, get rid of the only other WSACleanup call in our code
tree, in pg_dump/parallel.c; that presumably is equally useless.

If this proves to suppress the fairly-common ecpg test failures
we see on Windows, I'll back-patch, but for now let's just do it
in HEAD and see what happens.

Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru
2020-10-17 16:53:48 -04:00
Bruce Momjian 536de14e2b pg_upgrade: remove C99 compiler req. from commit 3c0471b5fd
Commit 3c0471b5fd required compiler support for inline variable
definitions, which is not a project requirement.

RELEASE NOTE AUTHOR:  the author of commit 3c0471b5fd
(pg_upgrade/tablespaces) was Justin Pryzby, not me.

Reported-by: Andres Freund

Discussion: https://postgr.es/m/20201016001959.h24fkywfubkv2pc5@alap3.anarazel.de

Backpatch-through: 9.5
2020-10-15 20:37:20 -04:00
Bruce Momjian 3c0471b5fd pg_upgrade: generate check error for left-over new tablespace
Previously, if pg_upgrade failed, and the user recreated the cluster but
did not remove the new cluster tablespace directory, a later pg_upgrade
would fail since the new tablespace directory would already exist.
This adds error reporting for this situation during the check phase.

Reported-by: Justin Pryzby

Discussion: https://postgr.es/m/20200925005531.GJ23631@telsasoft.com

Backpatch-through: 9.5
2020-10-15 19:33:46 -04:00