Commit Graph

6163 Commits

Author SHA1 Message Date
Tom Lane 1e8c5cf7c6 Make pg_ctl stop/restart/promote recheck postmaster aliveness.
"pg_ctl stop/restart" checked that the postmaster PID is valid just
once, as a side-effect of sending the stop signal, and then would
wait-till-timeout for the postmaster.pid file to go away.  This
neglects the case wherein the postmaster dies uncleanly after we
signal it.  Similarly, once "pg_ctl promote" has sent the signal,
it'd wait for the corresponding on-disk state change to occur
even if the postmaster dies.

I'm not sure how we've managed not to notice this problem, but it
seems to explain slow execution of the 017_shm.pl test script on AIX
since commit 4fdbf9af5, which added a speculative "pg_ctl stop" with
the idea of making real sure that the postmaster isn't there.  In the
test steps that kill-9 and then restart the postmaster, it's possible
to get past the initial signal attempt before kill() stops working
for the doomed postmaster.  If that happens, pg_ctl waited till
PGCTLTIMEOUT before giving up ... and the buildfarm's AIX members
have that set very high.

To fix, include a "kill(pid, 0)" test (similar to what
postmaster_is_alive uses) in these wait loops, so that we'll
give up immediately if the postmaster PID disappears.
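
The liveness check is the usual POSIX signal-0 probe.  A minimal
illustrative sketch of such a wait loop (not pg_ctl's actual code;
the function name and timing are invented) could look like this:

    #include <errno.h>
    #include <signal.h>
    #include <stdbool.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Wait for "pid" to exit, giving up after "timeout" seconds. */
    static bool
    wait_for_postmaster_stop(pid_t pid, int timeout)
    {
        for (int sec = 0; sec < timeout; sec++)
        {
            /* kill(pid, 0) sends no signal but still checks existence */
            if (kill(pid, 0) != 0 && errno == ESRCH)
                return true;    /* process is gone, stop waiting */
            sleep(1);           /* otherwise keep polling */
        }
        return false;           /* timed out */
    }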

While here, I chose to refactor those loops out of where they were.
do_stop() and do_restart() can perfectly well share one copy of the
wait-for-stop loop, and it seems desirable to put a similar function
beside that for wait-for-promote.

Back-patch to all supported versions, since pg_ctl's wait logic
is substantially identical in all, and we're seeing the slow test
behavior in all branches.

Discussion: https://postgr.es/m/20220210023537.GA3222837@rfd.leadboat.com
2022-02-10 16:49:39 -05:00
Peter Eisentraut 3c879e61d9 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 063c497a909612d444c7c7188482db9aef86200f
2022-02-07 13:31:57 +01:00
Tom Lane 1042de69db pg_dump: avoid useless query in binary_upgrade_set_type_oids_by_type_oid
Commit 6df7a9698 wrote appendPQExpBuffer where it should have
written printfPQExpBuffer.  This resulted in re-issuing the
previous query along with the desired one, which very accidentally
had no negative consequences except for some wasted cycles.
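
The difference between the two calls is simply append versus
reset-and-print.  A hedged illustration (not the pg_dump code itself,
and assuming libpq's internal pqexpbuffer.h header is available) of
why the wrong one re-issues the previous query:

    #include "pqexpbuffer.h"        /* libpq's string-buffer helpers */

    static void
    demo(void)
    {
        PQExpBuffer q = createPQExpBuffer();

        appendPQExpBuffer(q, "SELECT 1;");

        /* append keeps the old contents: the buffer now holds both
         * statements, so both would be sent to the server */
        appendPQExpBuffer(q, " SELECT 2;");

        /* printf resets the buffer first, which is what was intended:
         * afterwards only "SELECT 2;" remains */
        printfPQExpBuffer(q, "SELECT 2;");

        destroyPQExpBuffer(q);
    }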

Back-patch to v14 where that came in.

Discussion: https://postgr.es/m/1714711.1642962663@sss.pgh.pa.us
2022-01-23 13:54:24 -05:00
Tom Lane 3886785b4c Fix psql \d's query for identifying parent triggers.
The original coding (from c33869cc3) failed with "more than one row
returned by a subquery used as an expression" if there were unrelated
triggers of the same tgname on parent partitioned tables.  (That's
possible because statement-level triggers don't get inherited.)  Fix
by applying LIMIT 1 after sorting the candidates by inheritance level.

Also, wrap the subquery in a CASE so that we don't have to execute it at
all when the trigger is visibly non-inherited.  Aside from saving some
cycles, this avoids the need for a confusing and undocumented NULLIF().

While here, tweak the format of the emitted query to look a bit
nicer for "psql -E", and add some explanation of this subquery,
because it badly needs it.

Report and patch by Justin Pryzby (with some editing by me).
Back-patch to v13 where the faulty code came in.

Discussion: https://postgr.es/m/20211217154356.GJ17618@telsasoft.com
2022-01-17 21:18:49 -05:00
Tom Lane d91d4338e0 Fix psql's tab-completion of enum label values.
Since enum labels have to be single-quoted, this part of the
tab completion machinery got side-swiped by commit cd69ec66c.
A side-effect of that commit is that (at least with some versions
of Readline) the text string passed for completion will omit the
leading quote mark of the enum label literal.  Libedit still acts
the same as before, though, so adapt COMPLETE_WITH_ENUM_VALUE so
that it can cope with either convention.

Also, when we fail to find any valid completion, set
rl_completion_suppress_quote = 1.  Otherwise readline will
go ahead and append a closing quote, which is unwanted.
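
For readers unfamiliar with Readline's quoting hooks, the suppression
is a one-line global.  A rough sketch of the pattern (illustrative
only, not psql's tab-complete.c; the function name is invented, and it
would be installed as rl_attempted_completion_function):

    #include <stdio.h>
    #include <readline/readline.h>

    /* Called when no completion for the enum label text was found. */
    static char **
    no_enum_matches(const char *text, int start, int end)
    {
        (void) text; (void) start; (void) end;

        /* Tell Readline not to tack a closing quote onto whatever the
         * user already typed, since we are not completing it. */
        rl_completion_suppress_quote = 1;
        return NULL;                /* no matches */
    }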

Per report from Peter Eisentraut.  Back-patch to v13 where
cd69ec66c came in.

Discussion: https://postgr.es/m/8ca82d89-ec3d-8b28-8291-500efaf23b25@enterprisedb.com
2022-01-16 14:59:20 -05:00
Michael Paquier f5bea83606 Fix issues with describe queries of extended statistics in psql
This addresses some problems in the describe queries used for extended
statistics:
- Two schema qualifications for the text type were missing for \dX.
- The list of extended statistics listed for a table through \d was
ordered based on the object OIDs, but it is more consistent with the
other commands to order by namespace and then by object name.
- A couple of aliases were not used in \d.  These are removed.

This is similar to commits 1f092a3 and 07f8a9e.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220107022235.GA14051@telsasoft.com
Backpatch-through: 14
2022-01-08 16:45:14 +09:00
Andrew Dunstan e4d73d089f Revert "Check that we have a working tar before trying to use it"

This reverts commit f920f7e799.

The patch in effect fixed a problem we didn't have and caused another
instead.

Backpatch to release 14 like original

Discussion: https://postgr.es/m/3655283.1638977975@sss.pgh.pa.us
2021-12-08 16:50:04 -05:00
Andrew Dunstan 90c08ed113 Check that we have a working tar before trying to use it
Issue exposed by commit edc2332550 and the buildfarm.

Backpatch to release 14 where this usage started.
2021-12-08 10:23:48 -05:00
Michael Paquier b6dac98b05 Move into separate file all the SQL queries used in pg_upgrade tests
The existing pg_upgrade/test.sh and the buildfarm code have been holding
the same set of SQL queries when doing cross-version upgrade tests to
adapt the objects created by the regression tests before the upgrade
(mostly, incompatible or non-existing objects need to be dropped from
the origin, perhaps re-created).

This moves all those SQL queries into a new, separate file with a set
of \if clauses to handle the version checks, depending on the old version
of the cluster to be upgraded.

The long-term plan is to make the buildfarm code reuse this new SQL
file, so that committers are able to fix any compatibility issues in the
tests of pg_upgrade with a refresh of the core code, without having to
poke at the buildfarm client.  Note that this only handles the main
regression test suite; nothing is done yet for contrib modules (these
have more issues, like their database names).

A backpatch down to 10 is done, adapting the version checks, as this
script only needs to be backward-compatible; this makes it possible
to clean up a maximum amount of code within the buildfarm client.

Author: Justin Pryzby, Michael Paquier
Discussion: https://postgr.es/m/20201206180248.GI24052@telsasoft.com
Backpatch-through: 10
2021-12-02 10:31:29 +09:00
Tom Lane 0fdf67476c Adjust pg_dump's priority ordering for casts.
When a stored expression depends on a user-defined cast, the backend
records the dependency as being on the cast's implementation function
--- or indeed, if there's no cast function involved but just
RelabelType or CoerceViaIO, no dependency is recorded at all.  This
is problematic for pg_dump, which is at risk of dumping things in the
wrong order leading to restore failures.  Given the lack of previous
reports, the risk isn't that high, but it can be demonstrated if the
cast is used in some view whose rowtype is then used as an input or
result type for some other function.  (That results in the view
getting hoisted into the functions portion of the dump, ahead of
the cast.)

A logically bulletproof fix for this would require including the
cast's OID in the parsed form of the expression, whence it could be
extracted by dependency.c, and then the stored dependency would force
pg_dump to do the right thing.  Such a change would be fairly invasive,
and certainly not back-patchable.  Moreover, since we'd prefer that
an expression using cast syntax be equal() to one doing the same
thing by explicit function call, the cast OID field would have to
have special ignored-by-comparisons semantics, making things messy.

So, let's instead fix this by a very simple hack in pg_dump: change
the object-type priority order so that casts are initially sorted
before functions, immediately after types.  This fixes the problem
in a fairly direct way for casts that have no implementation function.
For those that do, the implementation function will be hoisted to just
before the cast by the dependency sorting step, so that we still have
a valid dump order.  (I'm not sure that this provides a full guarantee
of no problems; but since it's been like this for many years without
any previous reports, this is probably enough to fix it in practice.)

Per report from Дмитрий Иванов.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAPL5KHoGa3uvyKp6z6m48LwCnTsK+LRQ_mcA4uKGfqAVSEjV_A@mail.gmail.com
2021-11-22 17:16:29 -05:00
Tom Lane aedc4600d8 Fix pg_dump --inserts mode for generated columns with dropped columns.
If a table contains a generated column that's preceded by a dropped
column, dumpTableData_insert failed to account for the dropped
column, and would emit DEFAULT placeholder(s) in the wrong column(s).
This resulted in failures at restore time.  The default COPY code path
did not have this bug, likely explaining why it wasn't noticed sooner.

While we're fixing this, we can be a little smarter about the
situation: (1) avoid unnecessarily fetching the values of generated
columns, (2) omit generated columns from the output, too, if we're
using --column-inserts.  While these modes aren't expected to be
as high-performance as the COPY path, we might as well be as
efficient as we can; it doesn't add much complexity.

Per report from Дмитрий Иванов.
Back-patch to v12 where generated columns came in.

Discussion: https://postgr.es/m/CAPL5KHrkBniyQt5e1rafm5DdXvbgiiqfEQEJ9GjtVzN71Jj5pA@mail.gmail.com
2021-11-22 15:25:48 -05:00
Tom Lane 3bd7556bbe pg_receivewal, pg_recvlogical: allow canceling initial password prompt.
Previously it was impossible to terminate these programs via control-C
while they were prompting for a password.  We can fix that trivially
for their initial password prompts, by moving setup of the SIGINT
handler from just before to just after their initial GetConnection()
calls.

This fix doesn't permit escaping out of later re-prompts, but those
should be exceedingly rare, since the user's password or the server's
authentication setup would have to have changed meanwhile.  We
considered applying a fix similar to commit 46d665bc2, but that
seemed more complicated than it'd be worth.  Moreover, this way is
back-patchable, which that wasn't.

The misbehavior exists in all supported versions, so back-patch to all.

Tom Lane and Nathan Bossart

Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
2021-11-21 14:13:35 -05:00
Tom Lane 53c4a580e4 Clean up error handling in pg_basebackup's walmethods.c.
The error handling here was a mess, as a result of a fundamentally
bad design (relying on errno to keep its value much longer than is
safe to assume) as well as a lot of just plain sloppiness, both as
to noticing errors at all and as to reporting the correct errno.
Moreover, the recent addition of LZ4 compression broke things
completely, because liblz4 doesn't use errno to report errors.

To improve matters, keep the error state in the DirectoryMethodData or
TarMethodData struct, and add a string field so we can handle cases
that don't set errno.  (The tar methods already had a version of this,
but it can be done more efficiently since all these cases use a
constant error string.)  Make the dir and tar methods handle errors
in basically identical ways, which they didn't before.

This requires copying errno into the state struct in a lot of places,
which is a bit tedious, but it has the virtue that we can get rid of
ad-hoc code to save and restore errno in a number of places ... not
to mention that it fixes other places that should've saved/restored
errno but neglected to.
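
The save-errno-in-the-state-struct pattern described here amounts to
roughly the following (a hedged sketch with invented struct and field
names, not walmethods.c itself):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct DirMethodState
    {
        int         lasterrno;      /* errno of last failed call, or 0 */
        const char *lasterrstring;  /* message for non-errno failures */
    } DirMethodState;

    static size_t
    dir_write(DirMethodState *state, FILE *fp, const void *buf, size_t len)
    {
        size_t  written = fwrite(buf, 1, len, fp);

        if (written != len)
            state->lasterrno = errno;   /* capture it before anything
                                         * else can clobber errno */
        return written;
    }

    static const char *
    dir_get_last_error(const DirMethodState *state)
    {
        /* string-style errors (e.g. from liblz4) take precedence */
        if (state->lasterrstring)
            return state->lasterrstring;
        return strerror(state->lasterrno);
    }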

In passing, fix some pointlessly static buffers to be ordinary
local variables.

There remains an issue about exactly how to handle errors from
fsync(), but that seems like material for its own patch.

While the LZ4 problems are new, all the rest of this is fixes for
old bugs, so backpatch to v10 where walmethods.c was introduced.

Patch by me; thanks to Michael Paquier for review.

Discussion: https://postgr.es/m/1343113.1636489231@sss.pgh.pa.us
2021-11-17 14:16:34 -05:00
Tom Lane 6b413b41b4 Handle close() failures more robustly in pg_dump and pg_basebackup.
Coverity complained that applying get_gz_error after a failed gzclose,
as we did in one place in pg_basebackup, is unsafe.  I think it's
right: it's entirely likely that the call is touching freed memory.
Change that to inspect errno, as we do for other gzclose calls.

Also, be careful to initialize errno to zero immediately before any
gzclose() call where we care about the error status.  (There are
some calls where we don't, because we already failed at some previous
step.)  This ensures that we don't get a misleadingly irrelevant
error code if gzclose() fails in a way that doesn't set errno.
We could work harder at that, but it looks to me like all such cases
are basically can't-happen if we're not misusing zlib, so it's
not worth the extra notational cruft that would be required.
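
The reset-errno-then-check idiom for gzclose() is worth spelling out,
since after gzclose() the stream state is gone and errno is the only
thing left to inspect.  A hedged sketch (not the actual pg_basebackup
code; the function name is invented):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    static int
    close_compressed_file(gzFile fp, const char *filename)
    {
        errno = 0;                  /* make sure a stale value can't
                                     * leak into the error report */
        if (gzclose(fp) != Z_OK)
        {
            /* gzclose has freed its state, so don't call gzerror()
             * here; report based on errno instead */
            fprintf(stderr, "could not close file \"%s\": %s\n",
                    filename, strerror(errno));
            return -1;
        }
        return 0;
    }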

Also, fix several places that simply failed to check for close-time
errors at all, mostly at some remove from the close or gzclose itself;
and one place that did check but didn't bother to report the errno.

Back-patch to v12.  These mistakes are older than that, but between
the frontend logging API changes that happened in v12 and the fact
that frontend code can't rely on %m before that, the patch would need
substantial revision to work in older branches.  It doesn't quite
seem worth the trouble given the lack of related field complaints.

Patch by me; thanks to Michael Paquier for review.

Discussion: https://postgr.es/m/1343113.1636489231@sss.pgh.pa.us
2021-11-17 13:08:25 -05:00
Tom Lane 99389cb66b Make psql's \password default to CURRENT_USER, not PQuser(conn).
The documentation says plainly that \password acts on "the current user"
by default.  What it actually acted on, or tried to, was the username
used to log into the current session.  This is not the same thing if
one has since done SET ROLE or SET SESSION AUTHENTICATION.  Aside from
the possible surprise factor, it's quite likely that the current role
doesn't have permissions to set the password of the original role.

To fix, use "SELECT CURRENT_USER" to get the role name to act on.
(This syntax works with servers at least back to 7.0.)  Also, in
hopes of reducing confusion, include the role name that will be
acted on in the password prompt.
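
A rough libpq-level sketch of the lookup (illustrative only, not
psql's command.c; the helper name is invented):

    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Return the role \password should act on: the server-side current
     * user, not the name used at login time.  Caller must free. */
    static char *
    get_password_target_user(PGconn *conn)
    {
        PGresult   *res = PQexec(conn, "SELECT CURRENT_USER");
        char       *user = NULL;

        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
            user = strdup(PQgetvalue(res, 0, 0));
        PQclear(res);
        return user;        /* NULL means the query failed */
    }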

The discrepancy from the documentation makes this a bug, so
back-patch to all supported branches.

Patch by me; thanks to Nathan Bossart for review.

Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
2021-11-12 14:55:32 -05:00
Peter Eisentraut 5a75612022 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: f54c1d7c2c97bb2a238a149e407023a9bc007b06
2021-11-08 10:06:30 +01:00
Magnus Hagander a0b6520ecf Clarify that --system reindexes system catalogs *only*
Make this more clear both in the help message and docs.

Reviewed-By: Michael Paquier
Backpatch-through: 9.6
Discussion: https://postgr.es/m/CABUevEw6Je0WUFTLhPKOk4+BoBuDrE-fKw3N4ckqgDBMFu4paA@mail.gmail.com
2021-10-27 16:28:22 +02:00
Noah Misch dde966efb2 Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.
CIC and REINDEX CONCURRENTLY assume backends see their catalog changes
no later than each backend's next transaction start.  That failed to
hold when a backend absorbed a relevant invalidation in the middle of
running RelationBuildDesc() on the CIC index.  Queries that use the
resulting index can silently fail to find rows.  Fix this for future
index builds by making RelationBuildDesc() loop until it finishes
without accepting a relevant invalidation.  It may be necessary to
reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.
Back-patch to 9.6 (all supported versions).
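
In outline, the fix is a retry loop: rebuild the descriptor until a
build completes without a relevant invalidation arriving in the
middle.  A very loose sketch (invented helper names; the real code
lives in relcache.c and is considerably subtler):

    #include <stdint.h>

    /* Hypothetical helpers standing in for relcache internals */
    extern uint64_t relcache_inval_count(void);
    extern void    *build_descriptor_once(unsigned int relid);
    extern void     free_descriptor(void *rel);

    /* Loop until a descriptor is built without a relevant invalidation
     * arriving mid-build; otherwise a concurrently created index could
     * be missing from the result. */
    static void *
    build_relation_descriptor(unsigned int relid)
    {
        for (;;)
        {
            uint64_t    before = relcache_inval_count();
            void       *rel = build_descriptor_once(relid);

            if (relcache_inval_count() == before)
                return rel;             /* clean build, keep it */

            free_descriptor(rel);       /* invalidated mid-build: retry */
        }
    }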

Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres
Freund.

Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
2021-10-23 18:36:42 -07:00
Tom Lane 3ad2c2455b pg_dump: fix mis-dumping of non-global default privileges.
Non-global default privilege entries should be dumped as-is,
not made relative to the default ACL for their object type.
This would typically only matter if one had revoked some
on-by-default privileges in a global entry, and then wanted
to grant them again in a non-global entry.

Per report from Boris Korzun.  This is an old bug, so back-patch
to all supported branches.

Neil Chen, test case by Masahiko Sawada

Discussion: https://postgr.es/m/111621616618184@mail.yandex.ru
Discussion: https://postgr.es/m/CAA3qoJnr2+1dVJObNtfec=qW4Z0nz=A9+r5bZKoTSy5RDjskMw@mail.gmail.com
2021-10-22 15:22:25 -04:00
Daniel Gustafsson 3e2f32b01d Fix bug in TOC file error message printing
If the blob TOC file cannot be parsed, the error message fails to
print the filename, as the variable holding it was shadowed by the
destination buffer used for parsing.  When a line fails to parse,
the error prints an empty string:

 ./pg_restore -d foo -F d dump
 pg_restore: error: invalid line in large object TOC file "": ..

..instead of the intended error message:

 ./pg_restore -d foo -F d dump
 pg_restore: error: invalid line in large object TOC file "dump/blobs.toc": ..

Fix by renaming both variables as the shared name was too generic to
store either and still convey what the variable held.
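
The underlying C pitfall is ordinary variable shadowing.  A tiny
illustration of how the filename ends up empty (invented structure and
names, not the pg_dump sources):

    #include <stdio.h>

    static int
    restore_blobs(const char *tocfname)
    {
        FILE   *fp = fopen(tocfname, "r");
        char    line[128];

        if (fp == NULL)
            return -1;

        while (fgets(line, sizeof(line), fp))
        {
            /* BUG: reusing the name shadows the outer tocfname, ... */
            char        tocfname[64] = "";
            unsigned    oid;

            if (sscanf(line, "%u %63s", &oid, tocfname) != 2)
                /* ... so the message prints the empty local buffer
                 * rather than the TOC file's real path */
                fprintf(stderr,
                        "invalid line in large object TOC file \"%s\"\n",
                        tocfname);
        }
        fclose(fp);
        return 0;
    }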

Backpatch all the way down to 9.6.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/A2B151F5-B32B-4F2C-BA4A-6870856D9BDE@yesql.se
Backpatch-through: 9.6
2021-10-19 12:59:54 +02:00
Daniel Gustafsson 121be6a665 Fix sscanf limits in pg_basebackup and pg_dump
Make sure that the string parsing is limited by the size of the
destination buffer.

In pg_basebackup the available values sent from the server
are limited to two characters, so there was no risk of overflow.

In pg_dump the buffer is bounded by MAXPGPATH, and thus the limit
must be inserted via preprocessor expansion and the buffer increased
by one to account for the terminator. There is no risk of overflow
here, since in this case, the buffer scanned is smaller than the
destination buffer.
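
Since sscanf field widths cannot be supplied as runtime arguments (there
is no "*" width as in printf), the bound has to be baked into the format
string by stringifying the macro.  A hedged, self-contained sketch of
that trick (the function name and the local MAXPGPATH stand-in are
illustrative):

    #include <stdio.h>

    #define MAXPGPATH 1024              /* stand-in for the real define */

    /* Two-step stringification so MAXPGPATH is expanded first */
    #define CppAsString(x)  #x
    #define CppAsString2(x) CppAsString(x)

    static void
    parse_tablespace_line(const char *line)
    {
        /* one extra byte for the NUL terminator, as noted above */
        char    path[MAXPGPATH + 1];

        if (sscanf(line, "%" CppAsString2(MAXPGPATH) "s", path) == 1)
            printf("parsed path: %s\n", path);
    }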

Backpatch the pg_basebackup fix to 11 where it was introduced, and
the pg_dump fix all the way down to 9.6.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/B14D3D7B-F98C-4E20-9459-C122C67647FB@yesql.se
Backpatch-through: 11 and 9.6
2021-10-19 12:59:50 +02:00
Tom Lane 3e4c8db931 Avoid core dump in pg_dump when dumping from pre-8.3 server.
Commit f0e21f2f6 missed adding a tgisinternal output column
to getTriggers' query for pre-8.3 servers.  Back-patch to v11,
like that commit.
2021-10-16 15:03:05 -04:00
Tom Lane b5152e3ba6 Make pg_dump acquire lock on partitioned tables that are to be dumped.
It was clearly the intent to do so all along, but the original coding
fat-fingered this by checking the wrong array element.  We fixed it
in passing in 403a3d91c, but that later got reverted, and we forgot
to keep this bug fix.

Most of the time this'd be relatively harmless, since once we lock
any of the partitioned table's leaf partitions, that would suffice
to prevent major DDL on the partitioned table itself.  However, a
childless partitioned table would get dumped with no relevant lock
whatsoever, possibly allowing dump failure or inconsistent output.

Unlike 403a3d91c, there are no versioning concerns, since every server
version that has partitioned tables will allow you to lock one.

Back-patch to v10 where partitioned tables were introduced.

Discussion: https://postgr.es/m/1018205.1634346327@sss.pgh.pa.us
2021-10-16 12:24:11 -04:00
Peter Geoghegan 5863115e4c Remove unstable pg_amcheck tests.
Recent pg_amcheck bugfix commit d2bf06db added a test case that the
buildfarm has shown to be non-portable.  It doesn't particularly seem
worth keeping anyway.  Remove it.

Discussion: https://postgr.es/m/CAH2-Wz=7HKJ9WzAh7+M0JfwJ1yfT9qoE+KPa3P7iGToPOtGhXg@mail.gmail.com
Backpatch: 14-, just like the original commit.
2021-10-14 14:50:25 -07:00
Peter Geoghegan dd58194cf5 pg_amcheck: avoid unhelpful verification attempts.
Avoid calling contrib/amcheck functions with relations that are
unsuitable for checking.  Specifically, don't attempt verification of
temporary relations, or indexes whose pg_index entry indicates that the
index is invalid, or not ready.

These relations are not supported by any of the contrib/amcheck
functions, for reasons that are pretty fundamental.  For example, the
implementation of REINDEX CONCURRENTLY can add its own "transient"
pg_index entries, which has rather unclear implications for the B-Tree
verification functions, at least in the general case -- so they just
treat it as an error.  It falls to the amcheck caller (in this case
pg_amcheck) to deal with the situation at a higher level.

pg_amcheck now simply treats these conditions as additional "visibility
concerns" when it queries system catalogs.  This is a little arbitrary.
It seems to have the fewest problems among the available
alternatives.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Reported-By: Alexander Lakhin <exclusion@gmail.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Bug: #17212
Discussion: https://postgr.es/m/17212-34dd4a1d6bba98bf@postgresql.org
Backpatch: 14-, where pg_amcheck was introduced.
2021-10-13 14:08:11 -07:00
Michael Paquier f4e1c8892b Fix tests of pg_upgrade across different major versions
This fixes a set of issues that cause different breakages or annoyances
when using pg_upgrade's test.sh to do upgrades across different major
versions:
- test.sh is completely broken when using v14 as the new version because
of the removal of testtablespace/ as a Makefile rule.  Older versions of
pg_regress don't support --make-tablespacedir, blocking the creation of
the tablespace.  In order to fix that, it is simple enough to create
those directories in the script itself, but only when an old version is
involved.  This fix is needed on HEAD and REL_14_STABLE.
- The script would fail when using PG <= v11 as the old version because
WITH OIDS relations are not supported in v12.  In order to fix this, this
commit steals a method from the buildfarm that uses a DO block to change
all the relations marked as WITH OIDS, allowing pg_upgrade to pass.  This
is more portable than using ALTER TABLE queries on the relations causing
issues.  This is fixed down to v12, and was authored originally by Andrew
Dunstan.
- Not using --extra-float-digits=0 with v11 as the old version causes
a lot of diffs in the dumps, making the whole thing unreadable.  This is
only done when using v11 as the old version.  This is fixed down to v12.
The buildfarm code uses that already.

Note that the addition of --wal-segsize and --allow-group-access breaks
the script when using v10 or older at initdb time, as these options were
added in 11.  v10 will be EOL'd next year and nobody has complained about
those problems yet, so nothing is done about that.  This means that this
commit fixes upgrade tests using test.sh with v11 as the minimum older
version, up to HEAD, and that it is enough to apply this change down to
12.  The old and new dumps still generate diffs and still require manual
checks, and more could be done to reduce the noise, but this allows the
tests to run with a rather minimal amount of it.

I have tested this commit and test.sh with v11 as minimum across all the
branches where this is applied.  Note that this commit has no impact on
the normal pg_upgrade test run with a simple "make check".

Author:  Justin Pryzby, Andrew Dunstan, Michael Paquier
Discussion: https://postgr.es/m/20201206180248.GI24052@telsasoft.com
Backpatch-through: 12
2021-10-13 09:22:23 +09:00
Michael Paquier d834ebcf23 Add more $Test::Builder::Level in the TAP tests
Incrementing the reported call-stack level is useful for debugging
purposes, as it makes it possible to see exactly which part of a test
is failing, especially if a test is structured with subroutines
that call routines from Test::More.

This adds more incrementations of $Test::Builder::Level where debugging
gets improved (for example it does not make sense for some paths like
pg_rewind where long subroutines are used).

A note is added to src/test/perl/README about that, based on a
suggestion from Andrew Dunstan and a wording coming from both of us.

Usage of Test::Builder::Level has spread in 12, so a backpatch down to
this version is done.

Reviewed-by: Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson
Discussion: https://postgr.es/m/YV1CCFwgM1RV1LeS@paquier.xyz
Backpatch-through: 12
2021-10-12 11:16:20 +09:00
Michael Paquier ae254356f9 Fix warning in TAP test of pg_verifybackup
Oversight in a3fcbcd.

Reported-by: Thomas Munro
Discussion: https://postgr.es/m/CA+hUKGKnajZEwe91OTjro9kQLCMGGFHh2vvFn8tgHgbyn4bF9w@mail.gmail.com
Backpatch-through: 13
2021-10-06 13:28:30 +09:00
Tom Lane 919c08d909 Update our mapping of Windows time zone names some more.
Per discussion, let's just follow CLDR's default zone mappings
faithfully.  There are two changes here that are clear improvements:

* Mapping "Greenwich Standard Time" to Atlantic/Reykjavik is actually
a better fit than using London, because Iceland hasn't observed DST
since 1968, so this is more nearly what people might expect.

* Since the "Samoa" zone is specified to be UTC+13:00, we must map
it to Pacific/Apia not Pacific/Samoa; the latter refers to American
Samoa which is now on the other side of the date line.

The rest of these changes look like they're choosing the most populous
IANA zone as representative.  Whatever the details, we're just going
to say "if you don't like this mapping, complain to CLDR".

Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
2021-10-04 14:52:17 -04:00
Tom Lane fa8db48791 Update our mapping of Windows time zone names using CLDR info.
This corrects a bunch of entries in win32_tzmap[], and adds a few
new ones, based on the CLDR project's windowsZones.xml file.
Non-cosmetic changes fall into four main categories:

* Flat-out errors:

US/Aleutan doesn't exist
America/Salvador doesn't exist
Asia/Baku is wrong for Yerevan
Asia/Dhaka (Bangladesh) is wrong for Astana (Kazakhstan)
Europe/Bucharest is wrong for Chisinau
America/Mexico_City is wrong for Chetumal
America/Buenos_Aires is wrong for Cayenne
America/Caracas has its own zone, so poor fit for La Paz
US/Eastern is wrong for Haiti
US/Eastern is wrong for Indiana (East)
Asia/Karachi is wrong for Tashkent
Etc/UTC+12 doesn't exist
Signs of Etc/GMT zones were backwards

* Judgment calls:

(These changes follow CLDR's choices, except for the first one)

Use Europe/London for "Greenwich Standard Time", since that seems much
more likely than Africa/Casablanca to be what people will think that
zone name means.  CLDR has Atlantic/Reykjavik here, but that's no better.

Asia/Shanghai seems a better fit than Hong Kong for "China Standard
Time".

Europe/Sarajevo is now a link to Belgrade, ie "Central Europe Standard
Time"; so use Warsaw for "Central European Standard Time".

America/Sao_Paulo seems more representative than Araguaina for
"E. South America Standard Time".

Africa/Johannesburg seems more representative than Harare for
"South Africa Standard Time".

* New Windows zone names:

"Israel Standard Time"
"Kaliningrad Standard Time"
"Russia Time Zone N" for various N
"Singapore Standard Time"
"South Sudan Standard Time"
"W. Central Africa Standard Time"
"West Bank Standard Time"
"Yukon Standard Time"

Some of these replace older spellings, but I kept the older spellings
too in case our code runs on a machine with the older data.

* Replace aliases (tzdb Links) with underlying city-named zones:

(This tracks tzdb's longstanding practice, and reduces inconsistency
with the rest of the entries, as well as with CLDR.)

US/Alaska
Asia/Kuwait
Asia/Muscat
Canada/Atlantic
Australia/Canberra
Canada/Saskatchewan
US/Central
US/Eastern
US/Hawaii
US/Mountain
Canada/Newfoundland
US/Pacific

Back-patch to all supported branches, as is our usual practice for
time zone data updates.

Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
2021-10-02 16:06:09 -04:00
Tom Lane 81464999bc Re-alphabetize the win32_tzmap[] array.
The original intent seems to have been to sort case-insensitively
by the Windows zone name, but various changes over the years did
not get that memo.  This commit just moves a few entries to
restore exact alphabetic order, to ease comparison to the outputs
of processing scripts.

Back-patch to all supported branches, as is our usual practice for
time zone data updates.

Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
2021-10-02 16:06:09 -04:00
Fujii Masao 8231c500ed pgbench: Fix handling of socket errors during benchmark.
Previously, socket errors such as an invalid socket or a socket wait method
failure during the benchmark caused pgbench to exit with status 0. Instead,
errors during the run should result in exit status 2.

Back-patch to v12 where pgbench started reporting exit status.

Original complaint and patch by Hayato Kuroda.

Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870057375ACA8A73099C649F5349@TYCPR01MB5870.jpnprd01.prod.outlook.com
2021-09-29 21:49:29 +09:00
Fujii Masao 8021334d37 pgbench: Correct log level of message output when socket wait method fails.
A failure of the socket wait method, such as "select()", doesn't terminate
pgbench, so the log level of the error message emitted for such a failure
should be ERROR. But previously FATAL was used in that case.

Back-patch to v13 where pgbench started using common logging API.

Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/20210617005934.8bd37bf72efd5f1b38e6f482@sraoss.co.jp
2021-09-29 21:47:25 +09:00
Magnus Hagander febbb2f52c Properly schema-prefix reference to pg_catalog.pg_get_statisticsobjdef_columns
Author: Tatsuro Yamada
Backpatch-through: 14
Discussion: https://www.postgresql.org/message-id/7ad8cd13-db5b-5cf6-8561-dccad1a934cb@nttcom.co.jp
2021-09-28 16:24:24 +02:00
Peter Eisentraut 9f535e4aea Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 941ca560d0b36a8bace8432b06302ca003829f42
2021-09-27 09:22:27 +02:00
Alexander Korotkov 7186f07189 Split macros from visibilitymap.h into a separate header
That allows including just visibilitymapdefs.h from file.c and, in turn,
removing the include of postgres.h from relcache.h.

Reported-by: Andres Freund
Discussion: https://postgr.es/m/20210913232614.czafiubr435l6egi%40alap3.anarazel.de
Author: Alexander Korotkov
Reviewed-by: Andres Freund, Tom Lane, Alvaro Herrera
Backpatch-through: 13
2021-09-23 19:59:11 +03:00
Peter Eisentraut 3e8525aab8 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 10b675b81a3a04bac460cb049e0b7b6e17fb4795
2021-09-20 16:23:13 +02:00
Fujii Masao b27d0cd314 pgbench: Stop counting skipped transactions as soon as timer is exceeded.
When throttling is used, transactions that lag behind schedule by
more than the latency limit are counted and reported as skipped.
Previously, pgbench kept counting all skipped transactions even after
the timer specified by the -T option had been exceeded. This counting
could take a very long time, especially when an unrealistically high
rate setting in the -R option caused quite a lot of transactions to lag
behind schedule. This could prevent pgbench from ending promptly, so to
users pgbench could look like it was stuck.

To fix the issue, this commit changes pgbench so that it stops counting
skipped transactions as soon as the timer is exceeded. The timer can
make pgbench end soon even when there are lots of skipped transactions
that have not been counted yet.
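
In scheduling terms the change amounts to checking the -T deadline
inside the catch-up loop that counts skipped transactions.  A rough
sketch with invented names (not pgbench's actual data structures):

    #include <stdint.h>

    /* Hypothetical helpers standing in for pgbench internals */
    extern int64_t now_usec(void);
    extern int64_t end_time_usec;       /* deadline implied by -T, or 0 */

    /* Count transactions whose scheduled start is already too late,
     * but bail out as soon as the benchmark timer has expired. */
    static int64_t
    count_skipped(int64_t *next_start, int64_t interval, int64_t latency_limit)
    {
        int64_t skipped = 0;

        while (now_usec() - *next_start > latency_limit)
        {
            if (end_time_usec > 0 && now_usec() >= end_time_usec)
                break;                  /* timer exceeded: stop counting */
            skipped++;
            *next_start += interval;    /* move on to the next slot */
        }
        return skipped;
    }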

Note that there is no guarantee that all skipped transactions are
counted under -T, though there is under -t. This is OK in practice
because it's very unlikely to happen with realistic settings. Also,
this is not an issue that this commit newly introduces; pgbench could
already end without counting all skipped transactions before this
change.

Back-patch to v14. Per discussion, we decided not to bother
back-patching to the older stable branches because it's hard to imagine
the issue happening in practice (with realistic settings).

Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Greg Sabino Mullane, Fujii Masao
Discussion: https://postgr.es/m/20210613040151.265ff59d832f835bbcf8b3ba@sraoss.co.jp
2021-09-10 01:28:51 +09:00
Tom Lane 52c300df32 Avoid useless malloc/free traffic around getFormattedTypeName().
Coverity complained that one caller of getFormattedTypeName() failed
to free the returned string.  Which is true, but rather than fixing
that one, let's get rid of this tedious and error-prone requirement.
Now that getFormattedTypeName() caches its result, strdup'ing that
result and expecting the caller to free it accomplishes little except
to waste cycles.  We do create a leak in the case where getTypes didn't
make a TypeInfo for the type, but that basically shouldn't ever happen.

Back-patch, as commit 6c450a861 was.  This isn't a particularly
interesting bug fix, but the API change seems like a hazard for
future back-patching activity if we don't back-patch it.
2021-09-08 15:09:42 -04:00
Tom Lane 5edbcbd5b9 Minor improvements for psql help output.
Fix alphabetization of the output of "\?", and improve one description.

Update PageOutput counts where needed, fixing breakage from previous
patches.

Haiying Tang (PageOutput fixes by me)

Discussion: https://postgr.es/m/OS0PR01MB61136018064660F095CB57A8FB129@OS0PR01MB6113.jpnprd01.prod.outlook.com
2021-09-04 13:28:16 -04:00
Tom Lane 69d670e68e Remove arbitrary MAXPGPATH limit on command lengths in pg_ctl.
Replace fixed-length command buffers with psprintf() calls.  We didn't
have anything as convenient as psprintf() when this code was written,
but now that we do, there's little reason for the limitation to
stand.  Removing it eliminates some corner cases where (for example)
starting the postmaster with a whole lot of options fails.
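
The shape of the change is just swapping a fixed buffer plus snprintf()
for an allocating printf-style call.  A simplified before/after sketch
(assuming psprintf() from PostgreSQL's src/common, declared locally
here only to keep the sketch self-contained; not pg_ctl's actual code):

    /* psprintf() returns a freshly allocated, formatted string */
    extern char *psprintf(const char *fmt, ...);

    static char *
    build_start_command(const char *exec_path, const char *pgdata,
                        const char *options)
    {
        /* Before: a fixed buffer silently truncated long option lists.
         *
         *   char cmd[MAXPGPATH];
         *   snprintf(cmd, sizeof(cmd), "\"%s\" -D \"%s\" %s",
         *            exec_path, pgdata, options);
         */

        /* After: allocate whatever length is actually needed. */
        return psprintf("\"%s\" -D \"%s\" %s", exec_path, pgdata, options);
    }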

Most individual file names that pg_ctl deals with are still restricted
to MAXPGPATH, but we've seldom had complaints about that limitation
so long as it only applies to one filename.

Back-patch to all supported branches.

Phil Krylov

Discussion: https://postgr.es/m/567e199c6b97ee19deee600311515b86@krylov.eu
2021-09-03 21:04:44 -04:00
Fujii Masao d760d942c7 pgbench: Fix bug in measurement of disconnection delays.
When the -C/--connect option is specified, pgbench establishes and closes
the connection for each transaction. In this case pgbench needs to
measure the times taken for all those connections and disconnections,
to include the average connection time in the benchmark result.
But previously pgbench could not measure those disconnection delays.
To fix the bug, this commit makes pgbench measure the disconnection
delays whenever the connection is closed at the end of a transaction,
if the -C/--connect option is specified.

Back-patch to v14. Per discussion, we concluded not to back-patch to v13
or before because changing that behavior in stable branches would
surprise users rather than providing benefits.

Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Tatsuo Ishii, Asif Rehman, Fujii Masao
Discussion: https://postgr.es/m/20210614151155.a393bc7d8fed183e38c9f52a@sraoss.co.jp
2021-09-01 17:06:11 +09:00
Tomas Vondra 4d1816ec26 Don't print extra parens around expressions in extended stats
The code printing expressions for extended statistics doubled the
parens, producing results like ((a+1)), which is unnecessary and not
consistent with how we print expressions elsewhere.

Fixed by tweaking the code to produce just a single set of parens.

Reported by Mark Dilger, fix by me. Backpatch to 14, where support for
extended statistics on expressions was added.

Reported-by: Mark Dilger
Discussion: https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
2021-09-01 00:44:12 +02:00
Tom Lane a20a9f26ce In pg_dump, avoid doing per-table queries for RLS policies.
For no particularly good reason, getPolicies() queried pg_policy
separately for each table.  We can collect all the policies in
a single query instead, and attach them to the correct TableInfo
objects using findTableByOid() lookups.  On the regression
database, this reduces the number of queries substantially, and
provides a visible savings even when running against a local
server.

Per complaint from Hubert Depesz Lubaczewski.  Since this is such
a simple fix and can have a visible performance benefit, back-patch
to all supported branches.

Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
2021-08-31 15:04:05 -04:00
Tom Lane 9407dbbcb5 Cache the results of format_type() queries in pg_dump.
There's long been a "TODO: there might be some value in caching
the results" annotation on pg_dump's getFormattedTypeName function;
but we hadn't gotten around to checking what it was costing us to
repetitively look up type names.  It turns out that when dumping the
current regression database, about 10% of the total number of queries
issued are duplicative format_type() queries.  However, Hubert Depesz
Lubaczewski reported a not-unusual case where these account for over
half of the queries issued by pg_dump.  Individually these queries
aren't expensive, but when network lag is a factor, they add up to a
problem.  We can very easily add some caching to getFormattedTypeName
to solve it.
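
The caching itself can be as simple as remembering the formatted name
the first time it is fetched.  A hedged sketch with invented struct and
helper names (not pg_dump's actual TypeInfo handling):

    typedef struct TypeCacheEntry
    {
        unsigned int typoid;        /* type's OID */
        char        *ftypname;      /* cached result, or NULL if unset */
    } TypeCacheEntry;

    /* Hypothetical stand-in for issuing "SELECT format_type(oid, NULL)" */
    extern char *query_format_type(unsigned int typoid);

    static const char *
    get_formatted_type_name(TypeCacheEntry *entry)
    {
        if (entry->ftypname == NULL)
            entry->ftypname = query_format_type(entry->typoid);
        return entry->ftypname;     /* later calls skip the round trip */
    }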

Since this is such a simple fix and can have a visible performance
benefit, back-patch to all supported branches.

Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
2021-08-31 13:53:49 -04:00
Alvaro Herrera ce6b662aae psql: Fix name quoting on extended statistics
Per our message style guidelines, for human consumption we quote
qualified names as a whole rather than each part separately; but commit
bc085205c8 introduced a deviation for extended statistics and
a4d75c86bf copied it.  I don't agree with this policy applying to
names shown by psql, but that's a poor reason to deviate from the
practice only in two obscure corners, so make said corners use the same
style as everywhere else.

Backpatch to 14.  The first of these is older, but I'm not sure we want
to destabilize the psql output in the older branches for such a small
thing.

Discussion: https://postgr.es/m/20210828181618.GS26465@telsasoft.com
2021-08-30 14:01:29 -04:00
Fujii Masao efe2382d5a pgbench: Avoid unnecessary measurement of connection delays.
Commit 547f04e734 changed pgbench so that it used the measured connection
delays in its benchmark report only when the -C/--connect option is
specified. But those delays were still unnecessarily measured even when
that option was not specified, which was a waste of cycles. This commit
improves pgbench so that it avoids such unnecessary measurement.

Back-patch to v14 where commit 547f04e734 first appeared.

Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Asif Rehman, Fujii Masao
Discussion: https://postgr.es/m/20210614151155.a393bc7d8fed183e38c9f52a@sraoss.co.jp
2021-08-30 21:37:43 +09:00
Alvaro Herrera 359bcf7755 psql \dX: reference regclass with "pg_catalog." prefix
Déjà vu of commit fc40ba1296, for another backslash command.
Strictly speaking this isn't a bug, but since all references to catalog
objects are schema-qualified, we might as well be consistent.  The
omission first appeared in commit ad600bba04 and was replicated in
a4d75c86bf15; backpatch to 14.

Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
2021-08-28 12:04:15 -04:00
Alvaro Herrera a32860b88f psql \dP: reference regclass with "pg_catalog." prefix
Strictly speaking this isn't a bug, but since all references to catalog
objects are schema-qualified, we might as well be consistent.  The
omission first appeared in commit 1c5d9270e3, so backpatch to 12.

Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
2021-08-28 11:45:47 -04:00
Amit Kapila 5cfcd46e9d Fix Alter Subscription's Add/Drop Publication behavior.
The current refresh behavior tries to refresh just the added/dropped
publications, but that leads to removing the wrong tables from the
subscription. We can't refresh just the dropped publication because it
is quite possible that some of the tables were removed from the
publication by that time, and those would then remain part of the
subscription. Also, there is a chance that the tables that were part of
the publication being dropped are also part of another publication, so
we can't remove those.

So, we decided that by default the add/drop commands will also act like
REFRESH PUBLICATION, which means they will refresh all the publications.
We can keep the old behavior for "add publication", but it is better to
be consistent with "drop publication".

Author: Hou Zhijie
Reviewed-by: Masahiko Sawada, Amit Kapila
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-08-24 08:38:11 +05:30