In dumputils, we may have successfully parsed the acls when we discover
that we can't parse the reverse ACLs and then return -- check and free
aclitems if that happens.
In dumpTableSchema, move ftoptions and srvname under the relkind !=
RELKIND_VIEW branch (since they're only used there) and then check if
they've been allocated and, if so, free them at the end of that block.
Pointed out by Pavel Raiskup, though I didn't use those patches.
Discussion: https://postgr.es/m/2183976.vkCJMhdhmF@nb.usersys.redhat.com
This does not improve the security and reliability of the touched areas,
but it makes the style more consistent.
Author: Michael Paquier
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/20180309075538.GD9376@paquier.xyz
This allows control of the directory in which the postmaster sockets
are created for the temporary postmasters started by pg_upgrade.
The default location remains the current working directory, which is
typically fine, but if it is deeply nested then its pathname might
be too long to be a socket name.
In passing, clean up some messiness in pg_upgrade's option handling,
particularly the confusing and undocumented way that configuration-only
datadirs were handled. And fix check_required_directory's substantially
under-baked cleanup of directory pathnames.
Daniel Gustafsson, reviewed by Hironobu Suzuki, some code cleanup by me
Discussion: https://postgr.es/m/E72DD5C3-2268-48A5-A907-ED4B34BEC223@yesql.se
TAP tests on msys need to run with the DTK perl, which understands msys
virtualized paths. Postgres, however, does not understand such paths,
so before a path can be used safely with CREATE TABLESPACE, it needs to
be translated into a path on the underlying file system.
Per report from buildfarm member jacana. Suggested fix is from Andrew
Dunstan.
Discussion: https://postgr.es/m/20181130053555.GF2267@paquier.xyz
Three issues are fixed in this patch:
- Base backups forgot to ignore files specific to EXEC_BACKEND, leading
to spurious warnings when checksums are enabled, per analysis from me.
- pg_verify_checksums forgot about files specific to EXEC_BACKEND,
leading to failures of the tool on any such build, particularly Windows.
This error was originally found by newly-introduced TAP tests in various
buildfarm members using EXEC_BACKEND.
- pg_verify_checksums forgot to account for temporary files and temporary
paths, which could be valid relation files, without checksums, per
report from Andres Freund. More tests are added to cover this case.
A new test case which emulates corruption for a file in a different
tablespace is added, coming from Michael Banck, while I have coded
the main code and refactored the test code.
Author: Michael Banck, Michael Paquier
Reviewed-by: Stephen Frost, David Steele
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
This basically reverts commit d55241af70, leaving in place the portion
of the regression tests adapted for empty relation files and corrupted
cases. The reverted code was also failing to properly check relation
files located in a non-default tablespace path.
Per discussion with various folks, including Stephen Frost, David
Steele, Andres Freund, Michael Banck and myself.
Reported-by: Michael Banck
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
Backpatch-through: 11
CREATE STATISTICS completion was checking manually for the start and end
of the parenthesised list of types. That works, but we now have a better
way to do that as commit 121213d9d taught word_matches() to allow '*' in
the middle of an alternative. But it only applied that to tab completion
for EXPLAIN, ANALYZE and VACUUM. Use it for CREATE STATISTICS too.
Author: Dagfinn Ilmari Mannsåker
Discussion: https://www.postgresql.org/message-id/flat/d8jwooziy1s.fsf%40dalvik.ping.uio.no
The primary purpose of this commit is to ensure pg_upgrade tests yield
comparable dumps pre/post upgrade, which got broken by 12a53c732 /
578b229718, as the order in pg_largeobject_metadata is likely to
differ pre/post upgrade.
It also seems like a generally good idea to make sure such dumps are
comparable, outside of pg_upgrade tests.
LO metadata was already dumped in an ordered manner as the
metadata is dumped in a well defined order via
sortDumpableObjectsByTypeName() and sortDumpableObjects(). But large
object data is currently not tracked via that mechanism.
As Tom points out, it seems possible that at some point dumpBlobs() was
assumed to dump out objects in a well defined order, due to the use of
DISTINCT, which at that time was implemented only via sorting.
Per complaint from Andrew Dunstan and discussion with him and Tom
Lane.
Author: Andres Freund
Discussion: https://postgr.es/m/2735.1543333649@sss.pgh.pa.us
Unfortunately ac218aa4f6 missed the fact that a reference to
'pg_catalog.regnamespace'::regtype wouldn't work before that type is
known. Fix that by replacing the regtype usage with a join to
pg_type.
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us
Backpatch: 9.5-, like ac218aa4f6
When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were
added in 9.5, pg_upgrade's check for reg* types wasn't updated. While
regrole currently is safe, regnamespace is not.
It seems unlikely that anybody uses regnamespace inside catalog tables
across a pg_upgrade, but the tests should be correct nevertheless.
While at it, reorder the types checked in the query to be
alphabetical. Otherwise it's annoying to compare the existing types
with the ones being tested for.
Author: Andres Freund
Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com
Backpatch: 9.5-, as regrole/regnamespace
pg_upgrade previously copied pg_largeobject_metadata over from the old
cluster. That doesn't work, because the table has oids before
578b229718. I missed that.
As most pieces of metadata for large objects already were dumped as
DDL (except for comments overwritten by pg_upgrade, due to the copy of
pg_largeobject_metadata) it seems reasonable to just also dump grants
for large objects. If we ever consider this a relevant performance
problem, we'd need to fix the rest of the already emitted DDL
too.
There's still an open discussion about whether we'll want to force a
specific ordering for the dumped objects, as currently
pg_largeobject_metadata potentially has a different ordering
before/after pg_upgrade, which can make automated testing a bit
harder.
Reported-By: Andrew Dunstan
Author: Andres Freund
Discussion: https://postgr.es/m/91a8a980-41bc-412b-fba2-2ba71a141c2b@2ndQuadrant.com
"\pset format csv", or --csv, selects comma-separated values table format.
This is compliant with RFC 4180, except that we aren't too picky about
whether the record separator is LF or CRLF; also, the user may choose a
field separator other than comma.
This output format is directly compatible with the server's COPY CSV
format, and will also be useful as input to other programs. It's
considerably safer for that purpose than the old recommendation to
use "unaligned" format, since the latter couldn't handle data
containing the field separator character.
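For illustration, a minimal sketch (not the psql code; csv_print_field
is a hypothetical name) of the RFC 4180 quoting rule that makes this
format safe: a field is quoted if it contains the separator, a double
quote, or a newline, and embedded quotes are doubled.

    #include <stdio.h>
    #include <string.h>

    /* Print one CSV field, quoting per RFC 4180 when needed. */
    static void
    csv_print_field(FILE *out, const char *s, char sep)
    {
        if (strchr(s, sep) != NULL || strpbrk(s, "\"\r\n") != NULL)
        {
            fputc('"', out);
            for (; *s; s++)
            {
                if (*s == '"')
                    fputc('"', out);    /* double any embedded quote */
                fputc(*s, out);
            }
            fputc('"', out);
        }
        else
            fputs(s, out);
    }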
Daniel Vérité, reviewed by Fabien Coelho and David Fetter, some
tweaking by me
Discussion: https://postgr.es/m/a8de371e-006f-4f92-ab72-2bbe3ee78f03@manitou-mail.org
Commit eaf746a5b unintentionally made psql's "latex" output format
inaccessible, since not only "latex" but all abbreviations of it
were considered ambiguous against "latex-longtable". Let's go
back to the longstanding behavior that all shortened versions
mean "latex", and you have to write at least "latex-" to get
"latex-longtable". This leaves the only difference from pre-v12
behavior being that "\pset format a" is considered ambiguous.
The fact that the regression tests didn't expose this is pretty bad,
but fixing it is material for a separate commit.
Discussion: https://postgr.es/m/cb7e1caf-3ea6-450d-af28-f524903a030c@manitou-mail.org
recovery.conf settings are now set in postgresql.conf (or other GUC
sources). Currently, all the affected settings are PGC_POSTMASTER;
this could be refined in the future case by case.
Recovery is now initiated by a file recovery.signal. Standby mode is
initiated by a file standby.signal. The standby_mode setting is
gone. If a recovery.conf file is found, an error is issued.
The trigger_file setting has been renamed to promote_trigger_file as
part of the move.
The documentation chapter "Recovery Configuration" has been integrated
into "Server Configuration".
pg_basebackup -R now appends settings to postgresql.auto.conf and
creates a standby.signal file.
Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
This commit continues the code improvements started by commit
12788ae49e. With this commit, state machine transitions are better
contained in the routine that was called doCustom() and is now called
advanceConnectionState -- the resulting code is easier to reason about,
since there are no state changes occurring in the outer layer.
This change is prompted by future patches to add more features to
pgbench, which will need to effect some more surgery to this code.
Fabien's original had all the machine state changes inside one routine,
but I (Álvaro) thought that a subroutine to handle command execution is
more straightforward to review, so this commit does not match Fabien's
submission closely. If something is broken, it's probably my fault.
Author: Fabien Coelho, Álvaro Herrera
Reviewed-by: Kirk Jamison
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808111104320.1705@lancre
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing code, and the upcoming work aiming
to make table storage pluggable would have required expanding and
duplicating that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore; all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It would not technically be hard to hide the
oid column by default, but that would mean the confusing behavior would
either have to be carried forward forever, or it would cause breakage
down the line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
Two issues have been spotted and get fixed here:
- When checking for corrupted files, make sure that pg_verify_checksums
complains about the correct file. In order to make the logic more
robust, all files created are immediately removed once checks on them
are done. The error message generated by pg_verify_checksums also now
includes the file name it sees as corrupted.
- Before running the corruption-related tests, empty files were
generated with names overlapping those of the corrupted files,
potentially leading to conflicts. So use a different set of names for
both.
Author: Michael Banck
Discussion: https://postgr.es/m/20181119181119.GC23740@nighthawk.caipicrew.dd-dns.de
In \d and \z, instead of conflating partitioned tables and indexes with
plain ones, set the "type" column and table title differently to make
the distinction obvious. A simple ease-of-use improvement.
Author: Pavel Stehule, Michaël Paquier, Álvaro Herrera
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAFj8pRDMWPgijpt_vPj1t702PgLG4Ls2NCf+rEcb+qGPpossmg@mail.gmail.com
When hostaddr is given, the actual IP address that psql is connected to
can be totally unexpected for the given host. The more verbose output
we now generate makes things clearer. Since the "host" and "hostaddr"
parts of the conninfo could come from different sources (say, one of
them is in the service specification or a URI-style conninfo and the
other is not), this is not as silly as it may first appear. This is
also definitely useful if the hostname resolves to multiple addresses.
Author: Fabien Coelho
Reviewed-by: Pavel Stehule, Arthur Zakirov
Discussion: https://postgr.es/m/alpine.DEB.2.21.1810261532380.27686@lancre
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808201323020.13832@lancre
This becomes useful when used to retry a transaction after a
serialization error or deadlock abort. (We don't yet have that feature,
but this is preparation for it.)
While at it, use separate random state for thread administrivia such
as deciding which script to run, how long to delay for throttling, or
whether to log a message when sampling; this not only makes these tasks
independent of each other, but makes the actual thread run
deterministic.
Author: Marina Polyakova
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/72a0d590d6ba06f242d75c2e641820ec@postgrespro.ru
The previous behavior of preferring the oldest match had the advantage
of not breaking existing scripts when we add a conflicting format name;
but that behavior was undocumented and fragile (it seems just luck that
commit add9182e5 didn't break it). Let's go over to the less mistake-
prone approach of complaining when there are multiple matches.
Since this is a small compatibility break, no back-patch.
Daniel Vérité
Discussion: https://postgr.es/m/cb7e1caf-3ea6-450d-af28-f524903a030c@manitou-mail.org
A table with OIDs that was the first in the dump output would not get
dumped with OIDs enabled. Fix that.
The reason was that the currWithOids flag was declared to be bool but
actually also takes a -1 value for "don't know yet". But under
stdbool.h semantics, that is coerced to true, so the required SET
default_with_oids command is not output again. Change the variable
type to char to fix that.
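A standalone sketch of why the sentinel misbehaves under stdbool.h:

    #include <stdbool.h>
    #include <stdio.h>

    int
    main(void)
    {
        bool    flag = -1;      /* intended as "don't know yet" */
        char    tri = -1;       /* char keeps -1, 0 and 1 distinct */

        /* Any nonzero value assigned to a C99 bool becomes true, so
         * the sentinel is indistinguishable from a plain "true". */
        printf("%d %d\n", flag == true, tri == 1);  /* prints "1 0" */
        return 0;
    }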
Reported-by: Derek Nelson <derek@pipelinedb.com>
Add another transfer mode --clone to pg_upgrade (besides the existing
--link and the default copy), using special file cloning calls. This
makes the file transfer faster and more space efficient, achieving
speed similar to --link mode without the associated drawbacks.
On Linux, file cloning is supported on Btrfs and XFS (if formatted with
reflink support). On macOS, file cloning is supported on APFS.
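As a rough sketch of what the Linux code path amounts to (illustrative
only; clone_one_file is a hypothetical name and error handling is
minimal):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>           /* FICLONE */

    /* Clone src into a newly created dst, sharing blocks instead of
     * copying them; fails with EOPNOTSUPP if the filesystem cannot. */
    static int
    clone_one_file(const char *src, const char *dst)
    {
        int     srcfd = open(src, O_RDONLY);
        int     dstfd;
        int     rc;

        if (srcfd < 0)
            return -1;
        dstfd = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (dstfd < 0)
        {
            close(srcfd);
            return -1;
        }
        rc = ioctl(dstfd, FICLONE, srcfd);
        close(srcfd);
        close(dstfd);
        return rc;
    }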
Reviewed-by: Michael Paquier <michael@paquier.xyz>
The installcheck run accounts for a sizable fraction of test.sh's
runtime. Using a parallel schedule reduces that noticeably.
It's possible that we want to backpatch this at some point, to reduce
buildfarm times, but for now let's just see if this upsets the
buildfarm.
Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
This moves one check for conflicting options from the archive restore
code to the main function where other similar checks are performed.
Also reword the error message to be consistent with other messages.
The only option combination impacted is --create specified with
--single-transaction, and reporting the conflict early saves opening
the archive being worked on. A TAP test is added for this
combination.
Author: Daniel Gustafsson
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/616808BD-4B59-4E6C-97A9-7317F62D5570@yesql.se
This adds tab completion of the WHEN and EXECUTE FUNCTION|PROCEDURE
clauses to CREATE EVENT TRIGGER, similar to CREATE TRIGGER in the
previous commit. The logic is version-dependent, so that FUNCTION is
chosen over PROCEDURE for version 11 and newer.
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/d8jmur4q4yc.fsf@dalvik.ping.uio.no
The change to accept EXECUTE FUNCTION as well as EXECUTE PROCEDURE in
CREATE TRIGGER (added by 0a63f99) forgot to tell psql's tab completion
system about this. In passing, add tab completion of EXECUTE
FUNCTION/PROCEDURE after a complete WHEN ( … ) clause.
This change is version-aware, with FUNCTION being selected automatically
instead of PROCEDURE depending on the backend version, PROCEDURE being
a historical spelling kept for compatibility and considered deprecated
as of v11.
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/d8jmur4q4yc.fsf@dalvik.ping.uio.no
The pg_verify_checksums test tries to create files with corrupt data
named "123." and "123_." But on Windows a file name with a trailing dot
is the same as a file without the trailing dot. In the first case this
will create a file with a "valid" name, which causes the test to fail in
an unexpected way, and in the secongd case this will be redandant as the
test already creates a file named "123_".
Bug discovered by buildfarm animal bowerbird.
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
The original implementation of pg_verify_checksums used a blacklist to
decide which files should be skipped for scanning as they do not include
data checksums, like pg_internal.init or pg_control. However, this
missed two things:
- Some files are created within builds of EXEC_BACKEND and these were
not listed, causing failures on Windows.
- Extensions may create custom files in data folders, causing the tool
to equally fail.
This commit switches to a whitelist-like method instead by checking if
the files to scan are authorized relation files. This is close to a
reverse-engineering of what is defined in relpath.c in charge of
building the relation paths, and we could consider refactoring what this
patch does so that all routines are in a single place. This is left for
later.
This is based on a suggestion from Andres Freund. TAP tests are updated
so that multiple file patterns are tested. The bug has been spotted by
various buildfarm members as a result of b34e84f which has introduced
the TAP tests of pg_verify_checksums.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan, Michael Banck
Discussion: https://postgr.es/m/20181012005614.GC26424@paquier.xyz
Backpatch-through: 11
When an error occurs during a benchmark run, exit with a nonzero exit
code and write a message at the end. Previously, it would just print
the error message when it happened but then proceed to print the run
summary normally and exit with status 0. To still allow
distinguishing setup from run-time errors, we use exit status 2 for
the new state, whereas existing errors during pgbench initialization
use exit status 1.
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
All options available in the utility get coverage:
- Tests with disabled page checksums.
- Tests with enabled page checksums.
- Emulation of corruption and broken checksums with a full scan and
single relfilenode scan.
This patch has been contributed mainly by Michael Banck and Magnus
Hagander with things presented on various threads, and I have gathered
all the contents into a single patch.
Author: Michael Banck, Magnus Hagander, Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20181005012645.GE1629@paquier.xyz
Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result. However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions. Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.
The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.
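For reference, the replacement macro is shaped roughly like this (see
c.h for the authoritative definition):

    #include <limits.h>
    #include <string.h>

    /* Invert a comparison result without negating INT_MIN, which
     * would overflow: any negative value maps to 1, anything else
     * is negated normally. */
    #define INVERT_COMPARE_RESULT(var) \
        ((var) = ((var) < 0) ? 1 : -(var))

    static int
    compare_reversed(const void *a, const void *b, size_t len)
    {
        int     res = memcmp(a, b, len);    /* may legally be INT_MIN */

        INVERT_COMPARE_RESULT(res);         /* safe; "-res" is not */
        return res;
    }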
To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN". That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.
This is a longstanding portability hazard, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
I started out with the idea that we needed to detect use of %m format specs
in contexts other than elog/ereport calls, because we couldn't rely on that
working in *printf calls. But a better answer is to fix things so that it
does work. Now that we're using snprintf.c all the time, we can
implement %m there, and the problem goes away.
This requires also adjusting our various printf-wrapping functions so that
they ensure "errno" is preserved when they call snprintf.c.
Remove elog.c's handmade implementation of %m, and let it rely on
snprintf to support the feature. That should provide some performance
gain, though I've not attempted to measure it.
There are a lot of places where we could now simplify 'printf("%s",
strerror(errno))' into 'printf("%m")', but I'm not in any big hurry
to make that happen.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
It's fairly silly to truncate the throttle_delay to integer when the only
math we ever do with it requires converting back to double. Furthermore,
given that people are starting to complain about restrictions like only
supporting 1K client connections, I don't think we're very far away from
situations where the precision loss matters. As the code stood, for
example, there's no difference between --rate 100001 and --rate 111111;
both get converted to throttle_delay = 9. Somebody trying to run 100
threads and have each one dispatch around 1K TPS would find this lack of
precision rather surprising, especially since the required per-thread
delays are around 1ms, well within the timing precision of modern systems.
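To make the precision loss concrete (assuming, for illustration, that
the delay is the per-transaction interval in microseconds):

    /* old coding: truncation to integer microseconds */
    int     old_a = (int) (1000000.0 / 100001.0);   /* 9 */
    int     old_b = (int) (1000000.0 / 111111.0);   /* 9 -- identical */

    /* new coding: keep the delay as a double */
    double  new_a = 1000000.0 / 100001.0;           /* ~9.99990 */
    double  new_b = 1000000.0 / 111111.0;           /* ~9.00001 -- distinct */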
96e1cb4 has added support for --no-publications in pg_dump, pg_dumpall
and pg_restore, but forgot the fact that publication tables also need to
be ignored when this option is used.
Author: Gilles Darold
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/3f48e812-b0fa-388e-2043-9a176bdee27e@dalibo.com
Backpatch-through: 10, where publications have been added.
We haven't touched these since text search functionality landed in core
in 2007 :-(. While the upstream project isn't a beehive of activity,
they do make additions and bug fixes from time to time. Update our
copies of these files.
Also update our documentation about how to keep things in sync, since
they're not making distribution tarballs these days. Fortunately,
their source code turns out to be a breeze to build.
Notable changes:
* The non-UTF8 version of the hungarian stemmer now works in LATIN2
not LATIN1.
* New stemmers have appeared for arabic, indonesian, irish, lithuanian,
nepali, and tamil. These all work in UTF8, and the indonesian and
irish ones also work in LATIN1.
(There are some new stemmers that I did not incorporate, mainly because
their names don't match the underlying languages, suggesting that they're
not to be considered mainstream.)
Worth noting: the upstream Nepali dictionary was contributed by
Arthur Zakirov.
initdb forced because the contents of snowball_create.sql have
changed.
Still TODO: see about updating the stopword lists.
Arthur Zakirov, minor mods and doc work by me
Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
Previously, pgbench always used select(2) for this purpose, but that's
problematic for very high client counts, because select() can't deal
with file descriptor numbers larger than FD_SETSIZE. It's pretty common
for that to be only 1024 or so, whereas modern OSes can allow many more
open files than that. Using poll(2) would surmount that problem, but it
creates another one: poll()'s timeout resolution is only 1ms, which is
poor enough to cause problems with --rate specifications approaching or
exceeding 1K TPS.
On platforms that have ppoll(2), which includes Linux and recent
FreeBSD, we can use that to avoid the FD_SETSIZE problem without any
loss of timeout resolution. Hence, add configure logic to test for
ppoll(), and use it if available.
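A minimal sketch of the ppoll() usage this enables (illustrative; the
real abstraction layer is more involved):

    #define _GNU_SOURCE
    #include <poll.h>

    /* Wait for input on one socket with sub-millisecond timeout, and
     * without the FD_SETSIZE ceiling that select(2) imposes. */
    static int
    wait_for_socket(int sock)
    {
        struct pollfd   pfd = { .fd = sock, .events = POLLIN };
        struct timespec timeout = { .tv_sec = 0, .tv_nsec = 500000 };

        return ppoll(&pfd, 1, &timeout, NULL);      /* 500 µs timeout */
    }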
This patch introduces an abstraction layer into pgbench that could
be extended to support other kernel event-wait APIs such as kevents.
But actually adding such support is a matter for some future patch.
Doug Rady, reviewed by Robert Haas and Fabien Coelho, and whacked around
a good bit more by me
Discussion: https://postgr.es/m/23D017C9-81B7-484D-8490-FD94DEC4DF59@amazon.com
This removes a difference between the standard IsUnderPostmaster
execution environment and that of --boot and --single. In a stand-alone
backend, "SELECT random()" always started at the same seed.
On a system capable of using posix shared memory, initdb could still
conclude "selecting dynamic shared memory implementation ... sysv".
Crashed --boot or --single postgres processes orphaned shared memory
objects having names that collided with the not-actually-random names
that initdb probed. The sysv fallback appeared after ten crashes of
--boot or --single postgres. Since --boot and --single are rare in
production use, systems used for PostgreSQL development are the
principal candidate to notice this symptom.
Back-patch to 9.3 (all supported versions). PostgreSQL 9.4 introduced
dynamic shared memory, but 9.3 does share the "SELECT random()" problem.
Reviewed by Tom Lane and Kyotaro HORIGUCHI.
Discussion: https://postgr.es/m/20180915221546.GA3159382@rfd.leadboat.com
This replaces the "TailMatchesN" macros with just "TailMatches",
and likewise "HeadMatchesN" becomes "HeadMatches" and "MatchesN"
becomes "Matches". The various COMPLETE_WITH_LISTn macros are
reduced to COMPLETE_WITH, and the single-item COMPLETE_WITH_CONST
also gets folded into that. This eliminates a lot of minor
annoyance in writing tab-completion rules. Usefully, the compiled
code also gets a bit smaller (10% or so, on my machine).
The implementation depends on variadic macros, so we couldn't have
done this before we required C99.
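The underlying trick, as a standalone sketch with hypothetical names
(the tab-complete.c helpers differ in detail): count the variadic
arguments, then forward the count plus the arguments to one underlying
function, replacing the per-arity macros.

    #include <stdio.h>

    /* Count up to four variadic arguments. */
    #define VA_NARGS(...) VA_NARGS_(__VA_ARGS__, 4, 3, 2, 1)
    #define VA_NARGS_(_1, _2, _3, _4, n, ...) (n)

    /* One Matches(...) spelling, any arity, one underlying function. */
    #define Matches(...) check_words(VA_NARGS(__VA_ARGS__), __VA_ARGS__)

    static int
    check_words(int nwords, ...)
    {
        /* a real implementation would compare nwords input words */
        return nwords;
    }

    int
    main(void)
    {
        printf("%d\n", Matches("ALTER", "TABLE"));  /* prints 2 */
        return 0;
    }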
Andres Freund and Thomas Munro; some cosmetic cleanup by me.
Discussion: https://postgr.es/m/d8jo9djvm7h.fsf@dalvik.ping.uio.no
You can't use "FOR TABLE" as a single Matches argument, because readline
will consider that input to be two words not one. It's necessary to make
the pattern contain two arguments.
The case accidentally worked anyway because the words_after_create
test fired ... but only for the first such table name.
Noted by Edmund Horner, though this isn't exactly his proposed fix.
Backpatch to v10 where the faulty code came in.
Discussion: https://postgr.es/m/CAMyN-kDe=gBmHgxWwUUaXuwK+p+7g1vChR7foPHRDLE592nJPQ@mail.gmail.com
Previously, we made no attempt to provide tab completion in these
statements' optional parenthesized options lists. This patch teaches
psql to do so.
To prevent the option completions from being offered after we've already
seen a complete parenthesized option list, it's necessary to improve
word_matches() so that it allows a wildcard '*' in the middle of an
alternative, not only at the end as formerly. That requires only a
little more code than before, and it allows us to test for "incomplete
parenthesized options" with a test like
    else if (HeadMatches2("EXPLAIN", "(*") &&
             !HeadMatches2("EXPLAIN", "(*)"))
In addition, add some logic to offer column names in the context of
"ANALYZE tablename ( ...", and likewise for VACUUM. This isn't real
complete; it won't offer column names again after a comma. But it's
better than before, and it doesn't take much code.
Justin Pryzby, reviewed at various times by Álvaro Herrera, Arthur
Zakirov, and Edmund Horner; some additional fixups by me
Discussion: https://postgr.es/m/20180529000623.GA21896@telsasoft.com
The previous convention was to use names based on the set of relkinds being
selected for, which was not at all helpful for maintenance, especially
since people had been quite inconsistent about whether to change the names
when they changed the relkinds being selected for. Instead, use names
based on the functionality we need the relation to have, following the
model established by Query_for_list_of_updatables.
While at it, sort the list of Query constants a bit better; it had the
distinct air of code-assembled-by-dartboard before.
Discussion: https://postgr.es/m/14830.1537481254@sss.pgh.pa.us
This should offer the same relation types that SELECT ... FROM would.
You can't select from an index, for instance, so offering it here is
unhelpful. Noted while testing ilmari's recent patch.
We have the infrastructure to offer a list of tablespace names, but
it wasn't being used here; instead you got "FROM", "CURRENT", and "TO"
which aren't actually legal in this syntax.
Dagfinn Ilmari Mannsåker, reviewed by Arthur Zakirov
Discussion: https://postgr.es/m/d8jo9djvm7h.fsf@dalvik.ping.uio.no
The pref_non_data heuristic has been dead code for nearly ten years,
and as far as I can tell was dead code even when it was first committed.
I'm tired of silencing Coverity complaints about it, so get rid of it.
If anyone is ever interested in pursuing the concept, they can get the
code out of our git history.
Our general practice in frontend code is to accept input with either
Unix-style newlines (\n) or DOS-style (\r\n). pgbench was mostly down
with that, but its rule for line continuations (backslash-newline) was
not. This had been masked on Windows buildfarm machines before commit
0ba06e0bf by use of Windows text mode to read files. We could have fixed
it by forcing text mode again, but it's better to fix the parsing code
so that Windows-style text files on Unix systems don't cause problems.
Back-patch to v10 where pgbench grew line continuations.
Discussion: https://postgr.es/m/17194.1537191697@sss.pgh.pa.us
Previously, the way this worked was that a parallel pg_dump would
re-order the TABLE_DATA items in the dump's TOC into decreasing size
order, and separately re-order (some of) the INDEX items into decreasing
size order. Then pg_dump would dump the items in that order. Later,
parallel pg_restore just followed the TOC order. This method had lots
of deficiencies:
* TOC ordering randomly differed between parallel and non-parallel
dumps, and was hard to predict in the former case, causing problems
for building stable pg_dump test cases.
* Parallel restore only followed a well-chosen order if the dump had
been done in parallel; in particular, this never happened for restore
from custom-format dumps.
* The best order for restore isn't necessarily the same as for dump,
and it's not really static either because of locking considerations.
* TABLE_DATA and INDEX items aren't the only things that might take a lot
of work during restore. Scheduling was particularly stupid for the BLOBS
item, which might require lots of work during dump as well as restore,
but was left to the end in either case.
This patch removes the logic that changed the TOC order, fixing the
test instability problem. Instead, we sort the parallelizable items
just before processing them during a parallel dump. Independently
of that, parallel restore prioritizes the ready-to-execute tasks
based on the size of the underlying table. In the case of dependent
tasks such as index, constraint, or foreign key creation, the largest
relevant table is used as the metric for estimating the task length.
(This is pretty crude, but it should be enough to avoid the case we
want to avoid, which is ending the run with just a few large tasks
such that we can't make use of all N workers.)
Patch by me, responding to a complaint from Peter Eisentraut,
who also reviewed the patch.
Discussion: https://postgr.es/m/5137fe12-d0a2-4971-61b6-eb4e7e8875f8@2ndquadrant.com
PostgreSQL uses a custom wrapper for open() and fopen() which is
concurrent-safe, allowing multiple processes to open and work on the
same file. This has a couple of advantages:
- pg_test_fsync does not handle O_DSYNC correctly otherwise, leading to
false claims that disks are unsafe.
- TAP tests can run into race conditions when a postmaster and pg_ctl
open postmaster.pid; this fixes some random failures in the buildfarm.
pg_upgrade is one frontend tool using workarounds to bypass file locking
issues with the log files it generates, however the interactions with
pg_ctl are proving to be tedious to get rid of, so this is left for
later.
Author: Laurenz Albe
Reviewed-by: Michael Paquier, Kuntal Ghosh
Discussion: https://postgr.es/m/1527846213.2475.31.camel@cybertec.at
Discussion: https://postgr.es/m/16922.1520722108@sss.pgh.pa.us
Fix one untranslatable string concatenation in pg_rewind.
Fix one message in pg_verify_checksums to use a style used elsewhere
and avoid plural issues.
Fix one gratuitous abbreviation in psql.
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database. Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones(). This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases. (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter. More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)
If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.
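The mechanism amounts to something like this sketch, assuming the zone
files live under a path containing "/zoneinfo/" (system_zone_name is a
hypothetical name):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Return the IANA zone name from /etc/localtime, or NULL to fall
     * back to the brute-force scan. */
    static const char *
    system_zone_name(char *buf, size_t buflen)
    {
        char        target[1024];
        ssize_t     len = readlink("/etc/localtime", target,
                                   sizeof(target) - 1);
        const char *p;

        if (len < 0)
            return NULL;        /* missing, or not a symlink */
        target[len] = '\0';

        p = strstr(target, "/zoneinfo/");
        if (p == NULL)
            return NULL;        /* doesn't point into the IANA db */
        snprintf(buf, buflen, "%s", p + strlen("/zoneinfo/"));
        return buf;             /* e.g. "Asia/Tokyo" */
    }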
Also, tweak initdb so that it prints the zone name it selected.
In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.
Patch by me, per a suggestion from Robert Haas; review by Michael Paquier
Discussion: https://postgr.es/m/7408.1525812528@sss.pgh.pa.us
* Include partitioned tables in what's offered after ANALYZE.
* Include toast_tuple_target in what's offered after ALTER TABLE ... SET|RESET.
* Include HASH in what's offered after PARTITION BY.
This is extracted from a larger patch; these bits seem like
uncontroversial bug fixes for v11 features, so back-patch them into v11.
Justin Pryzby
Discussion: https://postgr.es/m/20180529000623.GA21896@telsasoft.com
There's a project policy against using plain "char buf[BLCKSZ]" local
or static variables as page buffers; preferred style is to palloc or
malloc each buffer to ensure it is MAXALIGN'd. However, that policy's
been ignored in an increasing number of places. We've apparently got
away with it so far, probably because (a) relatively few people use
platforms on which misalignment causes core dumps and/or (b) the
variables chance to be sufficiently aligned anyway. But this is not
something to rely on. Moreover, even if we don't get a core dump,
we might be paying a lot of cycles for misaligned accesses.
To fix, invent new union types PGAlignedBlock and PGAlignedXLogBlock
that the compiler must allocate with sufficient alignment, and use
those in place of plain char arrays.
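The union trick is shaped roughly like this (see c.h for the
authoritative definitions):

    /* Including a double (and int64) member forces the compiler to
     * give the whole union at least that member's alignment. */
    typedef union PGAlignedBlock
    {
        char        data[BLCKSZ];
        double      force_align_d;
        int64       force_align_i64;
    } PGAlignedBlock;

    /* usage, replacing "char buf[BLCKSZ]": */
    PGAlignedBlock buf;

    memset(buf.data, 0, BLCKSZ);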
I used these types even for variables where there's no risk of a
misaligned access, since ensuring proper alignment should make
kernel data transfers faster. I also changed some places where
we had been palloc'ing short-lived buffers, for coding style
uniformity and to save palloc/pfree overhead.
Since this seems to be a live portability hazard (despite the lack
of field reports), back-patch to all supported versions.
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
Currently there are two ways to trigger log rotation in the logging
collector process: call the pg_rotate_logfile() SQL function or send
SIGUSR1 directly to the logging collector process. However, it's nice
to have a more suitable way for external tools to do that, one which
doesn't require an SQL connection or knowledge of the logging
collector's pid. This commit implements triggering log rotation by a
"pg_ctl logrotate" command.
Discussion: https://postgr.es/m/20180416.115435.28153375.horiguchi.kyotaro%40lab.ntt.co.jp
Author: Kyotaro Horiguchi, Alexander Kuzmenkov, Alexander Korotkov
A cast declared WITH INOUT was described as '(binary coercible)',
which seems pretty inaccurate; let's print '(with inout)' instead.
Per complaint from Jean-Pierre Pelletier.
This definitely seems like a bug fix, but given that it's been wrong
since 8.4 and nobody complained before, I'm hesitant to back-patch a
behavior change into stable branches. It doesn't seem too late for
v11 though.
Discussion: https://postgr.es/m/5b887023.1c69fb81.ff96e.6a1d@mx.google.com
Use postgres_fe.h, since this is frontend code. Pretend that we've heard
of project style guidelines for, eg, #include order. Use BlockNumber not
int arithmetic for block numbers, to avoid misbehavior with relations
exceeding 2^31 blocks. Avoid an unnecessary strict-aliasing warning
(per report from Michael Banck). Const-ify assorted stuff. Avoid
scribbling on the output of readdir() -- perhaps that's safe in practice,
but POSIX forbids it, and this code has so far earned exactly zero
credibility portability-wise. Editorialize on an ambiguously-worded
message.
I did not touch the problem of the "buf" local variable being possibly
insufficiently aligned; that's not specific to this code, and seems like
it should be fixed as part of a different, larger patch.
Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
To verify the checksums, we open the file in text mode which doesn't work
on Windows as WIN32 treats Control-Z as EOF in files opened in text mode.
This leads to "short read of block .." error in some cases.
Fix it by opening the files in binary mode.
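The fix amounts to the fopen() mode flag, roughly:

    #include <stdio.h>

    /* In text mode, Windows translates CRLF and stops reading at a
     * 0x1A (Ctrl-Z) byte, so a block containing 0x1A reads short.
     * Binary mode reads the bytes verbatim. */
    FILE *
    open_data_file(const char *path)
    {
        return fopen(path, "rb");   /* "rb", not "r" */
    }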
Author: Amit Kapila
Reviewed-by: Magnus Hagander
Backpatch-through: 11
Discussion: https://postgr.es/m/CAA4eK1+LOnzod+h85FGmyjWzXKy-XV1FYwEyP-Tky2WpD5cxwA@mail.gmail.com
Instead of repeating the almost same large query in each version branch,
use one query and add a few columns to the SELECT list depending on the
version. This saves a lot of duplication.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Using -d is odd, because we normally reserve that for a database
argument, so rename it to -v and add long version --verbose.
Also, reduce it to emit one line per file checked rather than one line
per block.
Per a complaint from Michael Banck.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Michael Banck <michael.banck@credativ.de>
Discussion: https://postgr.es/m/20180827113411.GA22768@nighthawk.caipicrew.dd-dns.de
This function had a blacklist of dump object types that it believed
needed exclusive lock ... but we hadn't maintained that, so that it
was missing ROW SECURITY, POLICY, and INDEX ATTACH items, all of
which need (or should be treated as needing) exclusive lock.
Since the same oversight seems likely in future, let's reverse the
sense of the test so that the code has a whitelist of safe object
types; better to wrongly assume a command can't be run in parallel
than the opposite. Currently the only POST_DATA object type that's
safe is CREATE INDEX ... and that list hasn't changed in a long time.
Back-patch to 9.5 where RLS came in.
Discussion: https://postgr.es/m/11450.1535483506@sss.pgh.pa.us
Ensure the TOC entry is marked with the correct schema, so that its
name is as unique as the index's is.
Fix the dependencies: we want dependencies from this TOC entry to the
two indexes it depends on, and we don't care (at least not for this
purpose) what order the indexes are created in. Also, add dependencies
on the indexes' underlying tables. Those might seem pointless given
the index dependencies, but they are helpful to cue parallel restore
to avoid running the ATTACH PARTITION in parallel with other DDL on
the same tables.
Discussion: https://postgr.es/m/10817.1535494963@sss.pgh.pa.us
The source code was already set up for NLS support, so just a nls.mk
file needed to be added. Also, fix the old problem of putting the int64
format specifier right into the string, which breaks NLS.
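To illustrate the int64 issue (a sketch; the stand-in macros are
simplified, and INT64_FORMAT's real spelling varies by platform, which
is exactly the problem):

    #include <stdio.h>
    #include <stdint.h>

    #define _(x) (x)                /* stand-in for gettext() */
    #define INT64_FORMAT "%lld"     /* platform-dependent in the real tree */

    static void
    report(int64_t total)
    {
        /* Breaks NLS: the msgid varies per platform once INT64_FORMAT
         * is pasted into the string ("%ld", "%lld", "%I64d", ...). */
        printf(_("checked " INT64_FORMAT " blocks\n"), (long long) total);

        /* Translatable: one stable msgid, value cast to a fixed type. */
        printf(_("checked %lld blocks\n"), (long long) total);
    }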
The archive should show a dependency on the item's table, but it failed
to include one. This could cause failures in parallel restore due to
emitting ALTER TABLE ... ENABLE ROW LEVEL SECURITY before restoring
the table's data. In practice the odds of a problem seem low, since
you would typically need to have set FORCE ROW LEVEL SECURITY as well,
and you'd also need a very high --jobs count to have any chance of this
happening. That probably explains the lack of field reports.
Still, it's a bug, so back-patch to 9.5 where RLS was introduced.
Discussion: https://postgr.es/m/19784.1535390902@sss.pgh.pa.us
Since procedures are now a different thing from functions, change the
CREATE OPERATOR syntax to use FUNCTION in the clause that specifies the
function. PROCEDURE is still accepted for compatibility.
Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com>
Historically, the term procedure was used as a synonym for function in
Postgres/PostgreSQL. Now we have procedures as separate objects from
functions, so we need to clean up the documentation to not mix those
terms.
In particular, mentions of "trigger procedures" are changed to "trigger
functions", and access method "support procedures" are changed to
"support functions". (The latter already used FUNCTION in the SQL
syntax anyway.) Also, the terminology in the SPI chapter has been
cleaned up.
A few tests, examples, and code comments are also adjusted to be
consistent with documentation changes, but not everything.
Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com>
This commit prevents a crash of pg_dump caused by the exclusion of a
table which has identity columns, as the table would be correctly
excluded but not its identity sequence. In order to fix that, identity
sequences are excluded if the parent table is defined as such. Knowing
about such sequences has no meaning without their parent table anyway.
Reported-by: Andy Abelisto
Author: David Rowley
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/153479393218.1316.8472285660264976457@wrigleys.postgresql.org
Backpatch-through: 10
While monitoring the code, a couple of issues related to string
translation have shown up:
- Some routines for auto-updatable views return an error string, which
was sometimes left untranslated. A comment regarding string translation
is added for each routine to help with future features.
- GSSAPI authentication missed two translations.
- vacuumdb handles non-translated strings.
- GetConfigOptionByNum should translate strings. This part is not
back-patched as after a minor upgrade this could be surprising for
users.
Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20180810.152131.31921918.horiguchi.kyotaro@lab.ntt.co.jp
Backpatch-through: 9.3
Previously, this code blindly followed the common coding pattern of
passing PQserverVersion(AH->connection) as the server-version parameter
of fmtQualifiedId. That works as long as we have a connection; but in
pg_restore with text output, we don't. Instead we got a zero from
PQserverVersion, which fmtQualifiedId interpreted as "server is too old to
have schemas", and so the name went unqualified. That still accidentally
managed to work in many cases, which is probably why this ancient bug went
undetected for so long. It only became obvious in the wake of the changes
to force dump/restore to execute with restricted search_path.
In HEAD/v11, let's deal with this by ripping out fmtQualifiedId's server-
version behavioral dependency, and just making it schema-qualify all the
time. We no longer support pg_dump from servers old enough to need the
ability to omit schema name, let alone restoring to them. (Also, the few
callers outside pg_dump already didn't work with pre-schema servers.)
In older branches, that's not an acceptable solution, so instead just
tweak the DISABLE/ENABLE TRIGGER logic to ensure it will schema-qualify
its output regardless of server version.
Per bug #15338 from Oleg somebody. Back-patch to all supported branches.
Discussion: https://postgr.es/m/153452458706.1316.5328079417086507743@wrigleys.postgresql.org
Commit 244142d32a only tested for the
pg_controldata output for primary servers, but standby servers have
different "Database cluster state" output, so check for that too.
Diagnosed-by: Michael Paquier
Discussion: https://postgr.es/m/20180810164240.GM13638@paquier.xyz
Backpatch-through: 9.3
Commit bcbd94080 didn't bother to fix up a comment it had falsified.
Also, const-ify choose_dsm_implementation(), since what it's returning
is always a constant string; pickier compilers would warn about this.
pg_dump with --binary-upgrade must emit ALTER EXTENSION ADD commands
for all objects that are members of extensions. It forgot to do so for
event triggers, as per bug #15310 from Nick Barnes. Back-patch to 9.3
where event triggers were introduced.
Haribabu Kommi
Discussion: https://postgr.es/m/153360083872.1395.4593932457718151600@wrigleys.postgresql.org
The original coding here (which is, I believe, my fault) supposed that
it didn't need to concern itself with the possibility that one object
of a given type-priority has a namespace while another doesn't. But
that's not reliably true anymore, if it ever was; and if it does happen
then it's possible that DOTypeNameCompare returns self-inconsistent
comparison results. That leads to unspecified behavior in qsort()
and a resultant weird output order from pg_dump.
This should end up being only a cosmetic problem, because any ordering
constraints that actually matter should be enforced by the later
dependency-based sort. Still, it's a bug, so back-patch.
Report and fix by Jacob Champion, though I editorialized on his
patch to the extent of making NULL sort after non-NULL, for consistency
with our usual sorting definitions.
Discussion: https://postgr.es/m/CABAq_6Hw+V-Kj7PNfD5tgOaWT_-qaYkc+SRmJkPLeUjYXLdxwQ@mail.gmail.com
6cb3372 sets errno to ENOSPC when fewer bytes than expected have been
written and errno is unset, though it forgot to properly reset errno
before calling write(), so errno could potentially come from a previous
system call.
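The corrected pattern, as an illustrative sketch (write_block is a
hypothetical name):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t
    write_block(int fd, const void *buf, size_t len)
    {
        ssize_t rc;

        errno = 0;              /* don't inherit a stale errno */
        rc = write(fd, buf, len);

        /* A short write with errno unset means we ran out of space. */
        if (rc >= 0 && (size_t) rc != len && errno == 0)
            errno = ENOSPC;
        return rc;
    }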
Reported-by: Tom Lane
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/31797.1533326676@sss.pgh.pa.us
\conninfo prints the results of PQhost() and some other libpq functions.
It used to override the PQhost() result with the hostaddr parameter if
that'd been given, but that's unhelpful when multiple hosts were listed
in the connection string. Furthermore, it seems unnecessary in the wake
of commit 1944cdc98, since PQhost does any useful substitution itself.
So let's just remove the extra code and print PQhost()'s result without
any editorialization.
Back-patch to v10, as 1944cdc98 (just) was.
Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
Commits 742869946 et al turn out to be a couple bricks shy of a load.
We were dumping the stored values of GUC_LIST_QUOTE variables as they
appear in proconfig or setconfig catalog columns. However, although that
quoting rule looks a lot like SQL-identifier double quotes, there are two
critical differences: empty strings ("") are legal, and depending on which
variable you're considering, values longer than NAMEDATALEN might be valid
too. So the current technique fails altogether on empty-string list
entries (as reported by Steven Winfield in bug #15248) and it also risks
truncating file pathnames during dump/reload of GUC values that are lists
of pathnames.
To fix, split the stored value without any downcasing or truncation,
and then emit each element as a SQL string literal.
This is a tad annoying, because we now have three copies of the
comma-separated-string splitting logic in varlena.c as well as a fourth
one in dumputils.c. (Not to mention the randomly-different-from-those
splitting logic in libpq...) I looked at unifying these, but it would
be rather a mess unless we're willing to tweak the API definitions of
SplitIdentifierString, SplitDirectoriesString, or both. That might be
worth doing in future; but it seems pretty unsafe for a back-patched
bug fix, so for now accept the duplication.
Back-patch to all supported branches, as the previous fix was.
Discussion: https://postgr.es/m/7585.1529435872@sss.pgh.pa.us
pg_dump knew about printing ALTER TABLE ... REPLICA IDENTITY USING INDEX
for indexes declared as indexes, but it failed to print that for indexes
declared as unique or primary-key constraints. Per report from Achilleas
Mantzios.
This has been broken since the feature was introduced, AFAICS.
Back-patch to 9.4.
Discussion: https://postgr.es/m/1e6cc5ad-b84a-7c07-8c08-a4d0c3cdc938@matrix.gatewaynet.com
This is useful to know when the data copy has been finished. The
current situation can be confusing for users as the last message is
"waiting for background process to finish streaming", so it looks like
this is taking time but the final sync is instead.
Author: Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1ypeoMJ=tFBG8vP13sxEtXd4Pm_x1SqsJdW_RvzpcvN=A@mail.gmail.com
Previously pg_upgrade checked for the pid file and started/stopped the
server to force a clean shutdown. However, "pg_ctl -m immediate"
removes the pid file but doesn't do a clean shutdown, so check
pg_controldata for a clean shutdown too.
Diagnosed-by: Vimalraj A
Discussion: https://postgr.es/m/CAFKBAK5e4Q-oTUuPPJ56EU_d2Rzodq6GWKS3ncAk3xo7hAsOZg@mail.gmail.com
Backpatch-through: 9.3
Previously only the missing library name was reported, forcing users to
look in all databases to find the library entries.
Discussion: https://postgr.es/m/20180713162815.GA3835@momjian.us
Author: Daniel Gustafsson, me
Depending on the platform used, this can cause a crash in the worst
case, or an unhelpful error message, so fail gracefully.
Author: Fabien Coelho
Discussion: https://postgr.es/m/alpine.DEB.2.21.1807262302550.29874@lancre
Backpatch: 11-, where hash() has been added in pgbench.
Some error messages which report something about a file operation also
include context that is already provided by the path being worked on,
making things rather duplicative. This creates more work for
translators, and does not actually bring clarity.
More could be done, but in a lot of cases the context used is actually
useful; still, this patch cuts things down a good deal.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Tom Lane
Discussion: https://postgr.es/m/20180718044711.GA8565@paquier.xyz
I blew the dust off a Bourne shell (file date 1996, yea verily) and
tried to run test.sh with it. It mostly worked, but I found that the
temp-directory creation code introduced by commit be76a6d39 was not
compatible, for a couple of reasons: this shell thinks "set -e" should
force an exit if a command within backticks fails, and it also thinks code
within braces should be executed by a sub-shell, meaning that variable
settings don't propagate back up to the parent shell. In view of Victor
Wagner's report that Solaris is still using pre-POSIX shells, seems like
we oughta make this case work. It's not like the code is any less
idiomatic this way; the prior coding technique appeared nowhere else.
(There is a remaining bash-ism here, which is that $RANDOM doesn't do
what the code hopes in non-bash shells. But the use of $$ elsewhere in
that path should be enough to ensure uniqueness and some amount of
randomness, so I think it's okay as-is.)
Back-patch to all supported branches, as the previous commit was.
Discussion: https://postgr.es/m/20180720153820.69e9ae6c@fafnir.local.vm
Double-quote $PGDATA in "find" commands introduced by commit da9b580d8,
in case that path contains spaces or other special characters.
Adjust a few other places so that quoting is done more consistently.
None of the others are actual bugs AFAICS, but it's confusing to readers
if the same thing is done differently in different places.
Noted by Tels.
Discussion: https://postgr.es/m/c96303c04c360bbedaa04f90f515745b.squirrel@sm.webmail.pair.com
Most of test.sh uses traditional backtick syntax for command substitution,
but commit da9b580d8 introduced two uses of $(...) syntax, which is not
recognized by very old shells. Bring those into line with the rest.
Victor Wagner
Discussion: https://postgr.es/m/20180720153820.69e9ae6c@fafnir.local.vm
This is essential information when looking at an index that has
"included" columns. Per discussion, follow the style used in \dC
and some other places: column header is "Key?" and values are "yes"
or "no" (all translatable).
While at it, revise describeOneTableDetails to be a bit more maintainable:
avoid hard-wired column numbers and multiple repetitions of what needs
to be identical test logic. This also results in the emitted catalog
query corresponding more closely to what we print, which should be a
benefit to users of ECHO_HIDDEN mode, and perhaps a bit faster too
(the old logic sometimes asked for values it would not print, even
ones that are fairly expensive to get).
Discussion: https://postgr.es/m/21724.1531943735@sss.pgh.pa.us
A collection of typos I happened to spot while reading code, as well as
grepping for common mistakes.
Backpatch to all supported versions, as applicable, to avoid conflicts
when backporting other commits in the future.
Some error messages related to file handling use the code-path context
to describe their state. For example, 2PC-related errors refer to
"two-phase status files", and "relation mapping file" is used for the
catalog-to-filenode mapping; however, those prove difficult to
translate, and are no more helpful than just referring to the path of
the file being worked on. So simplify all those error messages by
just referring to the file by its path. In some cases, like the
manipulation of WAL segments, the context is actually helpful, so
those are kept.
Calls to the system function read() have also been rather inconsistent
in their error handling, sometimes not reporting the number of bytes
read, and some other code paths trying to use an errno which has not
been set. With this patch the in-core functions use a more consistent
pattern, checking errno if it is set and otherwise reporting a short
read.
To handle pluralization when an unexpected number of bytes is read,
"could not read: read %d of %zu" is used as the error message, with
the %d field being the result of read() and %zu the expected size.
This simplifies the work of translators by reducing variations of the
same message.
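For illustration, a minimal sketch of the resulting read() pattern
(hypothetical helper and message wording, not the actual committed code):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical helper: report errno when set, else report a short read. */
    static void
    read_exact(int fd, void *buf, size_t len, const char *path)
    {
        ssize_t     r;

        errno = 0;
        r = read(fd, buf, len);
        if (r != (ssize_t) len)
        {
            if (r < 0)
                fprintf(stderr, "could not read file \"%s\": %s\n",
                        path, strerror(errno));
            else
                fprintf(stderr, "could not read file \"%s\": read %d of %zu\n",
                        path, (int) r, len);
            exit(1);
        }
    }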
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20180520000522.GB1603@paquier.xyz
Update links that resulted in redirects. Most are changes from http to
https, but there are also some other minor edits. (There are still some
redirects where the target URL looks less elegant than the one we
currently have. I have left those as is.)
The patch that ended up as commit 3de241dba8 ("Foreign keys on
partitioned tables") lacked pg_dump tests, so the pg_dump code that was
there to support it inadvertently stopped working when in later
development I modified the backend code not to emit pg_trigger rows for
the partitioned table itself.
Bug analysis and code fix are by Michaël. I (Álvaro) added the test.
Reported-by: amul sul <sulamul@gmail.com>
Co-authored-by: Michaël Paquier <michael@paquier.xyz>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CAAJ_b94n=UsNVhgs97vCaWEZAMe-tGDRVuZ73oePQH=eaJKGSA@mail.gmail.com
PostgreSQL nowadays offers some kind of dynamic shared memory feature on
all supported platforms. Having the choice of "none" prevents us from
relying on DSM in core features. So this patch removes the choice of
"none".
Author: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
This option is consistent with what pg_dump and pg_basebackup provide,
and is useful for limiting the I/O effort when testing things; it is
not meant to be used in a production environment.
Author: Michael Paquier
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/20180325122607.GB3707@paquier.xyz
The previous sync logic relied on finding and then launching initdb -S
externally, which is a simple wrapper on top of fsync_pgdata. There is
nothing preventing pg_rewind from calling this routine directly, so
remove the dependency on initdb and just call it.
Author: Michael Paquier
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/20180325122607.GB3707@paquier.xyz
In TAP test functions, that is, those that produce test results, locally
increment $Test::Builder::Level. This has the effect that test failures
are reported at the caller's location rather than somewhere in the test
support libraries.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
In a partition, row triggers that had been cloned from their parent
partitioned table would not be listed at all in psql's \d, which could
surprise users, per insistent complaint from Ashutosh Bapat (though his
aim was elsewhere). The simplest possible fix, suggested by Peter
Eisentraut, seems to be to list triggers marked as internal if they have
a row in pg_depend that points to some other trigger.
Author: Álvaro Herrera
Discussion: https://postgr.es/m/20180618165910.p26vhk7dpq65ix54@alvherre.pgsql
This file has failed to report back to callers proper failures of
fsync calls. I spotted the one in tar_finish() while Kuntal spotted
the one in tar_close().
Backpatch down to 10, where this code was introduced.
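For illustration, a minimal sketch of the kind of check that was
missing (hypothetical wrapper, not the actual tar_finish()/tar_close()
code):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical: propagate fsync failures instead of ignoring them. */
    static bool
    fsync_and_report(int fd, const char *path)
    {
        if (fsync(fd) != 0)
        {
            fprintf(stderr, "could not fsync file \"%s\": %s\n",
                    path, strerror(errno));
            return false;
        }
        return true;
    }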
Reported by: Michael Paquier, Kuntal Ghosh
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh, Magnus Hagander
Discussion: https://postgr.es/m/20180625024356.GD1146@paquier.xyz
System calls in error code paths are causing two issues which several
code paths have not handled correctly:
1) For write() calls, the system may sometimes return fewer bytes than
requested without errno being set. Some paths were careful enough to
consider that case, and assumed that errno should be set to ENOSPC;
other calls missed that.
2) errno generated by a system call can be overwritten by other system
calls which may succeed once an error code path is taken, causing what
is reported to the user to be incorrect.
This patch uses the brute-force approach of correcting all those code
paths. Some refactoring could happen in the future, but this is left
as future work, which is not targeted for back-branches anyway. A
minimal sketch of the corrected pattern follows.
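For illustration (hypothetical helper, not one of the actual call
sites), a write() wrapper applying both fixes:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void
    write_exact(int fd, const void *buf, size_t len, const char *path)
    {
        ssize_t     rc;

        errno = 0;
        rc = write(fd, buf, len);
        if (rc != (ssize_t) len)
        {
            /* 1) a short write with errno unset is treated as out of space */
            int save_errno = (errno == 0) ? ENOSPC : errno;

            /* 2) cleanup system calls may clobber errno, so it was saved */
            close(fd);
            errno = save_errno;
            fprintf(stderr, "could not write file \"%s\": %s\n",
                    path, strerror(errno));
            exit(1);
        }
    }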
Author: Michael Paquier
Reviewed-by: Ashutosh Sharma
Discussion: https://postgr.es/m/20180622061535.GD5215@paquier.xyz
Commit 16828d5c02 neglected to do this, so upgraded databases would
silently get null instead of the specified default in rows without the
attribute defined.
A new binary upgrade function is provided to perform this and pg_dump is
adjusted to output a call to the function if required in binary upgrade
mode.
Also included is code to drop missing attribute values for dropped
columns. That way if the type is later dropped the missing value won't
have a dangling reference to the type.
Finally the regression tests are adjusted to ensure that there is a row
with a missing value so that this code is exercised in upgrade testing.
Catalog version unfortunately bumped.
Regression test changes from Tom Lane.
Remainder from me, reviewed by Tom Lane, Andres Freund, Alvaro Herrera
Discussion: https://postgr.es/m/19987.1529420110@sss.pgh.pa.us
This could only cause an issue if strftime returned a ridiculously
long timezone name, which seems unlikely; and it wouldn't qualify
as a security problem even then, since pg_waldump (nee pg_xlogdump)
is a debug tool not part of the server. But gcc 8 has started issuing
warnings about it, so let's use snprintf and be safe.
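For illustration, a minimal sketch of the safe coding (hypothetical
buffer sizes and format, not the pg_waldump code):

    #include <stdio.h>
    #include <time.h>

    static void
    format_timestamp(char *out, size_t outlen, time_t t)
    {
        char        zone[64];

        /*
         * strftime truncates safely, and snprintf cannot overrun "out"
         * even if the %Z expansion is unexpectedly long.
         */
        strftime(zone, sizeof(zone), "%Z", localtime(&t));
        snprintf(out, outlen, "%ld (%s)", (long) t, zone);
    }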
Backpatch to 9.3 where this code was added.
Discussion: https://postgr.es/m/21789.1529170195@sss.pgh.pa.us
This complies with the perlcritic policy
Subroutines::RequireFinalReturn, which is a severity 4 policy. Since we
only currently check at severity level 5, the policy is raised to that
level until we move to level 4 or lower, so that any new infringements
will be caught.
A small cosmetic piece of tidying of the pgperlcritic script is
included.
Mike Blackwell
Discussion: https://postgr.es/m/CAESHdJpfFm_9wQnQ3koY3c91FoRQsO-fh02za9R3OEMndOn84A@mail.gmail.com
Commit c37b3d08c dropped its added GetDataDirectoryCreatePerm call into
the wrong place in pg_resetwal.c, namely after the chdir to DataDir.
That broke invocations using a relative path, as reported by Tushar Ahuja.
We could have left it where it was and changed the argument to be ".",
but that'd result in a rather confusing error message in event of a
failure, so re-ordering seems like a better solution.
Similarly reorder operations in pg_rewind.c. The issue there is that
it doesn't seem like a good idea to do any actual operations before the
not-root check (on Unix) or the restricted token acquisition (on Windows).
I don't know that this is an actual bug, but I'm definitely not convinced
that it isn't, either.
Assorted other code review for c37b3d08c and da9b580d8: fix some
misspelled or otherwise badly worded comments, put the #include for
<sys/stat.h> where it actually belongs, etc.
Discussion: https://postgr.es/m/aeb9c3a7-3c3f-a57f-1a18-c8d4fcdc2a1f@enterprisedb.com
While glibc's version of printf accepts %m, most others do not;
to be portable, we have to do it the hard way with strerror(errno).
pg_verify_checksums evidently did not get that memo.
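For illustration (hypothetical message text), the portable spelling is:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static void
    report_open_failure(const char *path)
    {
        /* glibc would also accept "%m" here, but that is not portable */
        fprintf(stderr, "could not open file \"%s\": %s\n",
                path, strerror(errno));
    }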
Noted while fooling around with NetBSD-current, which generates
a compiler warning for this mistake.
- Change the vacuum_cleanup_index_scale_factor GUC to PGC_USERSET.
The vacuum_cleanup_index_scale_factor GUC was defined as PGC_SIGHUP,
but it affects not only autovacuum, so it might be useful to change it
from a user session in order to influence a manually run VACUUM.
- Add missing tab-complete support for the
vacuum_cleanup_index_scale_factor reloption.
- Fix the condition for B-tree index cleanup.
A zero value of vacuum_cleanup_index_scale_factor means that the user
wants B-tree index cleanup never to be skipped.
- Documentation and comment improvements
Authors: Justin Pryzby, Alexander Korotkov, Liudmila Mantrova
Reviewed by: all authors and Robert Haas
Discussion: https://www.postgresql.org/message-id/flat/20180502023025.GD7631%40telsasoft.com
The vertical tightness settings collapse vertical whitespace between
opening and closing brackets (parentheses, square brackets and braces).
This can make data structures in particular harder to read, and is not
very consistent with our style in non-Perl code. This patch restricts
that setting to parentheses only, and reformats all the perl code
accordingly. Not applying this to parentheses has some unfortunate
effects, so the consensus is to keep the setting for parentheses and not
for the others.
The diff for this patch does highlight some places where structures
should have trailing commas. They can be added manually, as there is no
automatic tool to do so.
Discussion: https://postgr.es/m/a2f2b87c-56be-c070-bfc0-36288b4b41c1@2ndQuadrant.com
The regexes used in 102_vacuumdb_stages.pl to check the postmaster log
for expected output contained several places with ".*.*", which is
underdetermined and can cause exponential runtime growth in Perl's regex
matcher (since it's not bright enough not to waste time seeing whether
different splits of the same substring would allow a match). We were
fortunate that the amount of text in the postmaster log was generally not
enough to make the runtime go to the moon; although commit 6271fceb8 had
been on the hairy edge of an obvious problem, thanks to its increasing the
default log verbosity to DEBUG1. Experimentation shows that anyone who
tried to run this test case with an even higher log verbosity would have
been in for serious pain. But even at default logging level, fixing this
saves several hundred ms on my workstation, more on slower buildfarm
members.
Remove the extra ".*"s, restoring more-or-less-linear matching speed.
Back-patch to 9.4 where the test case was added, mostly in case anyone
tries to do related debugging in a back branch.
Discussion: https://postgr.es/m/32459.1525657786@sss.pgh.pa.us
While poking into initdb's performance, I noticed that this query
wasn't being done very intelligently. By forcing it to execute
obj_description() for each pg_proc/pg_operator join row, we were
essentially setting up a nestloop join to pg_description, which
is not a bright query plan when there are hundreds of outer rows.
Convert the check for a "deprecated" operator into a NOT EXISTS
so that it can be done as a hashed antijoin. On my workstation
this reduces the time for this query from ~ 35ms to ~ 10ms.
Which is not a huge win, but it adds up over buildfarm runs.
In passing, insert forced query breaks (\n\n, in single-user mode)
after each SQL-query file that initdb sources, and after some
relatively new queries in setup_privileges(). This doesn't make
a lot of difference normally, but it will result in briefer, saner
error messages if anything goes wrong.
Msys2's uname -s outputs a string beginning with MSYS rather than
MINGW, as is output by Msys. Allow either in pg_upgrade's test.sh.
Backpatch to all live branches.
When querying pg_database for information about the database being
dumped, look it up using current_database() instead of the value
obtained from PQdb(). When using a connection proxy, the value from
PQdb() might not be the real name of the database.
Unify indnkeys/indnatts/indnkeyatts usage for all versions of the query
to get index information, and remove the indnkeys column from the query
as it is unused.
Author: Marina Polyakova
Noticed by: Peter Eisentraut
Recent gcc can warn about switch-case fall throughs that are not
explicitly labeled as intentional. This seems like a good thing,
so clean up the warnings exposed thereby by labeling all such
cases with comments that gcc will recognize.
In files that already had one or more suitable comments, I generally
matched the existing style of those. Otherwise I went with
/* FALLTHROUGH */, which is one of the spellings approved at the
more-restrictive-than-default level -Wimplicit-fallthrough=4.
(At the default level you can also spell it /* FALL ?THRU */,
and it's not picky about case. What you can't do is include
additional text in the same comment, so some existing comments
containing versions of this aren't good enough.)
Testing with gcc 8.0.1 (Fedora 28's current version), I found that
I also had to put explicit "break"s after elog(ERROR) or ereport(ERROR);
apparently, for this purpose gcc doesn't recognize that those don't
return. That seems like possibly a gcc bug, but it's fine because
in most places we did that anyway; so this amounts to a visit from the
style police.
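For illustration, a minimal sketch of the resulting style (hypothetical
code, not a real call site):

    #include <stdio.h>
    #include <stdlib.h>

    static void
    handle(int kind)
    {
        switch (kind)
        {
            case 0:
                printf("zero\n");
                /* FALLTHROUGH */
            case 1:
                printf("zero or one\n");
                break;
            default:
                fprintf(stderr, "unexpected kind: %d\n", kind);
                exit(1);
                break;      /* not reached, but keeps the checker quiet */
        }
    }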
Discussion: https://postgr.es/m/15083.1525207729@sss.pgh.pa.us
The predecessor test boiled down to "PQserverVersion(NULL) >= 100000",
which is always false. No release includes that, so it could not have
reintroduced CVE-2018-1058. Back-patch to 9.4, like the addition of the
predecessor in commit 8d2814f274.
Discussion: https://postgr.es/m/20180422215551.GB2676194@rfd.leadboat.com
Change things around so that proper quoting of values interpolated into
the BKI data by initdb is the responsibility of initdb, not something
we half-heartedly handle by putting double quotes into the raw BKI data.
(Note: experimentation shows that it still doesn't work to put a double
quote into the initial superuser username, but that's the fault of
inadequate quoting while interpolating the name into SQL scripts;
the BKI aspect of it works fine now.)
Having done that, we can remove the special-case handling of values
that look like "something" from genbki.pl, and instead teach it to
escape double --- and single --- quotes properly. This removes the
nowhere-documented need to treat those specially in the BKI source
data; whatever you write will be passed through unchanged into the
inserted data value, modulo Perl's rules about single-quoted strings.
Add documentation explaining the (pre-existing) handling of backslashes
in the BKI data.
Per an earlier discussion with John Naylor.
Discussion: https://postgr.es/m/CAJVSVGUNao=-Q2-vAN3PYcdF5tnL5JAHwGwzZGuYHtq+Mk_9ng@mail.gmail.com
Teach both base backups and pg_verify_checksums that if a page is new,
it does not have a checksum yet, so it shouldn't be verified.
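For illustration, a simplified stand-in for the server's PageIsNew()
test (hypothetical helper; the real check inspects the page header
rather than scanning the whole page):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical: a page still all zeroes carries no valid checksum. */
    static bool
    page_is_new(const unsigned char *page, size_t pagesize)
    {
        size_t      i;

        for (i = 0; i < pagesize; i++)
            if (page[i] != 0)
                return false;
        return true;
    }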
Noted by Tomas Vondra, review by David Steele.
This option makes no sense when the cluster checksum state cannot be
changed, and should have been removed in the revert.
Author: Daniel Gustafsson
Review: Michael Paquier
This reverts commits d204ef6377,
83454e3c2b and a few more commits thereafter
(complete list at the end) related to MERGE feature.
While the feature was fully functional, with sufficient test coverage and
necessary documentation, it was felt that some parts of the executor and
parse-analyzer could use a different design, and it wasn't possible to do
that in the available time. So it was decided to revert the patch for PG11
and retry again in the future.
Thanks again to all reviewers and bug reporters.
List of commits reverted, in reverse chronological order:
f1464c5380 Improve parse representation for MERGE
ddb4158579 MERGE syntax diagram correction
530e69e59b Allow cpluspluscheck to pass by renaming variable
01b88b4df5 MERGE minor errata
3af7b2b0d4 MERGE fix variable warning in non-assert builds
a5d86181ec MERGE INSERT allows only one VALUES clause
4b2d44031f MERGE post-commit review
4923550c20 Tab completion for MERGE
aa3faa3c7a WITH support in MERGE
83454e3c2b New files for MERGE
d204ef6377 MERGE SQL Command following SQL:2016
Author: Pavan Deolasee
Reviewed-by: Michael Paquier
Previously a warning was printed, but the tool actually kept running
even when running as root. This is something we definitely want to
prevent, but since this means a behavior change, it is not backpatched.
Author: Michael Paquier
In commit 9c0a0de4c, I'd failed to notice that catalog/catalog.h
should also be considered a frontend-unsafe header, because it includes
(and needs) the full form of pg_class.h, not to mention relcache.h.
However, various frontend code was depending on it to get
TABLESPACE_VERSION_DIRECTORY, so refactoring of some sort is called for.
The cleanest answer seems to be to move TABLESPACE_VERSION_DIRECTORY,
as well as the OIDCHARS symbol, to common/relpath.h. Do that, and mop up
inclusions as necessary. (I found that quite a few current users of
catalog/catalog.h don't seem to need it at all anymore, apparently as a
result of the refactorings that created common/relpath.[hc]. And
initdb.c needed it only as a route to pg_class_d.h.)
Discussion: https://postgr.es/m/6629.1523294509@sss.pgh.pa.us
This reverts the backend sides of commit 1fde38beaa.
I have, at least for now, left the pg_verify_checksums tool in place, as
this tool can be very valuable without the rest of the patch as well,
and since it's a read-only tool that only runs when the cluster is down
it should be a lot safer.
Everything of use to frontend code should now appear in the _d.h files,
and making this change frees us from needing to worry about whether the
catalog header files proper are frontend-safe.
Remove src/interfaces/ecpg/ecpglib/pg_type.h entirely, as the previous
commit reduced it to a confusingly-named wrapper around pg_type_d.h.
In passing, make test_rls_hooks.c follow project convention of including
our own files with #include "" not <>.
Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us
Allow the cluster to be optionally init'd with read access for the
group.
This means a relatively non-privileged user can perform a backup of the
cluster without requiring write privileges, which enhances security.
The mode of PGDATA is used to determine whether group permissions are
enabled for directory and file creates. This method was chosen as it's
simple and works well for the various utilities that write into PGDATA.
Changing the mode of PGDATA manually will not automatically change the
mode of all the files contained therein. If the user would like to
enable group access on an existing cluster then changing the mode of all
the existing files will be required. Note that pg_upgrade will
automatically change the mode of all migrated files if the new cluster
is init'd with the -g option.
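For illustration, a minimal sketch of deriving the create modes from
PGDATA's mode (hypothetical function; the exact group-access modes
shown, 0640 and 0750, are an assumption of this sketch):

    #include <sys/stat.h>

    /* Hypothetical: enable group access when PGDATA's mode allows it. */
    static void
    set_modes_from_pgdata(const char *pgdata, int *file_mode, int *dir_mode)
    {
        struct stat st;

        if (stat(pgdata, &st) == 0 && (st.st_mode & S_IRWXG) != 0)
        {
            *file_mode = 0640;      /* u=rw, g=r */
            *dir_mode = 0750;       /* u=rwx, g=rx */
        }
        else
        {
            *file_mode = 0600;
            *dir_mode = 0700;
        }
    }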
Tests are included for the backend and all the utilities which operate
on the PG data directory to ensure that the correct mode is set based on
the data directory permissions.
Author: David Steele <david@pgmasters.net>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
Consolidate directory and file create permissions for tools which work
with the PG data directory by adding a new module (common/file_perm.c)
that contains variables (pg_file_create_mode, pg_dir_create_mode) and
constants to initialize them (0600 for files and 0700 for directories).
Convert mkdir() calls in the backend to MakePGDirectory() if the
original call used default permissions (always the case for regular PG
directories).
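For illustration, a condensed sketch of the idea (not the actual
contents of common/file_perm.c or the backend's wrapper):

    #include <sys/stat.h>
    #include <sys/types.h>

    /* Shared create modes, plus a mkdir wrapper that applies them. */
    int pg_file_create_mode = 0600;     /* u=rw */
    int pg_dir_create_mode = 0700;      /* u=rwx */

    int
    MakePGDirectory(const char *directoryName)
    {
        return mkdir(directoryName, pg_dir_create_mode);
    }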
Add tests to make sure permissions in PGDATA are set correctly by the
tools which modify the PG data directory.
Authors: David Steele <david@pgmasters.net>,
Adam Brightwell <adam.brightwell@crunchydata.com>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
This patch introduces the INCLUDE clause in index definitions. This
clause specifies a list of columns which will be included as a non-key
part of the index. The INCLUDE columns exist solely to allow more
queries to benefit from index-only scans. Also, such columns don't
need to have appropriate operator classes. Expressions are not
supported as INCLUDE columns since they cannot be used in index-only
scans.
Index access methods supporting INCLUDE are indicated by the
amcaninclude flag in IndexAmRoutine. For now, only B-tree indexes
support the INCLUDE clause.
In B-tree indexes, INCLUDE columns are truncated from pivot index
tuples (tuples located in non-leaf pages and high keys). Therefore,
B-tree indexes now might have a variable number of attributes. This
patch also provides a generic facility to support that: pivot tuples
contain the number of their attributes in t_tid.ip_posid, and the free
13th bit of t_info is used to indicate that. This facility will
simplify further support of index suffix truncation.
The changes above are backward-compatible; pg_upgrade doesn't need
special handling of B-tree indexes for that.
Bump catalog version.
Author: Anastasia Lubennikova, with contribution by Alexander Korotkov
and me
Reviewed by: Peter Geoghegan, Tomas Vondra, Antonin Houska, Jeff Janes,
David Rowley, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/flat/56168952.4010101@postgrespro.ru
Update the built-in logical replication system to make use of the
previously added logical decoding for TRUNCATE support. Add the
required truncate callback to pgoutput and a new logical replication
protocol message.
Publications get a new attribute to determine whether to replicate
truncate actions. When updating a publication via pg_dump from an older
version, this is not set, thus preserving the previous behavior.
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
These tests accumulated almost a gigabyte of data during the test,
which was then removed at the end. Instead, remove output that's no
longer needed between the individual tests, to keep the total disk
usage lower.
Author: Michael Banck
This makes it possible to turn checksums on in a live cluster, without
the previous need for dump/reload or logical replication (and to turn
them off).
Enabling checksums starts a background process in the form of a
launcher/worker combination that goes through the entire database and
recalculates checksums on each and every page. Only when all pages have
been checksummed are they fully enabled in the cluster. Any failure of
the process will revert to checksums off, and the process has to be
restarted.
This adds a new WAL record that indicates the state of checksums, so
the process works across replicated clusters.
Authors: Magnus Hagander and Daniel Gustafsson
Review: Tomas Vondra, Michael Banck, Heikki Linnakangas, Andrey Borodin
This reworks how the tests to run are defined. Instead of having to
define all runs for all tests, we define those tests which should pass
(generally using one of the defined broad hashes), add in any which
should be specific for this test, and exclude any specific runs that
shouldn't pass for this test. This ends up removing some 4k+ lines
(more than half the file) but, more importantly, greatly simplifies the
way runs-to-be-tested are defined.
As discussed in the updated comments, for example, take the test which
does CREATE TABLE test_table. That CREATE TABLE should show up in all
'full' runs of pg_dump, except those cases where 'test_table' is
excluded, of course, and that's exactly how the test gets defined now
(modulo a few other related cases, like where we dump only that table,
or we dump the schema it's in, or we exclude the schema it's in):
    like => {
        %full_runs,
        %dump_test_schema_runs,
        only_dump_test_table => 1,
        section_pre_data     => 1, },
    unlike => {
        exclude_dump_test_schema => 1,
        exclude_test_table       => 1, }, },
Next, we no longer expect every run to be listed for every test. If a
run is listed in 'like' (directly or through a hash) then it's a 'like',
unless it's listed in 'unlike' in which case it's an 'unlike'. If it
isn't listed in either, then it's considered an 'unlike' automatically.
Lastly, this changes the code to no longer use like/unlike but rather to
use 'ok()' with 'diag()' which allows much more control over what gets
spit out to the screen. Gone are the days of the entire dump being sent
to the console, now you'll just get a couple of lines for each failing
test which say the test that failed and the run that it failed on.
This covers both the pg_dump TAP tests in src/bin/pg_dump and those in
src/test/modules/test_pg_dump.
Hopefully fix the fact that these checks are unstable, by introducing
the corruption in a separate table from pg_class, and also explicitly
disable autovacuum on those tables. Also make sure PostgreSQL is
stopped while the corruption is introduced to avoid possible caching
effects.
Author: Michael Banck
We were being careless in some places about the order of -L switches in
link command lines, such that -L switches referring to external directories
could come before those referring to directories within the build tree.
This made it possible to accidentally link a system-supplied library, for
example /usr/lib/libpq.so, in place of the one built in the build tree.
Hilarity ensued, the more so the older the system-supplied library is.
To fix, break LDFLAGS into two parts, a sub-variable LDFLAGS_INTERNAL
and the main LDFLAGS variable, both of which are "recursively expanded"
so that they can be incrementally adjusted by different makefiles.
Establish a policy that -L switches for directories in the build tree
must always be added to LDFLAGS_INTERNAL, while -L switches for external
directories must always be added to LDFLAGS. This is sufficient to
ensure a safe search order. For simplicity, we typically also put -l
switches for the respective libraries into those same variables.
(Traditional make usage would have us put -l switches into LIBS, but
cleaning that up is a project for another day, as there's no clear
need for it.)
This turns out to also require separating SHLIB_LINK into two variables,
SHLIB_LINK and SHLIB_LINK_INTERNAL, with a similar rule about which
switches go into which variable. And likewise for PG_LIBS.
Although this change might appear to affect external users of pgxs.mk,
I think it doesn't; they shouldn't have any need to touch the _INTERNAL
variables.
In passing, tweak src/common/Makefile so that the value of CPPFLAGS
recorded in pg_config lacks "-DFRONTEND" and the recorded value of
LDFLAGS lacks "-L../../../src/common". Both of those things are
mistakes, apparently introduced during prior code rearrangements,
as old versions of pg_config don't print them. In general we don't
want anything that's specific to the src/common subdirectory to
appear in those outputs.
This is certainly a bug fix, but in view of the lack of field
complaints, I'm unsure whether it's worth the risk of back-patching.
In any case it seems wise to see what the buildfarm makes of it first.
Discussion: https://postgr.es/m/25214.1522604295@sss.pgh.pa.us
When base backups are run over the replication protocol (for example
using pg_basebackup), verify the checksums of all data blocks if
checksums are enabled. If checksum failures are encountered, log them
as warnings but don't abort the backup.
This becomes the default behaviour in pg_basebackup (provided checksums
are enabled on the server), so add a switch (-k) to disable the checks
if necessary.
Author: Michael Banck
Reviewed-By: Magnus Hagander, David Steele
Discussion: https://postgr.es/m/20180228180856.GE13784@nighthawk.caipicrew.dd-dns.de
Avoid storing the result of PQsocket() in a pgsocket variable; it's
declared as int, and the no-socket test is properly written as "x < 0"
not "x == PGINVALID_SOCKET". This accidentally had no bad effect
because we never got to init_slot() with a bad connection, but it's
still wrong.
Actually, it seems like we should avoid storing the result for a long
period at all. The function's not so expensive that it's worth avoiding,
and the existing coding technique here would fail if anyone tried to
PQreset the connection during the life of the program. Hence, just
re-call PQsocket every time we construct a select(2) mask.
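For illustration, a minimal sketch of building the mask that way
(hypothetical function, not vacuumdb's actual code):

    #include <sys/select.h>
    #include <libpq-fe.h>

    /* Hypothetical: fetch each socket fresh instead of caching it. */
    static void
    wait_on_connections(PGconn **conns, int nconn)
    {
        fd_set      mask;
        int         maxfd = -1;
        int         i;

        FD_ZERO(&mask);
        for (i = 0; i < nconn; i++)
        {
            int sock = PQsocket(conns[i]);  /* declared int: test with < 0 */

            if (sock < 0)
                continue;
            FD_SET(sock, &mask);
            if (sock > maxfd)
                maxfd = sock;
        }
        if (maxfd >= 0)
            (void) select(maxfd + 1, &mask, NULL, NULL, NULL);
    }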
Speaking of select(), GetIdleSlot imagined that it could compute the
select mask once and continue to use it over multiple calls to
select_loop(), which is pretty bogus since that would stomp on the
mask on return. This could only matter if the function's outer loop
iterated more than once, which is unlikely (it'd take some connection
receiving data, but not enough to complete its command). But if it
did happen, we'd acquire "tunnel vision" and stop watching the other
connections for query termination, with the effect of losing parallelism.
Another way in which GetIdleSlot could lose parallelism is that once
PQisBusy returns false, it would lock in on that connection and do
PQgetResult until that returns NULL; in some cases that could result
in blocking. (Perhaps this can never happen in vacuumdb due to the
limited set of commands that it can issue, but I'm not quite sure
of that, and even if true today it's not a future-proof assumption.)
Refactor the code to do that properly, so that it risks blocking in
PQgetResult only in cases where we need to wait anyway.
Another loss-of-parallelism problem, which *is* easily demonstrable,
is that any setup queries issued during prepare_vacuum_command() were
always issued on the last-to-be-created connection, whether or not
that was idle. Long-running operations on that connection thus
prevented issuance of additional operations on the other ones, except
in the limited cases where no preparatory query was needed. Instead,
wait till we've identified a free connection and use that one.
Also, avoid core dump due to undersized malloc request in the case
that no tables are identified to be vacuumed.
The bogus no-socket test was noted by CharSyam, the other problems
identified in my own code review. Back-patch to 9.5 where parallel
vacuumdb was introduced.
Discussion: https://postgr.es/m/CAMrLSE6etb33-192DTEUGkV-TsvEcxtBDxGWG1tgNOMnQHwgDA@mail.gmail.com
Compilation failed for lack of an #ifdef on builds without
pg_strong_random(). Also fix relevant error messages to meet
project style guidelines.
Fabien Coelho, further adjusted by me
Discussion: https://postgr.es/m/32390.1522464534@sss.pgh.pa.us
The target cluster that was rewound needs to perform recovery from
the checkpoint created at failover, which leads it to remove or recreate
some files and directories that may have been copied from the source
cluster. So pg_rewind can skip synchronizing such files and directories,
which reduces the amount of data transferred during a rewind
without changing the usefulness of the operation.
Author: Michael Paquier
Reviewed-by: Anastasia Lubennikova, Stephen Frost and me
Discussion: https://postgr.es/m/20180205071022.GA17337@paquier.xyz
After processing the filemap to build the list of chunks that will be
fetched from the source to rewind the target server, it is possible that
a file which was previously processed is removed from the source. A
simple example of such an occurrence is a WAL segment which gets
recycled on the target in-between. When the filemap is processed, files
not categorized as relation files are first truncated to prepare for
the full copy which is going to be taken from the source, divided into
a set of chunks. However, for a recycled WAL segment, this would result
in a segment which has a zero-byte size. With such an empty file,
post-rewind recovery thinks that records are saved when they actually
are not, because of the truncation which happened when processing the
filemap, resulting in data loss.
In order to fix the problem, make sure that files which are found to
have been removed on the source while receiving chunks of them are
deleted on the target server as well, for consistency.
Back-patch to 9.5 where pg_rewind was added.
Author: Tsunakawa Takayuki
Reviewed-by: Michael Paquier
Reported-by: Tsunakawa Takayuki
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8DAAA2%40G01JPEXMBYT05
If the value of an index expression is unchanged after UPDATE,
allow HOT updates where previously we disallowed them, giving
a significant performance boost in those cases.
Particularly useful for indexes such as JSON->>field where the
JSON value changes but the indexed value does not.
Submitted as "surjective indexes" patch, now enabled by use
of new "recheck_on_update" parameter.
Author: Konstantin Knizhnik
Reviewer: Simon Riggs, with much wordsmithing and some cleanup
Setting the random seed can increase the reproducibility of tests in
some cases. The patch provides three seed sources: time (the default),
a strong random generator (if available), and an unsigned constant.
The seed can be set from the command line or an environment variable.
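For illustration, a minimal sketch of the seed selection (hypothetical
function; the strong-random path is omitted here):

    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical: explicit numeric seed if given, otherwise the time. */
    static unsigned int
    choose_seed(const char *opt)
    {
        if (opt != NULL)
            return (unsigned int) strtoul(opt, NULL, 10);
        return (unsigned int) time(NULL);
    }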
Author: Fabien Coelho
Reviewed by: Chapman Flack
Discussion: https://www.postgresql.org/message-id/flat/20160407082711.q7iq3ykffqxcszkv@alap3.anarazel.de
This adds a new option --wal-segsize (analogous to initdb) that changes
the WAL segment size in pg_control.
Author: Nathan Bossart <bossartn@amazon.com>
We were running out of good single-letter options for some upcoming
pg_resetwal functionality, so add long options to create more
possibilities. Add to pg_controldata as well for symmetry.
based on patch by Bossart, Nathan <bossartn@amazon.com>