Commit Graph

9316 Commits

Author SHA1 Message Date
Thomas Munro e69d644547 Rotate instead of shifting hash join batch number.
Our algorithm for choosing batch numbers turned out not to work
effectively for multi-billion key inner relations.  We would use
more hash bits than we have, and effectively concentrate all tuples
into a smaller number of batches than we intended.  While ideally
we should switch to wider hashes, for now, change the algorithm to
one that effectively gives up bits from the bucket number when we
don't have enough bits.  That means we'll finish up with longer
bucket chains than would be ideal, but that's better than having
batches that don't fit in work_mem and can't be divided.
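
As a rough sketch of the new approach (illustrative only, not the committed
code; the helper names and the uint32 typedef are placeholders):

    /* Rotate a 32-bit hash right by n bits; assumes 0 < n < 32. */
    static inline uint32 rotate_right32(uint32 x, int n)
    {
        return (x >> n) | (x << (32 - n));
    }

    static void get_bucket_and_batch(uint32 hashvalue, int log2_nbuckets,
                                     uint32 nbuckets, uint32 nbatch,
                                     uint32 *bucketno, uint32 *batchno)
    {
        /* Bucket number comes from the low log2(nbuckets) bits of the hash. */
        *bucketno = hashvalue & (nbuckets - 1);

        /*
         * Batch number comes from the hash rotated right by the bucket bits.
         * Rotating rather than shifting means that once bucket bits plus
         * batch bits exceed 32, we wrap around and reuse bucket bits instead
         * of reading zeros, which would concentrate tuples into far fewer
         * batches than intended.
         */
        *batchno = (nbatch > 1) ?
            rotate_right32(hashvalue, log2_nbuckets) & (nbatch - 1) : 0;
    }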

Back-patch to all supported releases.

Author: Thomas Munro
Reviewed-by: Tom Lane, thanks also to Tomas Vondra, Alvaro Herrera, Andres Freund for testing and discussion
Reported-by: James Coleman
Discussion: https://postgr.es/m/16104-dc11ed911f1ab9df%40postgresql.org
2019-12-24 13:05:43 +13:00
Tom Lane 39ebb943de Disallow partition key expressions that return pseudo-types.
This wasn't checked originally, but it should have been, because
in general pseudo-types can't be stored to and retrieved from disk.
Notably, partition bound values of type "record" would not be
interpretable by another session.

In v12 and HEAD, add another flag to CheckAttributeType's repertoire
so that it can produce a specific error message for this case.  That's
infeasible in older branches without an ABI break, so fall back to
a slightly-less-nicely-worded error message in v10 and v11.

Problem noted by Amit Langote, though this patch is not his initial
solution.  Back-patch to v10 where partitioning was introduced.

Discussion: https://postgr.es/m/CA+HiwqFUzjfj9HEsJtYWcr1SgQ_=iCAvQ=O2Sx6aQxoDu4OiHw@mail.gmail.com
2019-12-23 12:53:12 -05:00
Peter Geoghegan 696cc3a0ca Normalize _bt_finish_split() argument names.
Make a function prototype argument's name match the function
definition's argument name.
2019-12-22 20:07:45 -08:00
Peter Eisentraut 8f4fb4c648 Generate pg_config.h from pg_config.h.in on Windows
Previously, the Windows MSVC build generated pg_config.h from a
hard-coded pg_config.h.win32 with some ad hoc postprocessing.  The
pg_config.h.win32 file required manual maintenance and was as a result
frequently out of date.

Instead, have the MSVC build scripts emulate what configure and
config.status do: collect a list of defines and then create
pg_config.h from pg_config.h.in by changing the appropriate lines.

The previous setup was made to support old Windows build systems that
didn't have any text processing capabilities, but the current system
has Perl, so it's not a problem.  pg_config.h.win32 is removed.

In order to try to keep the Windows side of things more up to date in
the future, we now also require that all symbols found in
pg_config.h.in are defined in the MSVC build system.  So if there is a
change in configure that results in a new symbol, an update in
Solution.pm will be required.

The other headers managed by AC_CONFIG_HEADERS in configure, namely
src/include/pg_config_ext.h and
src/interfaces/ecpg/include/ecpg_config.h, get the same treatment, so
this removes even more ad hoc code in the MSVC build scripts.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/1441b834-f434-e0bf-46ed-9c4d5c29c2d4%402ndquadrant.com
2019-12-20 09:15:08 +01:00
Robert Haas 16a4e4aecd Extend the ProcSignal mechanism to support barriers.
A new function EmitProcSignalBarrier() can be used to emit a global
barrier which all backends that participate in the ProcSignal
mechanism must absorb, and a new function WaitForProcSignalBarrier()
can be used to wait until all relevant backends have in fact
absorbed the barrier.

This can be used to coordinate global state changes, such as turning
checksums on while the system is running.

There's no real client of this mechanism yet, although two are
proposed, but an enum has to have at least one element, so this
includes a placeholder type (PROCSIGNAL_BARRIER_PLACEHOLDER) which
should be replaced by the first real client of this mechanism to
get committed.
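
A rough usage sketch for a future client (assuming the two signatures
described above; a real client would define its own barrier type instead
of the placeholder):

    uint64      generation;

    /* Ask every backend attached to ProcSignal to absorb the barrier ... */
    generation = EmitProcSignalBarrier(PROCSIGNAL_BARRIER_PLACEHOLDER);

    /* ... then block until all of them have done so. */
    WaitForProcSignalBarrier(generation);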

Andres Freund and Robert Haas, reviewed by Daniel Gustafsson and,
in earlier versions, by Magnus Hagander.

Discussion: http://postgr.es/m/CA+TgmoZwDk=BguVDVa+qdA6SBKef=PKbaKDQALTC_9qoz1mJqg@mail.gmail.com
2019-12-19 14:56:20 -05:00
Peter Geoghegan 9f83468b35 Remove unneeded "pin scan" nbtree VACUUM code.
The REDO routine for nbtree's xl_btree_vacuum record type hasn't
performed a "pin scan" since commit 3e4b7d87 went in, so clearly there
isn't any point in VACUUM WAL-logging information that won't actually be
used.  Finish off the work of commit 3e4b7d87 (and the closely related
preceding commit 687f2cd7) by removing the code that generates this
unused information.  Also remove the REDO routine code disabled by
commit 3e4b7d87.

Replace the unneeded lastBlockVacuumed field in xl_btree_vacuum with a
new "ndeleted" field.  The new field isn't actually needed right now,
since we could continue to infer the array length from the overall
record length.  However, an upcoming patch to add deduplication to
nbtree needs to add an "items updated" field to xl_btree_vacuum, so we
might as well start being explicit about the number of items now.
(Besides, it doesn't seem like a good idea to leave the xl_btree_vacuum
struct without any fields; the C standard says that that's undefined.)

nbtree VACUUM no longer forces writing a WAL record for the last block
in the index.  Writing out a WAL record with no items for the final
block was supposed to force processing of a lastBlockVacuumed field by a
pin scan.

Bump XLOG_PAGE_MAGIC because xl_btree_vacuum changed.

Discussion: https://postgr.es/m/CAH2-WzmY_mT7UnTzFB5LBQDBkKpdV5UxP3B5bLb7uP%3D%3D6UQJRQ%40mail.gmail.com
2019-12-19 11:35:55 -08:00
Robert Haas 303640199d Fix minor problems with non-exclusive backup cleanup.
The previous coding imagined that it could call before_shmem_exit()
when a non-exclusive backup began and then remove the previously-added
handler by calling cancel_before_shmem_exit() when that backup
ended. However, this only works provided that nothing else in the
system has registered a before_shmem_exit() hook in the interim,
because cancel_before_shmem_exit() is documented to remove a callback
only if it is the latest callback registered. It also only works
if nothing can ERROR out between the time that sessionBackupState
is reset and the time that cancel_before_shmem_exit() is called, which doesn't
seem to be strictly true.

To fix, leave the handler installed for the lifetime of the session,
arrange to install it just once, and teach it to quietly do nothing if
there isn't a non-exclusive backup in process.

This is a bug, but for now I'm not going to back-patch, because the
consequences are minor. It's possible to cause a spurious warning
to be generated, but that doesn't really matter. It's also possible
to trigger an assertion failure, but production builds shouldn't
have assertions enabled.

Patch by me, reviewed by Kyotaro Horiguchi, Michael Paquier (who
preferred a different approach, but got outvoted), Fujii Masao,
and Tom Lane, and with comments by various others.

Discussion: http://postgr.es/m/CA+TgmobMjnyBfNhGTKQEDbqXYE3_rXWpc4CM63fhyerNCes3mA@mail.gmail.com
2019-12-19 09:06:54 -05:00
Tom Lane bf7427bdd3 Minimal portability fix for commit e1551f96e.
Older gcc versions are not happy with having multiple declarations
for the same typedef name (not struct name).  I'm a bit dubious
as to how well-thought-out that patch was at all, but for the moment
just fix it enough so I can get some work done today.

Discussion: https://postgr.es/m/20191218101338.GB325369@paquier.xyz
2019-12-18 10:22:50 -05:00
Michael Paquier e1551f96e6 Refactor attribute mappings used in logical tuple conversion
Tuple conversion support in tupconvert.c is able to convert rowtypes
between two relations, inner and outer, which are logically equivalent
but have a different ordering or even dropped columns (used mainly for
inheritance tree and partitions).  This makes use of attribute mappings,
which are simple arrays made of AttrNumber elements with a length
matching the number of attributes of the outer relation.  The length of
the attribute mapping has been treated as completely independent of the
mapping itself until now, making it easy to pass down an incorrect
mapping length.

This commit refactors the code related to attribute mappings and moves
it into an independent facility called attmap.c, extracted from
tupconvert.c.  This merges the attribute mapping with its length,
avoiding the need to guess what length to use for a mapping, as this
is computed once, when the map is built.

This will avoid mistakes like what has been fixed in dc816e58, which has
used an incorrect mapping length by matching it with the number of
attributes of an inner relation (a child partition) instead of an outer
relation (a partitioned table).
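
The merged representation is essentially a small struct bundling the array
with its length, roughly like this (a sketch; field names are assumptions):

    typedef struct AttrMap
    {
        AttrNumber *attnums;    /* mapped attribute numbers */
        int         maplen;     /* number of entries in attnums */
    } AttrMap;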

Author: Michael Paquier
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20191121042556.GD153437@paquier.xyz
2019-12-18 16:23:02 +09:00
Robert Haas da41d71070 simplehash: Allow for use in frontend code.
Commit 48995040d5 removed the largest
barrier to use of simplehash in frontend code, but there's one more
problem: it uses elog(ERROR, ...) or elog(LOG, ...) in a couple of
places. Work around that by changing those to pg_log_error() and
pg_log_info() when FRONTEND is defined.

Patch by me, reviewed by Andres Freund.

Discussion: http://postgr.es/m/CA+Tgmob8oyh02NrZW=xCScB+5GyJ-jVowE3+TWTUmPF=FsGWTA@mail.gmail.com
2019-12-17 14:14:32 -05:00
Robert Haas 48995040d5 simplehash: Allow use of simplehash without MemoryContext.
If the SH_RAW_ALLOCATOR is defined, it will be used to allocate bytes
for the hash table, and no dependencies on MemoryContext will exist.
This means, in particular, that the SH_CREATE function will not take
a MemoryContext argument.
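
A minimal frontend-side sketch, assuming the usual simplehash.h option
macros and pg_malloc0 as the raw allocator (the entry type and hash
function below are made up for illustration):

    #include "postgres_fe.h"
    #include "common/fe_memutils.h"

    typedef struct WordEntry
    {
        const char *key;        /* hash key */
        int         count;
        char        status;     /* maintained internally by simplehash */
    } WordEntry;

    static uint32
    hash_word(const char *s)
    {
        uint32      h = 0;

        while (*s)
            h = h * 31 + (unsigned char) *s++;
        return h;
    }

    #define SH_PREFIX        wordhash
    #define SH_ELEMENT_TYPE  WordEntry
    #define SH_KEY_TYPE      const char *
    #define SH_KEY           key
    #define SH_HASH_KEY(tb, k)  hash_word(k)
    #define SH_EQUAL(tb, a, b)  (strcmp(a, b) == 0)
    #define SH_SCOPE         static inline
    #define SH_RAW_ALLOCATOR pg_malloc0
    #define SH_DECLARE
    #define SH_DEFINE
    #include "lib/simplehash.h"

    static wordhash_hash *
    make_word_table(void)
    {
        /* With SH_RAW_ALLOCATOR defined, there is no MemoryContext argument. */
        return wordhash_create(64, NULL);
    }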

Patch by me, reviewed by Andres Freund.

Discussion: http://postgr.es/m/CA+Tgmob8oyh02NrZW=xCScB+5GyJ-jVowE3+TWTUmPF=FsGWTA@mail.gmail.com
2019-12-17 14:06:25 -05:00
Robert Haas 7dbfea3c45 Partially deduplicate interrupt handling for background processes.
Where possible, share signal handler code and main loop interrupt
checking. This saves quite a bit of code and should simplify
maintenance, too.

This commit intends not to change the way anything works, even
though that might allow more code to be unified. It does unify
a bunch of individual variables into a ShutdownRequestPending
flag that is now used by a bunch of different process types,
though.

Patch by me, reviewed by Andres Freund and Daniel Gustafsson.

Discussion: http://postgr.es/m/CA+TgmoZwDk=BguVDVa+qdA6SBKef=PKbaKDQALTC_9qoz1mJqg@mail.gmail.com
2019-12-17 13:14:28 -05:00
Peter Geoghegan fcf3b6917b Rename nbtree tuple macros.
Rename two function-style macros, removing the word "inner".  This makes
things more consistent.
2019-12-16 17:49:45 -08:00
Michael Paquier 5d43c3c54d Fix query cancellation handling in psql
The refactoring done in a4fd3aa for query cancellation has messed up
the logic in psql by mixing CancelRequested and cancel_pressed,
breaking for example \watch.  The former is switched to true if a
cancellation request has been attempted and actually succeeded,
while the latter tracks whether a cancellation attempt has been made.

This commit brings the psql code back to a state consistent with what
it was before a4fd3aa, without giving up on the refactoring pieces
introduced.  It should actually be possible to merge both flags further, as
their concepts are close enough; however, note that psql's --single-step
mode relies on cancel_pressed always being set, so this requires more
careful analysis, left for later.

While on it, fix the declarations of CancelRequested (in cancel.c) and
cancel_pressed (in psql) to be volatile sig_atomic_t.  Previously,
both were declared as booleans, which should be fine on modern
platforms, but the C standard recommends the use of sig_atomic_t for
variables used in signal handlers.  Note that since its introduction in
a1792320, CancelRequested declaration was not volatile.

Reported-by: Jeff Janes
Author: Michael Paquier
Discussion: https://postgr.es/m/CAMkU=1zpoUDGKqWKuMWkj7t-bOCaJDx0r=5te_-d0B2HVLABXg@mail.gmail.com
2019-12-17 10:44:25 +09:00
Tom Lane b925a00f4e Fix "force_parallel_mode = regress" to work with ANALYZE + VERBOSE.
force_parallel_mode = regress is supposed to force use of a Gather
node without having any impact on EXPLAIN output.  But it failed to
accomplish that if both ANALYZE and VERBOSE are given, because that
enables per-worker output data that you wouldn't see if the Gather
hadn't been inserted.  Improve the logic so that we suppress the
per-worker data too.

This allows putting the new test case added by commit 5935917ce
back into the originally intended form (cf. 776a2c887, 22864f6e0).
We can also get rid of a kluge in subselect.sql, which previously
had to clean up after force_parallel_mode's failure to do what it
said on the tin.

Discussion: https://postgr.es/m/18445.1576177309@sss.pgh.pa.us
2019-12-16 20:14:35 -05:00
Peter Eisentraut 502423180a Fix build of Perl-using modules of Windows
Commit f14413b684 broke the build of
Perl-using modules on Windows.

Perl might have its own definitions of uid_t and gid_t, so we hide
ours, but then we can't use ours in our header files such as port.h
which don't see the Perl definition.

Hide our definition of getpeereid() on Windows in Perl-using modules,
using the PLPERL_HAVE_UID_GID define.  That means we can't portably use
getpeereid() in such modules right now, but there is no need anyway.
2019-12-16 11:48:01 +01:00
Peter Eisentraut f14413b684 Sort out getpeereid() and peer auth handling on Windows
The getpeereid() uses have so far been protected by HAVE_UNIX_SOCKETS,
so they didn't ever care about Windows support.  But in anticipation
of Unix-domain socket support on Windows, that needs to be handled
differently.

Windows doesn't support getpeereid() at this time, so we use the
existing not-supported code path.  We let configure do its usual thing
of picking up the replacement from libpgport, instead of the custom
overrides that it was doing before.

But then Windows doesn't have struct passwd, so this patch sprinkles
some additional #ifdef WIN32 around to make it work.  This is similar
to existing code that deals with this issue.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/5974caea-1267-7708-40f2-6009a9d653b0@2ndquadrant.com
2019-12-16 09:36:08 +01:00
Tom Lane 6ea364e7e7 Prevent overly-aggressive collapsing of joins to RTE_RESULT relations.
The RTE_RESULT simplification logic added by commit 4be058fe9 had a
flaw: it would collapse out a RTE_RESULT that is due to compute a
PlaceHolderVar, and reassign the PHV to the parent join level, even if
another input relation of the join contained a lateral reference to
the PHV.  That can't work because the PHV would be computed too late.
In practice it led to failures of internal sanity checks later in
planning (either assertion failures or errors such as "failed to
construct the join relation").

To fix, add code to check for the presence of such PHVs in relevant
portions of the query tree.  Notably, this required refactoring
range_table_walker so that a caller could ask to walk individual RTEs
not the whole list.  (It might be a good idea to refactor
range_table_mutator in the same way, if only to keep those functions
looking similar; but I didn't do so here as it wasn't necessary for
the bug fix.)

This exercise also taught me that find_dependent_phvs(), as it stood,
could only safely be used on the entire Query, not on subtrees.
Adjust its API to reflect that; which in passing allows it to have
a fast path for the common case of no PHVs anywhere.

Per report from Will Leinweber.  Back-patch to v12 where the bug
was introduced.

Discussion: https://postgr.es/m/CALLb-4xJMd4GZt2YCecMC95H-PafuWNKcmps4HLRx2NHNBfB4g@mail.gmail.com
2019-12-14 13:49:15 -05:00
Tom Lane 5935917ce5 Allow executor startup pruning to prune all child nodes.
Previously, if the startup pruning logic proved that all child nodes
of an Append or MergeAppend could be pruned, we still kept one, just
to keep EXPLAIN from failing.  The previous commit removed the
ruleutils.c limitation that required this kluge, so drop it.  That
results in less-confusing EXPLAIN output, as per a complaint from
Yuzuko Hosoya.

David Rowley

Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp
2019-12-11 17:05:30 -05:00
Tom Lane 6ef77cf46e Further adjust EXPLAIN's choices of table alias names.
This patch causes EXPLAIN to always assign a separate table alias to the
parent RTE of an append relation (inheritance set); before, such RTEs
were ignored if not actually scanned by the plan.  Since the child RTEs
now always have that same alias to start with (cf. commit 55a1954da),
the net effect is that the parent RTE usually gets the alias used or
implied by the query text, and the children all get that alias with "_N"
appended.  (The exception to "usually" is if there are duplicate aliases
in different subtrees of the original query; then some of those original
RTEs will also have "_N" appended.)

This results in more uniform output for partitioned-table plans than
we had before: the partitioned table itself gets the original alias,
and all child tables have aliases with "_N", rather than the previous
behavior where one of the children would get an alias without "_N".

The reason for giving the parent RTE an alias, even if it isn't scanned
by the plan, is that we now use the parent's alias to qualify Vars that
refer to an appendrel output column and appear above the Append or
MergeAppend that computes the appendrel.  But below the append, Vars
refer to some one of the child relations, and are displayed that way.
This seems clearer than the old behavior where a Var that could carry
values from any child relation was displayed as if it referred to only
one of them.

While at it, change ruleutils.c so that the code paths used by EXPLAIN
deal in Plan trees not PlanState trees.  This effectively reverts a
decision made in commit 1cc29fe7c, which seemed like a good idea at
the time to make ruleutils.c consistent with explain.c.  However,
it's problematic because we'd really like to allow executor startup
pruning to remove all the children of an append node when possible,
leaving no child PlanState to resolve Vars against.  (That's not done
here, but will be in the next patch.)  This requires different handling
of subplans and initplans than before, but is otherwise a pretty
straightforward change.

Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp
2019-12-11 17:05:18 -05:00
Alvaro Herrera ba79cb5dc8 Emit parameter values during query bind/execute errors
This makes such log entries more useful, since the cause of the error
can be dependent on the parameter values.

Author: Alexey Bashtanov, Álvaro Herrera
Discussion: https://postgr.es/m/0146a67b-a22a-0519-9082-bc29756b93a2@imap.cc
Reviewed-by: Peter Eisentraut, Andres Freund, Tom Lane
2019-12-11 18:03:35 -03:00
Peter Eisentraut 877b61e9ce Cosmetic cleaning of pg_config.h.win32
Clean up some comments (some generated by old versions of autoconf)
and some random ordering differences, so it's easier to diff this
against the default pg_config.h or pg_config.h.in.  Remove LOCALEDIR
handling from pg_config.h.win32 altogether because it's already in
pg_config_paths.h.
2019-12-10 21:24:25 +01:00
Alvaro Herrera 6cafde1bd4 Add backend-only appendStringInfoStringQuoted
This provides a mechanism to emit literal values in informative
messages, such as query parameters.  The new code is more complex than
what it replaces, primarily because it wants to be more efficient.
It also has the (currently unused) additional optional capability of
specifying a maximum size to print.

The new function lives outside common/stringinfo.c so that frontend users
of that file need not pull in unnecessary multibyte-encoding support
code.
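
Its use is roughly as follows (a sketch; the signature shown and the
buf/paramno/pstring/maxlen variables are assumptions for illustration):

    /* Append a quoted, possibly length-limited, literal value. */
    appendStringInfo(&buf, "$%d = ", paramno);
    appendStringInfoStringQuoted(&buf, pstring, maxlen);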

Author: Álvaro Herrera and Alexey Bashtanov, after a suggestion from Andres Freund
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20190920203905.xkv5udsd5dxfs6tr@alap3.anarazel.de
2019-12-10 17:12:56 -03:00
Jeff Davis 30d47723fd Fix comments in execGrouping.c
Commit 5dfc1981 missed updating some comments.

Also, fix a comment typo found in passing.

Author: Jeff Davis
Discussion: https://postgr.es/m/9723131d247b919f94699152647fa87ee0bc02c2.camel%40j-davis.com
2019-12-06 11:49:59 -08:00
Michael Paquier 7d0bcb0477 Fix handling of OpenSSL's SSL_clear_options
This function is supported down to OpenSSL 0.9.8, which is the oldest
version supported since 593d4e4 (from Postgres 10 onwards), and is used
since e3bdb2d (from 11 onwards).  It is defined as a macro from OpenSSL
0.9.8 to 1.0.2, and as a function in 1.1.0 and newer versions.  However,
the existing configure check only works for functions.  So, even though
the code would compile, configure fails to detect the macro,
causing it to be ignored when compiling the code with OpenSSL from 0.9.8
to 1.0.2.

The code needs a configure check as per a364dfa, which has fixed a
compilation issue with a past version of LibreSSL in NetBSD 5.1.  On
HEAD, just remove the configure check as the last release of NetBSD 5 is
from 2014 (and we have no more buildfarm members for it).  In 11 and 12,
improve the configure logic so that both macros and functions are
correctly detected.  This keeps NetBSD 5 working on already-released
branches, but not on 13 onwards.

The patch for HEAD is from me, and Daniel has written the version to use
for the back-branches.

Author: Michael Paquier, Daniel Gustafsson
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20191205083252.GE5064@paquier.xyz
Discussion: https://postgr.es/m/98F7F99E-1129-41D8-B86B-FE3B1E286881@yesql.se
Backpatch-through: 11
2019-12-06 15:13:55 +09:00
Michael Paquier 28f4bba66b Remove configure check for OpenSSL's SSL_get_current_compression()
This function has been added in OpenSSL 0.9.8, which is the oldest
version supported on HEAD, so checking for it at configure time is
useless.  Neither the frontend nor the backend code even bothered to use
it.

Reported-by: Daniel Gustafsson
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Tom Lane
Discussion: https://postgr.es/m/20191205083252.GE5064@paquier.xyz
Discussion: https://postgr.es/m/98F7F99E-1129-41D8-B86B-FE3B1E286881@yesql.se
2019-12-06 09:41:32 +09:00
Robert Haas 42f362967d Minor comment improvements for instrumentation.h
Remove a duplicated word. Add "of" or "# of" in a couple places
for clarity and consistency. Start comments with a lower case
letter as we do elsewhere in this file.

Rafia Sabih
2019-12-05 07:56:29 -05:00
Alvaro Herrera 0b9466fce2 Offer pnstrdup to frontend code
We already had it in the backend.  Frontend code can also use it now.

Discussion: https://postgr.es/m/20191204144021.GA17976@alvherre.pgsql
2019-12-04 19:36:06 -03:00
Michael Paquier 9989d37d1c Remove XLogFileNameP() from the tree
XLogFileNameP() is a wrapper routine able to build a palloc'd string for
a WAL segment name, which is used for error string generation.  There
were several code paths where it gets called in a critical section,
where memory allocation is not allowed.  This results in triggering
an assertion failure instead of generating the intended error message.

Another, more annoying, problem is that if the allocation to generate
the WAL segment name fails on OOM, then the failure would be escalated
to a PANIC.

This removes the routine, and all its callers are replaced with logic
using a fixed-size buffer.  This way, all the existing mistakes are
fixed and future ones are prevented.
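
The replacement pattern looks roughly like this (an illustrative sketch,
assuming a caller that has the timeline and segment number at hand):

    char        xlogfname[MAXFNAMELEN];

    /* Build the segment name into a stack buffer; no palloc needed. */
    XLogFileName(xlogfname, tli, segno, wal_segment_size);
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not read from log segment %s: %m", xlogfname)));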

Author: Masahiko Sawada
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CA+fd4k5gC9H4uoWMLg9K_QfNrnkkdEw+-AFveob9YX7z8JnKTA@mail.gmail.com
2019-12-03 15:06:04 +09:00
Tom Lane ce76c0ba53 Add a reverse-translation column number array to struct AppendRelInfo.
This provides for cheaper mapping of child columns back to parent
columns.  The one existing use-case in examine_simple_variable()
would hardly justify this by itself; but an upcoming bug fix will
make use of this array in a mainstream code path, and it seems
likely that we'll find other uses for it as we continue to build
out the partitioning infrastructure.

Discussion: https://postgr.es/m/12424.1575168015@sss.pgh.pa.us
2019-12-02 18:05:29 -05:00
Michael Paquier a4fd3aa719 Refactor query cancellation code into src/fe_utils/
Originally, this code was duplicated in src/bin/psql/ and
src/bin/scripts/, but it can be useful for other frontend applications,
like pgbench.  This refactoring offers the possibility to setup a custom
callback which would get called in the signal handler for SIGINT or when
the interruption console events happen on Windows.

Author: Fabien Coelho, with contributions from Michael Paquier
Reviewed-by: Álvaro Herrera, Ibrar Ahmed
Discussion: https://postgr.es/m/alpine.DEB.2.21.1910311939430.27369@lancre
2019-12-02 11:18:56 +09:00
Tom Lane c35b714caf Fix misbehavior with expression indexes on ON COMMIT DELETE ROWS tables.
We implement ON COMMIT DELETE ROWS by truncating tables marked that
way, which requires also truncating/rebuilding their indexes.  But
RelationTruncateIndexes asks the relcache for up-to-date copies of any
index expressions, which may cause execution of eval_const_expressions
on them, which can result in actual execution of subexpressions.
This is a bad thing to have happening during ON COMMIT.  Manuel Rigger
reported that use of a SQL function resulted in crashes due to
expectations that ActiveSnapshot would be set, which it isn't.
The most obvious fix perhaps would be to push a snapshot during
PreCommit_on_commit_actions, but I think that would just open the door
to more problems: CommitTransaction explicitly expects that no
user-defined code can be running at this point.

Fortunately, since we know that no tuples exist to be indexed, there
seems no need to use the real index expressions or predicates during
RelationTruncateIndexes.  We can set up dummy index expressions
instead (we do need something that will expose the right data type,
as there are places that build index tupdescs based on this), and
just ignore predicates and exclusion constraints.

In a green field it'd likely be better to reimplement ON COMMIT DELETE
ROWS using the same "init fork" infrastructure used for unlogged
relations.  That seems impractical without catalog changes though,
and even without that it'd be too big a change to back-patch.
So for now do it like this.

Per private report from Manuel Rigger.  This has been broken forever,
so back-patch to all supported branches.
2019-12-01 13:09:26 -05:00
Tomas Vondra c676e659b2 Fix choose_best_statistics to check clauses individually
When picking the best extended statistics object for a list of clauses,
it's not enough to look at attnums extracted from the clause list as a
whole. Consider for example this query with OR clauses:

   SELECT * FROM t WHERE (t.a = 1) OR (t.b = 1) OR (t.c = 1)

with a statistics object defined on columns (a,b). Relying on attnums extracted
from the whole OR clause, we'd consider the statistics object usable. That does
not work, as we see the conditions as a single OR-clause referencing an
attribute not covered by the statistics object, leading to an empty list of
clauses to be estimated using the statistics and an assert failure.

This changes choose_best_statistics to check which clauses are actually
covered, and to only use attributes from the fully covered ones. For the
previous example this means the statistics object will not be considered
compatible with the OR-clause.

Backpatch to 12, where MCVs were introduced. The issue does not affect
older versions because functional dependencies don't handle OR clauses.

Author: Tomas Vondra
Reviewed-by: Dean Rasheed
Reported-By: Manuel Rigger
Discussion: https://postgr.es/m/CA+u7OA7H5rcE2=8f263w4NZD6ipO_XOrYB816nuLXbmSTH9pQQ@mail.gmail.com
Backpatch-through: 12
2019-11-28 22:20:45 +01:00
Peter Eisentraut 4513d8b07b Move configure --disable-float8-byval to pg_config_manual.h
This build option was once useful to maintain compatibility with
version-0 functions, but those are no longer supported, so this option
is no longer useful for end users.  We keep the option available to
developers in pg_config_manual.h so that it is easy to test the
pass-by-reference code paths without having to fire up a 32-bit
machine.

Discussion: https://www.postgresql.org/message-id/flat/f3e1e576-2749-bbd7-2d57-3f9dcf75255a@2ndquadrant.com
2019-11-27 12:27:20 +01:00
Alvaro Herrera 0dc8ead463 Refactor WAL file-reading code into WALRead()
XLogReader, walsender and pg_waldump all had their own routines to read
data from WAL files to memory, with slightly different approaches
according to the particular conditions of each environment.  There's a
lot of commonality, so we can refactor that into a single routine
WALRead in XLogReader, and move the differences to a separate (simpler)
callback that just opens the next WAL-segment.  This results in a
clearer (ahem) code flow.

The error reporting needs are covered by filling in a new error-info
struct, WALReadError, and it's the caller's responsibility to act on it.
The backend has WALReadRaiseError() to do so.

We no longer ever need to seek in this interface; switch to using
pg_pread().

Author: Antonin Houska, with contributions from Álvaro Herrera
Reviewed-by: Michaël Paquier, Kyotaro Horiguchi
Discussion: https://postgr.es/m/14984.1554998742@spoje.net
2019-11-25 15:04:54 -03:00
Amit Kapila e0487223ec Make the order of the header file includes consistent.
Similar to commits 14aec03502, 7e735035f2 and dddf4cdc33, this commit
makes the order of header file inclusion consistent in more places.

Author: Vignesh C
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com
2019-11-25 08:08:57 +05:30
Michael Paquier 4cb658af70 Refactor reloption handling for index AMs in-core
This reworks the reloption parsing and build of a couple of index AMs by
creating new structures for each index AM's options.  This split was
already done for BRIN, GIN and GiST (which actually has a fillfactor
parameter), but not for hash, B-tree and SPGiST which relied on
StdRdOptions due to an overlap with the default option set.

This saves a couple of bytes for rd_options in each relcache entry with
indexes making use of relation options, and brings more consistency
between all index AMs.  While on it, add a couple of AssertMacro() calls
to make sure that utility macros to grab values of reloptions are used
with the expected index AM.

Author: Nikolay Shaplov
Reviewed-by: Amit Langote, Michael Paquier, Álvaro Herrera, Dent John
Discussion: https://postgr.es/m/4127670.gFlpRb6XCm@x200m
2019-11-25 09:40:53 +09:00
Tom Lane 8b7ae5a82d Stabilize the results of pg_notification_queue_usage().
This function wasn't touched in commit 51004c717, but that turns out
to be a bad idea, because its results now include any dead space
that exists in the NOTIFY queue on account of our being lazy about
advancing the queue tail.  Notably, the isolation tests now fail
if run twice without a server restart between, because async-notify's
first test of the function will already show a positive value.
It seems likely that end users would be equally unhappy about the
result's instability.  To fix, just make the function call
asyncQueueAdvanceTail before computing its result.  That should end
in producing the same value as before, and it's hard to believe that
there's any practical use-case where pg_notification_queue_usage()
is called so often as to create a performance degradation, especially
compared to what we did before.

Out of paranoia, also mark this function parallel-restricted (it
was volatile, but parallel-safe by default, before).  Although the
code seems to work fine when run in a parallel worker, that's outside
the design scope of async.c, and it's a bit scary to have intentional
side-effects happening in a parallel worker.  There seems no plausible
use-case where it'd be important to try to parallelize this, so let's
not take any risk of introducing new bugs.

In passing, re-pgindent async.c and run reformat-dat-files on
pg_proc.dat, just because I'm a neatnik.

Discussion: https://postgr.es/m/13881.1574557302@sss.pgh.pa.us
2019-11-24 14:09:33 -05:00
Joe Conway f7a2002e82 Add object TRUNCATE hook
All operations with acl permissions checks should have a corresponding hook
so that, for example, mandatory access control (MAC) may be enforced by an
extension. The command TRUNCATE is missing this hook, so add it. Patch by
Yuli Khodorkovskiy with some editorialization by me. Based on the discussion,
this is not back-patched. A separate patch will exercise the hook in the sepgsql
extension.

Author: Yuli Khodorkovskiy
Reviewed-by: Joe Conway
Discussion: https://postgr.es/m/CAFL5wJcomybj1Xdw7qWmPJRpGuFukKgNrDb6uVBaCMgYS9dkaA%40mail.gmail.com
2019-11-23 10:39:20 -05:00
Tom Lane 4a0aab14dc Defend against self-referential views in relation_is_updatable().
While a self-referential view doesn't actually work, it's possible
to create one, and it turns out that this breaks some of the
information_schema views.  Those views call relation_is_updatable(),
which neglected to consider the hazards of being recursive.  In
older PG versions you get a "stack depth limit exceeded" error,
but since v10 it'd recurse to the point of stack overrun and crash,
because commit a4c35ea1c took out the expression_returns_set() call
that was incidentally checking the stack depth.

Since this function is only used by information_schema views, it
seems like it'd be better to return "not updatable" than suffer
an error.  Hence, add tracking of what views we're examining,
in just the same way that the nearby fireRIRrules() code detects
self-referential views.  I added a check_stack_depth() call too,
just to be defensive.

Per private report from Manuel Rigger.  Back-patch to all
supported versions.
2019-11-21 16:21:43 -05:00
Peter Eisentraut 2e4db241bf Remove configure --disable-float4-byval
This build option was only useful to maintain compatibility for
version-0 functions, but those are no longer supported, so this option
can be removed.

float4 is now always pass-by-value; the pass-by-reference code path is
completely removed.

Discussion: https://www.postgresql.org/message-id/flat/f3e1e576-2749-bbd7-2d57-3f9dcf75255a@2ndquadrant.com
2019-11-21 18:29:21 +01:00
Fujii Masao 43a54a3bcc Bump WAL version.
Oversight in commit e6d8069522. Since that commit changed the format of
XLOG_DBASE_DROP WAL record, XLOG_PAGE_MAGIC needs to be bumped.

Spotted by Michael Paquier
2019-11-21 22:17:28 +09:00
Fujii Masao e6d8069522 Make DROP DATABASE command generate less WAL records.
Previously DROP DATABASE generated as many XLOG_DBASE_DROP WAL records
as the number of tablespaces used by the database being dropped. This caused
shared_buffers to be scanned once per tablespace during recovery, because
WAL replay of each XLOG_DBASE_DROP record needs that full scan. This could
make recovery take longer, especially when shared_buffers is large.

This commit changes DROP DATABASE so that it generates only one
XLOG_DBASE_DROP record and registers the information of all the tablespaces
in it. WAL replay of the XLOG_DBASE_DROP record then needs to scan
shared_buffers only once, which may improve recovery performance.

Author: Fujii Masao
Reviewed-by: Kirk Jamison, Simon Riggs
Discussion: https://postgr.es/m/CAHGQGwF8YwNH0ZaL+2wjZPkj+ji9UhC+Z4ScnG97WKtVY5L9iw@mail.gmail.com
2019-11-21 21:10:37 +09:00
Amit Kapila 9290ad198b Track statistics for spilling of changes from ReorderBuffer.
This adds the statistics about transactions spilled to disk from
ReorderBuffer.  Users can query the pg_stat_replication view to check
these stats.

Author: Tomas Vondra, with bug-fixes and minor changes by Dilip Kumar
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
2019-11-21 08:06:51 +05:30
Tom Lane 2ddedcafca Reduce match_pattern_prefix()'s dependencies on index opfamilies.
Historically, the planner's LIKE/regex index optimizations were only
carried out for specific index opfamilies.  That's never been a great
idea from the standpoint of extensibility, but it didn't matter so
much as long as we had no practical way to extend such behaviors anyway.
With the addition of planner support functions, and in view of ongoing
work to support additional table and index AMs, it seems like a good
time to relax this.

Hence, recast the decisions in match_pattern_prefix() so that rather
than decide which operators to generate by looking at what the index
opfamily contains, we decide which operators to generate a-priori
and then see if the opfamily supports them.  This is much more
defensible from a semantic standpoint anyway, since we know the
semantics of the chosen operators precisely, and we only need to
assume that the opfamily correctly implements operators it claims
to support.

The existing "pattern" opfamilies put a crimp in this approach, since
we need to select the pattern operators if we want those to work.
So we still have to special-case those opfamilies.  But that seems
all right, since in view of the addition of collations, the pattern
opfamilies seem like a legacy hack that nobody will be building on.

The only immediate effect of this change, so far as the core code is
concerned, is that anchored LIKE/regex patterns can be mapped onto
BRIN index searches, and exact-match patterns can be mapped onto hash
indexes, not only btree and spgist indexes as before.  That's not a
terribly exciting result, but it does fix an omission mentioned in
the ancient comments here.

Note: no catversion bump, even though this touches pg_operator.dat,
because it's only adding OID macros not changing the contents of
postgres.bki.

Per consideration of a report from Manuel Rigger.

Discussion: https://postgr.es/m/CA+u7OA7nnGYy8rY0vdTe811NuA+Frr9nbcBO9u2Z+JxqNaud+g@mail.gmail.com
2019-11-20 14:13:04 -05:00
Michael Paquier f9cb8bd3f2 Fix comment in xact.h
xl_xact_relfilenodes refers to a number of relations, not XIDs, whose
relfilenodes are processed.

Author: Yu Kimura
Discussion: https://postgr.es/m/a6ba6cf6bd0c990e019f008bae83437f@oss.nttdata.com
2019-11-20 17:48:31 +09:00
Amit Kapila cec2edfa78 Add logical_decoding_work_mem to limit ReorderBuffer memory usage.
Instead of deciding to serialize a transaction merely based on the
number of changes in that xact (toplevel or subxact), this makes
the decisions based on amount of memory consumed by the changes.

The memory limit is defined by a new logical_decoding_work_mem GUC,
so for example we can do this

    SET logical_decoding_work_mem = '128kB'

to reduce the memory usage of walsenders, or set a higher value to
reduce disk writes. The minimum value is 64kB.

When adding a change to a transaction, we account for the size in
two places. Firstly, in the ReorderBuffer, which is then used to
decide if we reached the total memory limit. And secondly in the
transaction the change belongs to, so that we can pick the largest
transaction to evict (and serialize to disk).

We still use max_changes_in_memory when loading changes serialized
to disk. The trouble is we can't use the memory limit directly, as
there might be multiple serialized subxacts; we need to read all of
them, but we don't know how many there are (or which subxact to
read first).

We do not serialize the ReorderBufferTXN entries, so if there is a
transaction with many subxacts, most memory may be in this type of
objects. Those records are not included in the memory accounting.

We also do not account for INTERNAL_TUPLECID changes, which are
kept in a separate list and not evicted from memory. Transactions
with many CTID changes may consume significant amounts of memory,
but we can't really do much about that.

The current eviction algorithm is very simple - the transaction is
picked merely by size, while it might be useful to also consider age
(LSN) of the changes for example. With the new Generational memory
allocator, evicting the oldest changes would make it more likely
the memory gets actually pfreed.

The logical_decoding_work_mem can be set in postgresql.conf, in which
case it serves as the default for all publishers on that instance.

Author: Tomas Vondra, with changes by Dilip Kumar and Amit Kapila
Reviewed-by: Dilip Kumar and Amit Kapila
Tested-By: Vignesh C
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
2019-11-19 07:32:36 +05:30
Michael Paquier 50d22de932 Cleanup code in reloptions.h regarding reloption handling
Since ba748f7, reloptions.h has included a set of macros to handle reloption
types in a way similar to how parseRelOptions() works.  They have never
been used in the core code, and we now have simpler methods to parse
and fill in rd_options for a given relation depending on its relkind, so
remove this interface to simplify things.

Per discussion between Amit Langote, Álvaro Herrera and me.

Discussion: https://postgr.es/m/CA+HiwqE6zbNO92az6pp5GiTw4tr-9rfCE0t84whQSP+YwSKjMQ@mail.gmail.com
2019-11-14 13:59:59 +09:00
Michael Paquier 1bbd608fda Split handling of reloptions for partitioned tables
Partitioned tables do not have relation options yet, but, similarly to
what's done for views which have their own parsing table, it could make
sense to introduce new parameters for some of the existing default ones
like fillfactor, autovacuum, etc.  Splitting things has the advantage of
making the information stored in rd_options include only the necessary
information, reducing the amount of memory used for a relcache entry
with partitioned tables if new reloptions are introduced at this level.

Author:  Nikolay Shaplov
Reviewed-by: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/1627387.Qykg9O6zpu@x200m
2019-11-14 12:34:28 +09:00
Fujii Masao 7b8a899bde Make pg_waldump report more detailed information about the PREPARE TRANSACTION record.
This commit changes xact_desc() so that it reports detailed information about
the PREPARE TRANSACTION record, such as the GID (global transaction identifier),
the timestamp of the prepare, delete-on-abort/commit relations,
XIDs of subtransactions, and invalidation messages. These are helpful
when diagnosing 2PC-related troubles.

Author: Fujii Masao
Reviewed-by: Michael Paquier, Andrey Lepikhov, Kyotaro Horiguchi, Julien Rouhaud, Alvaro Herrera
Discussion: https://postgr.es/m/CAHGQGwEvhASad4JJnCv=0dW2TJypZgW_Vpb-oZik2a3utCqcrA@mail.gmail.com
2019-11-13 16:59:17 +09:00
Amit Kapila 1379fd537f Introduce the 'force' option for the Drop Database command.
This new option terminates the other sessions connected to the target
database and then drop it.  To terminate other sessions, the current user
must have desired permissions (same as pg_terminate_backend()).  We don't
allow to terminate the sessions if prepared transactions, active logical
replication slots or subscriptions are present in the target database.

Author: Pavel Stehule with changes by me
Reviewed-by: Dilip Kumar, Vignesh C, Ibrar Ahmed, Anthony Nowocien,
Ryan Lambert and Amit Kapila
Discussion: https://postgr.es/m/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com
2019-11-13 08:25:33 +05:30
Alvaro Herrera 5c46e7d82e pg_stat_{ssl,gssapi}: Show only processes with connections
It is pointless to show in those views auxiliary processes that don't
open network connections.

A small incompatibility is that anybody joining pg_stat_activity and
pg_stat_ssl/pg_stat_gssapi will have to use a left join if they want to
see such auxiliary processes.

Author: Euler Taveira
Discussion: https://postgr.es/m/20190904151535.GA29108@alvherre.pgsql
2019-11-12 18:48:41 -03:00
Tom Lane 7a0574b50e Fix ecpglib.h to declare bool consistently with c.h.
This completes the task begun in commit 1408d5d86, to synchronize
ECPG's exported definitions with the definition of bool used by
c.h (and, therefore, the one actually in use in the ECPG library).
On practically all modern platforms, ecpglib.h will now just
include <stdbool.h>, which should surprise nobody anymore.
That removes a header-inclusion-order hazard for ECPG clients,
who previously might get build failures or unexpected behavior
depending on whether they'd included <stdbool.h> themselves,
and if so, whether before or after ecpglib.h.

On platforms where sizeof(_Bool) is not 1 (only old PPC-based
Mac systems, as far as I know), things are still messy, as
inclusion of <stdbool.h> could still break ECPG client code.
There doesn't seem to be any clean fix for that, and given the
probably-negligible population of users who would care anymore,
it's not clear we should go far out of our way to cope with it.
This change at least fixes some header-inclusion-order hazards
for our own code, since c.h and ecpglib.h previously disagreed
on whether bool should be char or unsigned char.

To implement this with minimal invasion of ECPG client namespace,
move the choice of whether to rely on <stdbool.h> into configure,
and have it export a configuration symbol PG_USE_STDBOOL.

ecpglib.h no longer exports definitions for TRUE and FALSE,
only their lowercase brethren.  We could undo that if we get
push-back about it.

Ideally we'd back-patch this as far as v11, which is where c.h
started to rely on <stdbool.h>.  But the odds of creating problems
for formerly-working ECPG client code seem about as large as the
odds of fixing any non-working cases, so we'll just do this in HEAD.

Discussion: https://postgr.es/m/CAA4eK1LmaKO7Du9M9Lo=kxGU8sB6aL8fa3sF6z6d5yYYVe3BuQ@mail.gmail.com
2019-11-12 13:00:04 -05:00
Peter Eisentraut bbaa823272 Rerun autoheader
This puts pg_config.h.in content back into the "correct" order.
2019-11-11 09:50:07 +01:00
Andres Freund aae50236e4 Pass ItemPointer not HeapTuple to IndexBuildCallback.
Not all AMs use HeapTuples internally, making it inconvenient to pass
a HeapTuple. As the index callbacks really only need the TID, not the
full tuple, modify the callback to take only an ItemPointer.
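
After this change the callback signature looks roughly like this (a sketch;
parameter names are approximate):

    typedef void (*IndexBuildCallback) (Relation index,
                                        ItemPointer tid,    /* was: HeapTuple htup */
                                        Datum *values,
                                        bool *isnull,
                                        bool tupleIsAlive,
                                        void *state);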

Author: Ashwin Agrawal
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CALfoeis6=8ehuR=VNtHvj3z16cYfCwPdTcpaxU+sfSUJ5QgR3g@mail.gmail.com
2019-11-08 11:49:29 -08:00
Alvaro Herrera 71a8a4f6e3 Add backtrace support for error reporting
Add some support for automatically showing backtraces in certain error
situations in the server.  Backtraces are shown on assertion failure;
also, a new setting backtrace_functions can be set to a list of C
function names, and all ereport()s and elog()s from the mentioned
functions will have backtraces generated.  Finally, the function
errbacktrace() can be manually added to an ereport() call to generate a
backtrace for that call.
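
For example, a backtrace can be attached to one specific report like this
(hypothetical call site):

    ereport(ERROR,
            (errcode(ERRCODE_INTERNAL_ERROR),
             errmsg("unexpected cache state"),
             errbacktrace()));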

Authors: Peter Eisentraut, Álvaro Herrera
Discussion: https://postgr.es/m//5f48cb47-bf1e-05b6-7aae-3bf2cd01586d@2ndquadrant.com
Discussion: https://postgr.es/m/CAMsr+YGL+yfWE=JvbUbnpWtrRZNey7hJ07+zT4bYJdVp4Szdrg@mail.gmail.com
2019-11-08 15:44:20 -03:00
Tom Lane a7145f6bc8 Fix integer-overflow edge case detection in interval_mul and pgbench.
This patch adopts the overflow check logic introduced by commit cbdb8b4c0
into two more places.  interval_mul() failed to notice if it computed a
new microseconds value that was one more than INT64_MAX, and pgbench's
double-to-int64 logic had the same sorts of edge-case problems that
cbdb8b4c0 fixed in the core code.

To make this easier to get right in future, put the guts of the checks
into new macros in c.h, and add commentary about how to use the macros
correctly.

Back-patch to all supported branches, as we did with the previous fix.

Yuya Watari

Discussion: https://postgr.es/m/CAJ2pMkbkkFw2hb9Qb1Zj8d06EhWAQXFLy73St4qWv6aX=vqnjw@mail.gmail.com
2019-11-07 11:22:58 -05:00
Peter Eisentraut effa40281b Remove HAVE_LONG_LONG_INT
The presence of long long int is now implied in the requirement for
C99 and the configure check for the same.

We keep the define hard-coded in ecpg_config.h for backward
compatibility with ecpg-using user code.

Discussion: https://www.postgresql.org/message-id/flat/5cdd6a2b-b2c7-c6f6-344c-a406d5c1a254%402ndquadrant.com
2019-11-07 13:30:04 +01:00
Peter Eisentraut 581a55889b Fix nested error handling in PG_FINALLY
We need to pop the error stack before running the user-supplied
PG_FINALLY code.  Otherwise an error in the cleanup code would end up
at the same sigsetjmp() invocation and result in an infinite error
handling loop.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com
2019-11-07 09:56:47 +01:00
Thomas Munro 7815e7efdb Add reusable routine for making arrays unique.
Introduce qunique() and qunique_arg(), which can be used after qsort()
and qsort_arg() respectively to remove duplicate values.  Use it where
appropriate.
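
A minimal usage sketch, assuming qunique() takes the same comparator as
qsort() and returns the new element count:

    static int
    cmp_int(const void *a, const void *b)
    {
        int         av = *(const int *) a;
        int         bv = *(const int *) b;

        return (av > bv) - (av < bv);
    }

    static size_t
    dedup_demo(void)
    {
        int         vals[] = {3, 1, 3, 2, 1};
        size_t      n = lengthof(vals);

        qsort(vals, n, sizeof(int), cmp_int);
        return qunique(vals, n, sizeof(int), cmp_int);  /* 3: vals is now {1, 2, 3} */
    }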

Author: Thomas Munro
Reviewed-by: Tom Lane (in an earlier version)
Discussion: https://postgr.es/m/CAEepm%3D2vmFTNpAmwbGGD2WaryM6T3hSDVKQPfUwjdD_5XY6vAA%40mail.gmail.com
2019-11-07 17:00:48 +13:00
Tomas Vondra 6e3e6cc0e8 Allow sampling of statements depending on duration
This allows logging a sample of statements, without incurring excessive
log traffic (which may impact performance).  This can be useful when
analyzing workloads with lots of short queries.

The sampling is configured using two new GUC parameters:

 * log_min_duration_sample - minimum required statement duration

 * log_statement_sample_rate - sample rate (0.0 - 1.0)

Only statements with duration exceeding log_min_duration_sample are
considered for sampling. To enable sampling, both those GUCs have to
be set correctly.

The existing log_min_duration_statement GUC has a higher priority, i.e.
statements with duration exceeding log_min_duration_statement will always
be logged, irrespective of how the sampling is configured. This
means only configurations

  log_min_duration_sample < log_min_duration_statement

do actually sample the statements, instead of logging everything.

Author: Adrien Nayrat
Reviewed-by: David Rowley, Vik Fearing, Tomas Vondra
Discussion: https://postgr.es/m/bbe0a1a8-a8f7-3be2-155a-888e661cc06c@anayrat.info
2019-11-06 19:11:07 +01:00
Tom Lane 22e44e8dbc Minor code review for tuple slot rewrite.
Avoid creating transiently-inconsistent slot states where possible,
by not setting TTS_FLAG_SHOULDFREE until after the slot actually has
a free'able tuple pointer, and by making sure that we reset tts_nvalid
and related derived state before we replace the tuple contents.  This
would only matter if something were to examine the slot after we'd
suffered some kind of error (e.g. out of memory) while manipulating
the slot.  We typically don't do that, so these changes might just be
cosmetic --- but even if so, it seems like good future-proofing.

Also remove some redundant Asserts, and add a couple for consistency.

Back-patch to v12 where all this code was rewritten.

Discussion: https://postgr.es/m/16095-c3ff2e5283b8dba5@postgresql.org
2019-11-06 12:00:17 -05:00
Andres Freund 26aaf97b68 Make StringInfo available to frontend code.
There are plenty of places in frontend code that could benefit from a
string buffer implementation: some because it yields simpler and
faster code, and others because of the desire to share code
between backend and frontend.

While there is a string buffer implementation available to frontend
code, libpq's PQExpBuffer, it is clunkier than stringinfo, it
introduces a libpq dependency, doesn't allow for sharing between
frontend and backend code, and has a higher API/ABI stability
requirement due to being exposed via libpq.

Therefore it seems best to just make StringInfo usable by
frontend code. There's not much to do for that, except for rewriting
two subsequent elog/ereport calls into other types of error
reporting, and deciding on a maximum string length.

For the maximum string size I decided to privately define MaxAllocSize
to the same value as used in the backend. It seems likely that we'll
want to reconsider this for both backend and frontend code in the not
too far away future.

For now I've left stringinfo.h in lib/, rather than common/, to reduce
the likelihood of unnecessary breakage. We could alternatively decide
to provide a redirecting stringinfo.h in lib/, or just not provide
compatibility.
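
With this in place, frontend code can use the familiar pattern, for
example (a sketch; the message text is made up):

    #include "postgres_fe.h"
    #include "lib/stringinfo.h"

    static void
    report_progress(int done, int total)
    {
        StringInfoData buf;

        initStringInfo(&buf);
        appendStringInfo(&buf, "processed %d of %d files\n", done, total);
        fputs(buf.data, stderr);
        pfree(buf.data);
    }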

Author: Andres Freund
Reviewed-By: Kyotaro Horiguchi, Daniel Gustafsson
Discussion: https://postgr.es/m/20190920051857.2fhnvhvx4qdddviz@alap3.anarazel.de
2019-11-05 14:56:40 -08:00
Tom Lane 529ebb20aa Generate EquivalenceClass members for partitionwise child join rels.
Commit d25ea0127 got rid of what I thought were entirely unnecessary
derived child expressions in EquivalenceClasses for EC members that
mention multiple baserels.  But it turns out that some of the child
expressions that code created are necessary for partitionwise joins,
else we fail to find matching pathkeys for Sort nodes.  (This happens
only for certain shapes of the resulting plan; it may be that
partitionwise aggregation is also necessary to show the failure,
though I'm not sure of that.)

Reverting that commit entirely would be quite painful performance-wise
for large partition sets.  So instead, add code that explicitly
generates child expressions that match only partitionwise child join
rels we have actually generated.

Per report from Justin Pryzby.  (Amit Langote noticed the problem
earlier, though it's not clear if he recognized then that it could
result in a planner error, not merely failure to exploit partitionwise
join, in the code as-committed.)  Back-patch to v12 where commit
d25ea0127 came in.

Amit Langote, with lots of kibitzing from me

Discussion: https://postgr.es/m/CA+HiwqG2WVUGmLJqtR0tPFhniO=H=9qQ+Z3L_ZC+Y3-EVQHFGg@mail.gmail.com
Discussion: https://postgr.es/m/20191011143703.GN10470@telsasoft.com
2019-11-05 11:42:24 -05:00
Michael Paquier 3534fa2233 Refactor code building relation options
Historically, the code to build relation options has been shaped the
same way in multiple code paths: a set of input datums is parsed against
a static table, which is then filled with the
option values.  This introduces a new common routine in reloptions.c to
do most of the legwork for the in-core code paths.

Author: Amit Langote
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CA+HiwqGsoSn_uTPPYT19WrtR7oYpYtv4CdS0xuedTKiHHWuk_g@mail.gmail.com
2019-11-05 09:17:05 +09:00
Peter Eisentraut 3967737624 Add some assertions to view reloption macros
In these macros, the rd_options pointer is cast to ViewOption *.  Add
some assertions that the passed-in relation is actually a view before
doing that.

Author: Nikolay Shaplov <dhyan@nataraj.su>
Discussion: https://www.postgresql.org/message-id/flat/3634983.eHpMQ1mJnI@x200m
2019-11-01 13:25:38 +01:00
Peter Eisentraut 604bd36711 PG_FINALLY
This gives an alternative way of catching exceptions, for the common
case where the cleanup code is the same in the error and non-error
cases.  So instead of

    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_CATCH();
    {
        cleanup();
        PG_RE_THROW();
    }
    PG_END_TRY();
    cleanup();

one can write

    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_FINALLY();
    {
        cleanup();
    }
    PG_END_TRY();

Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com
2019-11-01 11:18:03 +01:00
Peter Eisentraut 7302514088 Add const qualifiers to internal range type APIs
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/dc9b45fa-b950-fadc-4751-85d6f729df55%402ndquadrant.com
2019-10-31 07:48:21 +01:00
Noah Misch b804521344 Fix copy-paste defect in comment.
Commit a7471bd85c introduced it.
2019-10-26 12:55:16 -07:00
Noah Misch e653c714c2 Update comment about __sync_lock_test_and_set() bug.
State the earliest known fixed version, so we can someday judge the
workaround to be obsolete.
2019-10-26 12:55:06 -07:00
Peter Eisentraut 2fc2a88e67 Remove obsolete information schema tables
Remove SQL_LANGUAGES, which was eliminated in SQL:2008, and
SQL_PACKAGES and SQL_SIZING_PROFILES, which were eliminated in
SQL:2011.  Since they were dropped by the SQL standard, the
information in them was no longer updated and therefore no longer
useful.

This also removes the feature-package association information in
sql_feature_packages.txt, but for the time being we are keeping the
information about which features are in the Core package (that is,
mandatory SQL features).  Maybe at some point someone will want to invent
a way to store that information which does not involve using the "package"
mechanism anymore.

Discussion: https://www.postgresql.org/message-id/flat/91334220-7900-071b-9327-0c6ecd012017%402ndquadrant.com
2019-10-25 21:37:14 +02:00
Tom Lane 22f6f2c1cc Improve management of statement timeouts.
Commit f8e5f156b added private state in postgres.c to track whether
a statement timeout is running.  This seems like bad design to me;
timeout.c's private state should be the single source of truth about
that.  We already fixed one bug associated with failure to keep those
states in sync (cf. be42015fc), and I've got little faith that we
won't find more in future.  So get rid of postgres.c's local variable
by exposing a way to ask timeout.c whether a timeout is running.
(Obviously, such an inquiry is subject to race conditions, but it
seems fine for the purpose at hand.)

To make get_timeout_active() as cheap as possible, add a flag in
the per-timeout struct showing whether that timeout is active.
This allows some small savings elsewhere in timeout.c, mainly
elimination of unnecessary searches of the active_timeouts array.
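
A minimal sketch of the idea (struct layout and names are illustrative,
not the exact contents of timeout.c):

    typedef struct timeout_params
    {
        TimeoutId   index;          /* identity of this timeout */
        bool        active;         /* true while the timeout is scheduled */
        /* ... handler, fin_time, and so on ... */
    } timeout_params;

    static timeout_params all_timeouts[MAX_TIMEOUTS];

    /* O(1) inquiry: no search of the active_timeouts array required */
    bool
    get_timeout_active(TimeoutId id)
    {
        return all_timeouts[id].active;
    }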

While at it, fix enable_statement_timeout to not call disable_timeout
when statement_timeout is 0 and the timeout is not running.  This
avoids a useless deschedule-and-reschedule-timeouts cycle, which
represents a significant savings (at least one kernel call) when
there is any other active timeout.  Right now, there usually isn't,
but there are proposals around to change that.

Discussion: https://postgr.es/m/16035-456e6e69ebfd4374@postgresql.org
2019-10-25 11:41:16 -04:00
Alexander Korotkov 52ad1e6599 Refactor jsonpath's compareDatetime()
This commit refactors some ridiculous coding in compareDatetime().  Also, it
provides correct cross-datatype comparison even when one of the values overflows
during the cast.  That eliminates the dilemma of whether we should suppress overflow
errors during the cast.

Reported-by: Tom Lane
Discussion: https://postgr.es/m/32308.1569455803%40sss.pgh.pa.us
Discussion: https://postgr.es/m/a5629d0c-8162-7559-16aa-0c8390d6ba5f%40postgrespro.ru
Author: Nikita Glukhov, Alexander Korotkov
2019-10-21 23:07:07 +03:00
Peter Eisentraut 5d3587d14b Fix most -Wundef warnings
In some cases #if was used instead of #ifdef in an inconsistent style.
Cleaning this up also helps when analyzing cases like
38d8dce61f where this makes a
difference.

There are no behavior changes here, but the change in pg_bswap.h would
prevent possible accidental misuse by third-party code.

Discussion: https://www.postgresql.org/message-id/flat/3b615ca5-c595-3f1d-fdf7-a429e564f614%402ndquadrant.com
2019-10-19 18:31:38 +02:00
Noah Misch 30ee5d17c2 For all ppc compilers, implement compare_exchange and fetch_add with asm.
This is more like how we handle s_lock.h and arch-x86.h.

Reviewed by Tom Lane.

Discussion: https://postgr.es/m/20191005173400.GA3979129@rfd.leadboat.com
2019-10-18 20:20:52 -07:00
Noah Misch 89b4d7744c For PowerPC instruction "addi", use constraint "b".
Without "b", a variant of the tas() code miscompiles on macOS 10.4.
This may also fix a compilation failure involving macOS 10.1.  Today's
compilers have been allocating acceptable registers with or without this
change, but this future-proofs the code by precisely conveying the
acceptable registers.  Back-patch to 9.4 (all supported versions).

Reviewed by Tom Lane.

Discussion: https://postgr.es/m/20191009063900.GA4066266@rfd.leadboat.com
2019-10-18 20:20:28 -07:00
Michael Paquier f25968c496 Remove last traces of heap_open/close in the tree
Since pluggable storage has been introduced, those two routines have
been replaced by table_open/close, with some compatibility macros still
present to allow extensions to compile correctly with v12.

Some code paths using the old routines still remained, so replace them.
Based on the discussion, the consensus reached is that it is better to
remove the compatibility macros as well, so that nothing new starts using
the old routines.

Discussion: https://postgr.es/m/20191017014706.GF5605@paquier.xyz
2019-10-19 11:18:15 +09:00
Alvaro Herrera 1752e35163 Fix parallel restore of FKs to partitioned tables
When an FK constraint is created, it needs the index on the referenced
table to exist and be valid.  When doing a parallel pg_restore and the
referenced table is partitioned, this condition can sometimes not be
met, because pg_dump didn't emit sufficient object dependencies to
ensure it; this means that parallel pg_restore would fail under certain
conditions.  Fix by having pg_dump make the FK constraint object
dependent on the partition attachment objects for the constraint's
referenced index.

This has been broken since f56f8f8da6, so backpatch to Postgres 12.

Discussion: https://postgr.es/m/20191005224333.GA9738@alvherre.pgsql
2019-10-17 09:58:01 +02:00
Michael Paquier 1de4fd1092 Refresh some incorrect links in pg_crc.c/h
Author: Vignesh C
Discussion: https://postgr.es/m/CALDaNm0LPk9vTGTBPBRv0=fX=94o4r6-DuBbHNeCN2AH5bufLw@mail.gmail.com
2019-10-16 15:10:14 +09:00
Tom Lane 9abb2bfc04 In the postmaster, rely on the signal infrastructure to block signals.
POSIX sigaction(2) can be told to block a set of signals while a
signal handler executes.  Make use of that instead of manually
blocking and unblocking signals in the postmaster's signal handlers.
This should save a few cycles, and it also prevents recursive
invocation of signal handlers when many signals arrive in close
succession.  We have seen buildfarm failures that seem to be due to
postmaster stack overflow caused by such recursion (exacerbated by
a Linux PPC64 kernel bug).

This doesn't change anything about the way that it works on Windows.
Somebody might consider adjusting port/win32/signal.c to let it work
similarly, but I'm not in a position to do that.

For the moment, just apply to HEAD.  Possibly we should consider
back-patching this, but it'd be good to let it age awhile first.

Discussion: https://postgr.es/m/14878.1570820201@sss.pgh.pa.us
2019-10-13 15:48:26 -04:00
Peter Eisentraut b4675a8ae2 Fix use of term "verifier"
Within the context of SCRAM, "verifier" has a specific meaning in the
protocol, per RFCs.  The existing code used "verifier" differently, to
mean whatever is or would be stored in pg_authid.rolpassword.

Fix this by using the term "secret" for this, following RFC 5803.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/be397b06-6e4b-ba71-c7fb-54cae84a7e18%402ndquadrant.com
2019-10-12 21:41:59 +02:00
Peter Eisentraut 38d8dce61f Remove some code for old unsupported versions of MSVC
As of d9dd406fe2, we require MSVC 2013,
which means _MSC_VER >= 1800.  This means that conditionals about
older versions of _MSC_VER can be removed or simplified.

Previous code was also, in some cases, incorrectly handling MinGW, where
_MSC_VER is not defined at all, such as in pg_ctl.c and win32_port.h,
leading to some compiler warnings.  This should now be handled better.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2019-10-08 10:50:54 +02:00
Michael Paquier a7471bd85c Update some outdated links about XLC and UNIX specification
Author: Vignesh C
Discussion: https://postgr.es/m/CALDaNm3Dy=dTdx8UCVw=DWbzLzmRUC1dkq45=heOZDUg3U_PtA@mail.gmail.com
2019-10-08 14:31:30 +09:00
Michael Paquier 491bb81fb8 Clarify some comments about ntstatus.h in win32_port.h
Some comments in this file referred to outdated links.  This simplifies
the outdated comment blocks and refreshes the links.

Reported-by: Vignesh C
Author: Juan José Santamaría Flecha
Discussion: https://postgr.es/m/46C03E17-16F7-4C38-B148-029AC7448E96@gmail.com
2019-10-08 13:59:53 +09:00
Peter Eisentraut 400d5ffcaf Simplify PGAC_STRUCT_TIMEZONE Autoconf macro
Since 63bd0db121 we don't use tzname
anymore, so we don't need to check for it.  Instead, just keep the
part of PGAC_STRUCT_TIMEZONE that we need, which is the check for
struct tm.tm_zone.

Discussion: https://www.postgresql.org/message-id/flat/5eb11a37-f3ca-5fb7-308f-4485dec25a2e%402ndquadrant.com
2019-10-07 16:47:23 +02:00
Tomas Vondra 36425ece5d Change MemoryContextMemAllocated to return Size
Commit f2369bc610 switched most of the memory accounting from int64 to
Size, but it forgot to change the MemoryContextMemAllocated return type.
So this fixes that omission.

Discussion: https://www.postgresql.org/message-id/11238.1570200198%40sss.pgh.pa.us
2019-10-05 20:49:39 +02:00
Andres Freund d986d4e87f Fix crash caused by EPQ happening with a before update trigger present.
When ExecBRUpdateTriggers()'s GetTupleForTrigger() follows an EPQ
chain the former needs to run the result tuple through the junkfilter
again, and update the slot containing the new version of the tuple to
contain that new version. The input tuple may already be in the
junkfilter's output slot, which used to be OK - we don't need the
previous version anymore. Unfortunately ff11e7f4b9 started to use
ExecCopySlot() to update newslot, and ExecCopySlot() doesn't support
copying a slot into itself, leading to a slot in a corrupt
state, which then can cause crashes or other symptoms.

Fix this by skipping the ExecCopySlot() when copying into itself.

While we could have easily made ExecCopySlot() handle that case, it
seems better to add an assert forbidding doing so instead. As the goal
of copying might be to make the contents of one slot independent from
another, it seems failure prone to handle doing so silently.

A follow-up commit will add tests for the obviously under-covered
combination of EPQ and triggers. Done as a separate commit as it might
make sense to backpatch them further than this bug.

Also clean up some confusingly named slot variables in
ExecBRDeleteTriggers() and ExecBRUpdateTriggers().

Bug: #16036
Reported-By: Антон Власов
Author: Andres Freund
Discussion: https://postgr.es/m/16036-28184c90d952fb7f@postgresql.org
Backpatch: 12-, where ff11e7f4b9 was merged
2019-10-04 13:50:49 -07:00
Robert Haas 2e8b6bfa90 Rename some toasting functions based on whether they are heap-specific.
The old names for the attribute-detoasting functions included
the word "heap," which seems outdated now that the heap is only one of
potentially many table access methods.

On the other hand, toast_insert_or_update and toast_delete are
heap-specific, so rename them by adding "heap_" as a prefix.

Not all of the work of making the TOAST system fully accessible to AMs
other than the heap is done yet, but there seems to be little harm in
getting this renaming out of the way now. Commit
8b94dab066 already divided up the
functions among various files partially according to whether it was
intended that they should be heap-specific or AM-agnostic, so this is
just clarifying the division contemplated by that commit.

Patch by me, reviewed and tested by Prabhat Sabu, Thomas Munro,
Andres Freund, and Álvaro Herrera.

Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com
2019-10-04 14:24:46 -04:00
Tomas Vondra f2369bc610 Use Size instead of int64 to track allocated memory
Commit 5dd7fc1519 added block-level memory accounting, but used an int64 variable to
track the amount of allocated memory. That is incorrect, because we have Size for
exactly this purpose, but it was mostly harmless until c477f3e449, which changed
how we handle repalloc() when downsizing a chunk. Previously we ignored
these cases and just kept using the original chunk, but now we need to update the
accounting, and the code was doing this:

    context->mem_allocated += blksize - oldblksize;

Both blksize and oldblksize are Size (so unsigned), which means the subtraction
underflows, producing a very high positive value. On 64-bit platforms (where Size
has the same width as mem_allocated) this happens to work because the result wraps
to the right value, but on (some) 32-bit platforms this fails.

This fixes two things: it changes mem_allocated (and related variables) to Size,
and it splits the update into two separate steps to prevent any underflow.
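
A minimal sketch of the two-step update (not the literal committed code):

    /* never compute blksize - oldblksize, which underflows when shrinking */
    context->mem_allocated -= oldblksize;   /* drop the old block's size */
    context->mem_allocated += blksize;      /* then add the new block's size */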

Discussion: https://www.postgresql.org/message-id/15151.1570163761%40sss.pgh.pa.us
2019-10-04 16:10:56 +02:00
Robert Haas 967e276e9f Remove AtSubStart_Notify.
Allocate notify-related state lazily instead. This makes trivial
subtransactions noticeably faster.

Patch by me, reviewed and tested by Dilip Kumar, Kyotaro Horiguchi,
and Jeevan Ladhe.

Discussion: https://postgr.es/m/CA+TgmobE1J22S1eC-6N-je9LgrcwZypkwp+zH6JXo9mc=4Nk3A@mail.gmail.com
2019-10-04 08:19:25 -04:00
Andrew Gierth b7a1c5539a Selectively include window frames in expression walks/mutates.
query_tree_walker and query_tree_mutator were skipping the
windowClause of the query, without regard for the fact that the
startOffset and endOffset in a WindowClause node are expression trees
that need to be processed. This was an oversight in commit ec4be2ee6
from 2010 which added the expression fields; the main symptom is that
function parameters in window frame clauses don't work in inlined
functions.

Fix (as conservatively as possible since this needs to not break
existing out-of-tree callers) and add tests.

Backpatch all the way, since this has been broken since 9.0.

Per report from Alastair McKinley; fix by me with kibitzing and review
from Tom Lane.

Discussion: https://postgr.es/m/DB6PR0202MB2904E7FDDA9D81504D1E8C68E3800@DB6PR0202MB2904.eurprd02.prod.outlook.com
2019-10-03 10:54:52 +01:00
Michael Paquier 9555cc8d2b Revert hooks for session start and end, take two
The location of the session end hook was chosen so that modules could
run their own transactions; however, anything trying to use a subsystem
that had already gone through before_shmem_exit() would cause issues,
limiting the pluggability of the hook.

Per discussion with Tom Lane and Andres Freund.

Discussion: https://postgr.es/m/18722.1569906636@sss.pgh.pa.us
2019-10-02 09:55:27 +09:00
Tomas Vondra 540f316809 Blind attempt to fix pglz_maximum_compressed_size
Commit 11a078cf87 triggered failures on big-endian machines, and the
only plausible place for an issue seems to be that TOAST_COMPRESS_SIZE
calls VARSIZE instead of VARSIZE_ANY. So try fixing that blindly.

Discussion: https://www.postgresql.org/message-id/20191001131803.j6uin7nho7t6vxzy%40development
2019-10-01 16:53:04 +02:00
Tomas Vondra 11a078cf87 Optimize partial TOAST decompression
Commit 4d0e994eed added support for partial TOAST decompression, so that
decompression is interrupted after producing the requested prefix. For
prefixes and slices near the beginning of the entry, this may save a lot
of decompression work.

That, however, only deals with decompression - the whole compressed entry
was still fetched and re-assembled, even though the decompression used
only a small fraction of it. This commit improves that by computing how
much compressed data may be needed to decompress the requested prefix,
and then fetching only the necessary part.

We always need to fetch a bit more compressed data than the requested
(uncompressed) prefix, because the prefix may not be compressible at all
and pglz itself adds a bit of overhead. That means this optimization is
most effective when the requested prefix is much smaller than the whole
compressed entry.
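
A rough sketch of the bound described above; the helper name and the slop
constant are illustrative only (the actual computation lives in
pglz_maximum_compressed_size()):

    static int32
    max_compressed_bytes_needed(int32 raw_prefix_len, int32 total_compressed_len)
    {
        /* incompressible prefix plus a little pglz overhead, capped at the total */
        int32   bound = raw_prefix_len + raw_prefix_len / 8 + 64;

        return Min(bound, total_compressed_len);
    }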

Author: Binguo Bao
Reviewed-by: Andrey Borodin, Tomas Vondra, Paul Ramsey
Discussion: https://www.postgresql.org/message-id/flat/CAL-OGkthU9Gs7TZchf5OWaL-Gsi=hXqufTxKv9qpNG73d5na_g@mail.gmail.com
2019-10-01 14:28:28 +02:00
Michael Paquier e788bd924c Add hooks for session start and session end, take two
These hooks can be used in loadable modules.  A simple test module is
included.

The first attempt was done with cd8ce3a but we lacked handling for
NO_INSTALLCHECK in the MSVC scripts (problem solved afterwards by
431f1599) so the buildfarm got angry.  This also fixes a couple of
issues noticed upon review compared to the first attempt, so the code
has slightly changed, resulting in a simpler test module.

Author: Fabrízio de Royes Mello, Yugo Nagata
Reviewed-by: Andrew Dunstan, Michael Paquier, Aleksandr Parfenov
Discussion: https://postgr.es/m/20170720204733.40f2b7eb.nagata@sraoss.co.jp
Discussion: https://postgr.es/m/20190823042602.GB5275@paquier.xyz
2019-10-01 12:15:25 +09:00
Tomas Vondra 5dd7fc1519 Add transparent block-level memory accounting
Adds accounting of memory allocated in a memory context. Compared to
various ad hoc solutions, the main advantage is that the accounting is
transparent and does not require direct control over allocations (this
matters for use cases where the allocations happen in user code, for
example aggregate states allocated in transition functions).

To reduce overhead, the accounting happens at the block level (not for
individual chunks) and only the context immediately owning the block is
updated. When inquiring about the amount of memory allocated in a context,
we have to recursively walk all child contexts.

This "lazy" accounting works well for cases with a relatively small number
of contexts in the relevant subtree and/or with infrequent inquiries.
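
A sketch of the recursive inquiry described above (treat the member names as
illustrative):

    static Size
    MemoryContextMemAllocatedRecurse(MemoryContext context)
    {
        Size            total = context->mem_allocated;
        MemoryContext   child;

        /* each context only tracks its own blocks, so sum over the subtree */
        for (child = context->firstchild; child != NULL; child = child->nextchild)
            total += MemoryContextMemAllocatedRecurse(child);

        return total;
    }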

Author: Jeff Davis
Reviewed-by: Tomas Vondra, Melanie Plageman, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/flat/027a129b8525601c6a680d27ce3a7172dab61aab.camel@j-davis.com
2019-10-01 03:13:39 +02:00
Alexander Korotkov bffe1bd684 Implement jsonpath .datetime() method
This commit implements the jsonpath .datetime() method as specified in the
SQL/JSON standard.  There are no-argument and single-argument versions of
this method.  The no-argument version selects the first ISO datetime format
matching the input string.  The single-argument version accepts a template
string as its argument.

In addition to the .datetime() method itself, this commit also implements
comparison of the resulting date and time values.  There is some difficulty
because the existing jsonb_path_*() functions are immutable, while comparison of
timezoned and non-timezoned types involves the current timezone.  First, the current
timezone can be changed within a session.  Moreover, timezones themselves are not
immutable and can be updated.  This is why we let the existing immutable functions
throw errors on such non-immutable comparisons.  At the same time this commit
provides jsonb_path_*_tz() functions which are stable and support operations
involving timezones.  As new functions are added to the system catalog,
catversion is bumped.

Support for the .datetime() method was the only blocker preventing T832 from being
marked as supported.  sql_features.txt is updated accordingly.

Extracted from original patch by Nikita Glukhov, Teodor Sigaev, Oleg Bartunov.
Heavily revised by me.  Comments were adjusted by Liudmila Mantrova.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Alexander Korotkov, Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Liudmila Mantrova
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
2019-09-25 22:51:51 +03:00
Alexander Korotkov 6dda292d4d Allow datetime values in JsonbValue
The SQL/JSON standard allows manipulation of datetime values.  So, it appears to
be convenient to allow datetime values to be represented in the JsonbValue struct.
These datetime values are allowed for temporary representation only.  During
serialization datetime values are converted into strings.

SQL/JSON requires writing timestamps with timezone in the same timezone offset
as they were parsed.  This is why we allow storage of the timezone offset in the
JsonbValue struct.  For the same reason a timezone offset argument is added to the
JsonEncodeDateTime() function.

Extracted from original patch by Nikita Glukhov, Teodor Sigaev, Oleg Bartunov.
Revised by me.  Comments were adjusted by Liudmila Mantrova.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Alexander Korotkov, Liudmila Mantrova
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
2019-09-25 22:51:51 +03:00
Alexander Korotkov 5bc450629b Error suppression support for upcoming jsonpath .datetime() method
Add support for error suppression in some date and time manipulation functions,
as required for jsonpath .datetime() method support.  This commit doesn't
use PG_TRY()/PG_CATCH() in order to implement that.  Instead, it provides
internal versions of the date and time functions used, which support error
suppression.

Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Alexander Korotkov, Nikita Glukhov
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
2019-09-25 22:51:51 +03:00
Alexander Korotkov 66c74f8b6e Implement parse_datetime() function
This commit adds the parse_datetime() function, which implements datetime
parsing with extended features demanded by the upcoming jsonpath .datetime()
method:

 * Dynamic type identification based on template string,
 * Support for standard-conforming 'strict' mode,
 * Timezone offset is returned as separate value.

Extracted from original patch by Nikita Glukhov, Teodor Sigaev, Oleg Bartunov.
Revised by me.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Alexander Korotkov
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
2019-09-25 22:51:51 +03:00
Alvaro Herrera 773df883e8 Support reloptions of enum type
All our current in-core relation options of type string (not many,
admittedly) behave in reality like enums.  But after seeing an
implementation for enum reloptions, it's clear that strings are messier,
so introduce the new reloption type.  Switch all string options to be
enums instead.

Fortunately we have a recently introduced test module for reloptions, so
we don't lose coverage of string reloptions, which may still be used by
third-party modules.

Authors: Nikolay Shaplov, Álvaro Herrera
Reviewed-by: Nikita Glukhov, Aleksandr Parfenov
Discussion: https://postgr.es/m/43332102.S2V5pIjXRx@x200m
2019-09-25 15:56:52 -03:00
Alvaro Herrera caba97a9d9 Split out recovery config-writing code from pg_basebackup
... into a new file, fe_utils/recovery_gen.c.

This can later be used by pg_rewind.

Authors: Paul Guo, Jimmy Yih, Ashwin Agrawal.  A few tweaks by Álvaro Herrera
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/CAEET0ZEffUkXc48pg2iqARQgGRYDiiVxDu+yYek_bTwJF+q=Uw@mail.gmail.com
2019-09-25 14:35:24 -03:00
Michael Paquier 69f9410807 Allow definition of lock mode for custom reloptions
Relation options can define a lock mode other than AccessExclusiveLock
since 47167b7, but modules defining custom relation options did not
really have a way to enforce that.  Correct that by extending the
current API set so that modules can define a custom lock mode.

Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
2019-09-25 10:13:52 +09:00
Alvaro Herrera 709d003fbd Rework WAL-reading supporting structs
The state-tracking of WAL reading in various places was pretty messy,
mostly because the ancient physical-replication WAL reading code wasn't
using the XLogReader abstraction.  This led to some untidy code.  Make
it prettier by creating two additional supporting structs,
WALSegmentContext and WALOpenSegment which keep track of WAL-reading
state.  This makes code cleaner, as well as supports more future
cleanup.

Author: Antonin Houska
Reviewed-by: Álvaro Herrera and (older versions) Robert Haas
Discussion: https://postgr.es/m/14984.1554998742@spoje.net
2019-09-24 16:39:53 -03:00
Fujii Masao 6d05086c0a Speedup truncations of relation forks.
When a relation is truncated, shared_buffers needs to be scanned
so that any buffers for the relation forks are invalidated in it.
Previously, shared_buffers was scanned once per relation fork, i.e.,
MAIN, FSM and VM, when VACUUM truncated off any empty pages
at the end of a relation or TRUNCATE truncated the relation in place.
Since shared_buffers needed to be scanned multiple times,
it could take a long time to finish those commands, especially
when shared_buffers was large.

This commit changes the logic so that shared_buffers is scanned only
one time for those three relation forks.

Author: Kirk Jamison
Reviewed-by: Masahiko Sawada, Thomas Munro, Alvaro Herrera, Takayuki Tsunakawa and Fujii Masao
Discussion: https://postgr.es/m/D09B13F772D2274BB348A310EE3027C64E2067@g01jpexmbkw24
2019-09-24 17:31:26 +09:00
Andres Freund 30d1379658 Fix ExprState's tag to be of type NodeTag rather than Node.
This appears to have been an oversight in b8d7f053c5. As it's
effectively harmless, though confusing, only fix in master.

Author: Andres Freund
2019-09-23 15:28:13 -07:00
Tom Lane 5ac0d93600 Fix failure to zero-pad the result of bitshiftright().
If the bitstring length is not a multiple of 8, we'd shift the
rightmost bits into the pad space, which must be zeroes --- bit_cmp,
for one, depends on that.  This'd lead to the result failing to
compare equal to what it should compare equal to, as reported in
bug #16013 from Daryl Waycott.
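
For reference, a sketch of the re-padding a right shift needs; names are
illustrative, and it relies on varbit storing bits most-significant-first,
so the pad occupies the low-order bits of the last byte:

    int     padbits = BITS_PER_BYTE * nbytes - bitlen;     /* 0..7 unused bits */

    if (padbits > 0)
        lastbyte &= 0xFF << padbits;    /* force the ignored pad bits to zero */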

This is, if memory serves, not the first such bug in the bitstring
functions.  In hopes of making it the last one, do a bit more work
than minimally necessary to fix the bug:

* Add assertion checks to bit_out() and varbit_out() to complain if
they are given incorrectly-padded input.  This will improve the
odds that manual testing of any new patch finds problems.

* Encapsulate the padding-related logic in macros to make it
easier to use.

Also, remove unnecessary padding logic from bit_or() and bitxor().
Somebody had already noted that we need not re-pad the result of
bit_and() since the inputs are required to be the same length,
but failed to extrapolate that to the other two.

Also, move a comment block that once was near the head of varbit.c
(but people kept putting other stuff in front of it), to put it in
the header block.

Note for the release notes: if anyone has inconsistent data as a
result of saving the output of bitshiftright() in a table, it's
possible to fix it with something like
UPDATE mytab SET bitcol = ~(~bitcol) WHERE bitcol != ~(~bitcol);

This has been broken since day one, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/16013-c2765b6996aacae9@postgresql.org
2019-09-22 17:45:59 -04:00
Tom Lane c160b8928c Straighten out leakproofness markings on text comparison functions.
Since we introduced the idea of leakproof functions, texteq and textne
were marked leakproof but their sibling text comparison functions were
not.  This inconsistency seemed justified because texteq/textne just
relied on memcmp() and so could easily be seen to be leakproof, while
the other comparison functions are far more complex and indeed can
throw input-dependent errors.

However, that argument crashed and burned with the addition of
nondeterministic collations, because now texteq/textne may invoke
the exact same varstr_cmp() infrastructure as the rest.  It makes no
sense whatever to give them different leakproofness markings.

After a certain amount of angst we've concluded that it's all right
to consider varstr_cmp() to be leakproof, mostly because the other
choice would be disastrous for performance of many queries where
leakproofness matters.  The input-dependent errors should only be
reachable for corrupt input data, or so we hope anyway; certainly,
if they are reachable in practice, we've got problems with requirements
as basic as maintaining a btree index on a text column.

Hence, run around to all the SQL functions that derive from varstr_cmp()
and mark them leakproof.  This should result in a useful gain in
flexibility/performance for queries in which non-leakproofness degrades
the efficiency of the query plan.

Back-patch to v12 where nondeterministic collations were added.
While this isn't an essential bug fix given the determination
that varstr_cmp() is leakproof, we might as well apply it now that
we've been forced into a post-beta4 catversion bump.

Discussion: https://postgr.es/m/31481.1568303470@sss.pgh.pa.us
2019-09-21 16:56:30 -04:00
Tom Lane 2810396312 Fix up handling of nondeterministic collations with pattern_ops opclasses.
text_pattern_ops and its siblings can't be used with nondeterministic
collations, because they use the text_eq operator which will not behave
as bitwise equality if applied with a nondeterministic collation.  The
initial implementation of that restriction was to insert a run-time test
in the related comparison functions, but that is inefficient, may throw
misleading errors, and will throw errors in some cases that would work.
It seems sufficient to just prevent the combination during CREATE INDEX,
so do that instead.

Lacking any better way to identify the opclasses involved, we need to
hard-wire tests for them, which requires hand-assigned values for their
OIDs, which forces a catversion bump because they previously had OIDs
that would be assigned automatically.  That's slightly annoying in the
v12 branch, but fortunately we're not at rc1 yet, so just do it.

Back-patch to v12 where nondeterministic collations were added.

In passing, run make reformat-dat-files, which found some unrelated
whitespace issues (slightly different ones in HEAD and v12).

Peter Eisentraut, with small corrections by me

Discussion: https://postgr.es/m/22566.1568675619@sss.pgh.pa.us
2019-09-21 16:29:17 -04:00
Alvaro Herrera 1a2983231d Split out code into new getKeyJsonValueFromContainer()
The new function stashes its output value in a JsonbValue that can be
passed in by the caller, which enables some callers to pass
stack-allocated structs -- saving palloc cycles.  It also allows
callers that know they are handling a jsonb object to use this new jsonb
object-specific API, instead of going through the generic container API,
findJsonbValueFromContainer.
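
A sketch of the calling pattern this enables (argument order and details are
approximate):

    JsonbValue  buf;            /* stack-allocated holder, no palloc needed */
    JsonbValue *v;

    v = getKeyJsonValueFromContainer(&jb->root, "status", strlen("status"), &buf);
    if (v != NULL && v->type == jbvString)
    {
        /* use v->val.string.val / v->val.string.len */
    }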

Author: Nikita Glukhov
Discussion: https://postgr.es/m/7c417f90-f95f-247e-ba63-d95e39c0ad14@postgrespro.ru
2019-09-20 20:18:11 -03:00
Alexander Korotkov 6cae9d2c10 Improve handling of NULLs in KNN-GiST and KNN-SP-GiST
This commit improves the situation in two ways:

 * It removes the ugliness of 02f90879e7, which stored distance values and null
   flags in two separate arrays after the GISTSearchItem struct.  Instead we pack
   both the distance value and the null flag into the IndexOrderByDistance struct,
   as sketched after this list.  Alignment overhead should be negligible, because
   we typically deal with at most a few "col op const" expressions in an ORDER BY
   clause.
 * It fixes handling of "col op NULL" expressions in KNN-SP-GiST.  Now, these
   expressions are not passed to support functions, which can't deal with them.
   Instead, a NULL result is implicitly assumed.  In the future we may decide to
   teach support functions to deal with NULL arguments, but the current solution
   is a bugfix suitable for backpatching.
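
A sketch of the packed representation mentioned in the first point (the real
struct may differ in detail):

    typedef struct IndexOrderByDistance
    {
        double      value;      /* distance reported by the support function */
        bool        isnull;     /* true for "col op NULL" and NULL distances */
    } IndexOrderByDistance;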

Reported-by: Nikita Glukhov
Discussion: https://postgr.es/m/826f57ee-afc7-8977-c44c-6111d18b02ec%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Alexander Korotkov
Backpatch-through: 9.4
2019-09-19 21:48:39 +03:00
Fujii Masao 33a94bae60 Remove unused smgrdounlinkfork() function.
smgrdounlinkfork() became dead code as the result of commit ece01aae47,
but it was left in place just in case we want it someday. However no users
have appeared in 7 years, so it's time to remove this unused function.

Author: Kirk Jamison
Discussion: https://www.postgresql.org/message-id/D09B13F772D2274BB348A310EE3027C64E2067@g01jpexmbkw24
2019-09-18 21:05:33 +09:00
Tom Lane d5b90cd648 Fix bogus handling of XQuery regex option flags.
The SQL spec defers to XQuery to define what the option flags are
for LIKE_REGEX patterns.  XQuery says that:
* 's' allows the dot character to match newlines, which by
  default it will not;
* 'm' allows ^ and $ to match at newlines, not only at the
  start/end of the whole string.
Thus, these are *not* inverses as they are for the similarly-named
POSIX options, and neither one corresponds to the POSIX 'n' option.
Fortunately, Spencer's library does expose these two behaviors as
separately twiddlable flags, so we just have to fix the mapping from
JSP flag bits to REG flag bits.  I also chose to rename the symbol
for 's' to DOTALL, to make it clearer that it's not the inverse
of MLINE.

Also, XQuery says that if the 'q' flag "is used together with the m, s,
or x flag, that flag has no effect".  I read this as saying that 'q'
overrides the other flags; whoever wrote our code seems to have read
it backwards.

Lastly, while XQuery's 'x' flag is related to what Spencer's code
does for REG_EXPANDED, it's not the same or a subset.  It seems best
to treat XQuery's 'x' as unimplemented for now.  Maybe later we can
expand our regex code to offer 'x'-style parsing as a separate option.

While at it, refactor the jsonpath code so that (a) there's only
one copy of the flag transformation logic not two, and (b) the
processing of flags is independent of the order in which the flags
are written.

We need some documentation updates to go with this, but I'll
tackle that separately.

Back-patch to v12 where this code originated.

Discussion: https://postgr.es/m/CAPpHfdvDci4iqNF9fhRkTqhe-5_8HmzeLt56drH%2B_Rv2rNRqfg@mail.gmail.com
Reference: https://www.w3.org/TR/2017/REC-xpath-functions-31-20170321/#flags
2019-09-17 15:39:51 -04:00
Alexander Korotkov d589f94460 Support for FF1-FF6 datetime format patterns
SQL Standard 2016 defines FF1-FF9 format patterns for fractions of seconds in
the jsonpath .datetime() method and the CAST (... FORMAT ...) SQL clause.  The
parsing engine of the upcoming .datetime() method will be shared with to_date()/
to_timestamp().

This patch implements FF1-FF6 format patterns for the upcoming jsonpath .datetime()
method.  The to_date()/to_timestamp() functions will also get support for these
format patterns as a positive side effect.  FF7-FF9 are not supported due to
lack of precision in our internal timestamp representation.

Extracted from original patch by Nikita Glukhov, Teodor Sigaev, Oleg Bartunov.
Heavily revised by me.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Alexander Korotkov
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
2019-09-16 21:14:32 +03:00
Noah Misch 87e9fae069 Revert "For all ppc compilers, implement pg_atomic_fetch_add_ with inline asm."
This reverts commit e7ff59686e.  It
defined pg_atomic_fetch_add_u32_impl() without defining
pg_atomic_compare_exchange_u32_impl(), which is incompatible with
src/include/port/atomics/fallback.h.  Per buildfarm member prairiedog.

Discussion: https://postgr.es/m/7517.1568470247@sss.pgh.pa.us
2019-09-14 19:38:41 -07:00
Noah Misch e7ff59686e For all ppc compilers, implement pg_atomic_fetch_add_ with inline asm.
This is more like how we handle s_lock.h and arch-x86.h.  This does not
materially affect code generation for gcc 7.2.0 or xlc 13.1.3.

Reviewed by Tom Lane.

Discussion: https://postgr.es/m/20190831071157.GA3251746@rfd.leadboat.com
2019-09-13 19:34:30 -07:00
Noah Misch dd50f1a432 Replace xlc __fetch_and_add() with inline asm.
PostgreSQL has been unusable when built with xlc 13 and newer, which are
incompatible with our use of __fetch_and_add().  Back-patch to 9.5,
which introduced pg_atomic_fetch_add_u32().

Reviewed by Tom Lane.

Discussion: https://postgr.es/m/20190831071157.GA3251746@rfd.leadboat.com
2019-09-13 19:34:06 -07:00
Alvaro Herrera ce5d04b646 Fix under-parenthesized macro definitions
Lack of parens in the definitions could cause a statement using these
macros to have unexpected semantics.  In the current code no bug is
apparent, but it's best to fix the definitions to avoid problems down the
line.
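
A generic illustration of the hazard, unrelated to the specific macros touched
by this commit:

    #define DOUBLE_BAD(x)   x + x           /* under-parenthesized */
    #define DOUBLE_GOOD(x)  ((x) + (x))

    int a = 3 * DOUBLE_BAD(2);      /* expands to 3 * 2 + 2, i.e. 8, not 12 */
    int b = 3 * DOUBLE_GOOD(2);     /* 12, as intended */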

Reported-by: Tom Lane
Discussion: https://postgr.es/m/19795.1568400476@sss.pgh.pa.us
2019-09-13 16:26:55 -03:00
Alvaro Herrera 6212276e43 Fix progress reporting of CLUSTER / VACUUM FULL
The progress state was being clobbered once the first index completed
being rebuilt, causing the final phases of the operation not to show
anything in the progress view.  This was inadvertently broken in
03f9e5cba0, which added progress tracking for REINDEX.

(The reason this bugfix is this small is that I had already noticed this
problem when writing monitoring for CREATE INDEX, and had already worked
around it, as can be seen in discussion starting at
https://postgr.es/m/20190329150218.GA25010@alvherre.pgsql Fixing the
problem is just a matter of fixing one place touched by the REINDEX
monitoring.)

Reported by: Álvaro Herrera
Author: Álvaro Herrera
Discussion: https://postgr.es/m/20190801184333.GA21369@alvherre.pgsql
2019-09-13 14:54:26 -03:00
Peter Geoghegan 3b6b54f178 Fix nbtree page split rmgr desc routine.
Include newitemoff in rmgr desc output for nbtree page split records.
In passing, correct an obsolete comment that claimed that newitemoff is
only logged for _L variant nbtree page split WAL records.

Both issues were oversights in commit 2c03216d83, which revamped the
WAL format.

Author: Peter Geoghegan
Backpatch: 9.5-, where the WAL format was revamped.
2019-09-12 15:45:08 -07:00
Peter Geoghegan 614cdeaa89 Reorder two nbtree.h function prototypes.
Make the function prototype order consistent with the definition order
in nbtinsert.c.
2019-09-12 09:59:16 -07:00
Tom Lane 9a86f03b4e Rearrange postmaster's startup sequence for better syslogger results.
This is a second try at what commit 57431a911 tried to do, namely,
launch the syslogger before we open postmaster sockets so that our
messages about the sockets end up in the syslogger files.  That
commit fell foul of a bunch of subtle issues caused by trying to
launch a postmaster child process before creating shared memory.
Rather than messing with that interaction, let's postpone opening
the sockets till after we launch the syslogger.

This would not have been terribly safe before commit 7de19fbc0,
because we relied on socket opening to detect whether any competing
postmasters were using the same port number.  But now that we choose
IPC keys without regard to the port number, there's no interaction
to worry about.

Also delay creation of the external PID file (if requested) till after
the sockets are open, since external code could plausibly be relying
on that ordering of events.  And postpone most of the work of
RemovePgTempFiles() so that that potentially-slow processing still
happens after we make the external PID file.  We have to be a bit
careful about that last though: as noted in the discussion subsequent to
bug #15804, EXEC_BACKEND builds still have to clear the parameter-file
temp dir before launching the syslogger.

Patch by me; thanks to Michael Paquier for review/testing.

Discussion: https://postgr.es/m/15804-3721117bf40fb654@postgresql.org
2019-09-11 11:43:01 -04:00
Tomas Vondra d06215d03b Allow setting statistics target for extended statistics
When building statistics, we need to decide how many rows to sample and
how accurate the resulting statistics should be. Until now, it was not
possible to explicitly define the statistics target for extended statistics
objects; the value was always computed from the per-attribute targets
with a fallback to the system-wide default statistics target.

That's a bit inconvenient, as it ties together the statistics targets set
for per-column and extended statistics. In some cases it may be useful
to require a larger sample / higher accuracy for extended statistics (or the
other way around), but with this approach that's not possible.

So this commit introduces a new command, allowing the statistics target
to be specified for individual extended statistics objects, overriding the
value derived from per-attribute targets (and the system default).

  ALTER STATISTICS stat_name SET STATISTICS target_value;

When determining statistics target for an extended statistics object we
first look at this explicitly set value. When this value is -1, we fall
back to the old formula, looking at the per-attribute targets first and
then the system default. This means the behavior is backwards compatible
with older PostgreSQL releases.

Author: Tomas Vondra
Discussion: https://postgr.es/m/20190618213357.vli3i23vpkset2xd@development
Reviewed-by: Kirk Jamison, Dean Rasheed
2019-09-11 00:25:51 +02:00
Andres Freund 27cc7cd2bc Reorder EPQ work, to fix rowmark related bugs and improve efficiency.
In ad0bda5d24 I changed the EvalPlanQual machinery to store
substitution tuples in slots, instead of using plain HeapTuples. The
main motivation for that was that using HeapTuples will be inefficient
for future tableams.  But it turns out that that conversion was buggy
for non-locking rowmarks - the wrong tuple descriptor was used to
create the slot.

As a secondary issue, 5db6df0c0 changed ExecLockRows() to begin EPQ
earlier, to allow fetching the locked rows directly into the EPQ
slots, instead of having to copy tuples around. Unfortunately, as Tom
complained, that forces some expensive initialization to happen
earlier.

As a third issue, the test coverage for EPQ was clearly insufficient.

Fixing the first issue is unfortunately not trivial: Non-locked row
marks were fetched at the start of EPQ, and we don't have the type
information for the rowmarks available at that point. While we could
change that, it's not easy. It might be worthwhile to change that at
some point, but to fix this bug, it seems better to delay fetching
non-locking rowmarks until they're actually needed, rather than fetching
them eagerly. They're referenced at most once, and in cases where EPQ
fails, might never be referenced. Fetching them when needed also
increases locality a bit.

To be able to fetch rowmarks during execution, rather than
initialization, we need to be able to access the active EPQState, as
that contains the necessary data. To do so, move EPQ-related data from
EState to EPQState, and, only for EStates created as part of EPQ,
reference the associated EPQState from EState.

To fix the second issue, change EPQ initialization to allow
EvalPlanQualSlot() to be used before EvalPlanQualBegin() (but
obviously still requiring EvalPlanQualInit() to have been done).

As these changes made struct EState harder to understand, e.g. by
adding multiple EStates, significantly reorder the members, and add a
lot more comments.

Also add a few more EPQ tests, including one that fails for the first
issue above. More is needed.

Reported-By: yi huang
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion:
    https://postgr.es/m/CAHU7rYZo_C4ULsAx_LAj8az9zqgrD8WDd4hTegDTMM1LMqrBsg@mail.gmail.com
    https://postgr.es/m/24530.1562686693@sss.pgh.pa.us
Backpatch: 12-, where the EPQ changes were introduced
2019-09-09 05:14:11 -07:00
Alexander Korotkov 02f90879e7 Fix handling of NULL distances in KNN-GiST
In order to implement NULLS LAST semantics, GiST previously assumed the distance to
a NULL value to be Inf.  However, our distance functions can return Inf and
NaN for non-null values.  In such cases, the NULLS LAST semantics appear to be
broken.  This commit fixes that by introducing a separate array of null flags for
distances.

Backpatch to all supported versions.

Discussion: https://postgr.es/m/CAPpHfdsNvNdA0DBS%2BwMpFrgwT6C3-q50sFVGLSiuWnV3FqOJuQ%40mail.gmail.com
Author: Alexander Korotkov
Backpatch-through: 9.4
2019-09-08 22:08:12 +03:00
Tom Lane ca70bdaefe Fix issues around strictness of SIMILAR TO.
As a result of some long-ago quick hacks, the SIMILAR TO operator
and the corresponding flavor of substring() interpreted "ESCAPE NULL"
as selecting the default escape character '\'.  This is both
surprising and not per spec: the standard is clear that these
functions should return NULL for NULL input.

Additionally, because of inconsistency of the strictness markings
of 3-argument substring() and similar_escape(), the planner could not
inline the SQL definition of substring(), resulting in a substantial
performance penalty compared to the underlying POSIX substring()
function.

The simplest fix for this would be to change the strictness marking
of similar_escape(), but if we do that we risk breaking existing views
that depend on that function.  Hence, leave similar_escape() as-is
as a compatibility function, and instead invent a new function
similar_to_escape() that comes in two strict variants.

There are a couple of other behaviors in this area that are also
not per spec, but they are documented and seem generally at least
as sane as the spec's definition, so leave them alone.  But improve
the documentation to describe them fully.

Patch by me; thanks to Álvaro Herrera and Andrew Gierth for review
and discussion.

Discussion: https://postgr.es/m/14047.1557708214@sss.pgh.pa.us
2019-09-07 14:21:59 -04:00
Robert Haas bd124996ef Create an API for inserting and deleting rows in TOAST tables.
This moves much of the non-heap-specific logic from toast_delete and
toast_insert_or_update into helper functions accessible via a new
header, toast_helper.h.  Using the functions in this module, a table
AM can implement creation and deletion of TOAST table rows with
much less code duplication than was possible heretofore.  Some
table AMs won't want to use the TOAST logic at all, but for those
that do this will make that easier.

Patch by me, reviewed and tested by Prabhat Sabu, Thomas Munro,
Andres Freund, and Álvaro Herrera.

Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com
2019-09-06 10:38:51 -04:00
Tom Lane 7de19fbc0b Use data directory inode number, not port, to select SysV resource keys.
This approach provides a much tighter binding between a data directory
and the associated SysV shared memory block (and SysV or named-POSIX
semaphores, if we're using those).  Key collisions are still possible,
but only between data directories stored on different filesystems,
so the situation should be negligible in practice.  More importantly,
restarting the postmaster with a different port number no longer
risks failing to identify a relevant shared memory block, even when
postmaster.pid has been removed.  A standalone backend is likewise
much more certain to detect conflicting leftover backends.

(In the longer term, we might now think about deprecating the port as
a cluster-wide value, so that one postmaster could support sockets
with varying port numbers.  But that's for another day.)

The hazards fixed here apply only on Unix systems; our Windows code
paths already use identifiers derived from the data directory path
name rather than the port.

src/test/recovery/t/017_shm.pl, which intends to test key-collision
cases, has been substantially rewritten since it can no longer use
two postmasters with identical port numbers to trigger the case.
Instead, use Perl's IPC::SharedMem module to create a conflicting
shmem segment directly.  The test script will be skipped if that
module is not available.  (This means that some older buildfarm
members won't run it, but I don't think that that results in any
meaningful coverage loss.)

Patch by me; thanks to Noah Misch and Peter Eisentraut for discussion
and review.

Discussion: https://postgr.es/m/16908.1557521200@sss.pgh.pa.us
2019-09-05 13:31:46 -04:00
Robert Haas 8b94dab066 Split tuptoaster.c into three separate files.
detoast.c/h contain functions required to detoast a datum, partially
or completely, plus a few other utility functions for examining the
size of toasted datums.

toast_internals.c/h contain functions that are used internally to the
TOAST subsystem but which (mostly) do not need to be accessed from
outside.

heaptoast.c/h contains code that is intrinsically specific to the
heap AM, either because it operates on HeapTuples or is based on the
layout of a heap page.

detoast.c and toast_internals.c are placed in
src/backend/access/common rather than src/backend/access/heap.  At
present, both files still have dependencies on the heap, but that will
be improved in a future commit.

Patch by me, reviewed and tested by Prabhat Sabu, Thomas Munro,
Andres Freund, and Álvaro Herrera.

Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com
2019-09-05 13:15:10 -04:00
Peter Eisentraut 74a308cf52 Use explicit_bzero
Use the explicit_bzero() function in places where it is important that
security information such as passwords is cleared from memory.  There
might be other places where it could be useful; this is just an
initial collection.

For platforms that don't have explicit_bzero(), provide various
fallback implementations.  (explicit_bzero() itself isn't standard,
but as Linux/glibc, FreeBSD, and OpenBSD have it, it's the most common
spelling, so it makes sense to make that the invocation point.)
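
One possible shape for a fallback, shown only as a sketch assuming a GCC-style
compiler; the fallbacks actually provided may differ:

    #ifndef HAVE_EXPLICIT_BZERO
    void
    explicit_bzero(void *buf, size_t len)
    {
        memset(buf, 0, len);
        /* empty asm with a memory clobber keeps the memset from being elided */
        __asm__ __volatile__("" : : "r" (buf) : "memory");
    }
    #endif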

Discussion: https://www.postgresql.org/message-id/flat/42d26bde-5d5b-c90d-87ae-6cab875f73be%402ndquadrant.com
2019-09-05 08:30:42 +02:00
Alvaro Herrera 25dcc9d35d Make XLogReaderInvalReadState static
This function is only used by xlogreader.c itself, so there's no need to
export it.  It was introduced by commit 3b02ea4f07 with the apparent
intention that it could be used externally, but I couldn't find any
external code calling it.

I (Álvaro) couldn't resist the urge to sort nearby function prototypes
properly while at it.

Author: Antonin Houska
Discussion: https://postgr.es/m/14984.1554998742@spoje.net
2019-09-03 17:41:43 -04:00
Alvaro Herrera fe66125974 Remove 'msg' parameter from convert_tuples_by_name
The message was included as a parameter when this function was added in
dcb2bda9b7, but I don't think it has ever served any useful purpose.
Let's stop spreading it pointlessly.

Reviewed by Amit Langote and Peter Eisentraut.

Discussion: https://postgr.es/m/20190806224728.GA17233@alvherre.pgsql
2019-09-03 14:47:29 -04:00
Michael Paquier 7dedfd22b7 Add overflow-safe math inline functions for unsigned integers
Similarly to the signed versions added in 4d6ad31, this adds a set of
inline functions for overflow checks with unsigned integers, including
uint16, uint32 and uint64.  This relies on compiler built-in overflow
checks by default if available.  The behavior of unsigned integers is
well-defined, so the fallback implementations' checks are simple for
additions and subtractions.  Multiplications avoid division-based checks,
which are expensive, where possible; still, this can happen for uint64 if
128-bit integers are not available.
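
A sketch of the addition case in the spirit described above (the code actually
added to common/int.h may differ in detail):

    static inline bool
    pg_add_u32_overflow(uint32 a, uint32 b, uint32 *result)
    {
    #if defined(HAVE__BUILTIN_OP_OVERFLOW)
        return __builtin_add_overflow(a, b, result);
    #else
        uint32      res = a + b;    /* unsigned wraparound is well-defined */

        *result = res;
        return res < a;             /* wrapped iff the sum is below an addend */
    #endif
    }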

While on it, the code in common/int.h is reorganized to avoid too many
duplicated comments.  The new macros will be used in a follow-up patch.

All thanks to Andres Freund for the input provided.

Author: Fabien Coelho, Michael Paquier
Discussion: https://postgr.es/m/20190830073423.GB2354@paquier.xyz
2019-09-02 09:38:23 +09:00
Thomas Munro 720b59b55b Avoid catalog lookups in RelationAllowsEarlyPruning().
RelationAllowsEarlyPruning() performed a catalog scan, but is used
in two contexts where that was a bad idea:

1.  In heap_page_prune_opt(), which runs very frequently in some large
    scans.  This caused major performance problems in a field report
    that was easy to reproduce.

2.  In TestForOldSnapshot(), which runs while we hold a buffer content
    lock.  It's not clear if this was guaranteed to be free of buffer
    deadlock risk.

The check was introduced in commit 2cc41acd8 and defended against a
real problem: 9.6's hash indexes have no page LSN and so we can't
allow early pruning (ie the snapshot-too-old feature).  We can remove
the check from all later releases though: hash indexes are now logged,
and there is no way to create UNLOGGED indexes on regular logged
tables.

If a future release allows such a combination, it might need to put
a similar check in place, but it'll need some more thought.

Back-patch to 10.

Author: Thomas Munro
Reviewed-by: Tom Lane, who spotted the second problem
Discussion: https://postgr.es/m/CA%2BhUKGKT8oTkp5jw_U4p0S-7UG9zsvtw_M47Y285bER6a2gD%2Bg%40mail.gmail.com
Discussion: https://postgr.es/m/CAA4eK1%2BWy%2BN4eE5zPm765h68LrkWc3Biu_8rzzi%2BOYX4j%2BiHRw%40mail.gmail.com
2019-08-28 16:18:29 +12:00
Tom Lane 348778ddbc Make comment in fmgr.h match the one in fmgr.c.
Incompletely quoting an API spec does nobody any good.  Noted by
Paul Jungwirth.  Looks like the discrepancy was my fault originally :-(

Discussion: https://postgr.es/m/CA+renyU_J8TU_d3Kr0PkuOgFbpypextendu7a+_d5NOfVdvDeA@mail.gmail.com
2019-08-26 14:32:48 -04:00
Peter Geoghegan 091bd6befc Update comments on nbtree stack struct.
Adjust the struct comment that describes how page splits use their
descent stack to cascade up the tree from the leaf level.

In passing, fix up some unrelated nbtree comments that had typos or were
obsolete.
2019-08-21 13:50:27 -07:00
Peter Eisentraut c45643d618 Remove configure detection of crypt()
crypt() hasn't been needed since crypt authentication was removed from
PostgreSQL, so these configure checks are not necessary.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/21f88934-f00c-27f6-a9d8-7ea06d317781%402ndquadrant.com
2019-08-21 21:36:54 +02:00
Alvaro Herrera 8f75e8e446 Fix typo
In early development patches, "replication origins" were called "identifiers";
almost everything was renamed, but these references to the old terminology
went unnoticed.

Reported-by: Craig Ringer
2019-08-21 11:12:44 -04:00
Andres Freund 4c01a11103 Add fmgr.h include to selfuncs.h.
Necessary after fb3b098f. That previously escaped notice, because all
including sites already included fmgr.h some other way.

Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/17463.1566153454@sss.pgh.pa.us
2019-08-19 12:51:38 -07:00
Michael Paquier c96581abe4 Fix inconsistencies and typos in the tree, take 11
This fixes various typos in docs and comments, and removes some orphaned
definitions.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/5da8e325-c665-da95-21e0-c8a99ea61fbf@gmail.com
2019-08-19 16:21:39 +09:00
Tom Lane 927f34ce8a Avoid conflicts with library versions of inet_net_ntop() and friends.
Prefix inet_net_ntop and sibling routines with "pg_" to ensure that
they aren't mistaken for C-library functions.  This fixes warnings
from cpluspluscheck on some platforms, and should help reduce reader
confusion everywhere, since our functions aren't exactly interchangeable
with the library versions (they may have different ideas about address
family codes).

This shouldn't be fixing any actual bugs, unless somebody's linker
is misbehaving, so no need to back-patch.

Discussion: https://postgr.es/m/20518.1559494394@sss.pgh.pa.us
2019-08-18 19:27:23 -04:00
Tom Lane 232720be9b Fix incidental warnings from cpluspluscheck.
Remove use of "register" keyword in hashfn.c.  It's obsolescent
according to recent C++ compilers, and no modern C compiler pays
much attention to it either.

Also fix one cosmetic warning about signed vs unsigned comparison.

Discussion: https://postgr.es/m/20518.1559494394@sss.pgh.pa.us
2019-08-18 19:01:40 -04:00
Peter Eisentraut d78d452bc5 Improve Assert output
If an assertion expression contained a macro, the failed assertion
message would print the expanded macro, which is usually unhelpful and
confusing.  Restructure the Assert macros to not expand any macros
when constructing the failure message.
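
A sketch of the technique with a simplified reporting call: "#condition"
stringifies the argument exactly as the caller wrote it, so neither macro
expansion nor the inversion leaks into the message:

    #define Assert(condition) \
        do { \
            if (!(condition)) \
                ExceptionalCondition(#condition, "FailedAssertion", \
                                     __FILE__, __LINE__); \
        } while (0)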

This also fixes the fact that the existing output for Assert et al. showed
the *inverted* condition, which is also confusing and not how
assertions usually work.

Discussion: https://www.postgresql.org/message-id/flat/6c68efe3-117a-dcc1-73d4-18ba1ec532e2%402ndquadrant.com
2019-08-17 12:50:50 +02:00
Andres Freund fb3b098fe8 Remove fmgr.h includes from headers that don't really need it.
Most of the fmgr.h includes were obsoleted by 352a24a1f9. A
few others can be obsoleted using the underlying struct type in an
implementation detail.

Author: Andres Freund
Discussion: https://postgr.es/m/20190803193733.g3l3x3o42uv4qj7l@alap3.anarazel.de
2019-08-16 10:35:31 -07:00
Andres Freund 6a04d345fd Don't include utils/array.h from acl.h.
For most uses of acl.h the details of what "Acl" internally looks like
are irrelevant. It might make sense to move a lot of the
implementation details into a separate header at a later point.

The main motivation of this change is to avoid including fmgr.h (via
array.h, which needs it for exposed structs) in a lot of files that
otherwise don't need it. A subsequent commit will remove the fmgr.h
include from a lot of files.

Directly include utils/array.h and utils/expandeddatum.h in the
files that need them but previously got them only indirectly via
acl.h.

Author: Andres Freund
Discussion: https://postgr.es/m/20190803193733.g3l3x3o42uv4qj7l@alap3.anarazel.de
2019-08-16 10:33:30 -07:00
Andres Freund 0ae2dc4db2 Remove redundant prototypes for SQL callable functions.
These aren't needed after 352a24a1f9. The remaining prototypes are
not defined on the SQL level.

Author: Andres Freund
Discussion: https://postgr.es/m/20190803193733.g3l3x3o42uv4qj7l@alap3.anarazel.de
2019-08-16 10:17:32 -07:00
Peter Geoghegan 9c02cf5661 Remove block number field from nbtree stack.
The initial value of the nbtree stack downlink block number field
recorded during an initial descent of the tree wasn't actually used.
Both _bt_getstackbuf() callers overwrote the value with their own value.

Remove the block number field from the stack struct, and add a child
block number argument to _bt_getstackbuf() in its place.  This makes the
overall design of _bt_getstackbuf() clearer.

Author: Peter Geoghegan
Reviewed-By: Anastasia Lubennikova
Discussion: https://postgr.es/m/CAH2-Wzmx+UbXt2YNOUCZ-a04VdXU=S=OHuAuD7Z8uQq-PXTYUg@mail.gmail.com
2019-08-14 11:32:35 -07:00
Michael Paquier 66bde49d96 Fix inconsistencies and typos in the tree, take 10
This addresses some issues with unnecessary code comments, fixes various
typos in docs and comments, and removes some orphaned structures and
definitions.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/9aabc775-5494-b372-8bcb-4dfc0bd37c68@gmail.com
2019-08-13 13:53:41 +09:00
Tom Lane 3c926587b5 Remove EState.es_range_table_array.
Now that list_nth is O(1), there's no good reason to maintain a
separate array of RTE pointers rather than indexing into
estate->es_range_table.  Deleting the array doesn't save all that
much either; but just on cleanliness grounds, it's better not to
have duplicate representations of the identical information.

Discussion: https://postgr.es/m/14960.1565384592@sss.pgh.pa.us
2019-08-12 11:58:35 -04:00
Tom Lane 5ee190f8ec Rationalize use of list_concat + list_copy combinations.
In the wake of commit 1cff1b95a, the result of list_concat no longer
shares the ListCells of the second input.  Therefore, we can replace
"list_concat(x, list_copy(y))" with just "list_concat(x, y)".

To improve call sites that were list_copy'ing the first argument,
or both arguments, invent "list_concat_copy()" which produces a new
list sharing no ListCells with either input.  (This is a bit faster
than "list_concat(list_copy(x), y)" because it makes the result list
the right size to start with.)

In call sites that were not list_copy'ing the second argument, the new
semantics mean that we are usually leaking the second List's storage,
since typically there is no remaining pointer to it.  We considered
inventing another list_copy variant that would list_free the second
input, but concluded that for most call sites it isn't worth worrying
about, given the relative compactness of the new List representation.
(Note that in cases where such leakage would happen, the old code
already leaked the second List's header; so we're only discussing
the size of the leak not whether there is one.  I did adjust two or
three places that had been troubling to free that header so that
they manually free the whole second List.)

Patch by me; thanks to David Rowley for review.

Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
2019-08-12 11:20:18 -04:00
Tom Lane 1661a40505 Cosmetic improvements in setup of planner's per-RTE arrays.
Merge setup_append_rel_array into setup_simple_rel_arrays.  There's no
particularly good reason to keep them separate, and it's inconsistent
with the lack of separation in expand_planner_arrays.  The only apparent
benefit was that the fast path for trivial queries in query_planner()
doesn't need to set up the append_rel_array; but all we're saving there
is an if-test and NULL assignment, which surely ought to be negligible.

Also improve some obsolete comments.

Discussion: https://postgr.es/m/17220.1565301350@sss.pgh.pa.us
2019-08-09 12:33:43 -04:00
Michael Paquier b8f2da0ac5 Refactor logic to remove trailing CR/LF characters from strings
b654714 has reworked the way trailing CR/LF characters are removed from
strings.  This commit introduces a new routine in common/string.c and
refactors the code so that the logic lives in a single place, mostly.

Author: Michael Paquier
Reviewed-by: Bruce Momjian
Discussion: https://postgr.es/m/20190801031820.GF29334@paquier.xyz
2019-08-09 11:05:14 +09:00
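
A minimal standalone sketch of the kind of helper the commit above centralizes, trimming trailing CR/LF in place; the function name is illustrative and not necessarily the routine added to common/string.c.

#include <stdio.h>
#include <string.h>

/* Trim trailing carriage returns and newlines in place; return new length. */
static int strip_trailing_crlf(char *str)
{
    int len = (int) strlen(str);

    while (len > 0 && (str[len - 1] == '\n' || str[len - 1] == '\r'))
        str[--len] = '\0';
    return len;
}

int main(void)
{
    char line[] = "SELECT 1;\r\n";

    printf("%d [%s]\n", strip_trailing_crlf(line), line);  /* 9 [SELECT 1;] */
    return 0;
}
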
Peter Geoghegan 98eab30b93 Show specific OID suggestion in unused_oids output.
Commit a6417078 established a new project policy around OID assignment:
new patches are encouraged to choose a random OID in the 8000..9999
range when a manually-assigned OID is required (if multiple OIDs are
required, a consecutive block of OIDs starting from the random point
should be used).  Catalog entries added by committed patches that use
OIDs from this "unstable" range are renumbered after feature freeze.
This practice minimizes OID collisions among concurrently-developed
patches.

Show a specific random OID suggestion when the unused_oids script is
run.  This makes it easy for patch authors to use a random OID from the
unstable range, per the new policy.

Author: Julien Rouhaud, Peter Geoghegan
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/CAH2-WzkkRs2ScmuBQ7xWi7xzp7fC1B3w0Nt8X+n4rBw5k+Z=zA@mail.gmail.com
2019-08-05 11:47:34 -07:00
Noah Misch ffa2d37e5f Require the schema qualification in pg_temp.type_name(arg).
Commit aa27977fe2 introduced this
restriction for pg_temp.function_name(arg); do likewise for types
created in temporary schemas.  Programs that this breaks should add
"pg_temp." schema qualification or switch to arg::type_name syntax.
Back-patch to 9.4 (all supported versions).

Reviewed by Tom Lane.  Reported by Tom Lane.

Security: CVE-2019-10208
2019-08-05 07:48:41 -07:00
Michael Paquier 8548ddc61b Fix inconsistencies and typos in the tree, take 9
This addresses more issues with code comments, variable names and
unreferenced variables.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/7ab243e0-116d-3e44-d120-76b3df7abefd@gmail.com
2019-08-05 12:14:58 +09:00
Tomas Vondra 75506195da Revert "Add log_statement_sample_rate parameter"
This reverts commit 88bdbd3f74.

As committed, statement sampling used the existing duration threshold
(log_min_duration_statement) when deciding which statements to sample.
The issue is that even the longest statements are subject to sampling,
and so may not end up logged. An improvement was proposed, introducing
a second duration threshold, but it would not be backwards compatible.
So we've decided to revert this feature - the separate threshold should
be part of the feature itself.

Discussion: https://postgr.es/m/CAFj8pRDS8tQ3Wviw9%3DAvODyUciPSrGeMhJi_WPE%2BEB8%2B4gLL-Q%40mail.gmail.com
2019-08-04 23:38:27 +02:00
Alvaro Herrera 489247b0e6 Improve pruning of a default partition
When querying a partitioned table containing a default partition, we
were wrongly deciding to include it in the scan too early in the
process, failing to exclude it in some cases.  If we reinterpret the
PruneStepResult.scan_default flag slightly, we can do a better job at
detecting that it can be excluded.  The change is that we avoid setting
the flag for that pruning step unless the step absolutely requires the
default partition to be scanned (in contrast with the previous
arrangement, which was to set it unless the step was able to prune it).
So get_matching_partitions() must explicitly check the partition that
each returned bound value corresponds to in order to determine whether
the default one needs to be included, rather than relying on the flag
from the final step result.

Author: Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp>
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Discussion: https://postgr.es/m/00e601d4ca86$932b8bc0$b982a340$@lab.ntt.co.jp
2019-08-04 11:18:45 -04:00
Andres Freund 2abd7ae9b2 Fix representation of hash keys in Hash/HashJoin nodes.
In 5f32b29c18 I changed the creation of HashState.hashkeys to
actually use HashState as the parent (instead of HashJoinState, which
was incorrect, as they were executed below HashState), to fix the
problem of hashkeys expressions otherwise relying on slot types
appropriate for HashJoinState, rather than HashState as would be
correct. That reliance was only introduced in 12, which is why it
previously worked to use HashJoinState as the parent (although I'd be
unsurprised if there were problematic cases).

Unfortunately that's not a sufficient solution, because before this
commit, the to-be-hashed expressions referenced inner/outer as
appropriate for the HashJoin, not Hash. That didn't have obvious bad
consequences, because the slots containing the tuples were put into
ecxt_innertuple when hashing a tuple for HashState (even though Hash
doesn't have an inner plan).

There are less common cases where this can cause visible problems
however (rather than just confusion when inspecting such executor
trees). E.g. "ERROR: bogus varno: 65000", when explaining queries
containing a HashJoin where the subsidiary Hash node's hash keys
reference a subplan. While normally hashkeys aren't displayed by
EXPLAIN, if one of those expressions references a subplan, that
subplan may be printed as part of the Hash node - which then failed
because an inner plan was referenced, and Hash doesn't have that.

It seems quite possible that there's other broken cases, too.

Fix the problem by properly splitting the expression for the HashJoin
and Hash nodes at plan time, and have them reference the proper
subsidiary node. While other workarounds are possible, fixing this
correctly seems easy enough. It was a pretty ugly hack to have
ExecInitHashJoin put the expression into the already initialized
HashState, in the first place.

I decided to not just split inner/outer hashkeys inside
make_hashjoin(), but also to separate out hashoperators and
hashcollations at plan time. Otherwise we would have ended up having
two very similar loops, one at plan time and the other during executor
startup. The work seems to more appropriately belong to plan time,
anyway.

Reported-By: Nikita Glukhov, Alexander Korotkov
Author: Andres Freund
Reviewed-By: Tom Lane, in an earlier version
Discussion: https://postgr.es/m/CAPpHfdvGVegF_TKKRiBrSmatJL2dR9uwFCuR+teQ_8tEXU8mxg@mail.gmail.com
Backpatch: 12-
2019-08-02 00:02:46 -07:00
Jeff Davis 6ae4e8eae7 Allow simplehash to use already-calculated hash values.
Add _lookup_hash and _insert_hash functions for callers that have
already calculated the hash value of the key.

The immediate use case is for hash algorithms that write to disk in
partitions. The hash value can be calculated once, used to perform a
lookup, used to select the partition, then written to the partition
along with the tuple. When the tuple is read back, the hash value does
not need to be recalculated.

Author: Jeff Davis
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/48abe675e1330f0c264ab2fe0d4ff23eb244f9ef.camel%40j-davis.com
2019-08-01 16:05:49 -07:00
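
The calling pattern the commit above enables can be sketched standalone in C: hash the key once, use the value to pick a partition, and pass the same value to the _insert_hash/_lookup_hash style entry points so it never has to be recomputed.  The toy open-addressing table and all names below are illustrative, not simplehash itself.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABSIZE 64              /* power of two; small demo table, never filled */
#define NPARTITIONS 4

typedef struct
{
    bool     used[TABSIZE];
    uint32_t keys[TABSIZE];
} hashtab;

/* cheap multiplicative hash, standing in for the real hash function */
static uint32_t hash_u32(uint32_t k) { return k * 2654435761u; }

/*
 * Insert variant that takes an already-computed hash value.  A conventional
 * tab_insert(t, key) would simply call this with hash_u32(key).
 */
static void tab_insert_hash(hashtab *t, uint32_t key, uint32_t hash)
{
    for (uint32_t i = hash & (TABSIZE - 1);; i = (i + 1) & (TABSIZE - 1))
    {
        if (!t->used[i]) { t->used[i] = true; t->keys[i] = key; return; }
        if (t->keys[i] == key) return;          /* already present */
    }
}

/* Lookup variant that likewise reuses a caller-supplied hash value. */
static bool tab_lookup_hash(const hashtab *t, uint32_t key, uint32_t hash)
{
    for (uint32_t i = hash & (TABSIZE - 1); t->used[i]; i = (i + 1) & (TABSIZE - 1))
        if (t->keys[i] == key)
            return true;
    return false;
}

int main(void)
{
    hashtab parts[NPARTITIONS];

    memset(parts, 0, sizeof(parts));

    for (uint32_t key = 1; key <= 10; key++)
    {
        uint32_t h = hash_u32(key);          /* hash once ...                    */
        uint32_t p = h % NPARTITIONS;        /* ... reuse it for the partition   */

        tab_insert_hash(&parts[p], key, h);  /* ... and for placement, no rehash */
    }

    uint32_t probe = 7;
    uint32_t h = hash_u32(probe);

    printf("key %u in partition %u: %s\n",
           (unsigned) probe, (unsigned) (h % NPARTITIONS),
           tab_lookup_hash(&parts[h % NPARTITIONS], probe, h) ? "found" : "missing");
    return 0;
}
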
Tom Lane 7266d0997d Allow functions-in-FROM to be pulled up if they reduce to constants.
This allows simplification of the plan tree in some common usage
patterns: we can get rid of a join to the function RTE.

In principle we could pull up any immutable expression, but restricting
it to Consts avoids the risk that multiple evaluations of the expression
might cost more than we can save.  (Possibly this could be improved in
future --- but we've more or less promised people that putting a function
in FROM guarantees single evaluation, so we'd have to tread carefully.)

To do this, we need to rearrange when eval_const_expressions()
happens for expressions in function RTEs.  I moved it to
inline_set_returning_functions(), which already has to iterate over
every function RTE, and in consequence renamed that function to
preprocess_function_rtes().  A useful consequence is that
inline_set_returning_function() no longer has to do this for itself,
simplifying that code.

In passing, break out pull_up_simple_subquery's code that knows where
everything that needs pullup_replace_vars() processing is, so that
the new pull_up_constant_function() routine can share it.  We'd
gotten away with one-and-a-half copies of that code so far, since
pull_up_simple_values() could assume that a lot of cases didn't apply
to it --- but I don't think pull_up_constant_function() can make any
simplifying assumptions.  Might as well make pull_up_simple_values()
use it too.

(Possibly this refactoring should go further: maybe we could share
some of the code to fill in the pullup_replace_vars_context struct?
For now, I left it that the callers fill that completely.)

Note: the one existing test case that this patch changes has to be
changed because inlining its function RTEs would destroy the point
of the test, namely to check join order.

Alexander Kuzmenkov and Aleksandr Parfenov, reviewed by
Antonin Houska and Anastasia Lubennikova, and whacked around
some more by me

Discussion: https://postgr.es/m/402356c32eeb93d4fed01f66d6c7fe2d@postgrespro.ru
2019-08-01 18:50:22 -04:00
Peter Geoghegan a8d6a95eb9 Bump catversion.
Oversight in commit 71dcd743.
2019-08-01 12:29:19 -07:00
Peter Geoghegan 71dcd74386 Add sort support routine for the inet data type.
Add sort support for inet, including support for abbreviated keys.
Testing has shown that this reduces the time taken to sort medium to
large inet/cidr inputs by ~50-60% in realistic cases.

Author: Brandur Leach
Reviewed-By: Peter Geoghegan, Edmund Horner
Discussion: https://postgr.es/m/CABR_9B-PQ8o2MZNJ88wo6r-NxW2EFG70M96Wmcgf99G6HUQ3sw@mail.gmail.com
2019-08-01 09:34:14 -07:00
Tom Lane 4886da8327 Mark advisory-lock functions as parallel restricted, not parallel unsafe.
There seems no good reason not to allow a parallel leader to execute
these functions.  (The workers still can't, though.  Although the code
would work, any such lock would go away at worker exit, which is not
the documented behavior of advisory locks.)

Discussion: https://postgr.es/m/11847.1564496844@sss.pgh.pa.us
2019-08-01 11:36:21 -04:00
Andres Freund 870b1d6800 Remove superfluous newlines in function prototypes.
These were introduced by pgindent due to a fix to broken
indentation (c.f. 8255c7a5ee). Previously the mis-indentation of
function prototypes was creatively used to reduce indentation in a few
places.

As that formatting only exists in master and REL_12_STABLE, it seems
better to fix it in both, rather than having some odd indentation in
v12 that somebody might copy for future patches or such.

Author: Andres Freund
Discussion: https://postgr.es/m/20190728013754.jwcbe5nfyt3533vx@alap3.anarazel.de
Backpatch: 12-
2019-07-31 00:05:21 -07:00
Heikki Linnakangas a29834beb1 Allow table AM's to use rd_amcache, too.
The rd_amcache allows an index AM to cache arbitrary information in a
relcache entry. This commit moves the cleanup of rd_amcache so that it
can also be used by table AMs. Nothing takes advantage of that yet, but
I'm sure it'll come in handy for anyone writing new table AMs.

Backpatch to v12, where table AM interface was introduced.

Reviewed-by: Julien Rouhaud
2019-07-30 21:43:27 +03:00
Michael Paquier 04cf0bfc90 Fix memory leak coming from simple lists built in reindexdb
When building a list of relations for a parallel processing of a schema
or a database (or just a single-entry list for the non-parallel case
with the database name), the list is allocated and built on-the-fly for
each database processed, leaking after one database-level reindex is
done.  This accumulates leaks when processing all databases, and could
become a visible issue with thousands of relations.

This is fixed by introducing a new routine in simple_list.c to free all
the elements in a simple list made of strings or OIDs.  The list
header may be a plain variable or an allocated pointer, so, to keep
the interface simple, no routine is provided to free that part.

Per report from coverity for an issue introduced by 5ab892c, and
valgrind complains about the leak as well.  The idea to introduce a new
routine in simple_list.c is from Tom Lane.

Author: Michael Paquier
Reviewed-by: Tom Lane
2019-07-30 10:54:48 +09:00
Michael Paquier eb43f3d193 Fix inconsistencies and typos in the tree
This is numbered take 8, and addresses again a set of issues with code
comments, variable names and unreferenced variables.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/b137b5eb-9c95-9c2f-586e-38aba7d59788@gmail.com
2019-07-29 12:28:30 +09:00
Michael Paquier 7cce159349 Fix handling of expressions and predicates in REINDEX CONCURRENTLY
When copying the definition of an index rebuilt concurrently for the new
entry, the index information was taken directly from the old index using
the relation cache.  In this case, predicates and expressions have
some post-processing to prepare things for the planner, which loses some
information including the collations added in any of them.

This inconsistency can cause issues when attempting for example a table
rewrite, and makes the new indexes rebuilt concurrently inconsistent
with the old entries.

In order to fix the problem, fetch expressions and predicates directly
from the catalog of the old entry, and fill in IndexInfo for the new
index with that.  This makes the process more consistent with
DefineIndex(), and the code is refactored with the addition of a routine
to create an IndexInfo node.

Reported-by: Manuel Rigger
Author: Michael Paquier
Discussion: https://postgr.es/m/CA+u7OA5Hp0ra235F3czPom_FyAd-3+XwSJmX95r1+sRPOJc9VQ@mail.gmail.com
Backpatch-through: 12
2019-07-29 09:58:49 +09:00
Heikki Linnakangas 6655a7299d Use full 64-bit XID for checking if a deleted GiST page is old enough.
Otherwise, after a deleted page gets even older, it becomes unrecyclable
again. B-tree has the same problem, and has had since time immemorial,
but let's at least fix this in GiST, where this is new.

Backpatch to v12, where GiST page deletion was introduced.

Reviewed-by: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
2019-07-24 20:24:07 +03:00
Tom Lane a0555ddab9 Install dependencies to prevent dropping partition key columns.
The logic in ATExecDropColumn that rejects dropping partition key
columns is quite an inadequate defense, because it doesn't execute
in cases where a column needs to be dropped due to cascade from
something that only the column, not the whole partitioned table,
depends on.  That leaves us with a badly broken partitioned table;
even an attempt to load its relcache entry will fail.

We really need to have explicit pg_depend entries that show that the
column can't be dropped without dropping the whole table.  Hence,
add those entries.  In v12 and HEAD, bump catversion to ensure that
partitioned tables will have such entries.  We can't do that in
released branches of course, so in v10 and v11 this patch affords
protection only to partitioned tables created after the patch is
installed.  Given the lack of field complaints (this bug was found
by fuzz-testing not by end users), that's probably good enough.

In passing, fix ATExecDropColumn and ATPrepAlterColumnType
messages to be more specific about which partition key column
they're complaining about.

Per report from Manuel Rigger.  Back-patch to v10 where partitioned
tables were added.

Discussion: https://postgr.es/m/CA+u7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg@mail.gmail.com
Discussion: https://postgr.es/m/31920.1562526703@sss.pgh.pa.us
2019-07-22 14:55:40 -04:00
Peter Eisentraut 7961886580 Revert "initdb: Change authentication defaults"
This reverts commit 09f08930f0.

The buildfarm client needs some adjustments first.
2019-07-22 19:28:25 +02:00
Peter Eisentraut 09f08930f0 initdb: Change authentication defaults
Change the defaults for the pg_hba.conf generated by initdb to "peer"
for local (if supported, else "md5") and "md5" for host.

(Changing from "md5" to SCRAM is left as a separate exercise.)

"peer" is currently not supported on AIX, HP-UX, and Windows.  Users
on those operating systems will now either have to provide a password
to initdb or choose a different authentication method when running
initdb.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/bec17f0a-ddb1-8b95-5e69-368d9d0a3390%40postgresql.org
2019-07-22 15:14:27 +02:00
Peter Eisentraut 19781729f7 Make identity sequence management more robust
Some code could get confused when certain catalog state involving both
identity and serial sequences was present, perhaps during an attempt
to upgrade the latter to the former.  Specifically, dropping the
default of a serial column maintains the ownership of the sequence by
the column, and so it would then be possible to afterwards make the
column an identity column that would now own two sequences.  This
causes the code that looks up the identity sequence to error out,
making the new identity column inoperable until the ownership of the
previous sequence is released.

To fix this, make the identity sequence lookup only consider sequences
with the appropriate dependency type for an identity sequence, so it
only ever finds one (unless something else is broken).  In the above
example, the old serial sequence would then be ignored.  Reorganize
the various owned-sequence-lookup functions a bit to make this
clearer.

Reported-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/470c54fc8590be4de0f41b0d295fd6390d5e8a6c.camel@cybertec.at
2019-07-22 12:07:10 +02:00
Michael Paquier 23bccc823d Fix inconsistencies and typos in the tree
This is numbered take 7, and addresses a set of issues with code
comments, variable names and unreferenced variables.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/dff75442-2468-f74f-568c-6006e141062f@gmail.com
2019-07-22 10:01:50 +09:00
David Rowley 3373c71553 Speed up finding EquivalenceClasses for a given set of rels
Previously in order to determine which ECs a relation had members in, we
had to loop over all ECs stored in PlannerInfo's eq_classes and check if
ec_relids mentioned the relation.  For the most part, this was fine, as
generally, unless queries were fairly complex, the overhead of performing
the lookup would have not been that significant.  However, when queries
contained large numbers of joins and ECs, the overhead to find the set of
classes matching a given set of relations could become a significant
portion of the overall planning effort.

Here we allow a much more efficient method to access the ECs which match a
given relation or set of relations.  A new Bitmapset field in RelOptInfo
now exists to store the indexes into PlannerInfo's eq_classes list which
each relation is mentioned in.  This allows very fast lookups to find all
ECs belonging to a single relation.  When we need to lookup ECs belonging
to a given pair of relations, we can simply bitwise-AND the Bitmapsets from
each relation and use the result to perform the lookup.

We also take the opportunity to write a new implementation of
generate_join_implied_equalities which makes use of the new indexes.
generate_join_implied_equalities_for_ecs must remain as is as it can be
given a custom list of ECs, which we can't easily determine the indexes of.

This was originally intended to fix the performance penalty of looking up
foreign keys matching a join condition which was introduced by 100340e2d.
However, we're speeding up much more than just that here.

Author: David Rowley, Tom Lane
Reviewed-by: Tom Lane, Tomas Vondra
Discussion: https://postgr.es/m/6970.1545327857@sss.pgh.pa.us
2019-07-21 17:30:58 +12:00
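
The lookup idea described above, sketched standalone with a plain 64-bit mask standing in for a Bitmapset: each relation records which equivalence classes mention it, and the classes shared by two relations fall out of a single bitwise AND.  Names are illustrative.

#include <stdint.h>
#include <stdio.h>

#define MAX_ECLASSES 64

/* Print the eq_classes indexes mentioned by both relations. */
static void print_common_ecs(uint64_t rel_a_ecs, uint64_t rel_b_ecs)
{
    uint64_t common = rel_a_ecs & rel_b_ecs;   /* classes mentioning both rels */

    for (int i = 0; i < MAX_ECLASSES; i++)
        if (common & (UINT64_C(1) << i))
            printf("eq_classes[%d] joins the two relations\n", i);
}

int main(void)
{
    uint64_t rel1 = (1u << 0) | (1u << 3) | (1u << 7);  /* rel1 appears in ECs 0, 3, 7 */
    uint64_t rel2 = (1u << 3) | (1u << 7) | (1u << 9);  /* rel2 appears in ECs 3, 7, 9 */

    print_common_ecs(rel1, rel2);   /* prints ECs 3 and 7 */
    return 0;
}
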
Tomas Vondra e38a55ba46 Rework examine_opclause_expression to use varonleft
The examine_opclause_expression function needs to return information on
which side of the operator we found the Var, but the variable was called
"isgt" which is rather misleading (it assumes the operator is either
less-than or greater-than, but it may be equality or something else).
Other places in the planner use a variable called "varonleft" for this
purpose, so just adopt the same convention here.

The code also assumed we don't care about this flag for equality, as
(Var = Const) and (Const = Var) should be the same thing. But that does
not work for cross-type operators, in which case we need to pass the
parameters to the procedure in the right order. So just use the same
code for all types of expressions.

This means we don't need to care about the selectivity estimation
function anymore, at least not in this code. We should only get the
supported cases here (thanks to statext_is_compatible_clause).

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-20 16:37:30 +02:00
Peter Geoghegan d004147eb3 Fix nbtree metapage cache upgrade bug.
Commit 857f9c36cd, which taught nbtree VACUUM to avoid unnecessary
index scans, bumped the nbtree version number from 2 to 3, while adding
the ability for nbtree indexes to be upgraded on-the-fly.  Various
assertions that assumed that an nbtree index was always on version 2 had
to be changed to accept any supported version (version 2 or 3 on
Postgres 11).

However, a few assertions were missed in the initial commit, all of
which were in code paths that cache a local copy of the metapage
metadata, where the index had been expected to be on the current version
(no longer version 2) as a generic sanity check.  Rather than simply
update the assertions, follow-up commit 0a64b45152 intentionally made
the metapage caching code update the per-backend cached metadata version
without changing the on-disk version at the same time.  This could even
happen when the planner needed to determine the height of a B-Tree for
costing purposes.  The assertions only fail on Postgres v12 when
upgrading from v10, because they were adjusted to use the authoritative
shared memory metapage by v12's commit dd299df8.

To fix, remove the cache-only upgrade mechanism entirely, and update the
assertions themselves to accept any supported version (go back to using
the cached version in v12).  The fix is almost a full revert of commit
0a64b45152 on the v11 branch.

VACUUM only considers the authoritative metapage, and never bothers with
a locally cached version, whereas everywhere else isn't interested in
the metapage fields that were added by commit 857f9c36cd.  It seems
unlikely that this bug has affected any user on v11.

Reported-By: Christoph Berg
Bug: #15896
Discussion: https://postgr.es/m/15896-5b25e260fdb0b081%40postgresql.org
Backpatch: 11-, where VACUUM was taught to avoid unnecessary index scans.
2019-07-18 13:22:56 -07:00
Tom Lane bc8393cf27 Further adjust SPITupleTable to provide a public row-count field.
Now that commit fec0778c8 drew a clear line between public and private
fields in SPITupleTable, it seems pretty silly that the count of valid
tuples isn't on the public side of that line.  The reason why not was
that there wasn't such a count.  For reasons lost in the mists of time,
spi.c preferred to keep a count of remaining free entries in the array.
But that seems pretty pointless: it's unlike the way we handle similar
code everywhere else, and it involves extra subtractions that surely
outweigh having to do a comparison rather than test-for-zero to check
for array-full.

Hence, rearrange so that this code does the expansible array logic
the same as everywhere else, with a count of valid entries alongside
the allocated array length.  And document the count as public.

I looked for core-code callers where it would make sense to start
relying on tuptable->numvals rather than the separate SPI_processed
variable.  Right now there don't seem to be places where it'd be
a win to do so without more code restructuring than I care to
undertake today.  In principle, though, having SPITupleTables be
fully self-contained should be helpful down the line.

Discussion: https://postgr.es/m/16852.1563395722@sss.pgh.pa.us
2019-07-18 10:37:13 -04:00
Tomas Vondra e8b6ae2130 Fix handling of opclauses in extended statistics
We expect opclauses to have exactly one Var and one Const, but the code
was checking the Const by calling is_pseudo_constant_clause() which is
incorrect - we need a proper constant.

Fixed by using plain IsA(x,Const) to check type of the node. We need to
do these checks in two places, so move it into a separate function that
can be called in both places.

Reported by Andreas Seltenreich, based on crash reported by sqlsmith.

Backpatch to v12, where this code was introduced.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-18 11:29:38 +02:00
Andres Freund 21039555cd tableam: comment improvements.
Author: Brad DeJong
Discussion: https://postgr.es/m/CAJnrtnxDYOQFsDfWz2iri0T_fFL2ZbbzgCOE=4yaMcszgcsf4A@mail.gmail.com
Backpatch: 12-
2019-07-17 19:48:47 -07:00
Tom Lane fec0778c80 Clarify the distinction between public and private SPITupleTable fields.
The fields that we consider public are "tupdesc" and "vals", which
historically are in the middle of the struct.  Move them to the front
(this should be perfectly safe to do in HEAD) and add comments to make
it quite clear which fields are public or not.

Also adjust spi.sgml's documentation of the struct to match.
That doc had bit-rotted somewhat, as it was missing some fields.
(Arguably we should just remove all the private fields from the docs,
but for now I refrained.)

Daniel Gustafsson, reviewed by Fabien Coelho

Discussion: https://postgr.es/m/0D19F836-B743-4340-B6A2-F148CA3DD1F0@yesql.se
2019-07-17 14:55:13 -04:00
Tom Lane d97b714a21 Avoid using lcons and list_delete_first where it's easy to do so.
Formerly, lcons was about the same speed as lappend, but with the new
List implementation, that's not so; with a long List, data movement
imposes an O(N) cost on lcons and list_delete_first, but not lappend.

Hence, invent list_delete_last with semantics parallel to
list_delete_first (but O(1) cost), and change various places to use
lappend and list_delete_last where this can be done without much
violence to the code logic.

There are quite a few places that construct result lists using lcons not
lappend.  Some have semantic rationales for that; I added comments about
it to a couple that didn't have them already.  In many such places though,
I think the coding is that way only because back in the dark ages lcons
was faster than lappend.  Hence, switch to lappend where this can be done
without causing semantic changes.

In ExecInitExprRec(), this results in aggregates and window functions that
are in the same plan node being executed in a different order than before.
Generally, the executions of such functions ought to be independent of
each other, so this shouldn't result in visibly different query results.
But if you push it, as one regression test case does, you can show that
the order is different.  The new order seems saner; it's closer to
the order of the functions in the query text.  And we never documented
or promised anything about this, anyway.

Also, in gistfinishsplit(), don't bother building a reverse-order list;
it's easy now to iterate backwards through the original list.

It'd be possible to go further towards removing uses of lcons and
list_delete_first, but it'd require more extensive logic changes,
and I'm not convinced it's worth it.  Most of the remaining uses
deal with queues that probably never get long enough to be worth
sweating over.  (Actually, I doubt that any of the changes in this
patch will have measurable performance effects either.  But better
to have good examples than bad ones in the code base.)

Patch by me, thanks to David Rowley and Daniel Gustafsson for review.

Discussion: https://postgr.es/m/21272.1563318411@sss.pgh.pa.us
2019-07-17 11:15:34 -04:00
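
A tiny standalone illustration, under the assumptions above, of why a stack kept at the end of an array-backed list stays O(1) per operation, whereas pushing or popping at the front would move every element; the intlist type is a stand-in, not the real List.

#include <assert.h>
#include <stdio.h>

#define CAP 16

typedef struct { int items[CAP]; int len; } intlist;

/* O(1): no other element moves when we touch the end of the array. */
static void push_last(intlist *l, int v) { assert(l->len < CAP); l->items[l->len++] = v; }
static int  pop_last(intlist *l)         { assert(l->len > 0);  return l->items[--l->len]; }

int main(void)
{
    intlist stack = { .len = 0 };

    for (int i = 1; i <= 5; i++)
        push_last(&stack, i);
    while (stack.len > 0)
        printf("%d ", pop_last(&stack));   /* 5 4 3 2 1 */
    printf("\n");
    return 0;
}
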
Thomas Munro dfd0121dc7 Move some md.c-specific logic from smgr.c to md.c.
Potential future SMGR implementations may not want to create
tablespace directories when creating an SMGR relation.  Move that
logic to mdcreate().  Move the initialization of md-specific
data structures from smgropen() to a new callback mdopen().

Author: Thomas Munro
Reviewed-by: Shawn Debnath (as part of an earlier patch set)
Discussion: https://postgr.es/m/CA%2BhUKG%2BOZqOiOuDm5tC5DyQZtJ3FH4%2BFSVMqtdC4P1atpJ%2Bqhg%40mail.gmail.com
2019-07-17 15:00:22 +12:00
Tom Lane c245776906 Remove lappend_cell...() family of List functions.
It seems worth getting rid of these functions because they require the
caller to retain a ListCell pointer into a List that it's modifying,
which is a dangerous practice with the new List implementation.
(The only other List-modifying function that takes a ListCell pointer
as input is list_delete_cell, which nowadays is preferentially used
via the constrained API foreach_delete_current.)

There was only one remaining caller of these functions after commit
2f5b8eb5a, and that was some fairly ugly GEQO code that can be much
more clearly expressed using a list-index variable and list_insert_nth.
Hence, rewrite that code, and remove the functions.

Discussion: https://postgr.es/m/26193.1563228600@sss.pgh.pa.us
2019-07-16 13:12:24 -04:00
Tom Lane 2f5b8eb5a2 Clean up some ad-hoc code for sorting and de-duplicating Lists.
heap.c and relcache.c contained nearly identical copies of logic
to insert OIDs into an OID list while preserving the list's OID
ordering (and rejecting duplicates, in one case but not the other).

The comments argue that this is faster than qsort for small numbers
of OIDs, which is at best unproven, and seems even less likely to be
true now that lappend_cell_oid has to move data around.  In any case
it's ugly and hard-to-follow code, and if we do have a lot of OIDs
to consider, it's O(N^2).

Hence, replace with simply lappend'ing OIDs to a List, then list_sort
the completed List, then remove adjacent duplicates if necessary.
This is demonstrably O(N log N) and it's much simpler for the
callers.  It's possible that this would be somewhat inefficient
if there were a very large number of duplicates, but that seems
unlikely in the existing usage.

This adds list_deduplicate_oid and list_oid_cmp infrastructure
to list.c.  I didn't bother with equivalent functionality for
integer or pointer Lists, but such could always be added later
if we find a use for it.

Discussion: https://postgr.es/m/26193.1563228600@sss.pgh.pa.us
2019-07-16 12:04:06 -04:00
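
The append-then-sort-then-deduplicate approach adopted above, sketched standalone over a plain C array of OIDs; the helper names are illustrative, not list_sort/list_deduplicate_oid themselves.

#include <stdio.h>
#include <stdlib.h>

/* qsort comparator for unsigned ints standing in for OIDs */
static int cmp_oid(const void *a, const void *b)
{
    unsigned x = *(const unsigned *) a, y = *(const unsigned *) b;

    return (x > y) - (x < y);
}

/* Sort, then squeeze out adjacent duplicates in place; returns the new length. */
static int sort_and_dedup(unsigned *oids, int n)
{
    int out = 0;

    qsort(oids, n, sizeof(unsigned), cmp_oid);
    for (int i = 0; i < n; i++)
        if (out == 0 || oids[out - 1] != oids[i])
            oids[out++] = oids[i];
    return out;
}

int main(void)
{
    unsigned oids[] = {2619, 1259, 2619, 1247, 1259, 2615};
    int n = sort_and_dedup(oids, 6);

    for (int i = 0; i < n; i++)
        printf("%u ", oids[i]);   /* 1247 1259 2615 2619 */
    printf("\n");
    return 0;
}
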
Tom Lane 569ed7f483 Redesign the API for list sorting (list_qsort becomes list_sort).
In the wake of commit 1cff1b95a, the obvious way to sort a List
is to apply qsort() directly to the array of ListCells.  list_qsort
was building an intermediate array of pointers-to-ListCells, which
we no longer need, but getting rid of it forces an API change:
the comparator functions need to do one less level of indirection.

Since we're having to touch the callers anyway, let's do two additional
changes: sort the given list in-place rather than making a copy (as
none of the existing callers have any use for the copying behavior),
and rename list_qsort to list_sort.  It was argued that the old name
exposes more about the implementation than it should, which I find
pretty questionable, but a better reason to rename it is to be sure
we get the attention of any external callers about the need to fix
their comparator functions.

While we're at it, change four existing callers of qsort() to use
list_sort instead; previously, they all had local reinventions
of list_qsort, ie build-an-array-from-a-List-and-qsort-it.
(There are some other places where changing to list_sort perhaps
would be worthwhile, but they're less obviously wins.)

Discussion: https://postgr.es/m/29361.1563220190@sss.pgh.pa.us
2019-07-16 11:51:44 -04:00
Michael Paquier 0896ae561b Fix inconsistencies and typos in the tree
This is numbered take 7, and addresses a set of issues around:
- Fixes for typos and incorrect reference names.
- Removal of unneeded comments.
- Removal of unreferenced functions and structures.
- Fixes regarding variable name consistency.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/10bfd4ac-3e7c-40ab-2b2e-355ed15495e8@gmail.com
2019-07-16 13:23:53 +09:00
Tom Lane 1cff1b95ab Represent Lists as expansible arrays, not chains of cons-cells.
Originally, Postgres Lists were a more or less exact reimplementation of
Lisp lists, which consist of chains of separately-allocated cons cells,
each having a value and a next-cell link.  We'd hacked that once before
(commit d0b4399d8) to add a separate List header, but the data was still
in cons cells.  That makes some operations -- notably list_nth() -- O(N),
and it's bulky because of the next-cell pointers and per-cell palloc
overhead, and it's very cache-unfriendly if the cons cells end up
scattered around rather than being adjacent.

In this rewrite, we still have List headers, but the data is in a
resizable array of values, with no next-cell links.  Now we need at
most two palloc's per List, and often only one, since we can allocate
some values in the same palloc call as the List header.  (Of course,
extending an existing List may require repalloc's to enlarge the array.
But this involves just O(log N) allocations not O(N).)

Of course this is not without downsides.  The key difficulty is that
addition or deletion of a list entry may now cause other entries to
move, which it did not before.

For example, that breaks foreach() and sister macros, which historically
used a pointer to the current cons-cell as loop state.  We can repair
those macros transparently by making their actual loop state be an
integer list index; the exposed "ListCell *" pointer is no longer state
carried across loop iterations, but is just a derived value.  (In
practice, modern compilers can optimize things back to having just one
loop state value, at least for simple cases with inline loop bodies.)
In principle, this is a semantics change for cases where the loop body
inserts or deletes list entries ahead of the current loop index; but
I found no such cases in the Postgres code.

The change is not at all transparent for code that doesn't use foreach()
but chases lists "by hand" using lnext().  The largest share of such
code in the backend is in loops that were maintaining "prev" and "next"
variables in addition to the current-cell pointer, in order to delete
list cells efficiently using list_delete_cell().  However, we no longer
need a previous-cell pointer to delete a list cell efficiently.  Keeping
a next-cell pointer doesn't work, as explained above, but we can improve
matters by changing such code to use a regular foreach() loop and then
using the new macro foreach_delete_current() to delete the current cell.
(This macro knows how to update the associated foreach loop's state so
that no cells will be missed in the traversal.)

There remains a nontrivial risk of code assuming that a ListCell *
pointer will remain good over an operation that could now move the list
contents.  To help catch such errors, list.c can be compiled with a new
define symbol DEBUG_LIST_MEMORY_USAGE that forcibly moves list contents
whenever that could possibly happen.  This makes list operations
significantly more expensive so it's not normally turned on (though it
is on by default if USE_VALGRIND is on).

There are two notable API differences from the previous code:

* lnext() now requires the List's header pointer in addition to the
current cell's address.

* list_delete_cell() no longer requires a previous-cell argument.

These changes are somewhat unfortunate, but on the other hand code using
either function needs inspection to see if it is assuming anything
it shouldn't, so it's not all bad.

Programmers should be aware of these significant performance changes:

* list_nth() and related functions are now O(1); so there's no
major access-speed difference between a list and an array.

* Inserting or deleting a list element now takes time proportional to
the distance to the end of the list, due to moving the array elements.
(However, it typically *doesn't* require palloc or pfree, so except in
long lists it's probably still faster than before.)  Notably, lcons()
used to be about the same cost as lappend(), but that's no longer true
if the list is long.  Code that uses lcons() and list_delete_first()
to maintain a stack might usefully be rewritten to push and pop at the
end of the list rather than the beginning.

* There are now list_insert_nth...() and list_delete_nth...() functions
that add or remove a list cell identified by index.  These have the
data-movement penalty explained above, but there's no search penalty.

* list_concat() and variants now copy the second list's data into
storage belonging to the first list, so there is no longer any
sharing of cells between the input lists.  The second argument is
now declared "const List *" to reflect that it isn't changed.

This patch just does the minimum needed to get the new implementation
in place and fix bugs exposed by the regression tests.  As suggested
by the foregoing, there's a fair amount of followup work remaining to
do.

Also, the ENABLE_LIST_COMPAT macros are finally removed in this
commit.  Code using those should have been gone a dozen years ago.

Patch by me; thanks to David Rowley, Jesper Pedersen, and others
for review.

Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
2019-07-15 13:41:58 -04:00
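
A standalone sketch of the index-based iteration that the reworked foreach() and foreach_delete_current() build on: the loop state is an integer index, so deleting the current element just means stepping the index back after the tail shifts left.  The toy array list below is illustrative, not the real List implementation.

#include <stdio.h>
#include <string.h>

typedef struct { int items[16]; int len; } intlist;

/* Remove element n, shifting the tail of the array left by one. */
static void delete_nth(intlist *l, int n)
{
    memmove(&l->items[n], &l->items[n + 1], (l->len - n - 1) * sizeof(int));
    l->len--;
}

int main(void)
{
    intlist l = { .items = {1, 2, 3, 4, 5, 6}, .len = 6 };

    for (int i = 0; i < l.len; i++)
    {
        if (l.items[i] % 2 == 0)   /* drop even values while iterating */
        {
            delete_nth(&l, i);
            i--;                   /* re-check the element that slid into slot i */
        }
    }

    for (int i = 0; i < l.len; i++)
        printf("%d ", l.items[i]); /* 1 3 5 */
    printf("\n");
    return 0;
}
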
Thomas Munro 67b9b3ca32 Provide XLogRecGetFullXid().
In order to be able to work with FullTransactionId values during replay
without increasing the size of the WAL, infer the epoch.  In general we
can't do that safely, but during replay we can because we know that
nextFullXid can't advance concurrently.

Prevent frontend code from seeing this new function, due to the above
restriction.  Perhaps in future it will be possible to extract the value
entirely from independent WAL records, and then this restriction can be
lifted.

Author: Thomas Munro, based on earlier code from Andres Freund
Discussion: https://postgr.es/m/CA%2BhUKG%2BmLmuDjMi6o1dxkKvGRL56Y2Rz%2BiXAcrZV03G9ZuFQ8Q%40mail.gmail.com
2019-07-15 17:04:29 +12:00
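
The epoch inference described above can be sketched standalone: a WAL record's xid must be older than nextFullXid, so if its low 32 bits appear "newer", the record belongs to the previous epoch.  The helper below is illustrative, not the actual XLogRecGetFullXid().

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Reconstruct a 64-bit xid from a 32-bit WAL xid and the replay-time nextFullXid. */
static uint64_t infer_full_xid(uint32_t xid, uint64_t next_full_xid)
{
    uint32_t next_xid = (uint32_t) next_full_xid;
    uint32_t epoch = (uint32_t) (next_full_xid >> 32);

    if (xid > next_xid)
        epoch--;                    /* xid predates the current wraparound */

    return ((uint64_t) epoch << 32) | xid;
}

int main(void)
{
    uint64_t next = ((uint64_t) 5 << 32) | 100;  /* epoch 5, next xid 100 */

    printf("%" PRIu64 "\n", infer_full_xid(90, next));          /* stays in epoch 5 */
    printf("%" PRIu64 "\n", infer_full_xid(4000000000u, next)); /* falls back to epoch 4 */
    return 0;
}
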
Peter Eisentraut 5925e55498 Add gen_random_uuid function
This adds a built-in function to generate UUIDs.

PostgreSQL hasn't had a built-in function to generate a UUID yet,
relying on external modules such as uuid-ossp and pgcrypto to provide
one.  Now that we have a strong random number generator built-in, we
can easily provide a version 4 (random) UUID generation function.

This patch takes the existing function gen_random_uuid() from pgcrypto
and makes it a built-in function.  The pgcrypto implementation now
internally redirects to the built-in one.

Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/6a65610c-46fc-2323-6b78-e8086340a325@2ndquadrant.com
2019-07-14 14:30:27 +02:00
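
A standalone sketch of version-4 UUID construction as described above: 16 random bytes with the version nibble forced to 4 and the variant bits to 10.  rand() merely keeps the sketch self-contained; the built-in function draws its bytes from the strong random number generator.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    unsigned char u[16];

    srand((unsigned) time(NULL));
    for (int i = 0; i < 16; i++)
        u[i] = (unsigned char) (rand() & 0xff);

    u[6] = (u[6] & 0x0f) | 0x40;    /* version nibble = 4 */
    u[8] = (u[8] & 0x3f) | 0x80;    /* variant bits = 10xxxxxx */

    printf("%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
           "%02x%02x%02x%02x%02x%02x\n",
           u[0], u[1], u[2], u[3], u[4], u[5], u[6], u[7],
           u[8], u[9], u[10], u[11], u[12], u[13], u[14], u[15]);
    return 0;
}
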
Alexander Korotkov 565f339000 Forgotten catversion bump
6254c55f81, c085e1c1cb and 075f0a880f all change the system catalogs, but
the catversion bump was missed in all of them.  So, do the catversion bump now.

Also, mention patch reviewer Fabien Coelho, who was missed in the
commit messages of 6254c55f81, c085e1c1cb and 075f0a880f.
2019-07-14 15:22:21 +03:00
Alexander Korotkov 075f0a880f Add support for <-> (box, point) operator to SP-GiST box_ops
Opclass support functions can already handle this operator; only a catalog
adjustment appears to be required.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
2019-07-14 15:09:23 +03:00
Alexander Korotkov c085e1c1cb Add support for <-> (box, point) operator to GiST box_ops
Index-based calculation of this operator is exact.  So, the signature of the
gist_bbox_distance() function is changed so that the caller is responsible for
setting the *recheck flag.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
2019-07-14 15:09:15 +03:00
Alexander Korotkov 6254c55f81 Add missing commutators for distance operators
Some of the <-> operators between geometric types were missing their commutators.
This commit adds them.  The motivation is upcoming kNN support for some of those
operators.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
2019-07-14 14:55:01 +03:00
Thomas Munro 1321509fa4 Introduce timed waits for condition variables.
Provide ConditionVariableTimedSleep(), like ConditionVariableSleep()
but with a timeout argument.

Author: Shawn Debnath
Reviewed-by: Kyotaro Horiguchi, Thomas Munro
Discussion: https://postgr.es/m/eeb06007ccfe46e399df6af18bfcd15a@EX13D05UWC002.ant.amazon.com
2019-07-13 13:51:05 +12:00
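
The shape of a timed condition-variable wait, sketched with POSIX pthread_cond_timedwait() as an analogy rather than the ConditionVariable API itself: sleep until either the condition becomes true or the deadline passes, then fall back to other work.  Build with -pthread.

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Shared state guarded by the mutex; the "condition" the sleeper waits for. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static bool            ready = false;

int main(void)
{
    struct timespec deadline;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 1;                     /* wait at most one second */

    pthread_mutex_lock(&lock);
    while (!ready)
    {
        int rc = pthread_cond_timedwait(&cv, &lock, &deadline);

        if (rc == ETIMEDOUT)
        {
            printf("timed out, doing periodic work instead\n");
            break;
        }
    }
    pthread_mutex_unlock(&lock);
    return 0;
}
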
Amit Kapila bd56cd75d2 Fix few typos and minor wordsmithing in tableam comments.
Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
2019-07-10 07:52:51 +05:30
Robert Haas 554106b116 tableam: Provide helper functions for relation sizing.
Most block-based table AMs will need the exact same implementation of
the relation_size callback as the heap, and if they use a standard
page layout, they will likely need an implementation of the
relation_estimate_size callback that is very similar to that of the
heap.  Rearrange to facilitate code reuse.

Patch by me, reviewed by Michael Paquier, Daniel Gustafsson, and
Álvaro Herrera.

Discussion: http://postgr.es/m/CA+TgmoZ6DBPnP1E-vRpQZUJQijJFD54F+SR_pxGiAAS-MyrigA@mail.gmail.com
2019-07-08 14:51:53 -04:00
Michael Paquier 6b8548964b Fix inconsistencies in the code
This addresses a couple of issues in the code:
- Typos and inconsistencies in comments and function declarations.
- Removal of unreferenced function declarations.
- Removal of unnecessary compile flags.
- A cleanup error in regressplans.sh.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/0c991fdf-2670-1997-c027-772a420c4604@gmail.com
2019-07-08 13:15:09 +09:00
Michael Paquier d1a040543b Remove more unreferenced function declarations
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoDuAYsRb3Q9aobkFZ6DZMWxsyg4HOmgkwgeWNfSkTwGxw@mail.gmail.com
2019-07-07 09:58:33 +09:00
Tom Lane 79b94716e7 Remove unreferenced function declarations.
These seem to be leftovers from old patches, perhaps.

Masahiko Sawada

Discussion: https://postgr.es/m/CAD21AoDuAYsRb3Q9aobkFZ6DZMWxsyg4HOmgkwgeWNfSkTwGxw@mail.gmail.com
2019-07-05 19:28:45 -04:00
Tom Lane 0ab1a2e39b Remove dead encoding-conversion functions.
The code for conversions SQL_ASCII <-> MULE_INTERNAL and
SQL_ASCII <-> UTF8 was unreachable, because we long ago changed
the wrapper functions pg_do_encoding_conversion() et al so that
they have hard-wired behaviors for conversions involving SQL_ASCII.
(At least some of those fast paths date back to 2002, though it
looks like we may not have been totally consistent about this until
later.)  Given the lack of complaints, nobody is dissatisfied with
this state of affairs.  Hence, let's just remove the unreachable code.

Also, change CREATE CONVERSION so that it rejects attempts to
define such conversions.  Since we consider that SQL_ASCII represents
lack of knowledge about the encoding in use, such a conversion would
be semantically dubious even if it were reachable.

Adjust a couple of regression test cases that had randomly decided
to rely on these conversion functions rather than any other ones.

Discussion: https://postgr.es/m/41163.1559156593@sss.pgh.pa.us
2019-07-05 14:17:27 -04:00
Michael Paquier 313f87a171 Add min() and max() aggregates for pg_lsn
This is useful for monitoring, for example when calculating WAL
retention with replication slots or the delay of a set of standbys.

Bump catalog version.

Author: Fabrízio de Royes Mello
Reviewed-by: Surafel Temesgen
Discussion: https://postgr.es/m/CAFcNs+oc8ZoHhowA4rR1GGCgG8QNgK_TOwPRVYQo5rYy8_PXzA@mail.gmail.com
2019-07-05 12:21:11 +09:00
Michael Paquier 8a810a177c Update hardcoded DH parameters to IANA standards
The source defining the current fallback and hardcoded DH parameters
has disappeared from the web a long time ago, and RFC 3526 defines the
most current Diffie-Hellman MODP groups, so update to those new values.

Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/5E60AC9A-CB10-4851-9EF2-7209490A164C@yesql.se
2019-07-05 10:47:32 +09:00
Tomas Vondra 08aa131c7a Simplify pg_mcv_list (de)serialization
The serialization format of multivariate MCV lists included alignment in
order to allow direct access to part of the serialized data, but despite
multiple fixes (see for example commits d85e0f366a and ea4e1c0e8f) this
proved to be problematic.

This commit abandons alignment in the serialized format, and just copies
everything during deserialization.  We now also track amount of memory
needed after deserialization (including alignment), which allows us to
deserialize the MCV list in a single pass.

Bump catversion, as this affects contents of pg_statistic_ext_data.

Backpatch to 12, where multi-column MCV lists were introduced.

Author: Tomas Vondra
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2201.1561521148@sss.pgh.pa.us
2019-07-05 01:32:49 +02:00
Tomas Vondra 4d66285adc Fix pg_mcv_list_items() to produce text[]
The function pg_mcv_list_items() returns values stored in MCV items. The
items may contain columns with different data types, so the function was
generating a text array-like representation, but in an ad-hoc way without
properly escaping various characters etc.

Fixed by simply building a text[] array, which also makes it easier to
use from queries etc.

Requires changes to pg_proc entry, so bump catversion.

Backpatch to 12, where multi-column MCV lists were introduced.

Author: Tomas Vondra
Reviewed-by: Dean Rasheed
Discussion: https://postgr.es/m/20190618205920.qtlzcu73whfpfqne@development
2019-07-05 01:32:46 +02:00
Peter Eisentraut 7b925e1270 Sync our Snowball stemmer dictionaries with current upstream
The main change is a new stemmer for Greek.  There are minor changes
in the Danish and French stemmers.

Author: Panagiotis Mavrogiorgos <pmav99@gmail.com>
2019-07-04 13:26:48 +02:00
Michael Paquier cfc40d384a Introduce safer encoding and decoding routines for base64.c
This is a follow-up refactoring after 09ec55b and b674211, which
proved that the encoding and decoding routines used by SCRAM have a
poor interface when it comes to checking for buffer overflows.  This adds
an extra argument, the length of the result buffer, for
each routine, which is used for overflow checks when encoding or
decoding an input string.  The original idea comes from Tom Lane.

As a result of that, the encoding routine can now fail, so all its
callers are adjusted to generate proper error messages in case of
problems.

On failure, the result buffer gets zeroed.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20190623132535.GB1628@paquier.xyz
2019-07-04 16:08:09 +09:00
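
A standalone sketch of the safer interface shape described above: the encoder receives the length of the destination buffer and reports failure instead of overrunning it.  The function name and exact signature are illustrative, not the actual base64.c routines.

#include <stdio.h>

static const char b64chars[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Returns the number of bytes written, or -1 if dst is too small. */
static int b64_encode_checked(const unsigned char *src, int srclen,
                              char *dst, int dstlen)
{
    int need = 4 * ((srclen + 2) / 3);      /* output size incl. '=' padding */
    int o = 0;

    if (need > dstlen)
        return -1;                          /* caller's buffer is too small */

    for (int i = 0; i < srclen; i += 3)
    {
        unsigned v = src[i] << 16;

        if (i + 1 < srclen) v |= src[i + 1] << 8;
        if (i + 2 < srclen) v |= src[i + 2];

        dst[o++] = b64chars[(v >> 18) & 0x3f];
        dst[o++] = b64chars[(v >> 12) & 0x3f];
        dst[o++] = (i + 1 < srclen) ? b64chars[(v >> 6) & 0x3f] : '=';
        dst[o++] = (i + 2 < srclen) ? b64chars[v & 0x3f] : '=';
    }
    return o;
}

int main(void)
{
    char out[16];
    int n = b64_encode_checked((const unsigned char *) "scram", 5, out, sizeof(out));

    if (n < 0)
        fprintf(stderr, "destination buffer too small\n");
    else
        printf("%.*s\n", n, out);   /* c2NyYW0= */
    return 0;
}
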
Tom Lane 615cebc94b Stamp HEAD as 13devel.
Let the hacking begin ...
2019-07-01 12:50:55 -04:00
Tom Lane 9e1c9f9594 pgindent run prior to branching v12.
pgperltidy and reformat-dat-files too, though the latter didn't
find anything to change.
2019-07-01 12:37:52 -04:00
Michael Paquier c74d49d41c Fix many typos and inconsistencies
Author: Alexander Lakhin
Discussion: https://postgr.es/m/af27d1b3-a128-9d62-46e0-88f424397f44@gmail.com
2019-07-01 10:00:23 +09:00
Noah Misch 459c3cdb4a Don't read fields of a misaligned ExpandedObjectHeader or AnyArrayType.
UBSan complains about this.  Instead, cast to a suitable type requiring
only 4-byte alignment.  DatumGetAnyArrayP() already assumes one can cast
between AnyArrayType and ArrayType, so this doesn't introduce a new
assumption.  Back-patch to 9.5, where AnyArrayType was introduced.

Reviewed by Tom Lane.

Discussion: https://postgr.es/m/20190629210334.GA1244217@rfd.leadboat.com
2019-06-30 17:34:17 -07:00
Peter Eisentraut 21f428ebde Don't call data type input functions in GUC check hooks
Instead of calling pg_lsn_in() in check_recovery_target_lsn and
timestamptz_in() in check_recovery_target_time, reorganize the
respective code so that we don't raise any errors in the check hooks.
The previous code tried to use PG_TRY/PG_CATCH to handle errors in a
way that is not safe, so now the code contains no ereport() calls and
can operate safely within the GUC error handling system.

Moreover, since the interpretation of the recovery_target_time string
may depend on the time zone, we cannot do the final processing of that
string until all the GUC processing is done.  Instead,
check_recovery_target_time() now does some parsing for syntax
checking, but the actual conversion to a timestamptz value is done
later in the recovery code that uses it.

Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20190611061115.njjwkagvxp4qujhp%40alap3.anarazel.de
2019-06-30 10:27:43 +02:00
Peter Eisentraut 666cbae16d Remove explicit error handling for obsolete date/time values
The date/time values 'current', 'invalid', and 'undefined' were
removed a long time ago, but the code still contains explicit error
handling for the transition.  To simplify the code and avoid having to
handle these values everywhere, just remove the recognition of these
tokens altogether now.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2019-06-30 10:27:35 +02:00
Michael Paquier 322c5bfdc3 Remove remaining traces of Rand_OpenSSL() from the tree
fe0a0b5 has removed the last use of this routine from pgcrypto, leading
to a useless symbol definition and an extra configure check.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Tom Lane
Discussion: https://postgr.es/m/20190626142544.GN1714@paquier.xyz
2019-06-27 08:25:26 +09:00
Thomas Munro a2dec37480 Remove misleading comment from pathnodes.h.
As of commit e5253fdc, it is no longer true that the leader always
executes the subplan of a Gather Merge node.  Remove comment to that
effect.

Back-patch to 11.

Discussion: https://postgr.es/m/CA%2BhUKGJEaZJYezXAOutuiWT%2BfxCA44%2BoKtVPAND2ubLiigR%3D-w%40mail.gmail.com
2019-06-25 09:21:54 +12:00
Peter Eisentraut 82be666ee3 Update unicode_norm_table.h to Unicode 12.1.0 2019-06-24 22:50:56 +02:00
Tom Lane 1323bfce55 Fix spinlock assembly code for MIPS so it works on MIPS r6.
Original MIPS-I processors didn't have the LL/SC instructions (nor any
other userland synchronization primitive).  If the build toolchain
targets that ISA variant by default, as an astonishingly large fraction
of MIPS platforms still do, the assembler won't take LL/SC without
coercion in the form of a ".set mips2" instruction.  But we issued that
unconditionally, making it an ISA downgrade for chips later than MIPS2.
That breaks things for the latest MIPS r6 ISA, which encodes these
instructions differently.  Adjust the code so we don't change ISA level
if it's >= 2.

Note that this patch doesn't change what happens on an actual MIPS-I
processor: either the kernel will emulate these instructions
transparently, or you'll get a SIGILL failure.  That tradeoff seemed
fine in 2002 when this code was added (cf 3cbe6b247), and it's even
more so today when MIPS-I is basically extinct.  But let's add a
comment about that.

YunQiang Su (with cosmetic adjustments by me).  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/15844-8f62fe7e163939b3@postgresql.org
2019-06-22 20:31:50 -04:00
Alexander Korotkov 261a5c1928 Support 'q' flag in jsonpath 'like_regex' predicate
The SQL/JSON standard defines that the jsonpath 'like_regex' predicate should
support the same set of flags as XQuery/XPath.  It appears that the
implementation of the 'q' flag was missed.  This commit fixes that.

Discussion: https://postgr.es/m/CAPpHfdtyfPsxLYiTjp5Ov8T5xGsB5t3CwE5%2B3PS%3DLLwA%2BxTJog%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov
2019-06-19 22:41:57 +03:00
Tom Lane 0ab7110bcb Stamp 12beta2. 2019-06-17 17:12:29 -04:00
Michael Paquier 3412030205 Fix more typos and inconsistencies in the tree
Author: Alexander Lakhin
Discussion: https://postgr.es/m/0a5419ea-1452-a4e6-72ff-545b1a5a8076@gmail.com
2019-06-17 16:13:16 +09:00
Tom Lane 6973b058bc Further fix privileges on pg_statistic_ext[_data].
We don't need to restrict column privileges on pg_statistic_ext;
all of that data is OK to read publicly.  What we *do* need to do,
which was overlooked by 6cbfb784c, is revoke public read access on
pg_statistic_ext_data; otherwise we still have the same security
hole we started with.

Catversion bump to ensure that installations calling themselves
beta2 will have this fix.

Diagnosis/correction by Dean Rasheed and Tomas Vondra, but I'm
going to go ahead and push this fix ASAP so we get more buildfarm
cycles on it.

Discussion: https://postgr.es/m/8833.1560647898@sss.pgh.pa.us
2019-06-16 11:00:23 -04:00
Tomas Vondra aa087ec64f Add pg_stats_ext view for extended statistics
Regular per-column statistics are stored in the pg_statistic catalog, which
is however rather difficult to read, so we also have the pg_stats view with
a human-readable version of the data.

For extended statistics the catalog was fairly easy to read, so we did
not have such a human-readable view so far.  Commit 9b6babfa2d however
split the catalog into two, which makes querying harder.  Furthermore,
we want to show the multi-column MCV list in a way similar to per-column
stats (and not as a bytea value).

This commit introduces pg_stats_ext view, joining the two catalogs and
massaging the data to produce human-readable output similar to pg_stats.
It also considers RLS and access privileges - the data is shown only when
the user has access to all columns the extended statistic is defined on.

Bumped CATVERSION due to adding new system view.

Author: Dean Rasheed, with improvements by me
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui%3DVdx4N%3D%3DVV5XOK5dsXfnGgVOz_JhAicB%3DZA%40mail.gmail.com
2019-06-16 01:20:39 +02:00
Tomas Vondra 6cbfb784c3 Rework the pg_statistic_ext catalog
Since extended statistics were introduced in PostgreSQL 10, there has been a
single catalog, pg_statistic_ext, storing both the definitions and the built
statistics.  That's however problematic when a user is supposed to have
access only to the definitions, but not to the user data.

Consider for example pg_dump on a database with RLS enabled - if the
pg_statistic_ext catalog respects RLS (which it should, if it contains
user data), pg_dump would not see any records and the result would not
define any extended statistics.  That would be a surprising behavior.

Until now this was not a pressing issue, because the existing types of
extended statistics (functional dependencies and ndistinct coefficients)
do not include any user data directly.  This changed with the introduction
of MCV lists, which do include the most common combinations of values.

The easiest way to fix this is to split the pg_statistic_ext catalog
into two - one for definitions, one for the built statistic values.
The new catalog is called pg_statistic_ext_data, and we're maintaining
a 1:1 relationship with the old catalog - either there are matching
records in both catalogs, or neither of them.

Bumped CATVERSION due to changing system catalog definitions.

Author: Dean Rasheed, with improvements by me
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui%3DVdx4N%3D%3DVV5XOK5dsXfnGgVOz_JhAicB%3DZA%40mail.gmail.com
2019-06-16 01:20:31 +02:00
Michael Paquier 96719e52b1 Use OpenSSL-specific ifdefs in sha2.h
In order to separate OpenSSL's SHA symbols, this header has been using
USE_SSL, which is equivalent to USE_OPENSSL.  There is now only one SSL
implementation included in the tree, so this works fine, but when
adding a new SSL implementation this would run into failures.

Author: Daniel Gustafsson
Discussion: https://postgr.es/m/0DF29010-CE26-4F51-85A6-9C8ABF5536F9@yesql.se
2019-06-14 09:00:36 +09:00
Tom Lane 3c8f8f6ebe Mark ReplicationSlotCtl as PGDLLIMPORT.
Also MyReplicationSlot, in branches where it wasn't already.

This was discussed in the thread that resulted in c572599c6, but
for some reason nobody pulled the trigger.  Now that we have another
request for the same thing, we should just do it.

Craig Ringer

Discussion: https://postgr.es/m/CAMsr+YFTsq-86MnsNng=mPvjjh5EAbzfMK0ptJPvzyvpFARuRg@mail.gmail.com
Discussion: https://postgr.es/m/345138875.20190611151943@cybertec.at
2019-06-13 10:53:34 -04:00
Alvaro Herrera b976845815 Fix double-word typos
Discussion: https://postgr.es/m/20190612184527.GA24266@alvherre.pgsql
Reviewed-by: Michaël Paquier
2019-06-13 10:03:56 -04:00
Robert Haas 132a1c101a tableam: Fix index_build_range_scan parameter name.
All of the other code thinks that the 8th parameter is the number of
blocks, but this declaration thinks that it's the ending block number.
Repair this inconsistency.

Patch by me, reviewed by Andres Freund.

Discussion: http://postgr.es/m/CA+TgmoY49ManQWnJtiwkuytXBkmyTuDFqb74Pr4Zn2Nq9TuNBQ@mail.gmail.com
2019-06-10 20:04:48 -04:00
Noah Misch 44982e7d09 Reconcile nodes/*funcs.c with PostgreSQL 12 work.
One would have needed out-of-tree code to observe the defects.  Remove
unreferenced fields instead of completing their support functions.
Since in-tree code can't reach _readIntoClause(), no catversion bump.
2019-06-09 14:00:36 -07:00
Michael Paquier cf4263cc6c Switch position of some declarations in libpq.h
This makes the header more consistent with its surroundings, with
declarations associated with a given file grouped together.

Author: Daniel Gustafsson
Discussion: https://postgr.es/m/20190608012439.GB7228@paquier.xyz
2019-06-09 11:33:52 +09:00
Noah Misch 31d250e049 Update stale comments, and fix comment typos. 2019-06-08 10:12:26 -07:00
Amit Kapila 92c4abc736 Fix assorted inconsistencies.
There were a number of issues in recent commits, including typos, mismatches
between code and comments, and leftover function declarations.  Fix them.

Reported-by: Alexander Lakhin
Author: Alexander Lakhin, Amit Kapila and Amit Langote
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/ef0c0232-0c1d-3a35-63d4-0ebd06e31387@gmail.com
2019-06-08 08:16:38 +05:30
Michael Paquier 35b2d4bc0e Move be-gssapi-common.h into src/include/libpq/
The file was introduced in src/backend/libpq/ as of b0b39f72, but all
backend-side headers of libpq are located in src/include/libpq/.
Note that the identification path at the top of the file referred to
src/include/libpq/ from the start.

Author: Michael Paquier
Reviewed-by: Stephen Frost
Discussion: https://postgr.es/m/20190607043415.GE1736@paquier.xyz
2019-06-08 09:59:02 +09:00
Heikki Linnakangas cd96389d71 Fix confusion on different kinds of slots in IndexOnlyScans.
We used the same slot to store a tuple from the index, and to store a
tuple from the table. That's not OK. It worked with the heap, because
heapam_getnextslot() stores a HeapTuple to the slot, and doesn't care how
large the tts_values/nulls arrays are. But when I played with a toy table
AM implementation that used a virtual tuple, it caused memory overruns.

In passing, tidy up the comments on the ioss_PscanLen fields.
2019-06-06 09:46:52 +03:00
Peter Eisentraut c880096dc1 Add command column to pg_stat_progress_create_index
This allows determining which command is running, similar to
pg_stat_progress_cluster.

Discussion: https://www.postgresql.org/message-id/flat/f0e56b3b-74b7-6cbc-e207-a5ed6bee18dc%402ndquadrant.com
2019-06-04 09:29:02 +02:00
Michael Paquier 041a2642e5 Fix some typos and inconsistencies in tableam.h
The callback definitions have been using references to "heap" for a couple
of variables and comments.  This makes the whole interface more consistent
by using "table", which is more generic.

A variable storing index information was misspelled as well.

Author: Michael Paquier
Discussion: https://postgr.es/m/20190601190946.GB1905@paquier.xyz
2019-06-04 09:48:25 +09:00
Peter Eisentraut 05d36b68ed Update SQL conformance information about JSON path
Reviewed-by: Oleg Bartunov <obartunov@postgrespro.ru>
2019-06-03 21:36:04 +02:00
Michael Paquier 1fb6f62a84 Fix typos in various places
Author: Andrea Gelmini
Reviewed-by: Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/20190528181718.GA39034@glet
2019-06-03 13:44:03 +09:00
David Rowley 72b6223f76 Fix incorrect parameter name in comment
Author: Antonin Houska
Discussion: https://postgr.es/m/22370.1559293357@localhost
2019-05-31 13:30:05 -04:00
Andres Freund 13002bf0bc Remove unnecessary (and wrong) forward declaration.
Interestingly only C++ compilers have, so far, complained about this
odd forward declaration. This originated when IndexBuildCallback was
defined in another file, but it is now completely unnecessary (it was
wrong before too; cpluspluscheck just wouldn't have noticed).

Reported-By: Tom Lane
Discussion: https://postgr.es/m/53941.1559239260@sss.pgh.pa.us
2019-05-30 13:44:38 -07:00
Amit Kapila 9679345f3c Fix typos.
Reported-by: Alexander Lakhin
Author: Alexander Lakhin
Reviewed-by: Amit Kapila and Tom Lane
Discussion: https://postgr.es/m/7208de98-add8-8537-91c0-f8b089e2928c@gmail.com
2019-05-26 18:28:18 +05:30
Andres Freund 73b8c3bd28 tableam: Rename wrapper functions to match callback names.
Some of the wrapper functions didn't match the callback names, many of
them because they stayed "consistent" with the historic naming of the
wrapped functionality. We decided that in most cases it's more important
for tableam to be consistent going forward than with the past.

The one exception is beginscan/endscan/...  because it'd have looked
odd to have systable_beginscan/endscan/... with a different naming
scheme, and changing the systable_* APIs would have caused way too
much churn (including breaking a lot of external users).

Author: Ashwin Agrawal, with some small additions by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CALfoeiugyrXZfX7n0ORCa4L-m834dzmaE8eFdbNR6PMpetU4Ww@mail.gmail.com
2019-05-23 16:32:36 -07:00
Tom Lane db6e2b4c52 Initial pgperltidy run for v12.
Make all the perl code look nice, too (for some value of "nice").
2019-05-22 13:36:19 -04:00
Tom Lane 8255c7a5ee Phase 2 pgindent run for v12.
Switch to 2.1 version of pg_bsd_indent.  This formats
multiline function declarations "correctly", that is with
additional lines of parameter declarations indented to match
where the first line's left parenthesis is.

Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com
2019-05-22 13:04:48 -04:00
Tom Lane be76af171c Initial pgindent run for v12.
This is still using the 2.0 version of pg_bsd_indent.
I thought it would be good to commit this separately,
so as to document the differences between 2.0 and 2.1 behavior.

Discussion: https://postgr.es/m/16296.1558103386@sss.pgh.pa.us
2019-05-22 12:55:34 -04:00
Tom Lane 166f69f769 Fix O(N^2) performance issue in pg_publication_tables view.
The original coding of this view relied on a correlated IN sub-query.
Our planner is not very bright about correlated sub-queries, and even
if it were, there's no way for it to know that the output of
pg_get_publication_tables() is duplicate-free, making the de-duplicating
semantics of IN unnecessary.  Hence, rewrite as a LATERAL sub-query.
This provides circa 100X speedup for me with a few hundred published
tables (the whole regression database), and things would degrade as
roughly O(published_relations * all_relations) beyond that.

Because the rules.out expected output changes, force a catversion bump.
Ordinarily we might not want to do that post-beta1; but we already know
we'll be doing a catversion bump before beta2 to fix pg_statistic_ext
issues, so it's pretty much free to fix it now instead of waiting for v13.

Per report and fix suggestion from PegoraroF10.

Discussion: https://postgr.es/m/1551385426763-0.post@n3.nabble.com
2019-05-22 11:47:02 -04:00
Tom Lane 4fbf809e13 In transam.h, don't expose static inline functions to frontend code.
That leads to unsatisfied external references if the C compiler fails
to elide unused static functions.  Apparently, we have no buildfarm
members building HEAD that have that issue ... but such compilers still
exist in the wild.  Need to do something about that.

In passing, fix Berkeley-era typo in comment.

Discussion: https://postgr.es/m/27054.1558533367@sss.pgh.pa.us
2019-05-22 10:38:21 -04:00
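
A minimal sketch of the header pattern the commit above relies on, assuming
the usual FRONTEND macro convention; the function name, type, and body are
placeholders rather than the actual transam.h contents.

    #include <stdbool.h>
    #include <stdint.h>

    /* Backend-only static inline: wrapped so frontend compilations never
     * see it, and a compiler that fails to elide unused static functions
     * is not left with unresolved backend-only references. */
    #ifndef FRONTEND

    static inline bool
    example_id_is_normal(uint32_t id)   /* placeholder name and logic */
    {
        return id >= 3;                 /* illustrative threshold only */
    }

    #endif                              /* FRONTEND */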
Robert Haas 1171d7d585 tableam: Move heap-specific logic from needs_toast_table below tableam.
This allows table AMs to completely suppress TOAST table creation, or
to modify the conditions under which they are created.

Patch by me.  Reviewed by Andres Freund.

Discussion: http://postgr.es/m/CA+Tgmoa4O2n=yphqD2pERUnYmUO84bH1SqMsA-nSxBGsZ7gWfA@mail.gmail.com
2019-05-21 11:57:13 -04:00
Tom Lane a240570b1e Stamp 12beta1. 2019-05-20 16:37:22 -04:00
Andres Freund 78d6a9cbd3 Fix and improve SnapshotType comments.
The comment for SNAPSHOT_SELF was unfortunately explaining
SNAPSHOT_DIRTY, as reported by Sergei. Also expand a few comments, and
include a few more comments from heapam_visibility.c, so they're in an
AM independent place.

Reported-By: Sergei Kornilov
Author: Andres Freund
Discussion: https://postgr.es/m/9152241558192351@sas1-d856b3d759c7.qloud-c.yandex.net
2019-05-19 16:18:44 -07:00
Andres Freund c3b23ae457 Don't take predicate locks for analyze scans, refactor scan option passing.
Before this commit, when ANALYZE was run on a table and serializable
was used (either by virtue of an explicit BEGIN TRANSACTION ISOLATION
LEVEL SERIALIZABLE, or default_transaction_isolation being set to
serializable), a null pointer dereference led to a crash.

The analyze scan doesn't need a snapshot (nor predicate locking), but
before this commit a scan only contained information about being a
bitmap or sample scan.

Refactor the option passing to the scan_begin callback to use a
bitmask instead. Alternatively we could have added a new boolean
parameter, but that seems harder to read. Even before this issue
various people (Heikki, Tom, Robert) suggested doing so.

These changes don't change the scan APIs outside of tableam. The flags
argument could be exposed, but that's not necessary to fix this
problem. Also, the wrapper table_beginscan* functions encapsulate most
of that complexity.

After these changes, fixing the bug is trivial: just don't acquire a
predicate lock for analyze-style scans. That was already done for
bitmap heap scans.  Add an assert that a snapshot is passed when
acquiring the predicate lock, so this kind of bug doesn't require
running with serializable.

Also add a comment about sample scans currently requiring a predicate
lock on the entire relation, which previously wasn't remarked upon.

Reported-By: Joe Wildish
Author: Andres Freund
Discussion:
    https://postgr.es/m/4EA80A20-E9BF-49F1-9F01-5B66CAB21453@elusive.cx
    https://postgr.es/m/20190411164947.nkii4gaeilt4bui7@alap3.anarazel.de
    https://postgr.es/m/20190518203102.g7peu2fianukjuxm@alap3.anarazel.de
2019-05-19 15:10:28 -07:00
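
As a rough illustration of the bitmask-style option passing described in the
commit above (the flag names below are made up for the sketch, not the enum
actually added):

    #include <stdbool.h>

    /* One integer of OR-able flags replaces a growing list of boolean
     * parameters to the scan-begin callback. */
    typedef enum ExampleScanOptions
    {
        EXSCAN_TYPE_SEQ     = 1 << 0,
        EXSCAN_TYPE_BITMAP  = 1 << 1,
        EXSCAN_TYPE_SAMPLE  = 1 << 2,
        EXSCAN_TYPE_ANALYZE = 1 << 3,
        EXSCAN_ALLOW_SYNC   = 1 << 4
    } ExampleScanOptions;

    /* In this sketch, analyze-style scans are the ones that skip predicate
     * locking, mirroring the fix described above. */
    static inline bool
    exscan_needs_predicate_lock(int flags)
    {
        return (flags & EXSCAN_TYPE_ANALYZE) == 0;
    }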
Andres Freund 147e3722f7 tableam: Avoid relying on relation size to determine validity of tids.
Instead add a tableam callback to do so. To avoid adding per-validation
overhead, pass a scan to tuple_tid_valid. In heap's case we'd otherwise
have incurred a RelationGetNumberOfBlocks() call for each tid, which
would have added noticeable overhead to nodeTidscan.c.

Author: Andres Freund
Reviewed-By: Ashwin Agrawal
Discussion: https://postgr.es/m/20190515185447.gno2jtqxyktylyvs@alap3.anarazel.de
2019-05-17 18:56:55 -07:00
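
A hedged sketch of the shape of such a callback: passing per-scan state lets
the AM capture the relation size once instead of recomputing it for every
tid.  All names are placeholders, not the real tableam signature.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct ExampleScanDesc
    {
        uint64_t cached_nblocks;    /* captured once at scan start */
        uint32_t tuples_per_block;  /* fixed layout assumed for the sketch */
    } ExampleScanDesc;

    /* Per-AM callback type: the scan handle carries whatever the AM cached. */
    typedef bool (*example_tid_valid_fn)(ExampleScanDesc *scan, uint64_t tid);

    /* A heap-like implementation compares against the cached size rather
     * than asking the storage layer for the relation size per tid. */
    static bool
    example_heaplike_tid_valid(ExampleScanDesc *scan, uint64_t tid)
    {
        return (tid / scan->tuples_per_block) < scan->cached_nblocks;
    }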
Andres Freund 7f44ede594 tableam: Don't assume that every AM uses md.c style storage.
Previously various parts of the code routed size requests through
RelationGetNumberOfBlocks[InFork]. That works if md.c is used by the
AM, but not otherwise.

Add a tableam callback to return the size of the table. As not every
AM will use postgres' BLCKSZ, have it return bytes, and have
RelationGetNumberOfBlocksInFork() round the byte size up into blocks.

To allow code outside of the AM to determine the actual relation size,
map InvalidForkNumber to the total size of a relation, as not every AM
might need just the postgres-defined forks.

A few users of RelationGetNumberOfBlocks() ought to be converted away
from that. One case, the use of it to determine whether a tid is
valid, will be fixed in a follow up commit. Others will have to wait
for v13.

Author: Andres Freund
Discussion: https://postgr.es/m/20190423225201.3bbv6tbqzkb5w7cw@alap3.anarazel.de
2019-05-17 18:56:47 -07:00
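
A minimal sketch of the byte-to-block rounding the commit above describes,
assuming an 8 kB block size; the names are illustrative, not the actual
PostgreSQL helpers.

    #include <stdint.h>

    #define EXAMPLE_BLCKSZ 8192

    /* The AM reports its size in bytes (it may not use EXAMPLE_BLCKSZ at
     * all); a block-oriented caller rounds the result up to whole blocks. */
    static inline uint64_t
    example_bytes_to_blocks(uint64_t nbytes)
    {
        return (nbytes + EXAMPLE_BLCKSZ - 1) / EXAMPLE_BLCKSZ;
    }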
Tom Lane 6630ccad7a Restructure creation of run-time pruning steps.
Previously, gen_partprune_steps() always built executor pruning steps
using all suitable clauses, including those containing PARAM_EXEC
Params.  This meant that the pruning steps were only completely safe
for executor run-time (scan start) pruning.  To prune at executor
startup, we had to ignore the steps involving exec Params.  But this
doesn't really work in general, since there may be logic changes
needed as well --- for example, pruning according to the last operator's
btree strategy is the wrong thing if we're not applying that operator.
The rules embodied in gen_partprune_steps() and its minions are
sufficiently complicated that tracking their incremental effects in
other logic seems quite impractical.

Short of a complete redesign, the only safe fix seems to be to run
gen_partprune_steps() twice, once to create executor startup pruning
steps and then again for run-time pruning steps.  We can save a few
cycles however by noting during the first scan whether we rejected
any clauses because they involved exec Params --- if not, we don't
need to do the second scan.

In support of this, refactor the internal APIs in partprune.c to make
more use of passing information in the GeneratePruningStepsContext
struct, rather than as separate arguments.

This is, I hope, the last piece of our response to a bug report from
Alan Jackson.  Back-patch to v11 where this code came in.

Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
2019-05-17 19:44:34 -04:00
Tom Lane 8a0f0ad540 Remove no-longer-used typedef.
struct ClonedConstraint is no longer needed, so delete it.

Discussion: https://postgr.es/m/18102.1557947143@sss.pgh.pa.us
2019-05-15 17:26:52 -04:00
Tom Lane fc9a62af3f Move logging.h and logging.c from src/fe_utils/ to src/common/.
The original placement of this module in src/fe_utils/ is ill-considered,
because several src/common/ modules have dependencies on it, meaning that
libpgcommon and libpgfeutils now have mutual dependencies.  That makes it
pointless to have distinct libraries at all.  The intended design is that
libpgcommon is lower-level than libpgfeutils, so only dependencies from
the latter to the former are acceptable.

We already have the precedent that fe_memutils and a couple of other
modules in src/common/ are frontend-only, so it's not stretching anything
out of whack to treat logging.c as a frontend-only module in src/common/.
To the extent that such modules help provide a common frontend/backend
environment for the rest of common/ to use, it's a reasonable design.
(logging.c does not yet provide an ereport() emulation, but one can
dream.)

Hence, move these files over, and revert basically all of the build-system
changes made by commit cc8d41511.  There are no places that need to grow
new dependencies on libpgcommon, further reinforcing the idea that this
is the right solution.

Discussion: https://postgr.es/m/a912ffff-f6e4-778a-c86a-cf5c47a12933@2ndquadrant.com
2019-05-14 14:20:10 -04:00
Peter Eisentraut 037165ca95 Update SQL features/conformance information to SQL:2016 2019-05-14 15:44:37 +02:00
Heikki Linnakangas 22251686f0 Detect internal GiST page splits correctly during index build.
As we descend the GiST tree during insertion, we modify any downlinks on
the way down to include the new tuple we're about to insert (if they don't
cover it already). Modifying an existing downlink might cause an internal
page to split, if the new downlink tuple is larger than the old one. If
that happens, we need to back up to the parent and re-choose a page to
insert to. We used to detect that situation, thanks to the NSN-LSN
interlock normally used to detect concurrent page splits, but that got
broken by commit 9155580fd5. With that commit, we now use a dummy constant
LSN value for every page during index build, so the LSN-NSN interlock no
longer works. I thought that was OK because there can't be any other
backends modifying the index during index build, but missed that the
insertion itself can modify the page we're inserting to. The consequence
was that we would sometimes insert the new tuple to an incorrect page, one
whose downlink doesn't cover the new tuple.

To fix, add a flag to the stack that keeps track of the state while
descending the tree, to indicate that a page was split and that we need to
retry the descent from the parent.

Thomas Munro first reported that the contrib/intarray regression test was
failing occasionally on the buildfarm after commit 9155580fd5. The failure
was intermittent, because the gistchoose() function is not deterministic,
and would only occasionally create the right circumstances for this bug to
cause the failure.

Patch by Anastasia Lubennikova, with some changes by me to make it work
correctly also when the internal page split causes the "grandparent"
to be split.

Discussion: https://www.postgresql.org/message-id/CA%2BhUKGJRzLo7tZExWfSbwM3XuK7aAK7FhdBV0FLkbUG%2BW0v0zg%40mail.gmail.com
2019-05-14 13:18:44 +03:00
Michael Paquier 7e19929ea2 Fix duplicated words in comments
Author: Stephen Amell
Discussion: https://postgr.es/m/539fa271-21b3-777e-a468-d96cffe9c768@gmail.com
2019-05-14 09:37:35 +09:00
Peter Geoghegan ae7291acbc Standardize ItemIdData terminology.
The term "item pointer" should not be used to refer to ItemIdData
variables, since that is needlessly ambiguous.  Only
ItemPointerData/ItemPointer variables should be called item pointers.

To fix, establish the convention that ItemIdData variables should always
be referred to either as "item identifiers" or "line pointers".  The
term "item identifier" already predominates in docs and translatable
messages, and so should be the preferred alternative there.

Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com
2019-05-13 15:53:39 -07:00
Robert Haas 221b377f09 Improve comment for att_isnull.
The comment implies that a 1 in the null bitmap indicates a null value,
but actually a 0 in the null bitmap indicates a null value. Try to
be more clear.

Patch by me; proposed wording reviewed by Alvaro Herrera and Tom Lane.

Discussion: http://postgr.es/m/CA+TgmobHOP8r6cG+UnsDFMrS30-m=jRrCBhgw-nFkn0k9QnFsg@mail.gmail.com
2019-05-13 13:13:24 -04:00
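
For readers unfamiliar with the convention, a small sketch of a heap-style
null-bitmap probe: a set bit means the attribute has a value, a clear bit
means it is NULL, so the test negates the probe.  The function name is
illustrative, not the actual macro.

    #include <stdbool.h>
    #include <stdint.h>

    /* attnum is 0-based here; bit N of the bitmap describes attribute N. */
    static inline bool
    example_att_isnull(int attnum, const uint8_t *bits)
    {
        return (bits[attnum >> 3] & (1 << (attnum & 0x07))) == 0;
    }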
Tom Lane 85ccb6899c Rearrange pgstat_bestart() to avoid failures within its critical section.
We long ago decided to design the shared PgBackendStatus data structure to
minimize the cost of writing status updates, which means that writers just
have to increment the st_changecount field twice.  That isn't hooked into
any sort of resource management mechanism, which means that if something
were to throw error between the two increments, the st_changecount field
would be left odd indefinitely.  That would cause readers to lock up.
Now, since it's also a bad idea to leave the field odd for longer than
absolutely necessary (because readers will spin while we have it set),
the expectation was that we'd treat these segments like spinlock critical
sections, with only short, more or less straight-line, code in them.

That was fine as originally designed, but commit 9029f4b37 broke it
by inserting a significant amount of non-straight-line code into
pgstat_bestart(), code that is very capable of throwing errors, not to
mention taking a significant amount of time during which readers will spin.
We have a report from Neeraj Kumar of readers actually locking up, which
I suspect was due to an encoding conversion error in X509_NAME_to_cstring,
though conceivably it was just a garden-variety OOM failure.

Subsequent commits have loaded even more dubious code into pgstat_bestart's
critical section (and commit fc70a4b0d deserves some kind of booby prize
for managing to miss the critical section entirely, although the negative
consequences seem minimal given that the PgBackendStatus entry should be
seen by readers as inactive at that point).

The right way to fix this mess seems to be to compute all these values
into a local copy of the process' PgBackendStatus struct, and then just
copy the data back within the critical section proper.  This plan can't
be implemented completely cleanly because of the struct's heavy reliance
on out-of-line strings, which we must initialize separately within the
critical section.  But still, the critical section is far smaller and
safer than it was before.

In hopes of forestalling future errors of the same ilk, rename the
macros for st_changecount management to make it more apparent that
the writer-side macros create a critical section.  And to prevent
the worst consequences if we nonetheless manage to mess it up anyway,
adjust those macros so that they really are a critical section, ie
they now bump CritSectionCount.  That doesn't add much overhead, and
it guarantees that if we do somehow throw an error while the counter
is odd, it will lead to PANIC and a database restart to reset shared
memory.

Back-patch to 9.5 where the problem was introduced.

In HEAD, also fix an oversight in commit b0b39f72b: it failed to teach
pgstat_read_current_status to copy st_gssstatus data from shared memory to
local memory.  Hence, subsequent use of that data within the transaction
would potentially see changing data that it shouldn't see.

Discussion: https://postgr.es/m/CAPR3Wj5Z17=+eeyrn_ZDG3NQGYgMEOY6JV6Y-WRRhGgwc16U3Q@mail.gmail.com
2019-05-11 21:27:29 -04:00
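
A schematic of the changecount protocol the commit above refers to, written
as a generic seqlock-style writer/reader pair; the struct and field names
are placeholders and the required memory barriers are reduced to comments.

    #include <stdint.h>

    typedef struct ExampleStatus
    {
        volatile uint32_t changecount;  /* odd while a write is in progress */
        int               field_a;      /* payload (placeholders) */
        int               field_b;
    } ExampleStatus;

    static void
    example_write(ExampleStatus *st, int a, int b)
    {
        st->changecount++;          /* now odd: readers must retry          */
        /* write barrier here in real code */
        st->field_a = a;            /* keep this section short and unable   */
        st->field_b = b;            /* to throw errors, as the text demands */
        /* write barrier here in real code */
        st->changecount++;          /* even again: snapshot is consistent   */
    }

    static void
    example_read(ExampleStatus *st, int *a, int *b)
    {
        uint32_t before, after;

        do
        {
            before = st->changecount;
            *a = st->field_a;
            *b = st->field_b;
            /* read barrier here in real code */
            after = st->changecount;
        } while (before != after || (before & 1)); /* spin while writer active */
    }

If the writer throws an error between its two increments, the count stays odd
and every reader spins forever, which is the lockup the commit guards against.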
Michael Paquier 752f06443f Fix and improve description of locktag types in lock.h
The description of the lock type for speculative insertions was
incorrect, being copy-pasted from another one.

As discussed, also move the description for all the fields of lock tag
types from the structure listing lock tag types to the set of macros
setting each LOCKTAG.

Author: John Naylor
Discussion: https://postgr.es/m/CACPNZCtA0-ybaC4fFfaDq_8p_TUOLvGxZH9Dm-=TMHZJarBa7Q@mail.gmail.com
2019-05-10 09:35:27 +09:00
Tom Lane 2d7d946cd3 Clean up the behavior and API of catalog.c's is-catalog-relation tests.
The right way for IsCatalogRelation/Class to behave is to return true
for OIDs less than FirstBootstrapObjectId (not FirstNormalObjectId),
without any of the ad-hoc fooling around with schema membership.

The previous code was wrong because (1) it claimed that
information_schema tables were not catalog relations but their toast
tables were, which is silly; and (2) if you dropped and recreated
information_schema, which is a supported operation, the behavior
changed.  That's even sillier.  With this definition, "catalog
relations" are exactly the ones traceable to the postgres.bki data,
which seems like what we want.

With this simplification, we don't actually need access to the pg_class
tuple to identify a catalog relation; we only need its OID.  Hence,
replace IsCatalogClass with "IsCatalogRelationOid(oid)".  But keep
IsCatalogRelation as a convenience function.

This allows fixing some arguably-wrong semantics in contrib/sepgsql and
ReindexRelationConcurrently, which were using an IsSystemNamespace test
where what they really should be using is IsCatalogRelationOid.  The
previous coding failed to protect toast tables of system catalogs, and
also was not on board with the general principle that user-created tables
do not become catalogs just by virtue of being renamed into pg_catalog.
We can also get rid of a messy hack in ReindexMultipleTables.

While we're at it, also rename IsSystemNamespace to IsCatalogNamespace,
because the previous name invited confusion with the more expansive
semantics used by IsSystemRelation/Class.

Also improve the comments in catalog.c.

There are a few remaining places in replication-related code that are
special-casing OIDs below FirstNormalObjectId.  I'm inclined to think
those are wrong too, and if there should be any special case it should
just extend to FirstBootstrapObjectId.  But first we need to debate
whether a FOR ALL TABLES publication should include information_schema.

Discussion: https://postgr.es/m/21697.1557092753@sss.pgh.pa.us
Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us
2019-05-08 23:27:38 -04:00
Etsuro Fujita b7434dc007 Add missing periods to comments. 2019-05-08 16:49:09 +09:00
Fujii Masao b84dbc8eb8 Add TRUNCATE parameter to VACUUM.
This commit adds a new parameter to the VACUUM command, TRUNCATE,
which specifies that VACUUM should attempt to truncate off
any empty pages at the end of the table and allow the disk space
for the truncated pages to be returned to the operating system.

This parameter, if specified, overrides the vacuum_truncate
reloption. If neither the reloption nor the VACUUM option is
used, the default is true, as before.

Author: Fujii Masao
Reviewed-by: Julien Rouhaud, Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoD+qtrSDL=GSma4Wd3kLYLeRC0hPna-YAdkDeV4z156vg@mail.gmail.com
2019-05-08 02:10:33 +09:00
Amit Kapila 7db0cde6b5 Revert "Avoid the creation of the free space map for small heap relations".
This feature was using a process-local map to track the first few blocks
in the relation.  The map was reset each time we got a block with enough
free space.  It was discussed that it would be better to track this map on
a per-relation basis in the relcache and then invalidate it whenever
vacuum frees up some space in a page or when the FSM is created.  The new
design would be better both in terms of API design and performance.

List of commits reverted, in reverse chronological order:

06c8a5090e  Improve code comments in b0eaa4c51b.
13e8643bfc  During pg_upgrade, conditionally skip transfer of FSMs.
6f918159a9  Add more tests for FSM.
9c32e4c350  Clear the local map when not used.
29d108cdec  Update the documentation for FSM behavior..
08ecdfe7e5  Make FSM test portable.
b0eaa4c51b  Avoid creation of the free space map for small heap relations.

Discussion: https://postgr.es/m/20190416180452.3pm6uegx54iitbt5@alap3.anarazel.de
2019-05-07 09:30:24 +05:30
Tom Lane f884dca495 Remove RelationSetIndexList().
In the wake of commit f912d7dec, RelationSetIndexList isn't used any
more.  It was always a horrid wart, so getting rid of it is very nice.
We can also convert rd_indexvalid back to a plain boolean.

Discussion: https://postgr.es/m/28926.1556664156@sss.pgh.pa.us
2019-05-03 10:26:14 -04:00
Magnus Hagander 659e53498c Fix union for pgstat message types
The message types for temp files and for checksum failures were missing
from the union. Due to the coding style used, there was no compiler error
when this happened. So change the code to actively use the union, thereby
producing a compiler error if the same mistake happens again, as suggested
by Tom Lane.

Author: Julien Rouhaud
Reported-By: Tomas Vondra
Discussion: https://postgr.es/m/20190430163328.zd4rrlnbvgaqlcdz@development
2019-05-01 12:30:44 +02:00
Andres Freund 5c1560606d Fix several recently introduced issues around handling new relation forks.
Most of these stem from d25f519107 "tableam: relation creation, VACUUM
FULL/CLUSTER, SET TABLESPACE.".

1) To pass data to the relation_set_new_filenode()
   RelationSetNewRelfilenode() was made to update RelationData.rd_rel
   directly. That's not OK however, as it makes the relcache entries
   temporarily inconsistent.  That is a problem in several scenarios,
   for example if a REINDEX targets an index on pg_class, because of the
   CatalogTupleUpdate() in RelationSetNewRelfilenode().  Presumably
   that was introduced because other places in the code do so - while
   those aren't "good practice" they don't appear to be actively
   buggy (e.g. because system tables may not be targeted).

   I (Andres) should have caught this while reviewing and significantly
   evolving the code in that commit, mea culpa.

   Fix that by instead passing in the new RelFileNode as separate
   argument to relation_set_new_filenode() and rely on the relcache to
   update the catalog entry. Also revert that the
   RelationMapUpdateMap() call was changed to immediate, and undo some
   other more unnecessary changes.

2) Document that the relation_set_new_filenode cannot rely on the
   whole relcache entry to be valid. It might be worthwhile to
   refactor the code to never have to rely on that, but given the way
   heap_create() is currently coded, that'd be a large change.

3) ATExecSetTableSpace() shouldn't do FlushRelationBuffers() itself. A
   table AM might not use shared buffers at all. Move to
   index_copy_data() and heapam_relation_copy_data().

4) heapam_relation_set_new_filenode() previously sometimes accessed
   rel->rd_rel->relpersistence rather than the `persistence`
   argument. Code movement mistake.

5) Previously heapam_relation_set_new_filenode() re-opened the smgr
   relation to create the init fork, if necessary. Instead have
   RelationCreateStorage() return the SMgrRelation and use it to
   create the init fork.

6) Add a note about the danger of modifying the relcache directly to
   ATExecSetTableSpace() - it's currently not a bug because there's a
   check ERRORing for catalog tables.

Regression tests and assertion improvements that together trigger the
bug described in 1) will be added in a later commit, as there is a
related bug on all branches.

Reported-By: Michael Paquier
Diagnosed-By: Tom Lane and Andres Freund
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20190418011430.GA19133@paquier.xyz
2019-04-29 19:28:05 -07:00
Tom Lane a1a789eb5a In walreceiver, don't try to do ereport() in a signal handler.
This is quite unsafe, even for the case of ereport(FATAL) where we won't
return control to the interrupted code, and despite this code's use of
a flag to restrict the areas where we'd try to do it.  It's possible
for example that we interrupt malloc or free while that's holding a lock
that's meant to protect against cross-thread interference.  Then, any
attempt to do malloc or free within ereport() will result in a deadlock,
preventing the walreceiver process from exiting in response to SIGTERM.
We hypothesize that this explains some hard-to-reproduce failures seen
in the buildfarm.

Hence, get rid of the immediate-exit code in WalRcvShutdownHandler,
as well as the logic associated with WalRcvImmediateInterruptOK.
Instead, we need to take care that potentially-blocking operations
in the walreceiver's data transmission logic (libpqwalreceiver.c)
will respond reasonably promptly to the process's latch becoming
set and then call ProcessWalRcvInterrupts.  Much of the needed code
for that was already present in libpqwalreceiver.c.  I refactored
things a bit so that all the uses of PQgetResult use latch-aware
waiting, but didn't need to do much more.

These changes should be enough to ensure that libpqwalreceiver.c
will respond promptly to SIGTERM whenever it's waiting to receive
data.  In principle, it could block for a long time while waiting
to send data too, and this patch does nothing to guard against that.
I think that that hazard is mostly theoretical though: such blocking
should occur only if we fill the kernel's data transmission buffers,
and we don't generally send enough data to make that happen without
waiting for input.  If we find out that the hazard isn't just
theoretical, we could fix it by using PQsetnonblocking, but that
would require more ticklish changes than I care to make now.

This is a bug fix, but it seems like too big a change to push into
the back branches without much more testing than there's time for
right now.  Perhaps we'll back-patch once we have more confidence
in the change.

Patch by me; thanks to Thomas Munro for review.

Discussion: https://postgr.es/m/20190416070119.GK2673@paquier.xyz
2019-04-29 12:26:07 -04:00
Tom Lane c3f67ed6e4 Do pre-release housekeeping on catalog data, and fix jsonpath send/recv.
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta
tasks specified by RELEASE_CHANGES.  (The only change is 8394 -> 3428.)

Also run reformat_dat_file.pl while I'm here.

While looking at the reformat diffs, I chanced to notice that type
jsonpath had typsend and typreceive = '-', which surely is not the
intention given that jsonpath_send and jsonpath_recv exist.
Fix that.  It's safe to assume that these functions have never been
tested :-(.  I didn't try, but somebody should.
2019-04-28 17:16:50 -04:00
Alvaro Herrera 87259588d0 Fix tablespace inheritance for partitioned rels
Commit ca4103025d left a few loose ends.  The most important one
(broken pg_dump output) is already fixed by virtue of commit
3b23552ad8, but some things remained:

* When ALTER TABLE rewrites tables, the indexes must remain in the
  tablespace they were originally in.  This didn't work because
  index recreation during ALTER TABLE runs manufactured SQL (yuck),
  which runs afoul of default_tablespace in competition with the parent
  relation tablespace.  To fix, reset default_tablespace to the empty
  string temporarily, and add the TABLESPACE clause as appropriate.

* Setting a partitioned rel's tablespace to the database default is
  confusing; if it worked, it would direct the partitions to that
  tablespace regardless of default_tablespace.  But in reality it does
  not work, and making it work is a larger project.  Therefore, throw
  an error when this condition is detected, to alert the unwary.

Add some docs and tests, too.

Author: Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f_1c260nOt_vBJ067AZ3JXptXVRohDVMLEBmudX1YEx-A@mail.gmail.com
2019-04-25 10:31:32 -04:00
Andres Freund fdc7efcc30 Allow pg_class xid & multixid horizons to not be set.
This allows table AMs that don't need these horizons. This was already
documented in the tableam relation_set_new_filenode callback, but an
assert prevented it from actually working (the test AM code contained
the change itself). Defang the asserts in the general code, and move
the stronger ones into heap AM.

Relatedly, after CLUSTER/VACUUM, we'd always assign a relfrozenxid /
relminmxid. Change the table_relation_copy_for_cluster() interface to
allow the AM to overwrite the horizons that get set on the pg_class
entry.  This'd also in the future allow AMs like heap to compute a
relfrozenxid during rewrite that's the table's actual minimum rather
than a pre-determined value.  Arguably it'd have been better to move
the whole computation / setting of those values into the callback, but
it seems likely that for other reasons it'd be better to be able to
use one value to vacuum/cluster multiple tables (e.g. a toast's
horizon shouldn't be different than the table's).

Reported-By: Heikki Linnakangas
Author: Andres Freund
Discussion: https://postgr.es/m/9a7fb9cc-2419-5db7-8840-ddc10c93f122@iki.fi
2019-04-23 21:42:12 -07:00
Tom Lane e0fb4c9d01 Remove useless comment.
Commit e439c6f0c removed IndexStmt.relationId, but not the comment
that had been added to explain it.  Said comment was therefore
very confusing.
2019-04-23 17:17:26 -04:00
Peter Geoghegan 9b10926263 Prevent O(N^2) unique index insertion edge case.
Commit dd299df8 made nbtree treat heap TID as a tiebreaker column,
establishing the principle that there is only one correct location (page
and page offset number) for every index tuple, no matter what.
Insertions of tuples into non-unique indexes proceed as if heap TID
(scan key's scantid) is just another user-attribute value, but
insertions into unique indexes are more delicate.  The TID value in
scantid must initially be omitted to ensure that the unique index
insertion visits every leaf page that duplicates could be on.  The
scantid is set once again after unique checking finishes successfully,
which can force _bt_findinsertloc() to step right one or more times, to
locate the leaf page that the new tuple must be inserted on.

Stepping right within _bt_findinsertloc() was assumed to occur no more
frequently than stepping right within _bt_check_unique(), but there was
one important case where that assumption was incorrect: inserting a
"duplicate" with NULL values.  Since _bt_check_unique() didn't do any
real work in this case, it wasn't appropriate for _bt_findinsertloc() to
behave as if it was finishing off a conventional unique insertion, where
any existing physical duplicate must be dead or recently dead.
_bt_findinsertloc() might have to grovel through a substantial portion
of all of the leaf pages in the index to insert a single tuple, even
when there were no dead tuples.

To fix, treat insertions of tuples with NULLs into a unique index as if
they were insertions into a non-unique index: never unset scantid before
calling _bt_search() to descend the tree, and bypass _bt_check_unique()
entirely.  _bt_check_unique() is no longer responsible for incoming
tuples with NULL values.

Discussion: https://postgr.es/m/CAH2-Wzm08nr+JPx4jMOa9CGqxWYDQ-_D4wtPBiKghXAUiUy-nQ@mail.gmail.com
2019-04-23 10:33:57 -07:00
Tom Lane f4a3fdfbdc Avoid order-of-execution problems with ALTER TABLE ADD PRIMARY KEY.
Up to now, DefineIndex() was responsible for adding attnotnull constraints
to the columns of a primary key, in any case where it hadn't been
convenient for transformIndexConstraint() to mark those columns as
is_not_null.  It (or rather its minion index_check_primary_key) did this
by executing an ALTER TABLE SET NOT NULL command for the target table.

The trouble with this solution is that if we're creating the index due
to ALTER TABLE ADD PRIMARY KEY, and the outer ALTER TABLE has additional
sub-commands, the inner ALTER TABLE's operations executed at the wrong
time with respect to the outer ALTER TABLE's operations.  In particular,
the inner ALTER would perform a validation scan at a point where the
table's storage might be inconsistent with its catalog entries.  (This is
on the hairy edge of being a security problem, but AFAICS it isn't one
because the inner scan would only be interested in the tuples' null
bitmaps.)  This can result in unexpected failures, such as the one seen
in bug #15580 from Allison Kaptur.

To fix, let's remove the attempt to do SET NOT NULL from DefineIndex(),
reducing index_check_primary_key's role to verifying that the columns are
already not null.  (It shouldn't ever see such a case, but it seems wise
to keep the check for safety.)  Instead, make transformIndexConstraint()
generate ALTER TABLE SET NOT NULL subcommands to be executed ahead of
the ADD PRIMARY KEY operation in every case where it can't force the
column to be created already-not-null.  This requires only minor surgery
in parse_utilcmd.c, and it makes for a much more satisfying spec for
transformIndexConstraint(): it's no longer having to take it on faith
that someone else will handle addition of NOT NULL constraints.

To make that work, we have to move the execution of AT_SetNotNull into
an ALTER pass that executes ahead of AT_PASS_ADD_INDEX.  I moved it to
AT_PASS_COL_ATTRS, and put that after AT_PASS_ADD_COL to avoid failure
when the column is being added in the same command.  This incidentally
fixes a bug in the only previous usage of AT_PASS_COL_ATTRS, for
AT_SetIdentity: it didn't work either for a newly-added column.

Playing around with this exposed a separate bug in ALTER TABLE ONLY ...
ADD PRIMARY KEY for partitioned tables.  The intent of the ONLY modifier
in that context is to prevent doing anything that would require holding
lock for a long time --- but the implied SET NOT NULL would recurse to
the child partitions, and do an expensive validation scan for any child
where the column(s) were not already NOT NULL.  To fix that, invent a
new ALTER subcommand AT_CheckNotNull that just insists that a child
column be already NOT NULL, and apply that, not AT_SetNotNull, when
recursing to children in this scenario.  This results in a slightly laxer
definition of ALTER TABLE ONLY ... SET NOT NULL for partitioned tables,
too: that command will now work as long as all children are already NOT
NULL, whereas before it just threw up its hands if there were any
partitions.

In passing, clean up the API of generateClonedIndexStmt(): remove a
useless argument, ensure that the output argument is not left undefined,
update the header comment.

A small side effect of this change is that no-such-column errors in ALTER
TABLE ADD PRIMARY KEY now produce a different message that includes the
table name, because they are now detected by the SET NOT NULL step which
has historically worded its error that way.  That seems fine to me, so
I didn't make any effort to avoid the wording change.

The basic bug #15580 is of very long standing, and these other bugs
aren't new in v12 either.  However, this is a pretty significant change
in the way ALTER TABLE ADD PRIMARY KEY works.  On balance it seems best
not to back-patch, at least not till we get some more confidence that
this patch has no new bugs.

Patch by me, but thanks to Jie Zhang for a preliminary version.

Discussion: https://postgr.es/m/15580-d1a6de5a3d65da51@postgresql.org
Discussion: https://postgr.es/m/1396E95157071C4EBBA51892C5368521017F2E6E63@G08CNEXMBPEKD02.g08.fujitsu.local
2019-04-23 12:25:27 -04:00
Michael Paquier ccae190b91 Fix detection of passwords hashed with MD5 or SCRAM-SHA-256
This commit fixes a couple of issues related to the way password
verifiers hashed with MD5 or SCRAM-SHA-256 are detected, leading to
being able to store in catalogs passwords which do not follow the
supported hash formats:
- A MD5-hashed entry was checked based on if its header uses "md5" and
if the string length matches what is expected.  Unfortunately the code
never checked if the hash only used hexadecimal characters, as reported
by Tom Lane.
- A SCRAM-hashed entry was checked based on only its header, which
should be "SCRAM-SHA-256$", but it never checked for any fields
afterwards, as reported by Jonathan Katz.

Backpatch down to v10, which is where SCRAM has been introduced, and
where password verifiers in plain format have been removed.

Author: Jonathan Katz
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/016deb6b-1f0a-8e9f-1833-a8675b170aa9@postgresql.org
Backpatch-through: 10
2019-04-23 15:43:21 +09:00
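
A hedged sketch of the stricter MD5-format check described above: verify the
prefix, the overall length, and that the payload is purely hexadecimal.
Function and constant names are invented for the example.

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    #define EXAMPLE_MD5_PREFIX  "md5"
    #define EXAMPLE_MD5_HEX_LEN 32          /* 128-bit digest in hex */

    static bool
    example_is_md5_verifier(const char *s)
    {
        size_t      prefixlen = strlen(EXAMPLE_MD5_PREFIX);
        const char *p;

        if (strncmp(s, EXAMPLE_MD5_PREFIX, prefixlen) != 0)
            return false;
        if (strlen(s) != prefixlen + EXAMPLE_MD5_HEX_LEN)
            return false;
        for (p = s + prefixlen; *p != '\0'; p++)
        {
            if (!isxdigit((unsigned char) *p))
                return false;               /* reject non-hex payloads */
        }
        return true;
    }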
Andres Freund b5f58cf213 Convert gist to compute page level xid horizon on primary.
Due to parallel development, gist added the missing conflict
information in c952eae52a, while 558a9165e0 moved that computation
to the primary for the index types that already had it.  Thus adapt
gist to also compute on the primary, using
index_compute_xid_horizon_for_tuples() instead of its own copy of the
logic.

This also adds pg_waldump support for XLOG_GIST_DELETE records, which
previously was not properly present.

Bumps WAL version.

Author: Andres Freund
Discussion: https://postgr.es/m/20190406050243.bszosdg4buvabfrt@alap3.anarazel.de
2019-04-22 14:28:30 -07:00
Tomas Vondra d08c44f7a4 Fix mvdistinct and dependencies size calculations
The formulas used to calculate size while (de)serializing mvndistinct
and functional dependencies were based on offsetof() of the structs.  But
that is incorrect, because the structures are not copied as a whole; we
copy the individual fields directly.

At the moment this works fine, because there is no alignment padding
on any platform we support. But it might break if we ever added some
fields into any of the structs, for example. It's also confusing.

Fixed by reworking the macros to directly sum sizes of serialized
fields. The macros are now useful only for serialization, so there is
no point in keeping them in the public header file. So make them
private by moving them to the .c files.

Also adds a couple more asserts to check the serialization, and fixes
an incorrect allocation of MVDependency instead of (MVDependency *).

Reported-By: Tom Lane
Discussion: https://postgr.es/m/29785.1555365602@sss.pgh.pa.us
2019-04-21 20:23:34 +02:00
Andres Freund b8b94ea129 Fix slot type issue for fuzzy distance index scan over out-of-core table AM.
For amcanreorderby scans, nodeIndexscan.c's reorder queue holds
heap tuples, but the underlying table likely does not. Before this fix
we'd return different types of slots, depending on whether the tuple
came from the reorder queue, or from the index + table.

While that could be fixed by signalling that the node doesn't return a
fixed type of slot, it seems better to instead remove the separate
slot for the reorder queue, and use ExecForceStoreHeapTuple() to store
tuples from the queue. It's not particularly common to need
reordering, after all.

This reverts most of the iss_ReorderQueueSlot related changes to
nodeIndexscan.c made in 1a0586de36, except that now
ExecForceStoreHeapTuple() is used instead of ExecStoreHeapTuple().

Noticed when testing zheap against the in-core version of tableam.

Author: Andres Freund
2019-04-19 11:42:37 -07:00
Andres Freund 88e6ad3054 Fix two memory leaks around force-storing tuples in slots.
As reported by Tom, when ExecStoreMinimalTuple() had to perform a
conversion to store the minimal tuple in the slot, it forgot to
respect the shouldFree flag, and leaked the tuple into the current
memory context if true.  Fix that by freeing the tuple in that case.

Looking at the relevant code made me (Andres) realize that not having
the shouldFree parameter to ExecForceStoreHeapTuple() was a bad
idea. Some callers had to locally implement the necessary logic, and
in one case it was missing, creating a potential per-group leak in
non-hashed aggregation.

The choice to not free the tuple in ExecComputeStoredGenerated() is
not pretty, but not introduced by this commit - I'll start a separate
discussion about it.

Reported-By: Tom Lane
Discussion: https://postgr.es/m/366.1555382816@sss.pgh.pa.us
2019-04-19 11:39:56 -07:00
Tom Lane dde7fb7836 Use [FLEXIBLE_ARRAY_MEMBER] not [1] in MultiSortSupportData.
This struct seems to have not gotten the word about preferred
coding style for variable-length arrays.
2019-04-15 19:32:44 -04:00
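
To show what the preferred style looks like in general (this is not the
actual MultiSortSupportData definition), a small before/after sketch of a
variable-length trailing array:

    /* Old style: a nominal one-element array, with allocations sized as
     * offsetof(ExampleOld, values) + n * sizeof(double). */
    typedef struct ExampleOld
    {
        int     ndims;
        double  values[1];      /* really ndims entries */
    } ExampleOld;

    /* Preferred style: a true C99 flexible array member (PostgreSQL spells
     * this FLEXIBLE_ARRAY_MEMBER); the sizing arithmetic stays the same but
     * the declaration no longer understates the array length. */
    typedef struct ExampleNew
    {
        int     ndims;
        double  values[];
    } ExampleNew;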
Tom Lane 5f1433ac5e Prevent memory leaks associated with relcache rd_partcheck structures.
The original coding of generate_partition_qual() just copied the list
of predicate expressions into the global CacheMemoryContext, making it
effectively impossible to clean up when the owning relcache entry is
destroyed --- the relevant code in RelationDestroyRelation() only managed
to free the topmost List header :-(.  This resulted in a session-lifespan
memory leak whenever a table partition's relcache entry is rebuilt.
Fortunately, that's not normally a large data structure, and rebuilds
shouldn't occur all that often in production situations; but this is
still a bug worth fixing back to v10 where the code was introduced.

To fix, put the cached expression tree into its own small memory context,
as we do with other complicated substructures of relcache entries.
Also, deal more honestly with the case that a partition has an empty
partcheck list; while that probably isn't a case that's very interesting
for production use, it's legal.

In passing, clarify comments about how partitioning-related relcache
data structures are managed, and add some Asserts that we're not leaking
old copies when we overwrite these data fields.

Amit Langote and Tom Lane

Discussion: https://postgr.es/m/7961.1552498252@sss.pgh.pa.us
2019-04-13 13:22:26 -04:00
Noah Misch c098509927 Consistently test for in-use shared memory.
postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process.  When the postmaster.pid file was
missing, a starting postmaster used weaker checks.  Change to use the
same checks in both scenarios.  This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start".  A postmaster
will no longer stop if shmat() of an old segment fails with EACCES.  A
postmaster will no longer recycle segments pertaining to other data
directories.  That's good for production, but it's bad for integration
tests that crash a postmaster and immediately delete its data directory.
Such a test now leaks a segment indefinitely.  No "make check-world"
test does that.  win32_shmem.c already avoided all these problems.  In
9.6 and later, enhance PostgresNode to facilitate testing.  Back-patch
to 9.4 (all supported versions).

Reviewed (in earlier versions) by Daniel Gustafsson and Kyotaro HORIGUCHI.

Discussion: https://postgr.es/m/20190408064141.GA2016666@rfd.leadboat.com
2019-04-12 22:36:38 -07:00
Magnus Hagander 77bd49adba Show shared object statistics in pg_stat_database
This adds a row to the pg_stat_database view with datoid 0 and datname
NULL for those objects that are not in a database. This was added
particularly for checksums, but we were already tracking more statistics
for these objects, just not returning them.

Also add a checksum_last_failure column that holds the timestamptz of
the last checksum failure that occurred in a database (or in a
non-database file), if any.

Author: Julien Rouhaud <rjuju123@gmail.com>
2019-04-12 14:04:50 +02:00
Peter Eisentraut ef6f30fe77 Fix REINDEX CONCURRENTLY of partitions
In case of a partition index, when swapping the old and new index, we
also need to attach the new index as a partition and detach the old
one.  Also, to handle partition indexes, we not only need to change
dependencies referencing the index, but also dependencies of the index
referencing something else.  The previous code did this only
specifically for a constraint, but we also need to do this for
partitioned indexes.  So instead write a generic function that does it
for all dependencies.

Author: Michael Paquier <michael@paquier.xyz>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/DF4PR8401MB11964EDB77C860078C343BEBEE5A0%40DF4PR8401MB1196.NAMPRD84.PROD.OUTLOOK.COM#154df1fedb735190a773481765f7b874
2019-04-12 08:36:05 +02:00
Amit Kapila bdf35744bd Avoid counting transaction stats for parallel worker cooperating
transaction.

The transaction that is initiated by the parallel worker to cooperate
with the actual transaction started by the main backend to complete the
query execution should not be counted as a separate transaction.  The
other internal transactions started and committed by the parallel worker
are still counted as separate transactions, as that is what we do in
other places like autovacuum.

This will partially fix the bloat in transaction stats due to additional
transactions performed by parallel workers.  For a complete fix, we need to
decide how we want to show all the transactions that are started internally
for various operations, and that is a matter for a separate patch.

Reported-by: Haribabu Kommi
Author: Haribabu Kommi
Reviewed-by: Amit Kapila, Jamison Kirk and Rahila Syed
Backpatch-through: 9.6
Discussion: https://postgr.es/m/CAJrrPGc9=jKXuScvNyQ+VNhO0FZk7LLAShAJRyZjnedd2D61EQ@mail.gmail.com
2019-04-10 08:24:15 +05:30
Thomas Munro d614aae02e Improve comment in sync.h.
Per off-list complaint from Andres Freund.
2019-04-10 12:49:49 +12:00
Noah Misch 617dc6d299 Avoid "could not reattach" by providing space for concurrent allocation.
We've long had reports of intermittent "could not reattach to shared
memory" errors on Windows.  Buildfarm member dory fails that way when
PGSharedMemoryReAttach() execution overlaps with creation of a thread
for the process's "default thread pool".  Fix that by providing a second
region to receive asynchronous allocations that would otherwise intrude
into UsedShmemSegAddr.  In pgwin32_ReserveSharedMemoryRegion(), stop
trying to free reservations landing at incorrect addresses; the caller's
next step has been to terminate the affected process.  Back-patch to 9.4
(all supported versions).

Reviewed by Tom Lane.  He also did much of the prerequisite research;
see commit bcbf2346d6.

Discussion: https://postgr.es/m/20190402135442.GA1173872@rfd.leadboat.com
2019-04-08 21:39:00 -07:00
Andres Freund 6421011ea2 tableam: comment and formatting fixes.
Author: Heikki Linnakangas
Discussion: https://postgr.es/m/9a7fb9cc-2419-5db7-8840-ddc10c93f122@iki.fi
2019-04-08 16:24:36 -07:00
Fujii Masao 119dcfad98 Add vacuum_truncate reloption.
vacuum_truncate controls whether vacuum tries to truncate off
any empty pages at the end of the table. Previously vacuum always
tried to do the truncation. However, the truncation could cause
some problems; for example, ACCESS EXCLUSIVE lock needs to
be taken on the table during the truncation and can cause
the query cancellation on the standby even if hot_standby_feedback
is true. Setting this reloption to false can be helpful to avoid
such problems.

Author: Tsunakawa Takayuki
Reviewed-By: Julien Rouhaud, Masahiko Sawada, Michael Paquier, Kirk Jamison and Fujii Masao
Discussion: https://postgr.es/m/CAHGQGwE5UqFqSq1=kV3QtTUtXphTdyHA-8rAj4A=Y+e4kyp3BQ@mail.gmail.com
2019-04-08 16:43:57 +09:00
Andres Freund 41f5e04aec Fix a number of issues around modifying a previously updated row.
This commit fixes three, unfortunately related, issues:

1) Since 5db6df0c01, the introduction of DML via tableam, it was
   possible to trigger "ERROR: unexpected table_lock_tuple status: 1"
   when updating a row that was previously updated in the same
   transaction - but only when the row had first been updated in a
   concurrent transaction (and READ COMMITTED was used). The reason
   for that was that that case simply wasn't expected. Fixing that
   led to:

2) Even before the above commit, there were error checks (introduced
   in 6868ed7491) preventing a row being updated by different
   commands within the same statement (say in a function called by an
   UPDATE) - but that check wasn't performed when the row was first
   updated in a concurrent transaction - instead the second update was
   silently skipped in that case. After this change we throw the same
   error as we would without the concurrent transaction.

3) The error checks (introduced in 6868ed7491) preventing such
   updates emitted the same error message for both DELETE and
   UPDATE ("tuple to be updated was already modified by an operation
   triggered by the current command"). While that could be changed
   separately, it made it hard to write tests that verify the correct
   behavior of the code.

This commit changes heap's implementation of table_lock_tuple() to
return TM_SelfModified instead of TM_Invisible (previously loosely
modeled after EvalPlanQualFetch), and teaches nodeModifyTable.c to
handle that in response to table_lock_tuple() and not just in response
to table_(delete|update).

Additionally it fixes the wrong error message (see 3 above). The
comment for table_lock_tuple() is also adjusted to state that
TM_Deleted won't return information in TM_FailureData - it'll not
always be available.

This also adds tests to ensure that DELETE/UPDATE correctly error out
when affecting a row that concurrently was modified by another
transaction.
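
A sketch of the kind of self-modification these checks are about
(table, function and trigger names are hypothetical):

    CREATE TABLE t (id int PRIMARY KEY, val int);
    INSERT INTO t VALUES (1, 0);

    CREATE FUNCTION bump() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        IF pg_trigger_depth() = 1 THEN
            -- Modify the same row again from within the BEFORE trigger.
            UPDATE t SET val = val + 1 WHERE id = NEW.id;
        END IF;
        RETURN NEW;
    END $$;

    CREATE TRIGGER t_bump BEFORE UPDATE ON t
        FOR EACH ROW EXECUTE FUNCTION bump();

    -- Errors out with "tuple to be updated was already modified by an
    -- operation triggered by the current command"; after this commit the
    -- same error is raised even when the row was first updated by a
    -- concurrent transaction under READ COMMITTED.
    UPDATE t SET val = 100 WHERE id = 1;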

Author: Andres Freund
Reported-By: Tom Lane, when investigating a bug fix to another bug
    by Amit Langote
Discussion: https://postgr.es/m/19321.1554567786@sss.pgh.pa.us
2019-04-07 22:14:47 -07:00
Peter Eisentraut 03f9e5cba0 Report progress of REINDEX operations
This uses the same infrastructure that the CREATE INDEX progress
reporting uses.  Add a column to pg_stat_progress_create_index to
report the OID of the index being worked on.  This was not necessary
for CREATE INDEX, but it's useful for REINDEX.

Also edit the phase descriptions a bit to be more consistent with the
source code comments.
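
A running REINDEX can then be watched from another session with
something like:

    SELECT pid, relid::regclass, index_relid::regclass, phase,
           blocks_done, blocks_total
    FROM pg_stat_progress_create_index;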

Discussion: https://www.postgresql.org/message-id/ef6a6757-c36a-9e81-123f-13b19e36b7d7%402ndquadrant.com
2019-04-07 12:35:29 +02:00
Peter Eisentraut 106f2eb664 Cast pg_stat_progress_cluster.cluster_index_relid to oid
It's tracked internally as bigint, but when presented to the user it
should be oid.
2019-04-07 10:31:32 +02:00
Michael Paquier 249d649996 Add support for TCP user timeout in libpq and the backend server
Similarly to the set of parameters for keepalive, a connection parameter
for libpq is added as well as a backend GUC, called tcp_user_timeout.

Increasing the TCP user timeout is useful to allow a connection to
survive extended periods without end-to-end connectivity, and decreasing
it allows applications to fail faster.  By default, the parameter is 0,
which makes the connection use the system default, and follows a logic
close to the keepalive parameters in its handling.  When connecting
through a Unix-domain socket, the parameters have no effect.
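
A usage sketch (values are in milliseconds; host and database names are
hypothetical):

    -- Backend side: abort the connection if transmitted data stays
    -- unacknowledged for roughly 10 seconds.
    ALTER SYSTEM SET tcp_user_timeout = 10000;
    SELECT pg_reload_conf();

    -- libpq side: the equivalent connection parameter, e.g.
    --   psql "host=db.example.com dbname=postgres tcp_user_timeout=10000"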

Author: Ryohei Nagaura
Reviewed-by: Fabien Coelho, Robert Haas, Kyotaro Horiguchi, Kirk
Jamison, Mikalai Keida, Takayuki Tsunakawa, Andrei Yahorau
Discussion: https://postgr.es/m/EDA4195584F5064680D8130B1CA91C45367328@G01JPEXMBYT04
2019-04-06 15:23:37 +09:00
Tom Lane 959d00e9db Use Append rather than MergeAppend for scanning ordered partitions.
If we need ordered output from a scan of a partitioned table, but
the ordering matches the partition ordering, then we don't need to
use a MergeAppend to combine the pre-ordered per-partition scan
results: a plain Append will produce the same results.  This
both saves useless comparison work inside the MergeAppend proper,
and allows us to start returning tuples after starting up just
the first child node, not all of them.

However, all is not peaches and cream, because if some of the
child nodes have high startup costs then there will be big
discontinuities in the tuples-returned-versus-elapsed-time curve.
The planner's cost model cannot handle that (yet, anyway).
If we model the Append's startup cost as being just the first
child's startup cost, we may drastically underestimate the cost
of fetching slightly more tuples than are available from the first
child.  Since we've had bad experiences with over-optimistic choices
of "fast start" plans for ORDER BY LIMIT queries, that seems scary.
As a klugy workaround, set the startup cost estimate for an ordered
Append to be the sum of its children's startup costs (as MergeAppend
would).  This doesn't really describe reality, but it's less likely
to cause a bad plan choice than an underestimated startup cost would.
In practice, the cases where we really care about this optimization
will have child plans that are IndexScans with zero startup cost,
so that the overly conservative estimate is still just zero.
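
A sketch of the plan shape this enables (names are hypothetical; with
enough data and suitable indexes the ordered scan no longer needs a
MergeAppend):

    CREATE TABLE events (ts timestamptz, payload text) PARTITION BY RANGE (ts);
    CREATE TABLE events_2019 PARTITION OF events
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
    CREATE TABLE events_2020 PARTITION OF events
        FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
    CREATE INDEX ON events (ts);

    -- Partition order matches the requested ordering, so a plain Append
    -- over per-partition index scans can be used.
    EXPLAIN SELECT * FROM events ORDER BY ts;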

David Rowley, reviewed by Julien Rouhaud and Antonin Houska

Discussion: https://postgr.es/m/CAKJS1f-hAqhPLRk_RaSFTgYxd=Tz5hA7kQ2h4-DhJufQk8TGuw@mail.gmail.com
2019-04-05 19:20:43 -04:00
Alvaro Herrera 9f06d79ef8 Add facility to copy replication slots
This allows the user to create duplicates of existing replication slots,
either logical or physical, and even changing properties such as whether
they are temporary or the output plugin used.

There are multiple uses for this, such as initializing multiple replicas
using the slot for one base backup, investigating logical replication
issues, and selecting a different output plugin.

Author: Masahiko Sawada
Reviewed-by: Michael Paquier, Andres Freund, Petr Jelinek
Discussion: https://postgr.es/m/CAD21AoAm7XX8y_tOPP6j4Nzzch12FvA1wPqiO690RCk+uYVstg@mail.gmail.com
2019-04-05 18:05:18 -03:00
Noah Misch 82150a05be Revert "Consistently test for in-use shared memory."
This reverts commits 2f932f71d9,
16ee6eaf80 and
6f0e190056.  The buildfarm has revealed
several bugs.  Back-patch like the original commits.

Discussion: https://postgr.es/m/20190404145319.GA1720877@rfd.leadboat.com
2019-04-05 00:00:52 -07:00
Andres Freund ea97e440b8 Harden tableam against nonexistent / wrong kind of AMs.
Previously it was allowed to set default_table_access_method to an
empty string. That makes sense for default_tablespace, where that was
copied from, as it signals falling back to the database's default
tablespace. As there is no equivalent for table AMs, forbid that.

Also make sure to throw a usable error when creating a table using an
index AM, by using get_am_type_oid() to implement get_table_am_oid()
instead of a separate copy. Previously we'd error out only later, in
GetTableAmRoutine().

Thirdly remove GetTableAmRoutineByAmId() - it was only used in an
earlier version of 8586bf7ed8.

Add tests for the above (some for index AMs as well).
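
Sketch of the now-rejected cases (the table name is hypothetical):

    SET default_table_access_method = '';   -- rejected: empty string
    CREATE TABLE t (a int) USING btree;     -- rejected: btree is an index AM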
2019-04-04 17:39:39 -07:00
Andres Freund 86b85044e8 tableam: Add table_multi_insert() and revamp/speed-up COPY FROM buffering.
This adds table_multi_insert(), and converts COPY FROM, the only user
of heap_multi_insert, to it.

A simple conversion of COPY FROM to use slots would have yielded a
slowdown when inserting into a partitioned table for some
workloads. Different partitions might need different slots (both slot
types and their descriptors), and dropping / creating slots when
there are constant partition changes is measurable.

Thus instead revamp the COPY FROM buffering for partitioned tables to
allow buffering inserts into multiple tables, flushing only when
limits are reached across all partition buffers. By only dropping
slots when there've been inserts into too many different partitions,
the aforementioned overhead is gone. By allowing larger batches, even
when there are frequent partition changes, we actually speed such cases
up significantly.

By using slots, COPY of very narrow rows into unlogged / temporary
tables might slow down very slightly (due to the indirect function calls).
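
The case this mainly targets is bulk loading into a partitioned table,
for example (table and file names are hypothetical):

    -- Rows may hit partitions in any order; buffering is now kept per
    -- partition and flushed in batches rather than row by row.
    COPY events FROM '/tmp/events.csv' WITH (FORMAT csv);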

Author: David Rowley, Andres Freund, Haribabu Kommi
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190327054923.t3epfuewxfqdt22e@alap3.anarazel.de
2019-04-04 16:28:18 -07:00