Commit Graph

2291 Commits

Author SHA1 Message Date
Tom Lane 166f69f769 Fix O(N^2) performance issue in pg_publication_tables view.
The original coding of this view relied on a correlated IN sub-query.
Our planner is not very bright about correlated sub-queries, and even
if it were, there's no way for it to know that the output of
pg_get_publication_tables() is duplicate-free, making the de-duplicating
semantics of IN unnecessary.  Hence, rewrite as a LATERAL sub-query.
This provides circa 100X speedup for me with a few hundred published
tables (the whole regression database), and things would degrade as
roughly O(published_relations * all_relations) beyond that.
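
For illustration, the rewrite pattern looks roughly like this (a simplified
sketch, not the exact view definition):

    -- before: correlated IN sub-query
    SELECT p.pubname, c.relname
      FROM pg_publication p, pg_class c
     WHERE c.oid IN (SELECT relid FROM pg_get_publication_tables(p.pubname));

    -- after: LATERAL sub-query, no de-duplication needed
    SELECT p.pubname, c.relname
      FROM pg_publication p,
           LATERAL pg_get_publication_tables(p.pubname) gpt,
           pg_class c
     WHERE c.oid = gpt.relid;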

Because the rules.out expected output changes, force a catversion bump.
Ordinarily we might not want to do that post-beta1; but we already know
we'll be doing a catversion bump before beta2 to fix pg_statistic_ext
issues, so it's pretty much free to fix it now instead of waiting for v13.

Per report and fix suggestion from PegoraroF10.

Discussion: https://postgr.es/m/1551385426763-0.post@n3.nabble.com
2019-05-22 11:47:02 -04:00
Tom Lane 8a0f0ad540 Remove no-longer-used typedef.
struct ClonedConstraint is no longer needed, so delete it.

Discussion: https://postgr.es/m/18102.1557947143@sss.pgh.pa.us
2019-05-15 17:26:52 -04:00
Peter Eisentraut 037165ca95 Update SQL features/conformance information to SQL:2016 2019-05-14 15:44:37 +02:00
Tom Lane 2d7d946cd3 Clean up the behavior and API of catalog.c's is-catalog-relation tests.
The right way for IsCatalogRelation/Class to behave is to return true
for OIDs less than FirstBootstrapObjectId (not FirstNormalObjectId),
without any of the ad-hoc fooling around with schema membership.

The previous code was wrong because (1) it claimed that
information_schema tables were not catalog relations but their toast
tables were, which is silly; and (2) if you dropped and recreated
information_schema, which is a supported operation, the behavior
changed.  That's even sillier.  With this definition, "catalog
relations" are exactly the ones traceable to the postgres.bki data,
which seems like what we want.

With this simplification, we don't actually need access to the pg_class
tuple to identify a catalog relation; we only need its OID.  Hence,
replace IsCatalogClass with "IsCatalogRelationOid(oid)".  But keep
IsCatalogRelation as a convenience function.

This allows fixing some arguably-wrong semantics in contrib/sepgsql and
ReindexRelationConcurrently, which were using an IsSystemNamespace test
where what they really should be using is IsCatalogRelationOid.  The
previous coding failed to protect toast tables of system catalogs, and
also was not on board with the general principle that user-created tables
do not become catalogs just by virtue of being renamed into pg_catalog.
We can also get rid of a messy hack in ReindexMultipleTables.

While we're at it, also rename IsSystemNamespace to IsCatalogNamespace,
because the previous name invited confusion with the more expansive
semantics used by IsSystemRelation/Class.

Also improve the comments in catalog.c.

There are a few remaining places in replication-related code that are
special-casing OIDs below FirstNormalObjectId.  I'm inclined to think
those are wrong too, and if there should be any special case it should
just extend to FirstBootstrapObjectId.  But first we need to debate
whether a FOR ALL TABLES publication should include information_schema.

Discussion: https://postgr.es/m/21697.1557092753@sss.pgh.pa.us
Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us
2019-05-08 23:27:38 -04:00
Andres Freund 5c1560606d Fix several recently introduced issues around handling new relation forks.
Most of these stem from d25f519107 "tableam: relation creation, VACUUM
FULL/CLUSTER, SET TABLESPACE.".

1) To pass data to the relation_set_new_filenode()
   RelationSetNewRelfilenode() was made to update RelationData.rd_rel
   directly. That's not OK however, as it makes the relcache entries
   temporarily inconsistent. Which among other scenarios is a problem
   if a REINDEX targets an index on pg_class - the
   CatalogTupleUpdate() in RelationSetNewRelfilenode().  Presumably
   that was introduced because other places in the code do so - while
   those aren't "good practice" they don't appear to be actively
   buggy (e.g. because system tables may not be targeted).

   I (Andres) should have caught this while reviewing and significantly
   evolving the code in that commit, mea culpa.

   Fix that by instead passing in the new RelFileNode as a separate
   argument to relation_set_new_filenode() and rely on the relcache to
   update the catalog entry. Also revert that the
   RelationMapUpdateMap() call was changed to immediate, and undo some
   other unnecessary changes.

2) Document that the relation_set_new_filenode cannot rely on the
   whole relcache entry to be valid. It might be worthwhile to
   refactor the code to never have to rely on that, but given the way
   heap_create() is currently coded, that'd be a large change.

3) ATExecSetTableSpace() shouldn't do FlushRelationBuffers() itself. A
   table AM might not use shared buffers at all. Move to
   index_copy_data() and heapam_relation_copy_data().

4) heapam_relation_set_new_filenode() previously sometimes accessed
   rel->rd_rel->relpersistence rather than the `persistence`
   argument. Code movement mistake.

5) Previously heapam_relation_set_new_filenode() re-opened the smgr
   relation to create the init fork, if necessary. Instead have
   RelationCreateStorage() return the SMgrRelation and use it to
   create the init fork.

6) Add a note about the danger of modifying the relcache directly to
   ATExecSetTableSpace() - it's currently not a bug because there's a
   check ERRORing for catalog tables.

Regression tests and assertion improvements that together trigger the
bug described in 1) will be added in a later commit, as there is a
related bug on all branches.

Reported-By: Michael Paquier
Diagnosed-By: Tom Lane and Andres Freund
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20190418011430.GA19133@paquier.xyz
2019-04-29 19:28:05 -07:00
Tom Lane c3f67ed6e4 Do pre-release housekeeping on catalog data, and fix jsonpath send/recv.
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta
tasks specified by RELEASE_CHANGES.  (The only change is 8394 -> 3428.)

Also run reformat_dat_file.pl while I'm here.

While looking at the reformat diffs, I chanced to notice that type
jsonpath had typsend and typreceive = '-', which surely is not the
intention given that jsonpath_send and jsonpath_recv exist.
Fix that.  It's safe to assume that these functions have never been
tested :-(.  I didn't try, but somebody should.
2019-04-28 17:16:50 -04:00
Alvaro Herrera 87259588d0 Fix tablespace inheritance for partitioned rels
Commit ca4103025d left a few loose ends.  The most important one
(broken pg_dump output) is already fixed by virtue of commit
3b23552ad8, but some things remained:

* When ALTER TABLE rewrites tables, the indexes must remain in the
  tablespace they were originally in.  This didn't work because
  index recreation during ALTER TABLE runs manufactured SQL (yuck),
  which runs afoul of default_tablespace in competition with the parent
  relation tablespace.  To fix, reset default_tablespace to the empty
  string temporarily, and add the TABLESPACE clause as appropriate.

* Setting a partitioned rel's tablespace to the database default is
  confusing; if it worked, it would direct the partitions to that
  tablespace regardless of default_tablespace.  But in reality it does
  not work, and making it work is a larger project.  Therefore, throw
  an error when this condition is detected, to alert the unwary.

Add some docs and tests, too.

Author: Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f_1c260nOt_vBJ067AZ3JXptXVRohDVMLEBmudX1YEx-A@mail.gmail.com
2019-04-25 10:31:32 -04:00
Magnus Hagander 77bd49adba Show shared object statistics in pg_stat_database
This adds a row to the pg_stat_database view with datoid 0 and datname
NULL for those objects that are not in a database. This was added
particularly for checksums, but we were already tracking more statistics
for these objects, just not returning them.

Also add a checksum_last_failure column that holds the timestamptz of
the last checksum failure that occurred in a database (or in a
non-database file), if any.
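
For example, the shared-object row and the new column can be inspected with a
query along these lines:

    SELECT datname, checksum_failures, checksum_last_failure
      FROM pg_stat_database
     WHERE datid = 0;   -- the row for objects not belonging to any database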

Author: Julien Rouhaud <rjuju123@gmail.com>
2019-04-12 14:04:50 +02:00
Peter Eisentraut ef6f30fe77 Fix REINDEX CONCURRENTLY of partitions
In case of a partition index, when swapping the old and new index, we
also need to attach the new index as a partition and detach the old
one.  Also, to handle partition indexes, we not only need to change
dependencies referencing the index, but also dependencies of the index
referencing something else.  The previous code did this only
specifically for a constraint, but we also need to do this for
partitioned indexes.  So instead write a generic function that does it
for all dependencies.

Author: Michael Paquier <michael@paquier.xyz>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/DF4PR8401MB11964EDB77C860078C343BEBEE5A0%40DF4PR8401MB1196.NAMPRD84.PROD.OUTLOOK.COM#154df1fedb735190a773481765f7b874
2019-04-12 08:36:05 +02:00
Peter Eisentraut 03f9e5cba0 Report progress of REINDEX operations
This uses the same infrastructure that the CREATE INDEX progress
reporting uses.  Add a column to pg_stat_progress_create_index to
report the OID of the index being worked on.  This was not necessary
for CREATE INDEX, but it's useful for REINDEX.

Also edit the phase descriptions a bit to be more consistent with the
source code comments.
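
During a REINDEX, progress can be watched from another session with something
like:

    SELECT pid, index_relid::regclass, phase, blocks_done, blocks_total
      FROM pg_stat_progress_create_index;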

Discussion: https://www.postgresql.org/message-id/ef6a6757-c36a-9e81-123f-13b19e36b7d7%402ndquadrant.com
2019-04-07 12:35:29 +02:00
Peter Eisentraut 106f2eb664 Cast pg_stat_progress_cluster.cluster_index_relid to oid
It's tracked internally as bigint, but when presented to the user it
should be oid.
2019-04-07 10:31:32 +02:00
Alvaro Herrera 9f06d79ef8 Add facility to copy replication slots
This allows the user to create duplicates of existing replication slots,
either logical or physical, and even changing properties such as whether
they are temporary or the output plugin used.

There are multiple uses for this, such as initializing multiple replicas
using the slot for one base backup; investigating logical replication
issues; and selecting a different output plugin.
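
As an illustration, the new SQL functions can be used roughly as follows
(slot and plugin names are hypothetical):

    SELECT * FROM pg_copy_physical_replication_slot('orig_phys', 'copy_phys');
    SELECT * FROM pg_copy_logical_replication_slot('orig_logical', 'copy_logical',
                                                   false, 'test_decoding');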

Author: Masahiko Sawada
Reviewed-by: Michael Paquier, Andres Freund, Petr Jelinek
Discussion: https://postgr.es/m/CAD21AoAm7XX8y_tOPP6j4Nzzch12FvA1wPqiO690RCk+uYVstg@mail.gmail.com
2019-04-05 18:05:18 -03:00
Tomas Vondra c50b3158bf Reduce overhead of pg_mcv_list (de)serialization
Commit ea4e1c0e8f resolved issues with memory alignment in serialized
pg_mcv_list values, but it required copying data to/from the varlena
buffer during serialization and deserialization.  As the MCV lists may
be fairly large, the overhead (memory consumption, CPU usage) can get
rather significant too.

This change tweaks the serialization format so that the alignment is
correct with respect to the varlena value, and so the parts may be
accessed directly without copying the data.

Catversion bump, as it affects existing pg_statistic_ext data.
2019-04-03 21:23:40 +02:00
Stephen Frost b0b39f72b9 GSSAPI encryption support
On both the frontend and backend, prepare for GSSAPI encryption
support by moving common code for error handling into a separate file.
Fix a TODO for handling multiple status messages in the process.
Eliminate the OIDs, which have not been needed for some time.

Add frontend and backend encryption support functions.  Keep the
context initiation for authentication-only separate on both the
frontend and backend in order to avoid concerns about changing the
requested flags to include encryption support.

In postmaster, pull GSSAPI authorization checking into a shared
function.  Also share the initiator name between the encryption and
non-encryption codepaths.

For HBA, add "hostgssenc" and "hostnogssenc" entries that behave
similarly to their SSL counterparts.  "hostgssenc" requires either
"gss", "trust", or "reject" for its authentication.

Similarly, add a "gssencmode" parameter to libpq.  Supported values are
"disable", "require", and "prefer".  Notably, negotiation will only be
attempted if credentials can be acquired.  Move credential acquisition
into its own function to support this behavior.

Add a simple pg_stat_gssapi view similar to pg_stat_ssl, for monitoring
if GSSAPI authentication was used, what principal was used, and if
encryption is being used on the connection.
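
For instance, whether the current session is using GSSAPI encryption can be
checked with a query such as:

    SELECT pid, gss_authenticated, principal, encrypted
      FROM pg_stat_gssapi
     WHERE pid = pg_backend_pid();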

Finally, add documentation for everything new, and update existing
documentation on connection security.

Thanks to Michael Paquier for the Windows fixes.

Author: Robbie Harwood, with changes to the read/write functions by me.
Reviewed in various forms and at different times by: Michael Paquier,
   Andres Freund, David Steele.
Discussion: https://www.postgresql.org/message-id/flat/jlg1tgq1ktm.fsf@thriss.redhat.com
2019-04-03 15:02:33 -04:00
Alvaro Herrera ab0dfc961b Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.

There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific.  The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.

The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan.  (The index validation index scan requires
patching each AM, which has not been included here.)

Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 15:18:08 -03:00
Alexander Korotkov 0a02e2ae02 GIN support for @@ and @? jsonpath operators
This commit makes the existing GIN operator classes jsonb_ops and jsonb_path_ops
support the "jsonb @@ jsonpath" and "jsonb @? jsonpath" operators.  The basic idea
is to extract statements of the following form out of the jsonpath:

 key1.key2. ... .keyN = const

The rest of jsonpath is rechecked from heap.
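
For illustration, an indexed search might look roughly like this (table and
key names are hypothetical):

    CREATE INDEX ON docs USING gin (body jsonb_path_ops);
    SELECT * FROM docs WHERE body @? '$.key1.key2 ? (@ == "const")';
    SELECT * FROM docs WHERE body @@ '$.key1.key2 == "const"';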

Catversion is bumped.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Alexander Korotkov
Reviewed-by: Jonathan Katz, Pavel Stehule
2019-04-01 18:08:52 +03:00
Peter Eisentraut fc22b6623b Generated columns
This is an SQL-standard feature that allows creating columns that are
computed from expressions rather than assigned, similar to a view or
materialized view but on a column basis.

This implements one kind of generated column: stored (computed on
write).  Another kind, virtual (computed on read), is planned for the
future, and some room is left for it.
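
A minimal example of the stored kind (table and column names are hypothetical):

    CREATE TABLE rectangles (
        w    numeric,
        h    numeric,
        area numeric GENERATED ALWAYS AS (w * h) STORED
    );
    INSERT INTO rectangles (w, h) VALUES (3, 4);   -- area is computed as 12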

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b151f851-4019-bdb1-699e-ebab07d2f40a@2ndquadrant.com
2019-03-30 08:15:57 +01:00
Tomas Vondra d85e0f366a Fix memory alignment in pg_mcv_list serialization
Blind attempt at fixing ia64, hppa and sparc builds.

The serialized representation of MCV lists did not enforce proper memory
alignment for internal fields, resulting in deserialization issues on
platforms that are more sensitive to this (ia64, sparc and hppa).

This forces a catalog version bump, because the layout of serialized
pg_mcv_list changes.

Broken since 7300a699.
2019-03-29 19:06:38 +01:00
Peter Eisentraut 5dc92b844e REINDEX CONCURRENTLY
This adds the CONCURRENTLY option to the REINDEX command.  A REINDEX
CONCURRENTLY on a specific index creates a new index (like CREATE
INDEX CONCURRENTLY), then renames the old index away and the new index
in place and adjusts the dependencies, and then drops the old
index (like DROP INDEX CONCURRENTLY).  The REINDEX command also has
the capability to run its other variants (TABLE, DATABASE) with the
CONCURRENTLY option (but not SYSTEM).

The reindexdb command gets the --concurrently option.
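
For example (index, table, and database names are hypothetical):

    REINDEX INDEX CONCURRENTLY my_index;
    REINDEX TABLE CONCURRENTLY my_table;
    -- from the shell: reindexdb --concurrently mydb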

Author: Michael Paquier, Andreas Karlsson, Peter Eisentraut
Reviewed-by: Andres Freund, Fujii Masao, Jim Nasby, Sergei Kornilov
Discussion: https://www.postgresql.org/message-id/flat/60052986-956b-4478-45ed-8bd119e9b9cf%402ndquadrant.com#74948a1044c56c5e817a5050f554ddee
2019-03-29 08:26:33 +01:00
Andres Freund d25f519107 tableam: relation creation, VACUUM FULL/CLUSTER, SET TABLESPACE.
This moves the responsibility for:
- creating the storage necessary for a relation, including creating a
  new relfilenode for a relation with existing storage
- non-transactional truncation of a relation
- VACUUM FULL / CLUSTER's rewrite of a table
below tableam.

This is fairly straightforward, with a bit of complexity smattered in
to move the computation of xid / multixid horizons below the AM, as
they don't make sense for every table AM.

Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
2019-03-28 20:01:43 -07:00
Thomas Munro 2fc7af5e96 Add basic infrastructure for 64 bit transaction IDs.
Instead of inferring epoch progress from xids and checkpoints,
introduce a 64 bit FullTransactionId type and use it to track xid
generation.  This fixes an unlikely bug where the epoch is reported
incorrectly if the range of active xids wraps around more than once
between checkpoints.

The only user-visible effect of this commit is to correct the epoch
used by txid_current() and txid_status(), also visible with
pg_controldata, in those rare circumstances.  It also creates some
basic infrastructure so that later patches can use 64 bit
transaction IDs in more places.

The new type is a struct that we pass by value, as a form of strong
typedef.  This prevents the sort of accidental confusion between
TransactionId and FullTransactionId that would be possible if we
were to use a plain old uint64.

Author: Thomas Munro
Reported-by: Amit Kapila
Reviewed-by: Andres Freund, Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com
2019-03-28 18:12:20 +13:00
Andres Freund 2a96909a4a tableam: Support for an index build's initial table scan(s).
To support building indexes over tables of different AMs, the scans to
do so need to be routed through the table AM.  While moving a fair
amount of code, nearly all the changes are just moving code to below a
callback.

Currently the range-based interface wouldn't make much sense for
non-block-based table AMs.  But that seems acceptable for now.

Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
2019-03-27 19:59:06 -07:00
Tomas Vondra 7300a69950 Add support for multivariate MCV lists
Introduce a third extended statistic type, supported by the CREATE
STATISTICS command - MCV lists, a generalization of the statistic
already built and used for individual columns.

Compared to the already supported types (n-distinct coefficients and
functional dependencies), MCV lists are more complex, include column
values and allow estimation of a much wider range of common clauses
(equality and inequality conditions, IS NULL, IS NOT NULL etc.).
Similarly to the other types, a new pseudo-type (pg_mcv_list) is used.
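
For illustration, such a statistic is created like the existing kinds
(table and column names are hypothetical):

    CREATE STATISTICS s_ab (mcv) ON a, b FROM t;
    ANALYZE t;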

Author: Tomas Vondra
Reviewed-by: Dean Rasheed, David Rowley, Mark Dilger, Alvaro Herrera
Discussion: https://postgr.es/m/dfdac334-9cf2-2597-fb27-f0fb3753f435@2ndquadrant.com
2019-03-27 18:32:18 +01:00
Michael Paquier 5bde1651bb Switch function current_schema[s]() to be parallel-unsafe
When invoked for the first time in a session, current_schema() and
current_schemas() can finish by creating a temporary schema.  Currently
those functions are parallel-safe; however, if for one reason or another
they get launched across multiple parallel workers, they would fail when
attempting to create a temporary schema as temporary contexts are not
supported in this case.

The original issue has been spotted by buildfarm members crake and
lapwing, after commit c5660e0 has introduced the first regression tests
based on current_schema() in the tree.  After that, 396676b has
introduced a workaround to avoid parallel plans but that was not
completely right either.

Catversion is bumped.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20190118024618.GF1883@paquier.xyz
2019-03-27 11:35:12 +09:00
Robert Haas 6f97457e0d Add progress reporting for CLUSTER and VACUUM FULL.
This uses the same progress reporting infrastructure added in commit
c16dc1aca5 and extends it to these
additional cases.  We lack the ability to track the internal progress
of sorts and index builds so the information reported is
coarse-grained for some parts of the operation, but it still seems
like a significant improvement over having nothing at all.
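
Progress can then be observed from another session with a query along these
lines:

    SELECT pid, relid::regclass, command, phase
      FROM pg_stat_progress_cluster;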

Tatsuro Yamada, reviewed by Thomas Munro, Masahiko Sawada, Michael
Paquier, Jeff Janes, Alvaro Herrera, Rafia Sabih, and by me.  A fair
amount of polishing also by me.

Discussion: http://postgr.es/m/59A77072.3090401@lab.ntt.co.jp
2019-03-25 10:59:04 -04:00
Peter Eisentraut 5e1963fb76 Collations with nondeterministic comparison
This adds a flag "deterministic" to collations.  If that is false,
such a collation disables various optimizations that assume that
strings are equal only if they are byte-wise equal.  That then allows
use cases such as case-insensitive or accent-insensitive comparisons
or handling of strings with different Unicode normal forms.

This functionality is only supported with the ICU provider.  At least
glibc doesn't appear to have any locales that work in a
nondeterministic way, so it's not worth supporting this for the libc
provider.
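
For illustration, a case-insensitive comparison might be set up roughly like
this (the ICU locale string shown is just one possible choice):

    CREATE COLLATION case_insensitive
        (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
    SELECT 'abc' = 'ABC' COLLATE case_insensitive;   -- true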

The term "deterministic comparison" in this context is from Unicode
Technical Standard #10
(https://unicode.org/reports/tr10/#Deterministic_Comparison).

This patch makes changes in three areas:

- CREATE COLLATION DDL changes and system catalog changes to support
  this new flag.

- Many executor nodes and auxiliary code are extended to track
  collations.  Previously, this code would just throw away collation
  information, because the eventually-called user-defined functions
  didn't use it since they only cared about equality, which didn't
  need collation information.

- String data type functions that do equality comparisons and hashing
  are changed to take the (non-)deterministic flag into account.  For
  comparison, this just means skipping various shortcuts and tie
  breakers that use byte-wise comparison.  For hashing, we first need
  to convert the input string to a canonical "sort key" using the ICU
  analogue of strxfrm().

Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://www.postgresql.org/message-id/flat/1ccc668f-4cbc-0bef-af67-450b47cdfee7@2ndquadrant.com
2019-03-22 12:12:43 +01:00
Alvaro Herrera 03ae9d59bd Catversion bump announced in previous commit but forgotten 2019-03-21 18:43:41 -03:00
Tom Lane 8aa9dd74b3 Sort the dependent objects before deletion in DROP OWNED BY.
This finishes a task we left undone in commit f1ad067fc, by extending
the delete-in-descending-OID-order rule to deletions triggered by
DROP OWNED BY.  We've coped with machine-dependent deletion orders
one time too many, and the new issues caused by Peter G's recent
nbtree hacking seem like the last straw.

Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
2019-03-20 18:06:29 -04:00
Alvaro Herrera a6da004715 Add index_get_partition convenience function
This new function simplifies some existing coding, as well as supports
future patches.

Discussion: https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql
Reviewed-by: Amit Langote, Jesper Pedersen
2019-03-20 18:18:50 -03:00
Alexander Korotkov 641fde2523 Remove ambiguity for jsonb_path_match() and jsonb_path_exists()
There are two-argument and four-argument versions of jsonb_path_match() and
jsonb_path_exists().  But the four-argument versions have optional 3rd and 4th
arguments, which leads to ambiguity.  At the same time, the two-argument versions
are needed only for the @@ and @? operators.  So, rename the two-argument versions
to remove the ambiguity.

Catversion is bumped.
2019-03-20 10:30:56 +03:00
Andrew Gierth 01bde4fa4c Implement OR REPLACE option for CREATE AGGREGATE.
Aggregates have acquired a dozen or so optional attributes in recent
years for things like parallel query and moving-aggregate mode; the
lack of an OR REPLACE option to add or change these for an existing
agg makes extension upgrades gratuitously hard. Rectify.
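
For example, an existing aggregate can now be redefined in place to add such
attributes (a sketch with hypothetical names):

    CREATE OR REPLACE AGGREGATE my_sum (int4) (
        sfunc = int4pl,
        stype = int4,
        parallel = safe
    );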
2019-03-19 01:16:50 +00:00
Alexander Korotkov 72b6460336 Partial implementation of SQL/JSON path language
The SQL:2016 standard, among other things, contains a set of SQL/JSON features
for JSON processing inside a relational database.  The core of SQL/JSON is the
JSON path language, which allows accessing parts of JSON documents and making
computations over them.  This commit implements partial support for the JSON
path language as a separate datatype called "jsonpath".  The implementation is
partial because it lacks datetime support and suppression of numeric errors.
Missing features will be added later by separate commits.

Support of SQL/JSON features requires implementation of separate nodes, which
will be considered in subsequent patches.  This commit includes the following
set of plain functions, allowing jsonpath to be executed over jsonb values:

 * jsonb_path_exists(jsonb, jsonpath[, jsonb, bool]),
 * jsonb_path_match(jsonb, jsonpath[, jsonb, bool]),
 * jsonb_path_query(jsonb, jsonpath[, jsonb, bool]),
 * jsonb_path_query_array(jsonb, jsonpath[, jsonb, bool]),
 * jsonb_path_query_first(jsonb, jsonpath[, jsonb, bool]).

This commit also implements "jsonb @? jsonpath" and "jsonb @@ jsonpath", which
are wrappers over jsonpath_exists(jsonb, jsonpath) and jsonpath_predicate(jsonb,
jsonpath) respectively.  These operators will have index support
(implemented in subsequent patches).
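
For illustration, typical calls look like this:

    SELECT jsonb_path_exists('{"a": {"b": 1}}', '$.a.b ? (@ > 0)');     -- true
    SELECT jsonb_path_query('{"a": [1, 2, 3]}', '$.a[*] ? (@ > 1)');    -- 2, 3
    SELECT '{"a": {"b": 1}}'::jsonb @? '$.a.b';                         -- true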

Catversion bumped, to add new functions and operators.

Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
Documentation was written by Oleg Bartunov and Liudmila Mantrova.  The work
was inspired by Oleg Bartunov.

Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Teodor Sigaev, Alexander Korotkov, Oleg Bartunov, Liudmila Mantrova
Reviewed-by: Tomas Vondra, Andrew Dunstan, Pavel Stehule, Alexander Korotkov
2019-03-16 12:16:48 +03:00
Peter Eisentraut b13a913607 Add BKI_DEFAULT to pg_class.relrewrite
This column is always 0 on disk, so it doesn't have to be tracked
separately for each entry.
2019-03-14 21:25:39 +01:00
Tom Lane f1d85aa98e Add support for hyperbolic functions, as well as log10().
The SQL:2016 standard adds support for the hyperbolic functions
sinh(), cosh(), and tanh().  POSIX has long required libm to
provide those functions as well as their inverses asinh(),
acosh(), atanh().  Hence, let's just expose the libm functions
to the SQL level.  As with the trig functions, we only implement
versions for float8, not numeric.

For the moment, we'll assume that all platforms actually do have
these functions; if experience teaches otherwise, some autoconf
effort may be needed.

SQL:2016 also adds support for base-10 logarithm, but with the
function name log10(), whereas the name we've long used is log().
Add aliases named log10() for the float8 and numeric versions.
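
For example:

    SELECT sinh(1), cosh(1), tanh(1), asinh(1), acosh(1), atanh(0.5);
    SELECT log10(1000);   -- 3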

Lætitia Avrot

Discussion: https://postgr.es/m/CAB_COdguG22LO=rnxDQ2DW1uzv8aQoUzyDQNJjrR4k00XSgm5w@mail.gmail.com
2019-03-12 15:55:09 -04:00
Tom Lane 3aa0395d4e Remove remaining hard-wired OID references in the initial catalog data.
In the v11-era commits that taught genbki.pl to resolve symbolic
OID references in the initial catalog data, we didn't bother to
make every last reference symbolic; some of the catalogs have so
few initial rows that it didn't seem worthwhile.

However, the new project policy that OIDs assigned by new patches
should be automatically renumberable changes this calculus.
A patch that wants to add a row in one of these catalogs would have
a problem when the OID it assigns gets renumbered.  Hence, do the
mop-up work needed to make all OID references in initial data be
symbolic, and establish an associated project policy that we'll
never again write a hard-wired OID reference there.

No catversion bump since the contents of postgres.bki aren't
actually changed by this commit.

Discussion: https://postgr.es/m/CAH2-WzmMTGMcPuph4OvsO7Ykut0AOCF_i-=eaochT0dd2BN9CQ@mail.gmail.com
2019-03-12 12:30:35 -04:00
Tom Lane a6417078c4 Create a script that can renumber manually-assigned OIDs.
This commit adds a Perl script renumber_oids.pl, which can reassign a
range of manually-assigned OIDs to someplace else by modifying OID
fields of the catalog *.dat files and OID-assigning macros in the
catalog *.h files.

Up to now, we've encouraged new patches that need manually-assigned
OIDs to use OIDs just above the range of existing OIDs.  Predictably,
this leads to patches stepping on each others' toes, as whichever
one gets committed first creates an OID conflict that other patch
author(s) have to resolve manually.  With the availability of
renumber_oids.pl, we can eliminate a lot of this hassle.
The new project policy, therefore, is:

* Encourage new patches to use high OIDs (the documentation suggests
choosing a block of OIDs at random in 8000..9999).

* After feature freeze in each development cycle, run renumber_oids.pl
to move all such OIDs down to lower numbers, thus freeing the high OID
range for the next development cycle.

This plan should greatly reduce the risk of OID collisions between
concurrently-developed patches.  Also, if such a collision happens
anyway, we have the option to resolve it without much effort by doing
an off-schedule OID renumbering to get the first-committed patch out
of the way.  Or a patch author could use renumber_oids.pl to change
their patch's assignments without much pain.

This approach does put a premium on not hard-wiring any OID values
in places where renumber_oids.pl and genbki.pl can't fix them.
Project practice in that respect seems to be pretty good already,
but a follow-on patch will sand down some rough edges.

John Naylor and Tom Lane, per an idea of Peter Geoghegan's

Discussion: https://postgr.es/m/CAH2-WzmMTGMcPuph4OvsO7Ykut0AOCF_i-=eaochT0dd2BN9CQ@mail.gmail.com
2019-03-12 10:50:48 -04:00
Andres Freund c2fe139c20 tableam: Add and use scan APIs.
To allow table accesses to be not directly dependent on heap, several
new abstractions are needed. Specifically:

1) Heap scans need to be generalized into table scans. Do this by
   introducing TableScanDesc, which will be the "base class" for
   individual AMs. This contains the AM independent fields from
   HeapScanDesc.

   The previous heap_{beginscan,rescan,endscan} et al. have been
   replaced with a table_ version.

   There's no direct replacement for heap_getnext(), as that returned
   a HeapTuple, which is undesirable for other AMs. Instead there's
   table_scan_getnextslot().  But note that heap_getnext() lives on,
   it's still used widely to access catalog tables.

   This is achieved by new scan_begin, scan_end, scan_rescan,
   scan_getnextslot callbacks.

2) The portion of parallel scans that's shared between backends needs
   to be able to do so without the user doing per-AM work. To achieve
   that new parallelscan_{estimate, initialize, reinitialize}
   callbacks are introduced, which operate on a new
   ParallelTableScanDesc, which again can be subclassed by AMs.

   As it is likely that several AMs are going to be block oriented,
   block oriented callbacks that can be shared between such AMs are
   provided and used by heap. table_block_parallelscan_{estimate,
   initialize, reinitialize} as callbacks, and
   table_block_parallelscan_{nextpage, init} for use in AMs. These
   operate on a ParallelBlockTableScanDesc.

3) Index scans need to be able to access tables to return a tuple, and
   there needs to be state across individual accesses to the heap to
   store state like buffers. That's now handled by introducing a
   sort-of-scan IndexFetchTable, which again is intended to be
   subclassed by individual AMs (for heap IndexFetchHeap).

   The relevant callbacks for an AM are index_fetch_{end, begin,
   reset} to create the necessary state, and index_fetch_tuple to
   retrieve an indexed tuple.  Note that index_fetch_tuple
   implementations need to be smarter than just blindly fetching the
   tuples for AMs that have optimizations similar to heap's HOT - the
   currently alive tuple in the update chain needs to be fetched if
   appropriate.

   Similar to table_scan_getnextslot(), it's undesirable to continue
   to return HeapTuples. Thus index_fetch_heap (might want to rename
   that later) now accepts a slot as an argument. Core code doesn't
   have a lot of call sites performing index scans without going
   through the systable_* API (in contrast to loads of heap_getnext
   calls and working directly with HeapTuples).

   Index scans now store the result of a search in
   IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
   target is not generally a HeapTuple anymore that seems cleaner.

To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:

a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
   slots capable of holding a tuple of the AM's
   type. table_slot_callbacks() and table_slot_create() are based
   upon that, but have additional logic to deal with views, foreign
   tables, etc.

   While this change could have been done separately, nearly all the
   call sites that needed to be adapted for the rest of this commit
   also would have needed to be adapted for
   table_slot_callbacks(), making separation not worthwhile.

b) tuple_satisfies_snapshot checks whether the tuple in a slot is
   currently visible according to a snapshot. That's required as a few
   places now don't have a buffer + HeapTuple around, but a
   slot (which in heap's case internally has that information).

Additionally a few infrastructure changes were needed:

I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
   internally uses a slot to keep track of tuples. While
   systable_getnext() still returns HeapTuples, and will so for the
   foreseeable future, the index API (see 1) above) now only deals with
   slots.

The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.

Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 12:46:41 -07:00
Tom Lane caf626b2cd Convert [autovacuum_]vacuum_cost_delay into floating-point GUCs.
This change makes it possible to specify sub-millisecond delays,
which work well on most modern platforms, though that was not true
when the cost-delay feature was designed.

To support this without breaking existing configuration entries,
improve guc.c to allow floating-point GUCs to have units.  Also,
allow "us" (microseconds) as an input/output unit for time-unit GUCs.
(It's not allowed as a base unit, at least not yet.)

Likewise change the autovacuum_vacuum_cost_delay reloption to be
floating-point; this forces a catversion bump because the layout of
StdRdOptions changes.
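
For instance, sub-millisecond delays can now be written directly (a sketch;
the table name is hypothetical):

    SET vacuum_cost_delay = '500us';                                -- half a millisecond
    ALTER TABLE big_tab SET (autovacuum_vacuum_cost_delay = 0.5);   -- 0.5 ms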

This patch doesn't in itself change the default values or allowed
ranges for these parameters, and it should not affect the behavior
for any already-allowed setting for them.

Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
2019-03-10 15:01:39 -04:00
Magnus Hagander 6b9e875f72 Track block level checksum failures in pg_stat_database
This adds a column that counts how many checksum failures have occurred
on files belonging to a specific database. Both checksum failures
during normal backend processing and those created when a base backup
detects a checksum failure are counted.

Author: Magnus Hagander
Reviewed by: Julien Rouhaud
2019-03-09 10:47:30 -08:00
Tom Lane 1b76168da7 Reformat catalog .dat files.
Test run for my previous commit; cleans up formatting issues in some
other recent commits.
2019-03-08 12:01:27 -05:00
Tom Lane 27aaf6eff4 Minor improvements for reformat_dat_file.pl.
Use Getopt::Long in preference to hand-rolled option parsing code.

Also, remove "-I .../backend/catalog" switch from the Makefile
invocations.  That's been unnecessary for some time, and leaving it
there gives the false impression it's needed in manual invocations.

John Naylor (extracted from a larger but more controversial patch)

Discussion: https://postgr.es/m/CACPNZCsHdcQN2jQ1=ptbi1Co2Nj3aHgRCUMk62=ThgWNabPY+Q@mail.gmail.com
2019-03-08 11:48:49 -05:00
Thomas Munro 91595f9d49 Drop the vestigial "smgr" type.
Before commit 3fa2bb31 this type appeared in the catalogs to
select which of several block storage mechanisms each relation
used.

New features under development propose to revive the concept of
different block storage managers for new kinds of data accessed
via bufmgr.c, but don't need to put references to them in the
catalogs.  So, avoid useless maintenance work on this type by
dropping it.  Update some regression tests that were referencing
it where any type would do.

Discussion: https://postgr.es/m/CA%2BhUKG%2BDE0mmiBZMtZyvwWtgv1sZCniSVhXYsXkvJ_Wo%2B83vvw%40mail.gmail.com
2019-03-07 15:44:04 +13:00
Andres Freund b172342321 Fix copy/out/readfuncs for accessMethod addition in 8586bf7ed8.
This includes a catversion bump, as IntoClause is theoretically
speaking part of storable rules. In practice I don't think that can
happen, but there's no reason to be stingy here.

Per buildfarm member calliphoridae.
2019-03-06 11:55:28 -08:00
Andres Freund 8586bf7ed8 tableam: introduce table AM infrastructure.
This introduces the concept of table access methods, i.e. CREATE
  ACCESS METHOD ... TYPE TABLE and
  CREATE TABLE ... USING (storage-engine).
No table access functionality is delegated to table AMs as of this
commit; that'll be done in following commits.
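
For illustration, with the built-in heap handler this looks roughly like
(the access method name is hypothetical):

    CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;
    CREATE TABLE t (a int) USING heap2;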

Subsequent commits will incrementally abstract table access
functionality to be routed through table access methods. That change
is too large to be reviewed & committed at once, so it'll be done
incrementally.

Docs will be updated at the end, as adding them incrementally would
likely make them less coherent, and definitely is a lot more work,
without a lot of benefit.

Table access methods are specified similar to index access methods,
i.e. pg_am.amhandler returns, as INTERNAL, a pointer to a struct with
callbacks. In contrast to index AMs, that struct needs to live as long
as a backend; typically that's achieved by just returning a pointer to
a constant struct.

Psql's \d+ now displays a table's access method. That can be disabled
with HIDE_TABLEAM=true, which is mainly useful so regression tests can
be run against different AMs.  It's quite possible that this behaviour
still needs to be fine tuned.

For now it's not allowed to set a table AM for a partitioned table, as
we've not resolved how partitions would inherit that. Disallowing
allows us to introduce, if we decide that's the way forward, such a
behaviour without a compatibility break.

Catversion bumped, to add the heap table AM and references to it.

Author: Haribabu Kommi, Andres Freund, Alvaro Herrera, Dimitri Golgov and others
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
    https://postgr.es/m/20190107235616.6lur25ph22u5u5av@alap3.anarazel.de
    https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de
2019-03-06 09:54:38 -08:00
Alvaro Herrera b96f6b1948 pg_partition_ancestors
Adds another introspection feature for partitioning, necessary for
further psql patches.
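
For example (the partition name is hypothetical):

    SELECT relid FROM pg_partition_ancestors('measurement_y2019m01');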

Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/20190226222757.GA31622@alvherre.pgsql
2019-03-04 16:14:29 -03:00
Robert Haas 1bb5e78218 Move code for managing PartitionDescs into a new file, partdesc.c
This is similar in spirit to the existing partbounds.c file in the
same directory, except that there's a lot less code in the new file
created by this commit.  Pending work in this area proposes to add a
bunch more code related to PartitionDescs, though, and this will give
us a good place to put it.

Discussion: http://postgr.es/m/CA+TgmoZUwPf_uanjF==gTGBMJrn8uCq52XYvAEorNkLrUdoawg@mail.gmail.com
2019-02-21 11:45:02 -05:00
Joe Conway 290e3b77fd Mark pg_config() stable rather than immutable
pg_config() has been marked immutable since its inception. As part of a
larger discussion around the definition of immutable versus stable and
related implications for marking functions parallel safe raised by
Andres, the consensus was clearly that pg_config() is stable, since
it could possibly change output even for the same minor version with
a recompile or installation of a new binary. So mark it stable.

Theoretically this could/should be backpatched, but it was deemed to be not
worth the effort since in practice this is very unlikely to cause problems
in the real world.

Discussion: https://postgr.es/m/20181126234521.rh3grz7aavx2ubjv@alap3.anarazel.de
2019-02-17 09:21:13 -05:00
Tom Lane 608b167f9f Allow user control of CTE materialization, and change the default behavior.
Historically we've always materialized the full output of a CTE query,
treating WITH as an optimization fence (so that, for example, restrictions
from the outer query cannot be pushed into it).  This is appropriate when
the CTE query is INSERT/UPDATE/DELETE, or is recursive; but when the CTE
query is non-recursive and side-effect-free, there's no hazard of changing
the query results by pushing restrictions down.

Another argument for materialization is that it can avoid duplicate
computation of an expensive WITH query --- but that only applies if
the WITH query is called more than once in the outer query.  Even then
it could still be a net loss, if each call has restrictions that
would allow just a small part of the WITH query to be computed.

Hence, let's change the behavior for WITH queries that are non-recursive
and side-effect-free.  By default, we will inline them into the outer
query (removing the optimization fence) if they are called just once.
If they are called more than once, we will keep the old behavior by
default, but the user can override this and force inlining by specifying
NOT MATERIALIZED.  Lastly, the user can force the old behavior by
specifying MATERIALIZED; this would mainly be useful when the query had
deliberately been employing WITH as an optimization fence to prevent a
poor choice of plan.
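
For illustration (table and column names are hypothetical):

    -- keep the optimization fence explicitly:
    WITH w AS MATERIALIZED (SELECT * FROM big_tab)
    SELECT * FROM w WHERE key = 123;

    -- force inlining even though w is referenced twice:
    WITH w AS NOT MATERIALIZED (SELECT * FROM big_tab)
    SELECT * FROM w AS w1 JOIN w AS w2 ON w1.key = w2.ref;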

Andreas Karlsson, Andrew Gierth, David Fetter

Discussion: https://postgr.es/m/87sh48ffhb.fsf@news-spur.riddles.org.uk
2019-02-16 16:11:12 -05:00
Tom Lane 74dfe58a59 Allow extensions to generate lossy index conditions.
For a long time, indxpath.c has had the ability to extract derived (lossy)
index conditions from certain operators such as LIKE.  For just as long,
it's been obvious that we really ought to make that capability available
to extensions.  This commit finally accomplishes that, by adding another
API for planner support functions that lets them create derived index
conditions for their functions.  As proof of concept, the hardwired
"special index operator" code formerly present in indxpath.c is pushed
out to planner support functions attached to LIKE and other relevant
operators.

A weak spot in this design is that an extension needs to know OIDs for
the operators, datatypes, and opfamilies involved in the transformation
it wants to make.  The core-code prototypes use hard-wired OID references
but extensions don't have that option for their own operators etc.  It's
usually possible to look up the required info, but that may be slow and
inconvenient.  However, improving that situation is a separate task.

I want to do some additional refactorization around selfuncs.c, but
that also seems like a separate task.

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
2019-02-11 21:26:14 -05:00
Michael Paquier ea92368cd1 Move max_wal_senders out of max_connections for connection slot handling
Since its introduction, max_wal_senders is counted as part of
max_connections when it comes to defining how many connection slots can be
used for replication connections with a WAL sender context.  This can
lead to confusion for some users, as it could be possible to block a
base backup or replication from happening because other backend sessions
are already taken for other purposes by an application, and
superuser-only connection slots are not a correct solution to handle
that case.

This commit makes max_wal_senders independent of max_connections for its
handling of PGPROC entries in ProcGlobal, meaning that connection slots
for WAL senders are handled using their own free queue, like autovacuum
workers and bgworkers.

One compatibility issue that this change creates is that a standby now
requires a value of max_wal_senders at least equal to that of its
primary.  So, if a standby is created with a value of max_wal_senders
lower than its primary's, this could break failovers.
Normally this should not be an issue though, as any settings of a
standby are inherited from its primary as postgresql.conf gets normally
copied as part of a base backup, so parameters would be consistent.

Author: Alexander Kukushkin
Reviewed-by: Kyotaro Horiguchi, Petr Jelínek, Masahiko Sawada, Oleksii
Kliukin
Discussion: https://postgr.es/m/CAFh8B=nBzHQeYAu0b8fjK-AF1X4+_p6GRtwG+cCgs6Vci2uRuQ@mail.gmail.com
2019-02-12 10:07:56 +09:00
Tom Lane 1d92a0c9f7 Redesign the partition dependency mechanism.
The original setup for dependencies of partitioned objects had
serious problems:

1. It did not verify that a drop cascading to a partition-child object
also cascaded to at least one of the object's partition parents.  Now,
normally a child object would share all its dependencies with one or
another parent (e.g. a child index's opclass dependencies would be shared
with the parent index), so that this oversight is usually harmless.
But if some dependency failed to fit this pattern, the child could be
dropped while all its parents remain, creating a logically broken
situation.  (It's easy to construct artificial cases that break it,
such as attaching an unrelated extension dependency to the child object
and then dropping the extension.  I'm not sure if any less-artificial
cases exist.)

2. Management of partition dependencies during ATTACH/DETACH PARTITION
was complicated and buggy; for example, after detaching a partition
table it was possible to create cases where a formerly-child index
should be dropped and was not, because the correct set of dependencies
had not been reconstructed.

Less seriously, because multiple partition relationships were
represented identically in pg_depend, there was an order-of-traversal
dependency on which partition parent was cited in error messages.
We also had some pre-existing order-of-traversal hazards for error
messages related to internal and extension dependencies.  This is
cosmetic to users but causes testing problems.

To fix #1, add a check at the end of the partition tree traversal
to ensure that at least one partition parent got deleted.  To fix #2,
establish a new policy that partition dependencies are in addition to,
not instead of, a child object's usual dependencies; in this way
ATTACH/DETACH PARTITION need not cope with adding or removing the
usual dependencies.

To fix the cosmetic problem, distinguish between primary and secondary
partition dependency entries in pg_depend, by giving them different
deptypes.  (They behave identically except for having different
priorities for being cited in error messages.)  This means that the
former 'I' dependency type is replaced with new 'P' and 'S' types.

This also fixes a longstanding bug that after handling an internal
dependency by recursing to the owning object, findDependentObjects
did not verify that the current target was now scheduled for deletion,
and did not apply the current recursion level's objflags to it.
Perhaps that should be back-patched; but in the back branches it
would only matter if some concurrent transaction had removed the
internal-linkage pg_depend entry before the recursive call found it,
or the recursive call somehow failed to find it, both of which seem
unlikely.

Catversion bump because the contents of pg_depend change for
partitioning relationships.

Patch HEAD only.  It's annoying that we're not fixing #2 in v11,
but there seems no practical way to do so given that the problem
is exactly a poor choice of what entries to put in pg_depend.
We can't really fix that while staying compatible with what's
in pg_depend in existing v11 installations.

Discussion: https://postgr.es/m/CAH2-Wzkypv1R+teZrr71U23J578NnTBt2X8+Y=Odr4pOdW1rXg@mail.gmail.com
2019-02-11 14:41:17 -05:00
Tom Lane a391ff3c3d Build out the planner support function infrastructure.
Add support function requests for estimating the selectivity, cost,
and number of result rows (if a SRF) of the target function.

The lack of a way to estimate selectivity of a boolean-returning
function in WHERE has been a recognized deficiency of the planner
since Berkeley days.  This commit finally fixes it.

In addition, non-constant estimates of cost and number of output
rows are now possible.  We still fall back to looking at procost
and prorows if the support function doesn't service the request,
of course.

To make concrete use of the possibility of estimating output rowcount
for SRFs, this commit adds support functions for array_unnest(anyarray)
and the integer variants of generate_series; the lack of plausible
rowcount estimates for those, even when it's obvious to a human,
has been a repeated subject of complaints.  Obviously, much more
could now be done in this line, but I'm mostly just trying to get
the infrastructure in place.
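
For example, the planner now produces an exact row estimate here instead of
the generic SRF default:

    EXPLAIN SELECT * FROM generate_series(1, 10);
    -- the Function Scan is now estimated at rows=10 rather than rows=1000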

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
2019-02-09 18:32:23 -05:00
Tom Lane 1fb57af920 Create the infrastructure for planner support functions.
Rename/repurpose pg_proc.protransform as "prosupport".  The idea is
still that it names an internal function that provides knowledge to
the planner about the behavior of the function it's attached to;
but redesign the API specification so that it's not limited to doing
just one thing, but can support an extensible set of requests.

The original purpose of simplifying a function call is handled by
the first request type to be invented, SupportRequestSimplify.
Adjust all the existing transform functions to handle this API,
and rename them from "xxx_transform" to "xxx_support" to reflect
the potential generalization of what they do.  (Since we never
previously provided any way for extensions to add transform functions,
this change doesn't create an API break for them.)

Also add DDL and pg_dump support for attaching a support function to a
user-defined function.  Unfortunately, DDL access has to be restricted
to superusers, at least for now; but seeing that support functions
will pretty much have to be written in C, that limitation is just
theoretical.  (This support is untested in this patch, but a follow-on
patch will add cases that exercise it.)
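
For illustration, the DDL side looks roughly like this (function names are
hypothetical):

    CREATE FUNCTION my_srf(int) RETURNS SETOF int
        AS 'MODULE_PATHNAME', 'my_srf'
        LANGUAGE C STRICT
        SUPPORT my_srf_support;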

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
2019-02-09 18:08:48 -05:00
Michael Paquier 3677a0b26b Add pg_partition_root to display top-most parent of a partition tree
This is useful when looking at partition trees with multiple layers, and
combined with pg_partition_tree, it provides the possibility to show up
an entire tree by just knowing one member at any level.
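
For example (the partition name is hypothetical):

    SELECT pg_partition_root('measurement_y2019m01');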

Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Amit Langote
Discussion: https://postgr.es/m/20181207014015.GP2407@paquier.xyz
2019-02-08 08:56:14 +09:00
Peter Eisentraut 0c1f8f166c Use EXECUTE FUNCTION syntax for triggers more
Change pg_dump and ruleutils.c to use the FUNCTION keyword instead of
PROCEDURE in trigger and event trigger definitions.
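
The emitted trigger DDL now takes this form (trigger, table, and function
names are hypothetical):

    CREATE TRIGGER audit_trg
        AFTER INSERT ON accounts
        FOR EACH ROW
        EXECUTE FUNCTION audit_insert();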

This completes the pieces of the transition started in
0a63f996e0 that were kept out of
PostgreSQL 11 because of the required catversion change.

Discussion: https://www.postgresql.org/message-id/381bef53-f7be-29c8-d977-948e389161d6@2ndquadrant.com
2019-02-07 09:21:34 +01:00
Peter Eisentraut f60a0e9677 Add more columns to pg_stat_ssl
Add columns client_serial and issuer_dn to pg_stat_ssl.  These allow
uniquely identifying the client certificate.

Rename the existing column clientdn to client_dn, to make the naming
more consistent and easier to read.
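
For example:

    SELECT pid, ssl, client_dn, client_serial, issuer_dn
      FROM pg_stat_ssl
     WHERE pid = pg_backend_pid();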

Discussion: https://www.postgresql.org/message-id/flat/398754d8-6bb5-c5cf-e7b8-22e5f0983caf@2ndquadrant.com/
2019-02-01 00:33:47 +01:00
Tom Lane 5f5c014590 Allow RECORD and RECORD[] to be specified in function coldeflists.
We can't allow these pseudo-types to be used as table column types,
because storing an anonymous record value in a table would result
in data that couldn't be understood by other sessions.  However,
it seems like there's no harm in allowing the case in a column
definition list that's specifying what a function-returning-record
returns.  The data involved is all local to the current session,
so we should be just as able to resolve its actual tuple type as
we are for the function-returning-record's top-level tuple output.

Elvis Pranskevichus, with cosmetic changes by me

Discussion: https://postgr.es/m/11038447.kQ5A9Uj5xi@hammer.magicstack.net
2019-01-30 19:25:33 -05:00
Michael Paquier 289198c0d9 Remove argument isprimary from index_build()
The flag was introduced in 3fdeb18, but f66e8bf actually forgot to
finish the cleanup as index_update_stats() has simplified its
interface.

Author: Michael Paquier
Discussion: https://postgr.es/m/20190122080852.GB3873@paquier.xyz
2019-01-24 07:57:09 +09:00
Alvaro Herrera 03afae201f Move CloneForeignKeyConstraints to tablecmds.c
My commit 3de241dba8 introduced some code to create a clone of a
foreign key to a partition, but I put it in pg_constraint.c because it
was too close to the contents of the pg_constraint row.  With the
previous commit that split out the constraint tuple deconstruction into
its own routine, it makes more sense to have the FK-cloning function in
tablecmds.c, mostly because its static subroutine can then be used by a
future bugfix.

My initial posting of this patch had this routine as static in
tablecmds.c, but sadly this function is already part of the Postgres 11
ABI as exported from pg_constraint.c, so keep it as exported also just
to avoid breaking any possible users of it.
2019-01-18 15:00:06 -03:00
Alvaro Herrera 0080396dad Refactor duplicate code into DeconstructFkConstraintRow
My commit 3de241dba8 introduced some code (in tablecmds.c) to obtain
data from a pg_constraint row for a foreign key, that already existed in
ri_triggers.c.  Split it out into its own routine in pg_constraint.c,
where it naturally belongs.

No functional code changes, only code movement.

Backpatch to pg11, because a future bugfix is simpler after this.
2019-01-18 14:59:44 -03:00
Andres Freund 4c850ecec6 Don't include heapam.h from others headers.
heapam.h previously was included in a number of widely used
headers (e.g. execnodes.h, indirectly in executor.h, ...). That's
problematic on its own, as heapam.h contains a lot of low-level
details that don't need to be exposed that widely, but becomes more
problematic with the upcoming introduction of pluggable table storage
- it seems inappropriate for heapam.h to be included that widely
afterwards.

heapam.h was largely only included in other headers to get the
HeapScanDesc typedef (which was defined in heapam.h, even though
HeapScanDescData is defined in relscan.h). The better solution here
seems to be to just use the underlying struct (forward declared where
necessary). Similar for BulkInsertState.

Another problem was that LockTupleMode was used in executor.h - parts
of the file tried to cope without heapam.h, but due to the fact that
it indirectly included it, several subsequent violations of that goal
were not noticed. We could just reuse the approach of declaring
parameters as int, but it seems nicer to move LockTupleMode to
lockoptions.h - that's not a perfect location, but also doesn't seem
bad.

As a number of files relied on implicitly included heapam.h, a
significant number of files grew an explicit include.  It's quite
probable that a few external projects will need to do the same.

Author: Andres Freund
Reviewed-By: Alvaro Herrera
Discussion: https://postgr.es/m/20190114000701.y4ttcb74jpskkcfb@alap3.anarazel.de
2019-01-14 16:24:41 -08:00
Alvaro Herrera 807ae415c5 Don't create relfilenode for relations without storage
Some relation kinds had relfilenode set to some non-zero value, but
apparently the actual files did not really exist because creation was
prevented elsewhere.  Get rid of the phony pg_class.relfilenode values.

Catversion bumped, but only because the sanity_test check will fail if
run in a system initdb'd with the previous version.

Reviewed-by: Kyotaro HORIGUCHI, Michael Paquier
Discussion: https://postgr.es/m/20181206215552.fm2ypuxq6nhpwjuc@alvherre.pgsql
2019-01-04 14:51:17 -03:00
Alvaro Herrera df5be63763 Rename macro to RELKIND_HAS_STORAGE
The original name was an unfortunate choice.

Discussion: https://postgr.es/m/20181218.145600.172055615.horiguchi.kyotaro@lab.ntt.co.jp
2019-01-04 14:34:18 -03:00
Tom Lane d33faa285b Move the built-in conversions into the initial catalog data.
Instead of running a SQL script to create the standard conversion
functions and pg_conversion entries, put those entries into the
initial data in postgres.bki.

This shaves a few percent off the runtime of initdb, and also allows
accurate comments to be attached to the conversion functions; the
previous script labeled them with machine-generated comments that
were not quite right for multi-purpose conversion functions.
Also, we can get rid of the duplicative Makefile and MSVC perl
implementations of the generation code for that SQL script.

A functional change is that these pg_proc and pg_conversion entries
are now "pinned" by initdb.  Leaving them unpinned was perhaps a
good thing back while the conversions feature was under development,
but there seems no valid reason for it now.

Also, the conversion functions are now marked as immutable, where
before they were volatile by virtue of lacking any explicit
specification.  That seems like it was just an oversight.

To avoid using magic constants in pg_conversion.dat, extend
genbki.pl to allow encoding names to be converted, much as it
does for language, access method, etc names.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
2019-01-03 19:47:53 -05:00
Tom Lane 814c9019aa Use symbolic references for pg_language OIDs in the bootstrap data.
This patch teaches genbki.pl to replace pg_language names by OIDs
in much the same way as it already does for pg_am names etc, and
converts pg_proc.dat to use such symbolic references in the prolang
column.

Aside from getting rid of a few more magic numbers in the initial
catalog data, this means that Gen_fmgrtab.pl no longer needs to read
pg_language.dat, since it doesn't have to know the OID of the "internal"
language; now it's just looking for the string "internal".

No need for a catversion bump, since the contents of postgres.bki
don't actually change at all.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
2019-01-03 18:38:49 -05:00
Bruce Momjian 97c39498e5 Update copyright for 2019
Backpatch-through: certain files through 9.4
2019-01-02 12:44:25 -05:00
Tom Lane d01e75d68e Update leakproofness markings on some btree comparison functions.
Mark pg_lsn and oidvector comparison functions as leakproof.  Per
discussion, these clearly are leakproof so we might as well mark them so.

On the other hand, remove leakproof markings from name comparison
functions other than equal/not-equal.  Now that these depend on
varstr_cmp, they can't be considered leakproof if text comparison isn't.
(This was my error in commit 586b98fdf.)

While at it, add some opr_sanity queries to catch cases where related
functions do not have the same volatility and leakproof markings.
This would clearly be bogus for commutator or negator pairs.  In the
domain of btree comparison functions, we do have some exceptions,
because text equality is leakproof but inequality comparisons are not.
That's odd on first glance but is reasonable (for now anyway) given
the much greater complexity of the inequality code paths.

Discussion: https://postgr.es/m/20181231172551.GA206480@gust.leadboat.com
2018-12-31 16:38:11 -05:00
Tom Lane 0a6ea4001a Add a hash opclass for type "tid".
Up to now we've not worried much about joins where the join key is a
relation's CTID column, reasoning that storing a table's CTIDs in some
other table would be pretty useless.  However, there are use-cases for
this sort of query involving self-joins, so that argument doesn't really
hold water.

With larger relations, a merge or hash join is desirable.  We had a btree
opclass for type "tid", allowing merge joins on CTID, but no hash opclass
so that hash joins weren't possible.  Add the missing infrastructure.

This also potentially enables hash aggregation on "tid", though the
use-cases for that aren't too clear.
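
As a rough illustration, a self-join of this shape (hypothetical table t)
can now be executed as a hash join when the planner finds that cheaper:

    SELECT a.ctid, b.ctid
    FROM t AS a JOIN t AS b ON a.ctid = b.ctid;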

Discussion: https://postgr.es/m/1853.1545453106@sss.pgh.pa.us
2018-12-30 15:40:04 -05:00
Peter Eisentraut ae4472c619 Remove obsolete IndexIs* macros
Remove IndexIsValid(), IndexIsReady(), IndexIsLive() in favor of
accessing the index structure directly.  These macros haven't been
used consistently, and the original reason of maintaining source
compatibility with PostgreSQL 9.2 is gone.

Discussion: https://www.postgresql.org/message-id/flat/d419147c-09d4-6196-5d9d-0234b230880a%402ndquadrant.com
2018-12-27 10:07:46 +01:00
Tom Lane 7c15cef86d Base information_schema.sql_identifier domain on name, not varchar.
The SQL spec says that sql_identifier is a domain over varchar,
but it also says that that domain is supposed to represent the set
of valid identifiers for the implementation, in particular applying
a length limit matching the implementation's identifier length limit.
We were declaring sql_identifier as just "character varying", thus
duplicating what the spec says about base type, but entirely failing
at the rest of it.

Instead, let's declare sql_identifier as a domain over type "name".
(We can drop the COLLATE "C" added by commit 6b0faf723, since that's
now implicit in "name".)  With the recent improvements to name's
comparison support, there's not a lot of functional difference between
name and varchar.  So although in principle this is a spec deviation,
it's a pretty minor one.  And correctly enforcing PG's name length limit
is a good thing; on balance this seems closer to the intent of the spec
than what we had.

But that's all just language-lawyering.  The *real* reason to do this is
that it makes sql_identifier columns exposed by information_schema views
be just direct representations of the underlying "name" catalog columns,
eliminating a semantic mismatch that was disastrous for performance of
typical queries on the information_schema.  In combination with the
recent change to allow dropping no-op CoerceToDomain nodes, this allows
(for example) queries such as

    select ... from information_schema.tables where table_name = 'foo';

to produce an indexscan rather than a seqscan on pg_class.

Discussion: https://postgr.es/m/CAFj8pRBUCX4LZ2rA2BbEkdD6NN59mgx+BLo1gO08Wod4RLtcTg@mail.gmail.com
2018-12-20 16:21:59 -05:00
Tom Lane 5bbee34d9f Avoid producing over-length specific_name outputs in information_schema.
information_schema output columns that are declared as being type
sql_identifier are supposed to conform to the implementation's rules
for valid identifiers, in particular the identifier length limit.
Several places potentially violated this limit by concatenating a
function's name and OID.  (The OID is added to ensure name uniqueness
within a schema, since the spec doesn't expect function name overloading.)

Simply truncating the concatenation result to fit in "name" won't do,
since losing part of the OID might wind up giving non-unique results.
Instead, let's truncate the function name as necessary.

The most practical way to do that is to do it in a C function; the
information_schema.sql script doesn't have easy access to the value
of NAMEDATALEN, nor does it have an easy way to truncate on the basis
of resulting byte-length rather than number of characters.

(There are still a couple of places that cast concatenation results to
sql_identifier, but as far as I can see they are guaranteed not to produce
over-length strings, at least with the normal value of NAMEDATALEN.)

Discussion: https://postgr.es/m/23817.1545283477@sss.pgh.pa.us
2018-12-20 16:21:59 -05:00
Tom Lane 2ece7c07dc Add text-vs-name cross-type operators, and unify name_ops with text_ops.
Now that name comparison has effectively the same behavior as text
comparison, we might as well merge the name_ops opfamily into text_ops,
allowing cross-type comparisons to be processed without forcing a
datatype coercion first.  We need do little more than add cross-type
operators to make the opfamily complete, and fix one or two places
in the planner that assumed text_ops was a single-datatype opfamily.
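
For instance, a comparison like the following (an illustrative sketch) can
now be handled by a cross-type operator within the opfamily, without
coercing the name column to text first:

    EXPLAIN SELECT relname FROM pg_class WHERE relname = 'pg_type'::text;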

I chose to unify hash name_ops into hash text_ops as well, since the
types have compatible hashing semantics.  This allows marking the
new cross-type equality operators as oprcanhash.

(Note: this doesn't remove the name_ops opclasses, so there's no
breakage of index definitions.  Those opclasses are just reparented
into the text_ops opfamily.)

Discussion: https://postgr.es/m/15938.1544377821@sss.pgh.pa.us
2018-12-19 17:46:25 -05:00
Tom Lane 586b98fdf1 Make type "name" collation-aware.
The "name" comparison operators now all support collations, making them
functionally equivalent to "text" comparisons, except for the different
physical representation of the datatype.  They do, in fact, mostly share
the varstr_cmp and varstr_sortsupport infrastructure, which has been
slightly enlarged to handle the case.

To avoid changes in the default behavior of the datatype, set name's
typcollation to C_COLLATION_OID not DEFAULT_COLLATION_OID, so that
by default comparisons to a name value will continue to use strcmp
semantics.  (This would have been the case for system catalog columns
anyway, because of commit 6b0faf723, but doing this makes it true for
user-created name columns as well.  In particular, this avoids
locale-dependent changes in our regression test results.)

In consequence, tweak a couple of places that made assumptions about
collatable base types always having typcollation DEFAULT_COLLATION_OID.
I have not, however, attempted to relax the restriction that user-
defined collatable types must have that.  Hence, "name" doesn't
behave quite like a user-defined type; it acts more like a domain
with COLLATE "C".  (Conceivably, if we ever get rid of the need for
catalog name columns to be fixed-length, "name" could actually become
such a domain over text.  But that'd be a pretty massive undertaking,
and I'm not volunteering.)

Discussion: https://postgr.es/m/15938.1544377821@sss.pgh.pa.us
2018-12-19 17:46:25 -05:00
Tom Lane 6b0faf7236 Make collation-aware system catalog columns use "C" collation.
Up to now we allowed text columns in system catalogs to use collation
"default", but that isn't really safe because it might mean something
different in template0 than it means in a database cloned from template0.
In particular, this could mean that cloned pg_statistic entries for such
columns weren't entirely valid, possibly leading to bogus planner
estimates, though (probably) not any outright failures.

In the wake of commit 5e0928005, a better solution is available: if we
label such columns with "C" collation, then their pg_statistic entries
will also use that collation and hence will be valid independently of
the database collation.

This also provides a cleaner solution for indexes on such columns than
the hack added by commit 0b28ea79c: the indexes will naturally inherit
"C" collation and don't have to be forced to use text_pattern_ops.

Also, with the planned improvement of type "name" to be collation-aware,
this policy will apply cleanly to both text and name columns.

Because of the pg_statistic angle, we should also apply this policy
to the tables in information_schema.  This patch does that by adjusting
information_schema's textual domain types to specify "C" collation.
That has the user-visible effect that order-sensitive comparisons to
textual information_schema view columns will now use "C" collation
by default.  The SQL standard says that the collation of those view
columns is implementation-defined, so I think this is legal per spec.
At some point this might allow for translation of such comparisons
into indexable conditions on the underlying "name" columns, although
additional work will be needed before that can happen.

Discussion: https://postgr.es/m/19346.1544895309@sss.pgh.pa.us
2018-12-18 12:48:15 -05:00
Michael Paquier f94cec6447 Include partitioned indexes in system view pg_indexes
pg_tables already includes partitioned tables, so for consistency
pg_indexes should show partitioned indexes.

Author: Suraj Kharage
Reviewed-by: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/CAF1DzPVrYo4XNTEnc=PqVp6aLJc7LFYpYR4rX=_5pV=wJ2KdZg@mail.gmail.com
2018-12-18 16:37:51 +09:00
Alvaro Herrera ca4103025d Fix tablespace handling for partitioned tables
When partitioned tables were introduced, we failed to realize that by
copying the tablespace handling for other relation kinds with no
physical storage we were causing the secondary effect that their
partitions would not automatically inherit the tablespace setting.  This
is surprising and unhelpful, so change it to adopt the behavior
introduced in pg11 (commit 33e6c34c32) for partitioned indexes: the
parent relation remembers the tablespace specification, which is then
used for any new partitions that don't declare one.
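
As a rough sketch of the new behavior (hypothetical names, assuming a
tablespace ts1 already exists):

    CREATE TABLE parent (a int) PARTITION BY RANGE (a) TABLESPACE ts1;
    -- the partition is created in ts1, inherited from its parent
    CREATE TABLE parent_p1 PARTITION OF parent FOR VALUES FROM (0) TO (10);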

Because this commit changes behavior of the TABLESPACE clause for
partitioned tables (it's no longer a no-op), it is not backpatched.

Author: David Rowley, Álvaro Herrera
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAKJS1f9SxVzqDrGD1teosFd6jBMM0UEaa14_8mRvcWE19Tu0hA@mail.gmail.com
2018-12-17 15:37:40 -03:00
Tom Lane 5e09280057 Make pg_statistic and related code account more honestly for collations.
When we first put in collations support, we basically punted on teaching
pg_statistic, ANALYZE, and the planner selectivity functions about that.
They've just used DEFAULT_COLLATION_OID independently of the actual
collation of the data.  It's time to improve that, so:

* Add columns to pg_statistic that record the specific collation associated
with each statistics slot.

* Teach ANALYZE to use the column's actual collation when comparing values
for statistical purposes, and record this in the appropriate slot.  (Note
that type-specific typanalyze functions are now expected to fill
stats->stacoll with the appropriate collation, too.)

* Teach assorted selectivity functions to use the actual collation of
the stats they are looking at, instead of just assuming it's
DEFAULT_COLLATION_OID.

This should give noticeably better results in selectivity estimates for
columns with nondefault collations, at least for query clauses that use
that same collation (which would be the default behavior in most cases).
It's still true that comparisons with explicit COLLATE clauses different
from the stored data's collation won't be well-estimated, but that's no
worse than before.  Also, this patch does make the first step towards
doing better with that, which is that it's now theoretically possible to
collect stats for a collation other than the column's own collation.

Patch by me; thanks to Peter Eisentraut for review.

Discussion: https://postgr.es/m/14706.1544630227@sss.pgh.pa.us
2018-12-14 12:52:49 -05:00
Andres Freund 09568ec3d3 Create a separate oid range for oids assigned by genbki.pl.
The changes I made in 578b229718 assigned oids below
FirstBootstrapObjectId to objects in include/catalog/*.dat files that
did not have an oid assigned, starting at the max oid explicitly
assigned.  Tom criticized that for mainly two reasons:
1) It's not clear which values are manually and which automatically
   assigned.
2) The space below FirstBootstrapObjectId gets pretty crowded, and
   some PostgreSQL forks have used oids >= 9000 for their own objects,
   to avoid conflicting.

Thus create a new range for objects not assigned explicit oids, but
assigned by genbki.pl. For now 1-9999 is for explicitly assigned oids,
FirstGenbkiObjectId (10000) to FirstBootstrapObjectId (12000) - 1 is for
genbki.pl assigned oids, and < FirstNormalObjectId (16384) is for oids
assigned during bootstrap.  It's possible that we'll have to adjust
these boundaries, but there's some headroom for now.

Add a note suggesting that oids in forks should be assigned in the
9000-9999 range.

Catversion bump for obvious reasons.

Per complaint from Tom Lane.

Author: Andres Freund
Discussion: https://postgr.es/m/16845.1544393682@sss.pgh.pa.us
2018-12-13 14:50:57 -08:00
Michael Paquier 7fee252f6f Add timestamp of last received message from standby to pg_stat_replication
The timestamp generated by the standby at message transmission has been
included in the protocol since its introduction for both the status
update message and hot standby feedback message, but it has never
appeared in pg_stat_replication.  Seeing this timestamp does not matter
much on a cluster which has a lot of activity, but on a mostly-idle
cluster it lets monitoring react faster than the configured timeouts.
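
A monitoring query might look like this (a sketch, assuming the new column
is exposed in the view as reply_time):

    SELECT application_name, state, reply_time
    FROM pg_stat_replication;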

Author: MyungKyu LIM
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/1657809367.407321.1533027417725.JavaMail.jboss@ep2ml404
2018-12-09 16:35:06 +09:00
Andres Freund 578b229718 Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.

This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row.  Neither pg_dump nor COPY included the contents of the
oid column by default.

The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing, but upcoming work aiming to make
table storage pluggable, would have required expanding and duplicating
that "specialness" significantly.

WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.

Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
  WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
  issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
  restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
  OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
  plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.

The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.

The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such.  This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.

The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId.  As the oid column now is a normal column, the
special bootstrap syntax for oids has been removed.

Oids are not automatically assigned during insertion anymore; all
backend code explicitly assigns oids with GetNewOidWithIndex().  For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).

The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid;
previously it did not).  It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.

While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get this merged
now.  It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.

Catversion bump, for obvious reasons.

Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-20 16:00:17 -08:00
Alvaro Herrera 3f2393edef Redesign initialization of partition routing structures
This speeds up write operations (INSERT, UPDATE, DELETE, COPY, as well
as the future MERGE) on partitioned tables.

This changes the setup for tuple routing so that it does far less work
during the initial setup and pushes more work out to when partitions
receive tuples.  PartitionDispatchData structs for sub-partitioned
tables are only created when a tuple gets routed through it.  The
possibly large arrays in the PartitionTupleRouting struct have largely
been removed.  The partitions[] array remains but now never contains any
NULL gaps.  Previously the NULLs had to be skipped during
ExecCleanupTupleRouting(), which could add a large overhead to the
cleanup when the number of partitions was large.  The partitions[] array
is allocated small to start with and only enlarged when we route tuples
to enough partitions that it runs out of space. This allows us to keep
simple single-row partition INSERTs running quickly.

The arrays in PartitionTupleRouting which stored the tuple translation maps
have now been removed.  These have been moved out into a
PartitionRoutingInfo struct which is an additional field in ResultRelInfo.

The find_all_inheritors() call still remains by far the slowest part of
ExecSetupPartitionTupleRouting(). This commit just removes the other slow
parts.

In passing also rename the tuple translation maps from being ParentToChild
and ChildToParent to being RootToPartition and PartitionToRoot. The old
names mislead you into thinking that a partition of some sub-partitioned
table would translate to the rowtype of the sub-partitioned table rather
than the root partitioned table.

Authors: David Rowley and Amit Langote, heavily revised by Álvaro Herrera
Testing help from Jesper Pedersen and Kato Sho.
Discussion: https://postgr.es/m/CAKJS1f_1RJyFquuCKRFHTdcXqoPX-PYqAd7nz=GVBwvGh4a6xA@mail.gmail.com
2018-11-16 15:01:05 -03:00
Andres Freund 6a744413ea Make reformat-dat-files VPATH safe.
The reformat_dat_file.pl script, added by 372728b0d4, supported
all the necessary options to make it work in a VPATH build, but the
makefile invocations didn't take VPATH into account. Fix that.

Discussion: https://postgr.es/m/20181115185303.d2z7wonx23mdfvd3@alap3.anarazel.de
Backpatch: 11-, where 372728b0d4 was merged
2018-11-15 11:38:03 -08:00
Tom Lane 600b04d6b5 Add a timezone-specific variant of date_trunc().
date_trunc(field, timestamptz, zone_name) performs truncation using
the named time zone as reference, rather than working in the session
time zone as is the default behavior.  It's equivalent to

date_trunc(field, timestamptz at time zone zone_name) at time zone zone_name

but it's faster, easier to type, and arguably easier to understand.
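
For instance (an illustrative sketch with an arbitrary timestamp and zone):

    SELECT date_trunc('day',
                      timestamptz '2018-11-14 15:41:07+00',
                      'Europe/Paris');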

Vik Fearing and Tom Lane

Discussion: https://postgr.es/m/6249ffc4-2b22-4c1b-4e7d-7af84fedd7c6@2ndquadrant.com
2018-11-14 15:41:07 -05:00
Tom Lane fa2952d8eb Fix missing role dependencies for some schema and type ACLs.
This patch fixes several related cases in which pg_shdepend entries were
never made, or were lost, for references to roles appearing in the ACLs of
schemas and/or types.  While that did no immediate harm, if a referenced
role were later dropped, the drop would be allowed and would leave a
dangling reference in the object's ACL.  That still wasn't a big problem
for normal database usage, but it would cause obscure failures in
subsequent dump/reload or pg_upgrade attempts, taking the form of
attempts to grant privileges to all-numeric role names.  (I think I've
seen field reports matching that symptom, but can't find any right now.)

Several cases are fixed here:

1. ALTER DOMAIN SET/DROP DEFAULT would lose the dependencies for any
existing ACL entries for the domain.  This case is ancient, dating
back as far as we've had pg_shdepend tracking at all.

2. If a default type privilege applies, CREATE TYPE recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for types in 9.2.

3. If a default schema privilege applies, CREATE SCHEMA recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for schemas in v10
(commit ab89e465c).

Another somewhat-related problem is that when creating a relation
rowtype or implicit array type, TypeCreate would apply any available
default type privileges to that type, which we don't really want
since such an object isn't supposed to have privileges of its own.
(You can't, for example, drop such privileges once they've been added
to an array type.)

ab89e465c is also to blame for a race condition in the regression tests:
privileges.sql transiently installed globally-applicable default
privileges on schemas, which sometimes got absorbed into the ACLs of
schemas created by concurrent test scripts.  This should have resulted
in failures when privileges.sql tried to drop the role holding such
privileges; but thanks to the bug fixed here, it instead led to dangling
ACLs in the final state of the regression database.  We'd managed not to
notice that, but it became obvious in the wake of commit da906766c, which
allowed the race condition to occur in pg_upgrade tests.

To fix, add a function recordDependencyOnNewAcl to encapsulate what
callers of get_user_default_acl need to do; while the original call
sites got that right via ad-hoc code, none of the later-added ones
have.  Also change GenerateTypeDependencies to generate these
dependencies, which requires adding the typacl to its parameter list.
(That might be annoying if there are any extensions calling that
function directly; but if there are, they're most likely buggy in the
same way as the core callers were, so they need work anyway.)  While
I was at it, I changed GenerateTypeDependencies to accept most of its
parameters in the form of a Form_pg_type pointer, making its parameter
list a bit less unwieldy and mistake-prone.

The test race condition is fixed just by wrapping the addition and
removal of default privileges into a single transaction, so that that
state is never visible externally.  We might eventually prefer to
separate out tests of default privileges into a script that runs by
itself, but that would be a bigger change and would make the tests
run slower overall.

Back-patch relevant parts to all supported branches.

Discussion: https://postgr.es/m/15719.1541725287@sss.pgh.pa.us
2018-11-09 20:42:14 -05:00
Michael Paquier 8f045e242b Switch pg_promote to be parallel-safe
pg_promote relies on no global state, so it is fine to mark it as
parallel-safe, a conclusion based on a detailed analysis from Robert
Haas.  This also fixes an inconsistency where pg_proc.dat failed to mark
the function with its previous value for proparallel, an update which
does not matter now that the default is used.

Based on a discussion between multiple folks: Laurenz Albe, Robert Haas,
Amit Kapila, Tom Lane and myself.

Discussion: https://postgr.es/m/20181029082530.GL14242@paquier.xyz
2018-11-06 14:11:21 +09:00
Tom Lane 55f3d10296 Remove unreferenced pg_opfamily entry.
The entry with OID 4035, for GIST jsonb_ops, is unused; apparently
it was added in preparation for index support that never materialized.
Remove it, and add a regression test case to detect future mistakes
of the same kind.

Discussion: https://postgr.es/m/17188.1541379745@sss.pgh.pa.us
2018-11-05 12:02:27 -05:00
Peter Eisentraut 96b00c433c Remove obsolete pg_constraint.consrc column
This has been deprecated and effectively unused for a long time.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-11-01 20:36:05 +01:00
Peter Eisentraut fe5038236c Remove obsolete pg_attrdef.adsrc column
This has been deprecated and effectively unused for a long time.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-11-01 20:35:42 +01:00
Michael Paquier d5eec4eefd Add pg_partition_tree to display information about partitions
This new function is useful to display the full tree of partitions for
a given partitioned table, and avoids the need for a complex
WITH RECURSIVE query when looking at partition trees which are
multiple levels deep.

It returns a set of records, one for each partition, containing the
partition's name, its immediate parent's name, a boolean telling whether
the relation is a leaf in the tree, and an integer giving its level in
the partition tree with the given table considered as root, beginning at
zero for the root and incrementing by one each time the scan goes one
level down.
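
A typical call looks roughly like this (a sketch with a hypothetical
partitioned table named parent):

    SELECT relid, parentrelid, isleaf, level
    FROM pg_partition_tree('parent');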

Author: Amit Langote
Reviewed-by: Jesper Pedersen, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/8d00e51a-9a51-ad02-d53e-ba6bf50b2e52@lab.ntt.co.jp
2018-10-30 10:25:06 +09:00
Michael Paquier 10074651e3 Add pg_promote function
This new SQL-callable function makes it possible to promote a standby.
Execution access can be granted to non-superusers so that
failover tools can observe the principle of least privilege.
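
Usage might look like this (a sketch; the role name is hypothetical and
the pg_promote(wait, wait_seconds) signature is assumed):

    GRANT EXECUTE ON FUNCTION pg_promote(boolean, integer) TO failover_role;
    SELECT pg_promote(wait => true, wait_seconds => 60);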

Catalog version is bumped.

Author: Laurenz Albe
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/6e7c79b3ec916cf49742fb8849ed17cd87aed620.camel@cybertec.at
2018-10-25 09:46:00 +09:00
Michael Paquier 55853d666c Clarify descriptions of relhassubclass and relispartition in pg_class
Three places are fixed, one for each author.

Reported-by: Tom Lane
Author: Tom Lane, Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/82470.1540177167@sss.pgh.pa.us
2018-10-22 15:26:28 +09:00
Andres Freund 02a30a09f9 Correct constness of system attributes in heap.c & prerequisites.
This allows the compiler / linker to mark affected pages as read-only.

There's a fair number of prerequisite changes, to allow the const to
be propagated properly.  Most of the consts were already required for
correctness anyway, just not represented at the type level.  Arguably
we could be more aggressive in using consts in related code, but..

This requires using, in a few places, the types underlying the
pointer-hiding typedefs (e.g. const NameData *), as declaring the
typedef'd type const doesn't have the same meaning (it makes the
variable const, not what it points to).

Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
2018-10-16 09:44:43 -07:00
Andres Freund cda6a8d01d Remove deprecated abstime, reltime, tinterval datatypes.
These types have been deprecated for a *long* time.

Catversion bump, for obvious reasons.

Author: Andres Freund
Discussion:
    https://postgr.es/m/20181009192237.34wjp3nmw7oynmmr@alap3.anarazel.de
    https://postgr.es/m/20171213080506.cwjkpcz3bkk6yz2u@alap3.anarazel.de
    https://postgr.es/m/25615.1513115237@sss.pgh.pa.us
2018-10-11 11:59:15 -07:00
Michael Paquier c481016201 Add pg_ls_archive_statusdir function
This function lists the contents of the WAL archive status directory,
and is intended to be used by monitoring tools.  Unlike pg_ls_dir(),
access to it can be granted to non-superusers so that those monitoring
tools can observe the principle of least privilege.  Access is also
given by default to members of pg_monitor.
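
For example (a sketch; one row is returned per file in the directory):

    SELECT name, size, modification
    FROM pg_ls_archive_statusdir()
    ORDER BY modification DESC;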

Author:  Christoph Moench-Tegeder
Reviewed-by: Aya Iwata
Discussion: https://postgr.es/m/20180930205920.GA64534@elch.exwg.net
2018-10-09 22:29:09 +09:00
Thomas Munro 212fab9926 Relax transactional restrictions on ALTER TYPE ... ADD VALUE (redux).
Originally committed as 15bc038f (plus some follow-ups), this was
reverted in 28e07270 due to a problem discovered in parallel
workers.  This new version corrects that problem by sending the
list of uncommitted enum values to parallel workers.

Here follows the original commit message describing the change:

To prevent possibly breaking indexes on enum columns, we must keep
uncommitted enum values from getting stored in tables, unless we
can be sure that any such column is new in the current transaction.

Formerly, we enforced this by disallowing ALTER TYPE ... ADD VALUE
from being executed at all in a transaction block, unless the target
enum type had been created in the current transaction.  This patch
removes that restriction, and instead insists that an uncommitted enum
value can't be referenced unless it belongs to an enum type created
in the same transaction as the value.  Per discussion, this should be
a bit less onerous.  It does require each function that could possibly
return a new enum value to SQL operations to check this restriction,
but there aren't so many of those that this seems unmaintainable.
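
In other words, a sequence like the following is now accepted, as long as
the new value isn't used before commit (a sketch assuming a pre-existing
enum type named rainbow):

    BEGIN;
    ALTER TYPE rainbow ADD VALUE 'mauve';  -- now allowed in a transaction block
    -- SELECT 'mauve'::rainbow;            -- would still fail until COMMIT
    COMMIT;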

Author: Andrew Dunstan and Tom Lane, with parallel query fix by Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D0Ei7g6PaNTbcmAh9tCRahQrk%3Dr5ZWLD-jr7hXweYX3yg%40mail.gmail.com
Discussion: https://postgr.es/m/4075.1459088427%40sss.pgh.pa.us
2018-10-09 12:51:01 +13:00
Alvaro Herrera ad08006ba0 Fix event triggers for partitioned tables
Index DDL cascading on partitioned tables introduced a way for ALTER
TABLE to be called reentrantly.  This caused an important deficiency
in event trigger support to be exposed: on exiting the reentrant call,
the alter table state object was clobbered, causing a crash when the
outer alter table tried to finalize its processing.  Fix the crash by
creating a stack of event trigger state objects.  There are still ways
to cause things to misbehave (and probably other crashers) with more
elaborate tricks, but at least it now doesn't crash in the obvious
scenario.

Backpatch to 9.5, where DDL deparsing of event triggers was introduced.

Reported-by: Marco Slot
Authors: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CANNhMLCpi+HQ7M36uPfGbJZEQLyTy7XvX=5EFkpR-b1bo0uJew@mail.gmail.com
2018-10-06 19:17:46 -03:00
Tom Lane 07ee62ce9e Propagate xactStartTimestamp and stmtStartTimestamp to parallel workers.
Previously, a worker process would establish values for these based on
its own start time.  In v10 and up, this can trivially be shown to cause
misbehavior of transaction_timestamp(), timestamp_in(), and related
functions which are (perhaps unwisely?) marked parallel-safe.  It seems
likely that other behaviors might diverge from what happens in the parent
as well.

It's not as trivial to demonstrate problems in 9.6 or 9.5, but I'm sure
it's still possible, so back-patch to all branches containing parallel
worker infrastructure.

In HEAD only, mark now() and statement_timestamp() as parallel-safe
(other affected functions already were).  While in theory we could
still squeeze that change into v11, it doesn't seem important enough
to force a last-minute catversion bump.

Konstantin Knizhnik, whacked around a bit by me

Discussion: https://postgr.es/m/6406dbd2-5d37-4cb6-6eb2-9c44172c7e7c@postgrespro.ru
2018-10-06 12:00:09 -04:00
Michael Paquier 9cd92d1a33 Add pg_ls_tmpdir function
This lists the contents of the temporary directory associated with a
given tablespace, which is useful to get information about on-disk
consumption caused by temporary files used by a session's queries.  By
default, pg_default is scanned, and a tablespace can be specified as an
argument.
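
For example (a sketch; the no-argument form looks at pg_default's
temporary directory):

    SELECT name, size, modification FROM pg_ls_tmpdir();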

This function is intended to be used by monitoring tools, and, unlike
pg_ls_dir(), access to it can be granted to non-superusers so that
those monitoring tools can observe the principle of least privilege.
Access is also given by default to members of pg_monitor.

Author: Nathan Bossart
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/92F458A2-6459-44B8-A7F2-2ADD3225046A@amazon.com
2018-10-05 09:21:48 +09:00
Tom Lane fdba460a26 Create an RTE field to record the query's lock mode for each relation.
Add RangeTblEntry.rellockmode, which records the appropriate lock mode for
each RTE_RELATION rangetable entry (either AccessShareLock, RowShareLock,
or RowExclusiveLock depending on the RTE's role in the query).

This patch creates the field and makes all creators of RTE nodes fill it
in reasonably, but for the moment nothing much is done with it.  The plan
is to replace assorted post-parser logic that re-determines the right
lockmode to use with simple uses of rte->rellockmode.  For now, just add
Asserts in each of those places that the rellockmode matches what they are
computing today.  (In some cases the match isn't perfect, so the Asserts
are weaker than you might expect; but this seems OK, as per discussion.)

This passes check-world for me, but it seems worth pushing in this state
to see if the buildfarm finds any problems in cases I failed to test.

catversion bump due to change of stored rules.

Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-09-30 13:55:51 -04:00
Alvaro Herrera 62e533d3f1 Remove fmgr.h inclusion from partition.h
It's not needed anymore.
2018-09-25 17:52:07 -03:00
Tom Lane fd582317e1 Sync our Snowball stemmer dictionaries with current upstream.
We haven't touched these since text search functionality landed in core
in 2007 :-(.  While the upstream project isn't a beehive of activity,
they do make additions and bug fixes from time to time.  Update our
copies of these files.

Also update our documentation about how to keep things in sync, since
they're not making distribution tarballs these days.  Fortunately,
their source code turns out to be a breeze to build.

Notable changes:

* The non-UTF8 version of the hungarian stemmer now works in LATIN2
not LATIN1.

* New stemmers have appeared for arabic, indonesian, irish, lithuanian,
nepali, and tamil.  These all work in UTF8, and the indonesian and
irish ones also work in LATIN1.

(There are some new stemmers that I did not incorporate, mainly because
their names don't match the underlying languages, suggesting that they're
not to be considered mainstream.)

Worth noting: the upstream Nepali dictionary was contributed by
Arthur Zakirov.

initdb forced because the contents of snowball_create.sql have
changed.

Still TODO: see about updating the stopword lists.

Arthur Zakirov, minor mods and doc work by me

Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
2018-09-24 17:29:38 -04:00
Joe Conway c62dd80cdf Document aclitem functions and operators
aclitem functions and operators have been heretofore undocumented.
Fix that. While at it, ensure the non-operator aclitem functions have
pg_description strings.

Does not seem worthwhile to back-patch.

Author: Fabien Coelho, with pg_description from John Naylor, and significant
refactoring and editorialization by me.
Reviewed by: Tom Lane
Discussion: https://postgr.es/m/flat/alpine.DEB.2.21.1808010825490.18204%40lancre
2018-09-24 10:14:57 -04:00
Tom Lane b09a64d602 Add missing pg_description strings for pg_type entries.
I noticed that all non-composite, non-array entries in pg_type.dat
had descr strings, except for "json" and the pseudo-types.  The
lack for json seems certainly an oversight, and there's surely
little reason to not have entries for the pseudo-types either.
So add some.

"make reformat-dat-files" turned up some formatting issues in
pg_amop.dat, too, so fix those in passing.

No catversion bump since the backend doesn't care too much what is
in pg_description.
2018-09-20 16:06:18 -04:00
Tom Lane 3dc820c43e Teach genbki.pl to auto-generate pg_type entries for array types.
This eliminates some more tedium in adding new catalog entries,
specifically the need to set up an array type when adding a new
built-in data type.  Now it's sufficient to assign an OID for the
array type and write it in an "array_type_oid" metadata field.
You don't have to fill the base type's typarray link explicitly, either.

No catversion bump since the contents of pg_type aren't changed.
(Well, their order might be different, but that doesn't matter.)

John Naylor, reviewed and whacked around a bit by
Dagfinn Ilmari Mannsåker, and some more by me.

Discussion: https://postgr.es/m/CAJVSVGVTb6m9pJF49b3SuA8J+T-THO9c0hxOmoyv-yGKh-FbNg@mail.gmail.com
2018-09-20 15:14:46 -04:00
Alexander Korotkov 2a6368343f Add support for nearest-neighbor (KNN) searches to SP-GiST
Previously, KNN searches were supported only by GiST; SP-GiST is also capable
of supporting them.  This commit implements that support.  The SP-GiST scan
stack is replaced with a queue, which serves as a stack if no ordering is
specified.  KNN support is provided for three SP-GiST opclasses: quad_point_ops,
kd_point_ops and poly_ops (catversion is bumped).  Some common parts between
the GiST and SP-GiST KNN implementations are extracted into separate functions.

Discussion: https://postgr.es/m/570825e8-47d0-4732-2bf6-88d67d2d51c8%40postgrespro.ru
Author: Nikita Glukhov, Alexander Korotkov based on GSoC work by Vlad Sterzhanov
Review: Andrey Borodin, Alexander Korotkov
2018-09-19 01:54:10 +03:00
Tom Lane db1071d4ee Fix some minor issues exposed by outfuncs/readfuncs testing.
A test patch to pass parse and plan trees through outfuncs + readfuncs
exposed several issues that need to be fixed to get clean matches:

Query.withCheckOptions failed to get copied; it's intentionally ignored
by outfuncs/readfuncs on the grounds that it'd always be NIL anyway in
stored rules.  This seems less than future-proof, and it's not even
saving very much, so just undo the decision and treat the field like
all others.

Several places that convert a view RTE into a subquery RTE, or similar
manipulations, failed to clear out fields that were specific to the
original RTE type and should be zero in a subquery RTE.  Since readfuncs.c
will leave such fields as zero, equalfuncs.c thinks the nodes are different
leading to a reported mismatch.  It seems like a good idea to clear out the
no-longer-needed fields, even though in principle nothing should look at
them; the node ought to be indistinguishable from how it would look if
we'd built a new node instead of scribbling on the old one.

BuildOnConflictExcludedTargetlist randomly set the resname of some
TargetEntries to "" not NULL.  outfuncs/readfuncs don't distinguish those
cases, and so the string will read back in as NULL ... but equalfuncs.c
does distinguish.  Perhaps we ought to try to make things more consistent
in this area --- but it's just useless extra code space for
BuildOnConflictExcludedTargetlist to not use NULL here, so I fixed it for
now by making it do that.

catversion bumped because the change in handling of Query.withCheckOptions
affects stored rules.

Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
2018-09-18 15:08:28 -04:00
Michael Paquier 1d6fbc38d9 Refactor routines for subscription and publication lookups
Those routines gain a missing_ok argument, allowing a caller to get a
NULL result instead of an error if set to true.  This is part of a
larger refactoring effort for objectaddress.c where trying to check for
non-existing objects does not result in cache lookup failures.

Author: Michael Paquier
Reviewed-by: Aleksander Alekseev, Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
2018-09-18 12:00:18 +09:00
Tom Lane ae5205c8a8 Make argument names of pg_get_object_address consistent, and fix docs.
pg_get_object_address and pg_identify_object_as_address are supposed
to be inverses, but they disagreed as to the names of the arguments
representing the textual form of an object address.  Moreover, the
documented argument names didn't agree with reality at all, either
for these functions or pg_identify_object.

In HEAD and v11, I think we can get away with renaming the input
arguments of pg_get_object_address to match the outputs of
pg_identify_object_as_address.  In theory that might break queries
using named-argument notation to call pg_get_object_address, but
it seems really unlikely that anybody is doing that, or that they'd
have much trouble adjusting if they were.  In older branches, we'll
just live with the lack of consistency.

Aside from fixing the documentation of these functions to match reality,
I couldn't resist the temptation to do some copy-editing.

Per complaint from Jean-Pierre Pelletier.  Back-patch to 9.5 where these
functions were introduced.  (Before v11, this is a documentation change
only.)

Discussion: https://postgr.es/m/CANGqjDnWH8wsTY_GzDUxbt4i=y-85SJreZin4Hm8uOqv1vzRQA@mail.gmail.com
2018-09-05 13:47:28 -04:00
Tom Lane 17b7c302b5 Fully enforce uniqueness of constraint names.
It's been true for a long time that we expect names of table and domain
constraints to be unique among the constraints of that table or domain.
However, the enforcement of that has been pretty haphazard, and it missed
some corner cases such as creating a CHECK constraint and then an index
constraint of the same name (as per recent report from André Hänsel).
Also, due to the lack of an actual unique index enforcing this, duplicates
could be created through race conditions.
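
That corner case looks roughly like this (a hypothetical sketch; the second
statement now draws an error up front):

    CREATE TABLE t (a int CONSTRAINT c CHECK (a > 0));
    ALTER TABLE t ADD CONSTRAINT c UNIQUE (a);  -- duplicate constraint name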

Moreover, the code that searches pg_constraint has been quite inconsistent
about how to handle duplicate names if one did occur: some places checked
and threw errors if there was more than one match, while others just
processed the first match they came to.

To fix, create a unique index on (conrelid, contypid, conname).  Since
either conrelid or contypid is zero, this will separately enforce
uniqueness of constraint names among constraints of any one table and any
one domain.  (If we ever implement SQL assertions, and put them into this
catalog, more thought might be needed.  But it'd be at least as reasonable
to put them into a new catalog; having overloaded this one catalog with
two kinds of constraints was a mistake already IMO.)  This index can replace
the existing non-unique index on conrelid, though we need to keep the one
on contypid for query performance reasons.

Having done that, we can simplify the logic in various places that either
coped with duplicates or neglected to, as well as potentially improve
lookup performance when searching for a constraint by name.

Also, as per our usual practice, install a preliminary check so that you
get something more friendly than a unique-index violation report in the
case complained of by André.  And teach ChooseIndexName to avoid choosing
autogenerated names that would draw such a failure.

While it's not possible to make such a change in the back branches,
it doesn't seem quite too late to put this into v11, so do so.

Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de
2018-09-04 13:45:35 -04:00
Alvaro Herrera c076f3d74a Remove pg_constraint.conincluding
This column was added in commit 8224de4f42 ("Indexes with INCLUDE
columns and their support in B-tree") to ease writing the ruleutils.c
supporting code for that feature, but it turns out to be unnecessary --
we can do the same thing with just one more syscache lookup.

Even the documentation for the new column being removed in this commit
is awkward.

Discussion: https://postgr.es/m/20180902165018.33otxftp3olgtu4t@alvherre.pgsql
2018-09-03 12:59:26 -03:00
Peter Eisentraut a4a232b1e7 Error position support for defaults and check constraints
Add support for error position reporting for the expressions contained
in defaults and check constraint definitions.  This currently works only
for CREATE TABLE, not ALTER TABLE, because the latter is not set up to
pass around the original query string.

Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
2018-08-30 08:20:23 +02:00
Peter Eisentraut 9b39b799db Add some not null constraints to catalogs
Use BKI_FORCE_NOT_NULL on some catalog field declarations that are never
null (according to the source code that accesses them).
2018-08-27 16:21:23 +02:00
Michael Paquier 246a6c8f7b Make autovacuum more aggressive to remove orphaned temp tables
Commit dafa084, added in 10, made the removal of temporary orphaned
tables more aggressive.  This commit takes that aggressiveness one step
further by adding a flag in each backend's MyProc which tracks any
temporary namespace currently in use.  The flag is set when the
namespace gets created and can be reset if the temporary namespace has
been created in a transaction or sub-transaction which is aborted.  The
flag value assignment is assumed to be atomic, so this can be done in a
lock-less fashion like other fields already present in PGPROC such as
databaseId or backendId; still, the fact that the temporary namespace
and tables created remain locked until the creating transaction commits
acts as a barrier for other backends.

This new flag gets used by autovacuum to discard orphaned tables more
aggressively by additionally checking the database a backend is
connected to as well as its in-use temporary namespace, removing
orphaned temporary relations even if a backend reuses the same slot as
one which created temporary relations in a past session.

The base idea of this patch comes from Robert Haas, has been written in
its first version by Tsunakawa Takayuki, then heavily reviewed by me.

Author: Tsunakawa Takayuki
Reviewed-by: Michael Paquier, Kyotaro Horiguchi, Andres Freund
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8A4DC6@G01JPEXMBYT05
Backpatch: 11-, as PGPROC gains a new flag and we don't want silent ABI
breakages on already released versions.
2018-08-13 11:49:04 +02:00
Michael Paquier f2b1316a94 Bump catalog version for recent toast table additions
This has been forgotten in 96cdeae.
2018-07-20 09:28:19 +09:00
Michael Paquier 96cdeae07f Add toast tables to most system catalogs
It has been project policy to create toast tables only for those catalogs
that might reasonably need one.  Since this judgment call can change over
time, just create one for every catalog, as this can be useful when
creating rather long entries in catalogs, with recent examples being
policy expressions and custom-formatted SCRAM verifiers.

To prevent circular dependencies and to avoid adding complexity to VACUUM
FULL logic, exclude pg_class, pg_attribute, and pg_index.  Also, to
prevent pg_upgrade from seeing a non-empty new cluster, exclude
pg_largeobject and pg_largeobject_metadata from the set as large object
data is handled as user data.  Those relations have no reason to use a
toast table anyway.

Author: Joe Conway, John Naylor
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/84ddff04-f122-784b-b6c5-3536804495f8@joeconway.com
2018-07-20 07:43:41 +09:00
Michael Paquier ce89ad0fa0 Fix argument of pg_create_logical_replication_slot for slot name
All attributes and arguments using a slot name map to the data type
"name", but this function has been using "text".  This is cosmetic, as
even if text is used, the slot name would be truncated to 64
characters anyway and stored as such.  The documentation already said
so and the function already assumed that the argument was of this type
when fetching its value.

Bump catalog version.

Author: Sawada Masahiko
Discussion: https://postgr.es/m/CAD21AoADYz_-eAqH5AVFaCaojcRgwpo9PW=u8kgTMys63oB8Cw@mail.gmail.com
2018-07-13 09:32:12 +09:00
Tom Lane 39a96512b3 Mark built-in btree comparison functions as leakproof where it's safe.
Generally, if the comparison operators for a datatype or pair of datatypes
are leakproof, the corresponding btree comparison support function can be
considered so as well.  But we had not originally worried about marking
support functions as leakproof, reasoning that they'd not likely be used in
queries so the marking wouldn't matter.  It turns out there's at least one
place where it does matter: calc_arraycontsel() finds the target datatype's
default btree comparison function and tries to use that to estimate
selectivity, but it will be blocked in some cases if the function isn't
leakproof.  This leads to unnecessarily poor selectivity estimates and bad
plans, as seen in bug #15251.

Hence, run around and apply proleakproof markings where the corresponding
btree comparison operators are leakproof.  (I did eyeball each function
to verify that it wasn't doing anything surprising, too.)

This isn't a full solution to bug #15251, and it's not back-patchable
because of the need for a catversion bump.  A more useful response probably
is to consider whether we can check permissions on the parent table instead
of the child.  However, this change will help in some cases where that
won't, and it's easy enough to do in HEAD, so let's do so.

Discussion: https://postgr.es/m/3876.1531261875@sss.pgh.pa.us
2018-07-11 18:47:31 -04:00
Andrew Dunstan 123efbccea Mark binary_upgrade_set_missing_value as parallel_unsafe
per buildfarm.

Bump catalog version again although in practice nobody is going to use
this in a parallel query.
2018-06-23 08:43:05 -04:00
Andrew Dunstan 2448adf29c Allow for pg_upgrade of attributes with missing values
Commit 16828d5c02 neglected to do this, so upgraded databases would
silently get null instead of the specified default in rows without the
attribute defined.

A new binary upgrade function is provided to perform this and pg_dump is
adjusted to output a call to the function if required in binary upgrade
mode.

Also included is code to drop missing attribute values for dropped
columns. That way if the type is later dropped the missing value won't
have a dangling reference to the type.

Finally the regression tests are adjusted to ensure that there is a row
with a missing value so that this code is exercised in upgrade testing.

Catalog version unfortunately bumped.

Regression test changes from Tom Lane.
Remainder from me, reviewed by Tom Lane, Andres Freund, Alvaro Herrera

Discussion: https://postgr.es/m/19987.1529420110@sss.pgh.pa.us
2018-06-22 08:42:36 -04:00
Andrew Dunstan 3a7cc727c7 Don't fall off the end of perl functions
This complies with the perlcritic policy
Subroutines::RequireFinalReturn, which is a severity 4 policy. Since we
only currently check at severity level 5, the policy is raised to that
level until we move to level 4 or lower, so that any new infringements
will be caught.

A small cosmetic piece of tidying of the pgperlcritic script is
included.

Mike Blackwell

Discussion: https://postgr.es/m/CAESHdJpfFm_9wQnQ3koY3c91FoRQsO-fh02za9R3OEMndOn84A@mail.gmail.com
2018-05-27 09:08:42 -04:00
Tom Lane 71b349aef4 Update a couple of long-obsolete comments in pg_type.h. 2018-05-26 13:47:26 -04:00
Tom Lane f755a152d4 Improve spelling of new FINALFUNC_MODIFY aggregate attribute.
I'd used SHARABLE as a value originally, but Peter Eisentraut points out
that dictionaries agree that SHAREABLE is the preferred spelling.
Run around and change that before it's too late.

Discussion: https://postgr.es/m/d2e1afd4-659c-50d6-1b20-7cfd3675e909@2ndquadrant.com
2018-05-21 11:41:42 -04:00
Tom Lane e7a808f947 Assorted minor cleanups for bootstrap-data Perl scripts.
FindDefinedSymbol was intended to take an array of possible include
paths, but it never actually worked correctly for any but the first
array element.  Since there's no use-case for more than one path
anyway, let's just simplify this code and its callers by redefining
it as taking only one include path.

Minor other code-beautification without functional effects, except
that in one place we format the output as pgindent would do.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGXM_n32hTTkircW4_K1LQFsJNb6xjs0pAP4QC0ZpyJfPQ@mail.gmail.com
2018-05-19 16:04:47 -04:00
Tom Lane 85475afdb6 Cosmetic improvement: use BKI_DEFAULT and BKI_LOOKUP in pg_language.
The point of this is not really to remove redundancy in pg_language.dat;
with only three entries, it's hardly worth it.  Rather, it is to get
to a point where there are exactly zero hard-coded numeric pg_proc OID
references in the catalog .dat files.  The lanvalidator column was the
only remaining location of such references, and it seems like a good
thing for future-proofing reasons to make it not be a special case.

There are still a few places in the .dat files with numeric OID references
to other catalogs, but after review I don't see any that seem worth
changing at present.  In each case there are just too few entries to make
it worth the trouble to create lookup infrastructure.

This doesn't change the emitted postgres.bki file, so no catversion bump.
2018-04-29 13:26:26 -04:00
Tom Lane 84549ebd4c Tweak reformat_dat_file.pl to make it more easily hand-invokable.
Use the same code we already applied in duplicate_oids and unused_oids
to let this script find Catalog.pm without help.  This removes the need
to supply a -I switch in most cases.

Also, mark the script executable, again to follow the precedent of
duplicate_oids and unused_oids.  Now you can just do
"./reformat_dat_file.pl pg_proc.dat"
if you want to reformat only one or a few .dat files rather than all.

It'd be possible to remove the -I switches in the Makefile's convenience
targets, but I chose to leave them: they don't hurt anything, and it's
possible that in weird VPATH situations they might be of value.
2018-04-28 16:09:03 -04:00
Tom Lane 45c6d75f8c Clarify handling of special-case values in bootstrap catalog data.
I (tgl) originally coded the special case for pg_proc.pronargs as
though it were a kind of default value.  It seems better though to
treat computable columns as an independent concern: this makes the
code clearer, and probably a bit faster too since we needn't do
work inside the per-column loop.

Improve related comments, as well, in the expectation that there
might be more cases like this in future.

John Naylor, some additional comment-hacking by me

Discussion: https://postgr.es/m/CAJVSVGW-D7OobzU=dybVT2JqZAx-4X1yvBJdavBmqQL05Q6CLw@mail.gmail.com
2018-04-28 15:27:16 -04:00
Tom Lane bdf46af748 Post-feature-freeze pgindent run.
Discussion: https://postgr.es/m/15719.1523984266@sss.pgh.pa.us
2018-04-26 14:47:16 -04:00
Tom Lane dd4cc9d706 Fix duplicate_oids and unused_oids so user needn't cd to catalog dir.
Previously, you had to cd into src/include/catalog before running either
of these scripts.  That's a bit tedious, so let's make the scripts do it
for you.

In passing, improve the initial comments in both scripts.  Also remove
unused_oids' code to complain about duplicate oids.  That was added in
yesterday's commit 5602265f7, but on second thought we shouldn't be
randomly redefining the script's behavior that way.

John Naylor and Tom Lane

Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
2018-04-26 11:20:01 -04:00
Tom Lane 5602265f77 Convert unused_oids and duplicate_oids to use Catalog.pm infrastructure.
unused_oids was previously a shell script, which of course didn't work at
all on Windows.  Also, commit 372728b0d introduced some other portability
problems, as complained of by Stas Kelvich.  We can improve matters by
converting it to Perl.

While we're at it, let's future-proof both this script and duplicate_oids
to use Catalog.pm rather than having a bunch of ad-hoc logic for parsing
catalog headers and .dat files.  These scripts are thereby a bit slower,
which doesn't seem like a problem for typical manual use.  It is a little
annoying for buildfarm purposes, but we should be able to fix that case
by having genbki.pl make the check instead of parsing the headers twice.
(That's not done in this commit, though.)

Stas Kelvich, adjusted a bit by me

Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
2018-04-25 16:01:58 -04:00
Tom Lane 1eb3a09e93 Make Catalog.pm's representation of toast and index decls more abstract.
Instead of immediately constructing the string we need to emit into the
.BKI file, preserve the items we extracted from the header file in a hash.
This eases using the info for other purposes.

John Naylor (with cosmetic adjustments by me)

Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
2018-04-25 16:01:58 -04:00
Tom Lane f04d4ac919 Reindent Perl files with perltidy version 20170521.
Discussion: https://postgr.es/m/CABUevEzK3cNiHZQ18f5tK0guoT+cN_jWeVzhYYxY=r+1Q3SmoA@mail.gmail.com
2018-04-25 14:00:19 -04:00
Tom Lane 68c23cba34 Improve consistency of comments in system catalog headers.
Use the term "system catalog" rather than "system relation" in assorted
places where it's clearly referring to a table rather than, say, an
index.  Use more natural word order in the header boilerplate, improve
some of the one-liner catalog descriptions, and fix assorted random
deviations from the normal boilerplate.  All purely neatnik-ism, but
why not.

John Naylor, some additional cleanup by me

Discussion: https://postgr.es/m/CAJVSVGUeJmFB3h-NJ18P32NPa+kzC165nm7GSoGHfPaN80Wxcw@mail.gmail.com
2018-04-19 17:14:09 -04:00
Teodor Sigaev ff4943042f Fix datatype for number of heap tuples during last cleanup
It appears that new fields introduced in 857f9c36 have inconsistent datatypes:
BTMetaPageData.btm_last_cleanup_num_heap_tuples is of float4 type,
while xl_btree_metadata.last_cleanup_num_heap_tuples is of double type.
IndexVacuumInfo.num_heap_tuples, which is the source of values for
both of those fields, is of double type.  So, make both fields in
BTMetaPageData and xl_btree_metadata use float8 type in order to match the
precision of the source.  They shouldn't be declared as double, because we
always use types with explicit width in WAL.

The patch introduces an on-disk format incompatibility relative to commit
857f9c36, but that version was never released, so just bump the catalog
version to avoid possible confusion.

Author: Alexander Korotkov
2018-04-19 11:28:03 +03:00
Tom Lane 55d26ff638 Rationalize handling of single and double quotes in bootstrap data.
Change things around so that proper quoting of values interpolated into
the BKI data by initdb is the responsibility of initdb, not something
we half-heartedly handle by putting double quotes into the raw BKI data.
(Note: experimentation shows that it still doesn't work to put a double
quote into the initial superuser username, but that's the fault of
inadequate quoting while interpolating the name into SQL scripts;
the BKI aspect of it works fine now.)

Having done that, we can remove the special-case handling of values
that look like "something" from genbki.pl, and instead teach it to
escape double --- and single --- quotes properly.  This removes the
nowhere-documented need to treat those specially in the BKI source
data; whatever you write will be passed through unchanged into the
inserted data value, modulo Perl's rules about single-quoted strings.

Add documentation explaining the (pre-existing) handling of backslashes
in the BKI data.

Per an earlier discussion with John Naylor.

Discussion: https://postgr.es/m/CAJVSVGUNao=-Q2-vAN3PYcdF5tnL5JAHwGwzZGuYHtq+Mk_9ng@mail.gmail.com
2018-04-17 19:53:50 -04:00
Alvaro Herrera da6f3e45dd Reorganize partitioning code
There's been a massive addition of partitioning code in PostgreSQL 11,
with little oversight on its placement, resulting in a
catalog/partition.c with poorly defined boundaries and responsibilities.
This commit tries to set up a couple of distinct modules to separate things
a little bit.  There are no code changes here, only code movement.

There are three new files:
  src/backend/utils/cache/partcache.c
  src/include/partitioning/partdefs.h
  src/include/utils/partcache.h

The previous arrangement of #including catalog/partition.h almost
everywhere is no more.

Authors: Amit Langote and Álvaro Herrera
Discussion: https://postgr.es/m/98e8d509-790a-128c-be7f-e48a5b2d8d97@lab.ntt.co.jp
	https://postgr.es/m/11aa0c50-316b-18bb-722d-c23814f39059@lab.ntt.co.jp
	https://postgr.es/m/143ed9a4-6038-76d4-9a55-502035815e68@lab.ntt.co.jp
	https://postgr.es/m/20180413193503.nynq7bnmgh6vs5vm@alvherre.pgsql
2018-04-14 21:12:14 -03:00
Tom Lane 2cdf359fc4 Make reformat_dat_file.pl preserve all blank lines.
In its original form, reformat_dat_file.pl smashed consecutive blank
lines to a single blank line, which was helpful for mopping up excess
whitespace during the bootstrap data format conversion.  But going
forward, there seems little reason to do that; if developers want to
put in multiple blank lines, let 'em.  This makes it conform to the
documentation I (tgl) wrote, too.

In passing, clean up some sloppy markup choices in bki.sgml.

John Naylor

Discussion: https://postgr.es/m/28827.1523039259@sss.pgh.pa.us
2018-04-09 14:58:39 -04:00
Tom Lane af1a949109 Further cleanup of client dependencies on src/include/catalog headers.
In commit 9c0a0de4c, I'd failed to notice that catalog/catalog.h
should also be considered a frontend-unsafe header, because it includes
(and needs) the full form of pg_class.h, not to mention relcache.h.
However, various frontend code was depending on it to get
TABLESPACE_VERSION_DIRECTORY, so refactoring of some sort is called for.

The cleanest answer seems to be to move TABLESPACE_VERSION_DIRECTORY,
as well as the OIDCHARS symbol, to common/relpath.h.  Do that, and mop up
inclusions as necessary.  (I found that quite a few current users of
catalog/catalog.h don't seem to need it at all anymore, apparently as a
result of the refactorings that created common/relpath.[hc].  And
initdb.c needed it only as a route to pg_class_d.h.)

Discussion: https://postgr.es/m/6629.1523294509@sss.pgh.pa.us
2018-04-09 14:39:58 -04:00
Magnus Hagander f5543d47bc catversion bump for online-checksums revert
Lack thereof pointed out by Tom Lane.
2018-04-09 19:26:58 +02:00
Magnus Hagander a228cc13ae Revert "Allow on-line enabling and disabling of data checksums"
This reverts the backend sides of commit 1fde38beaa.
I have, at least for now, left the pg_verify_checksums tool in place, as
this tool can be very valuable without the rest of the patch as well,
and since it's a read-only tool that only runs when the cluster is down
it should be a lot safer.
2018-04-09 19:03:42 +02:00
Tom Lane 4f85f66469 Cosmetic cleanups in initial catalog data.
Write ',' and ';' for typdelim values instead of the obscurantist
ASCII octal equivalents.  Not sure why anybody ever thought the
latter were better; maybe it had something to do with lack of
a better quoting convention, twenty-plus years ago?

Reassign a couple of high-numbered OIDs that were left in during
yesterday's mad rush to commit stuff of uncertain internal
temperature.

The latter requires a catversion bump, though the former wouldn't
since the end-result catalog data is unchanged.
2018-04-08 15:55:49 -04:00
Tom Lane cefa387153 Merge catalog/pg_foo_fn.h headers back into pg_foo.h headers.
Traditionally, include/catalog/pg_foo.h contains extern declarations
for functions in backend/catalog/pg_foo.c, in addition to its function
as the authoritative definition of the pg_foo catalog's rowtype.
In some cases, we'd been forced to split out those extern declarations
into separate pg_foo_fn.h headers so that the catalog definitions
could be #include'd by frontend code.  That problem is gone as of
commit 9c0a0de4c, so let's undo the splits to make things less
confusing.

Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us
2018-04-08 14:35:29 -04:00
Tom Lane 372728b0d4 Replace our traditional initial-catalog-data format with a better design.
Historically, the initial catalog data to be installed during bootstrap
has been written in DATA() lines in the catalog header files.  This had
lots of disadvantages: the format was badly underdocumented, it was
very difficult to edit the data in any mechanized way, and due to the
lack of any abstraction the data was verbose, hard to read/understand,
and easy to get wrong.

Hence, move this data into separate ".dat" files and represent it in a way
that can easily be read and rewritten by Perl scripts.  The new format is
essentially "key => value" for each column; while it's a bit repetitive,
explicit labeling of each value makes the data far more readable and less
error-prone.  Provide a way to abbreviate entries by omitting field values
that match a specified default value for their column.  This allows removal
of a large amount of repetitive boilerplate and also lowers the barrier to
adding new columns.

Also teach genbki.pl how to translate symbolic OID references into
numeric OIDs for more cases than just "regproc"-like pg_proc references.
It can now do that for regprocedure-like references (thus solving the
problem that regproc is ambiguous for overloaded functions), operators,
types, opfamilies, opclasses, and access methods.  Use this to turn
nearly all OID cross-references in the initial data into symbolic form.
This represents a very large step forward in readability and error
resistance of the initial catalog data.  It should also reduce the
difficulty of renumbering OID assignments in uncommitted patches.

Also, solve the longstanding problem that frontend code that would like to
use OID macros and other information from the catalog headers often had
difficulty with backend-only code in the headers.  To do this, arrange for
all generated macros, plus such other declarations as we deem fit, to be
placed in "derived" header files that are safe for frontend inclusion.
(Once clients migrate to using these pg_*_d.h headers, it will be possible
to get rid of the pg_*_fn.h headers, which only exist to quarantine code
away from clients.  That is left for follow-on patches, however.)

The now-automatically-generated macros include the Anum_xxx and Natts_xxx
constants that we used to have to update by hand when adding or removing
catalog columns.

Replace the former manual method of generating OID macros for pg_type
entries with an automatic method, ensuring that all built-in types have
OID macros.  (But note that this patch does not change the way that
OID macros for pg_proc entries are built and used.  It's not clear that
making that match the other catalogs would be worth extra code churn.)

Add SGML documentation explaining what the new data format is and how to
work with it.

Despite being a very large change in the catalog headers, there is no
catversion bump here, because postgres.bki and related output files
haven't changed at all.

John Naylor, based on ideas from various people; review and minor
additional coding by me; previous review by Alvaro Herrera

Discussion: https://postgr.es/m/CAJVSVGWO48JbbwXkJz_yBFyGYW-M9YWxnPdxJBUosDC9ou_F0Q@mail.gmail.com
2018-04-08 13:17:27 -04:00
Teodor Sigaev 8224de4f42 Indexes with INCLUDE columns and their support in B-tree
This patch introduces the INCLUDE clause for index definitions.  This clause
specifies a list of columns which will be included as a non-key part in
the index.  The INCLUDE columns exist solely to allow more queries to
benefit from index-only scans.  Also, such columns don't need to have
appropriate operator classes.  Expressions are not supported as INCLUDE
columns since they cannot be used in index-only scans.

Index access methods supporting INCLUDE are indicated by amcaninclude flag
in IndexAmRoutine.  For now, only B-tree indexes support INCLUDE clause.

In B-tree indexes, INCLUDE columns are truncated from pivot index tuples
(tuples located in non-leaf pages and high keys).  Therefore, B-tree indexes
may now have a variable number of attributes.  This patch also provides a
generic facility to support that: pivot tuples store their number of
attributes in t_tid.ip_posid, and the free 13th bit of t_info is used to
indicate that.  This facility will simplify further support for index suffix
truncation.  These changes are backward-compatible; pg_upgrade doesn't need
any special handling of B-tree indexes for them.

Bump catalog version

Author: Anastasia Lubennikova with contribution by Alexander Korotkov and me
Reviewed by: Peter Geoghegan, Tomas Vondra, Antonin Houska, Jeff Janes,
			 David Rowley, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/flat/56168952.4010101@postgrespro.ru
2018-04-07 23:00:39 +03:00
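
A minimal usage sketch (table, index, and column names invented for illustration):

    CREATE TABLE orders (order_id int, customer_id int, note text);

    -- "note" is carried in the index as a non-key column: no operator class
    -- is needed for it, and it cannot be used in the scan condition.
    CREATE INDEX orders_customer_idx ON orders
        USING btree (customer_id) INCLUDE (note);

    -- Both output columns are available from the index, so this query can be
    -- answered by an index-only scan (once the visibility map permits it).
    SELECT customer_id, note FROM orders WHERE customer_id = 42;
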
Teodor Sigaev 1c1791e000 Add json(b)_to_tsvector function
Jsonb has a complex nature, so there isn't a best-for-everything way to convert
it to tsvector for full text search. The existing to_tsvector(json(b)) converts
only string values, but it can also be useful to index keys, numeric values and
even booleans. To address that, json(b)_to_tsvector takes a second required
argument containing a list of the desired types of json fields. For now the
second argument is a jsonb scalar or array, with the possibility of adding new
options in the future.

Bump catalog version

Author: Dmitry Dolgov with some editing by me
Reviewed by: Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CA+q6zcXJQbS1b4kJ_HeAOoOc=unfnOrUEL=KGgE32QKDww7d8g@mail.gmail.com
2018-04-07 20:58:03 +03:00
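
For example, a hedged sketch of the new form (the document is invented; 'string' and 'numeric' are among the field types the filter argument accepts):

    -- Index the string and numeric values of the document, ignoring keys
    -- and booleans:
    SELECT jsonb_to_tsvector('english',
                             '{"city": "Moscow", "visits": 12, "vip": true}'::jsonb,
                             '["string", "numeric"]');
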
Peter Eisentraut 039eb6e92f Logical replication support for TRUNCATE
Update the built-in logical replication system to make use of the
previously added logical decoding for TRUNCATE support.  Add the
required truncate callback to pgoutput and a new logical replication
protocol message.

Publications get a new attribute to determine whether to replicate
truncate actions.  When updating a publication via pg_dump from an older
version, this is not set, thus preserving the previous behavior.

Author: Simon Riggs <simon@2ndquadrant.com>
Author: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
2018-04-07 11:34:11 -04:00
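
A hedged sketch of how the new publication attribute is exposed through the publish parameter (publication and table names invented):

    CREATE TABLE orders (order_id int PRIMARY KEY, customer_id int);

    -- Replicate TRUNCATE along with the usual DML actions:
    CREATE PUBLICATION pub_orders FOR TABLE orders
        WITH (publish = 'insert, update, delete, truncate');

    -- For a publication that predates this feature, enable it explicitly:
    ALTER PUBLICATION pub_orders SET (publish = 'insert, update, delete, truncate');
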
Tom Lane cb1ff1e5af Remove some unnecessary quote marks from catalog DATA lines.
This has no functional impact whatsoever.  However, it causes
these unnecessary quote marks to disappear from the generated
postgres.bki file, making it easier to verify that the upcoming
bootstrap data conversion patch doesn't change the generated file.
2018-04-06 18:58:38 -04:00
Alvaro Herrera 9fdb675fc5 Faster partition pruning
Add a new module backend/partitioning/partprune.c, implementing a more
sophisticated algorithm for partition pruning.  The new module uses each
partition's "boundinfo" for pruning instead of constraint exclusion,
based on an idea proposed by Robert Haas of a "pruning program": a list
of steps generated from the query quals which are run iteratively to
obtain a list of partitions that must be scanned in order to satisfy
those quals.

At present, this targets planner-time partition pruning, but there exist
further patches to apply partition pruning at execution time as well.

This commit also moves some definitions from include/catalog/partition.h
to a new file include/partitioning/partbounds.h, in an attempt to
rationalize partitioning related code.

Authors: Amit Langote, David Rowley, Dilip Kumar
Reviewers: Robert Haas, Kyotaro Horiguchi, Ashutosh Bapat, Jesper Pedersen.
Discussion: https://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
2018-04-06 16:44:05 -03:00
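
A small sketch of the user-visible effect (names and ranges invented): with the new pruning code, the planner should scan only the partition implied by the quals, without relying on constraint exclusion.

    CREATE TABLE measurements (logdate date, value int) PARTITION BY RANGE (logdate);
    CREATE TABLE measurements_2018q1 PARTITION OF measurements
        FOR VALUES FROM ('2018-01-01') TO ('2018-04-01');
    CREATE TABLE measurements_2018q2 PARTITION OF measurements
        FOR VALUES FROM ('2018-04-01') TO ('2018-07-01');

    -- Only measurements_2018q2 should appear in the plan:
    EXPLAIN SELECT * FROM measurements WHERE logdate = DATE '2018-05-15';
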
Stephen Frost 11523e860f Support new default roles with adminpack
This provides a newer version of adminpack which works with the newly
added default roles to support GRANT'ing non-superusers access to
read and write files, along with related functions (unlinking files,
getting file length, renaming/removing files, scanning the log file
directory) which are supported through adminpack.

Note that new versions of the functions are required because an
environment might have an updated version of the library but still have
the old adminpack 1.0 catalog definitions (where EXECUTE is GRANT'd to
PUBLIC for the functions).

This patch also removes the long-deprecated alternative names for
functions that adminpack used to include and which are now included in
the backend, in adminpack v1.1.  Applications using the deprecated names
should be updated to use the backend functions instead.  Existing
installations which continue to use adminpack v1.0 should continue to
function until/unless adminpack is upgraded.

Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/20171231191939.GR2416%40tamriel.snowman.net
2018-04-06 14:47:10 -04:00
Stephen Frost 0fdc8495bf Add default roles for file/program access
This patch adds new default roles named 'pg_read_server_files',
'pg_write_server_files', 'pg_execute_server_program' which
allow an administrator to GRANT to a non-superuser role the ability to
access server-side files or run programs through PostgreSQL (as the user
the database is running as).  Having one of these roles allows a
non-superuser to use server-side COPY to read from or write to files or a program,
and to use file_fdw (if installed by a superuser and GRANT'd USAGE on
it) to read from files or run a program.

The existing misc file functions are also changed to allow a user with
the 'pg_read_server_files' default role to read any files on the
filesystem, matching the privileges given to that role through COPY and
file_fdw from above.

Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/20171231191939.GR2416%40tamriel.snowman.net
2018-04-06 14:47:10 -04:00
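
A hedged usage sketch (the role names come from the commit message; the granted role, table, and path are invented):

    CREATE TABLE staging_table (id int, payload text);
    CREATE ROLE reporting_user LOGIN;

    -- Let the role read server-side files via COPY and file_fdw:
    GRANT pg_read_server_files TO reporting_user;

    -- Connected as reporting_user, server-side COPY from a file now works:
    COPY staging_table FROM '/srv/data/incoming.csv' WITH (FORMAT csv);

    -- COPY ... FROM PROGRAM would additionally require membership in
    -- pg_execute_server_program.
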
Peter Eisentraut bcf79b5bb6 Split the SetSubscriptionRelState function into two
We don't actually need the insert-or-update logic, so it's clearer to
have separate functions for inserting and updating.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
2018-04-06 10:00:26 -04:00