From c29c578908dc0271eeb13a4014e54bff07a29c05 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Sun, 8 Oct 2017 21:44:17 -0400
Subject: [PATCH] Don't use SGML empty tags
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

For DocBook XML compatibility, don't use SGML empty tags (</>) anymore,
replace by the full tag name.  Add a warning option to catch future
occurrences.

Alexander Lakhin, Jürgen Purtz
---
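Illustrative example of the substitution (generic tags chosen for
illustration, not taken from any particular hunk below): an SGML short
closing tag such as

    <command>SELECT</>  or  <filename>postgresql.conf</>

is spelled out in full as

    <command>SELECT</command>  or  <filename>postgresql.conf</filename>

The Makefile hunk below supports this by removing -wno-empty from SPFLAGS
and adding -wempty, so the SGML toolchain warns if an empty tag is
introduced again.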
 doc/src/sgml/Makefile | 3 +-
 doc/src/sgml/acronyms.sgml | 18 +-
 doc/src/sgml/adminpack.sgml | 54 +-
 doc/src/sgml/advanced.sgml | 110 +-
 doc/src/sgml/amcheck.sgml | 66 +-
 doc/src/sgml/arch-dev.sgml | 64 +-
 doc/src/sgml/array.sgml | 110 +-
 doc/src/sgml/auth-delay.sgml | 6 +-
 doc/src/sgml/auto-explain.sgml | 36 +-
 doc/src/sgml/backup.sgml | 496 +--
 doc/src/sgml/bgworker.sgml | 80 +-
 doc/src/sgml/biblio.sgml | 2 +-
 doc/src/sgml/bki.sgml | 86 +-
 doc/src/sgml/bloom.sgml | 24 +-
 doc/src/sgml/brin.sgml | 78 +-
 doc/src/sgml/btree-gin.sgml | 18 +-
 doc/src/sgml/btree-gist.sgml | 32 +-
 doc/src/sgml/catalogs.sgml | 1012 +++---
 doc/src/sgml/charset.sgml | 270 +-
 doc/src/sgml/citext.sgml | 120 +-
 doc/src/sgml/client-auth.sgml | 328 +-
 doc/src/sgml/config.sgml | 2156 ++++++-------
 doc/src/sgml/contrib-spi.sgml | 80 +-
 doc/src/sgml/contrib.sgml | 26 +-
 doc/src/sgml/cube.sgml | 158 +-
 doc/src/sgml/custom-scan.sgml | 136 +-
 doc/src/sgml/datatype.sgml | 692 ++---
 doc/src/sgml/datetime.sgml | 72 +-
 doc/src/sgml/dblink.sgml | 270 +-
 doc/src/sgml/ddl.sgml | 390 +--
 doc/src/sgml/dfunc.sgml | 40 +-
 doc/src/sgml/dict-int.sgml | 18 +-
 doc/src/sgml/dict-xsyn.sgml | 40 +-
 doc/src/sgml/diskusage.sgml | 16 +-
 doc/src/sgml/dml.sgml | 32 +-
 doc/src/sgml/docguide.sgml | 2 +-
 doc/src/sgml/earthdistance.sgml | 32 +-
 doc/src/sgml/ecpg.sgml | 734 ++---
 doc/src/sgml/errcodes.sgml | 18 +-
 doc/src/sgml/event-trigger.sgml | 78 +-
 doc/src/sgml/extend.sgml | 362 +--
 doc/src/sgml/external-projects.sgml | 18 +-
 doc/src/sgml/fdwhandler.sgml | 880 +++---
 doc/src/sgml/file-fdw.sgml | 48 +-
 doc/src/sgml/func.sgml | 2528 ++++++++--------
 doc/src/sgml/fuzzystrmatch.sgml | 26 +-
 doc/src/sgml/generate-errcodes-table.pl | 4 +-
 doc/src/sgml/generic-wal.sgml | 42 +-
 doc/src/sgml/geqo.sgml | 12 +-
 doc/src/sgml/gin.sgml | 286 +-
 doc/src/sgml/gist.sgml | 402 +--
 doc/src/sgml/high-availability.sgml | 516 ++--
 doc/src/sgml/history.sgml | 4 +-
 doc/src/sgml/hstore.sgml | 152 +-
 doc/src/sgml/indexam.sgml | 446 +--
 doc/src/sgml/indices.sgml | 224 +-
 doc/src/sgml/info.sgml | 6 +-
 doc/src/sgml/information_schema.sgml | 364 +--
 doc/src/sgml/install-windows.sgml | 58 +-
 doc/src/sgml/installation.sgml | 460 +--
 doc/src/sgml/intagg.sgml | 28 +-
 doc/src/sgml/intarray.sgml | 68 +-
 doc/src/sgml/intro.sgml | 4 +-
 doc/src/sgml/isn.sgml | 28 +-
 doc/src/sgml/json.sgml | 174 +-
 doc/src/sgml/libpq.sgml | 1414 ++++-----
 doc/src/sgml/lo.sgml | 34 +-
 doc/src/sgml/lobj.sgml | 128 +-
 doc/src/sgml/logicaldecoding.sgml | 18 +-
 doc/src/sgml/ltree.sgml | 234 +-
 doc/src/sgml/maintenance.sgml | 338 +--
 doc/src/sgml/manage-ag.sgml | 196 +-
 doc/src/sgml/monitoring.sgml | 1440 ++++-----
 doc/src/sgml/mvcc.sgml | 130 +-
 doc/src/sgml/nls.sgml | 24 +-
 doc/src/sgml/notation.sgml | 8 +-
 doc/src/sgml/oid2name.sgml | 60 +-
 doc/src/sgml/pageinspect.sgml | 36 +-
 doc/src/sgml/parallel.sgml | 96 +-
 doc/src/sgml/perform.sgml | 344 +--
 doc/src/sgml/pgbuffercache.sgml | 16 +-
 doc/src/sgml/pgcrypto.sgml | 176 +-
 doc/src/sgml/pgfreespacemap.sgml | 8 +-
 doc/src/sgml/pgprewarm.sgml | 16 +-
 doc/src/sgml/pgrowlocks.sgml | 10 +-
 doc/src/sgml/pgstandby.sgml | 100 +-
 doc/src/sgml/pgstatstatements.sgml | 106 +-
 doc/src/sgml/pgstattuple.sgml | 30 +-
 doc/src/sgml/pgtrgm.sgml | 90 +-
 doc/src/sgml/pgvisibility.sgml | 12 +-
 doc/src/sgml/planstats.sgml | 52 +-
 doc/src/sgml/plhandler.sgml | 50 +-
 doc/src/sgml/plperl.sgml | 148 +-
 doc/src/sgml/plpgsql.sgml | 1204 ++++----
 doc/src/sgml/plpython.sgml | 104 +-
 doc/src/sgml/pltcl.sgml | 210 +-
 doc/src/sgml/postgres-fdw.sgml | 198 +-
 doc/src/sgml/postgres.sgml | 16 +-
 doc/src/sgml/problems.sgml | 20 +-
 doc/src/sgml/protocol.sgml | 474 +--
 doc/src/sgml/queries.sgml | 674 ++---
 doc/src/sgml/query.sgml | 36 +-
 doc/src/sgml/rangetypes.sgml | 66 +-
 doc/src/sgml/recovery-config.sgml | 132 +-
 doc/src/sgml/ref/abort.sgml | 2 +-
 doc/src/sgml/ref/alter_aggregate.sgml | 18 +-
 doc/src/sgml/ref/alter_collation.sgml | 2 +-
 doc/src/sgml/ref/alter_conversion.sgml | 2 +-
 doc/src/sgml/ref/alter_database.sgml | 4 +-
 .../sgml/ref/alter_default_privileges.sgml | 20 +-
 doc/src/sgml/ref/alter_domain.sgml | 20 +-
 doc/src/sgml/ref/alter_extension.sgml | 22 +-
 .../sgml/ref/alter_foreign_data_wrapper.sgml | 18 +-
 doc/src/sgml/ref/alter_foreign_table.sgml | 42 +-
 doc/src/sgml/ref/alter_function.sgml | 28 +-
 doc/src/sgml/ref/alter_group.sgml | 8 +-
 doc/src/sgml/ref/alter_index.sgml | 10 +-
 doc/src/sgml/ref/alter_materialized_view.sgml | 4 +-
 doc/src/sgml/ref/alter_opclass.sgml | 2 +-
 doc/src/sgml/ref/alter_operator.sgml | 2 +-
 doc/src/sgml/ref/alter_opfamily.sgml | 32 +-
 doc/src/sgml/ref/alter_publication.sgml | 8 +-
 doc/src/sgml/ref/alter_role.sgml | 22 +-
 doc/src/sgml/ref/alter_schema.sgml | 2 +-
 doc/src/sgml/ref/alter_sequence.sgml | 34 +-
 doc/src/sgml/ref/alter_server.sgml | 12 +-
 doc/src/sgml/ref/alter_statistics.sgml | 4 +-
 doc/src/sgml/ref/alter_subscription.sgml | 4 +-
 doc/src/sgml/ref/alter_system.sgml | 12 +-
 doc/src/sgml/ref/alter_table.sgml | 140 +-
 doc/src/sgml/ref/alter_tablespace.sgml | 4 +-
 doc/src/sgml/ref/alter_trigger.sgml | 4 +-
 doc/src/sgml/ref/alter_tsconfig.sgml | 20 +-
 doc/src/sgml/ref/alter_tsdictionary.sgml | 8 +-
 doc/src/sgml/ref/alter_tsparser.sgml | 2 +-
 doc/src/sgml/ref/alter_tstemplate.sgml | 2 +-
 doc/src/sgml/ref/alter_type.sgml | 6 +-
 doc/src/sgml/ref/alter_user_mapping.sgml | 14 +-
 doc/src/sgml/ref/alter_view.sgml | 10 +-
 doc/src/sgml/ref/analyze.sgml | 16 +-
 doc/src/sgml/ref/begin.sgml | 4 +-
 doc/src/sgml/ref/close.sgml | 4 +-
 doc/src/sgml/ref/cluster.sgml | 10 +-
 doc/src/sgml/ref/clusterdb.sgml | 62 +-
 doc/src/sgml/ref/comment.sgml | 26 +-
 doc/src/sgml/ref/commit.sgml | 2 +-
 doc/src/sgml/ref/commit_prepared.sgml | 2 +-
 doc/src/sgml/ref/copy.sgml | 226 +-
 doc/src/sgml/ref/create_access_method.sgml | 8 +-
 doc/src/sgml/ref/create_aggregate.sgml | 164 +-
 doc/src/sgml/ref/create_cast.sgml | 80 +-
 doc/src/sgml/ref/create_collation.sgml | 2 +-
 doc/src/sgml/ref/create_conversion.sgml | 6 +-
 doc/src/sgml/ref/create_database.sgml | 54 +-
 doc/src/sgml/ref/create_domain.sgml | 22 +-
 doc/src/sgml/ref/create_event_trigger.sgml | 6 +-
 doc/src/sgml/ref/create_extension.sgml | 38 +-
 .../sgml/ref/create_foreign_data_wrapper.sgml | 12 +-
 doc/src/sgml/ref/create_foreign_table.sgml | 44 +-
 doc/src/sgml/ref/create_function.sgml | 128 +-
 doc/src/sgml/ref/create_index.sgml | 120 +-
 doc/src/sgml/ref/create_language.sgml | 42 +-
 .../sgml/ref/create_materialized_view.sgml | 10 +-
 doc/src/sgml/ref/create_opclass.sgml | 38 +-
 doc/src/sgml/ref/create_operator.sgml | 24 +-
 doc/src/sgml/ref/create_opfamily.sgml | 4 +-
 doc/src/sgml/ref/create_policy.sgml | 20 +-
 doc/src/sgml/ref/create_publication.sgml | 14 +-
 doc/src/sgml/ref/create_role.sgml | 94 +-
 doc/src/sgml/ref/create_rule.sgml | 32 +-
 doc/src/sgml/ref/create_schema.sgml | 32 +-
 doc/src/sgml/ref/create_sequence.sgml | 28 +-
 doc/src/sgml/ref/create_server.sgml | 8 +-
 doc/src/sgml/ref/create_statistics.sgml | 10 +-
 doc/src/sgml/ref/create_subscription.sgml | 4 +-
 doc/src/sgml/ref/create_table.sgml | 294 +-
 doc/src/sgml/ref/create_table_as.sgml | 36 +-
 doc/src/sgml/ref/create_tablespace.sgml | 22 +-
 doc/src/sgml/ref/create_trigger.sgml | 172 +-
 doc/src/sgml/ref/create_tsconfig.sgml | 2 +-
 doc/src/sgml/ref/create_tstemplate.sgml | 2 +-
 doc/src/sgml/ref/create_type.sgml | 98 +-
 doc/src/sgml/ref/create_user.sgml | 4 +-
 doc/src/sgml/ref/create_user_mapping.sgml | 10 +-
 doc/src/sgml/ref/create_view.sgml | 140 +-
 doc/src/sgml/ref/createdb.sgml | 66 +-
 doc/src/sgml/ref/createuser.sgml | 114 +-
 doc/src/sgml/ref/declare.sgml | 56 +-
 doc/src/sgml/ref/delete.sgml | 62 +-
 doc/src/sgml/ref/discard.sgml | 10 +-
 doc/src/sgml/ref/do.sgml | 18 +-
 doc/src/sgml/ref/drop_access_method.sgml | 4 +-
 doc/src/sgml/ref/drop_aggregate.sgml | 10 +-
 doc/src/sgml/ref/drop_collation.sgml | 4 +-
 doc/src/sgml/ref/drop_conversion.sgml | 2 +-
 doc/src/sgml/ref/drop_database.sgml | 2 +-
 doc/src/sgml/ref/drop_domain.sgml | 6 +-
 doc/src/sgml/ref/drop_extension.sgml | 6 +-
 .../sgml/ref/drop_foreign_data_wrapper.sgml | 6 +-
 doc/src/sgml/ref/drop_foreign_table.sgml | 2 +-
 doc/src/sgml/ref/drop_function.sgml | 12 +-
 doc/src/sgml/ref/drop_index.sgml | 12 +-
 doc/src/sgml/ref/drop_language.sgml | 4 +-
 doc/src/sgml/ref/drop_opclass.sgml | 12 +-
 doc/src/sgml/ref/drop_opfamily.sgml | 4 +-
 doc/src/sgml/ref/drop_owned.sgml | 2 +-
 doc/src/sgml/ref/drop_publication.sgml | 2 +-
 doc/src/sgml/ref/drop_role.sgml | 4 +-
 doc/src/sgml/ref/drop_schema.sgml | 2 +-
 doc/src/sgml/ref/drop_sequence.sgml | 2 +-
 doc/src/sgml/ref/drop_server.sgml | 6 +-
 doc/src/sgml/ref/drop_subscription.sgml | 2 +-
 doc/src/sgml/ref/drop_table.sgml | 6 +-
 doc/src/sgml/ref/drop_tablespace.sgml | 6 +-
 doc/src/sgml/ref/drop_tsconfig.sgml | 4 +-
 doc/src/sgml/ref/drop_tsdictionary.sgml | 2 +-
 doc/src/sgml/ref/drop_tsparser.sgml | 2 +-
 doc/src/sgml/ref/drop_tstemplate.sgml | 2 +-
 doc/src/sgml/ref/drop_type.sgml | 4 +-
 doc/src/sgml/ref/drop_user_mapping.sgml | 14 +-
 doc/src/sgml/ref/drop_view.sgml | 2 +-
 doc/src/sgml/ref/dropdb.sgml | 48 +-
 doc/src/sgml/ref/dropuser.sgml | 46 +-
 doc/src/sgml/ref/ecpg-ref.sgml | 6 +-
 doc/src/sgml/ref/end.sgml | 2 +-
 doc/src/sgml/ref/execute.sgml | 4 +-
 doc/src/sgml/ref/explain.sgml | 18 +-
 doc/src/sgml/ref/fetch.sgml | 36 +-
 doc/src/sgml/ref/grant.sgml | 98 +-
 doc/src/sgml/ref/import_foreign_schema.sgml | 14 +-
 doc/src/sgml/ref/initdb.sgml | 42 +-
 doc/src/sgml/ref/insert.sgml | 86 +-
 doc/src/sgml/ref/listen.sgml | 6 +-
 doc/src/sgml/ref/load.sgml | 14 +-
 doc/src/sgml/ref/lock.sgml | 72 +-
 doc/src/sgml/ref/move.sgml | 2 +-
 doc/src/sgml/ref/notify.sgml | 12 +-
 doc/src/sgml/ref/pg_basebackup.sgml | 40 +-
 doc/src/sgml/ref/pg_config-ref.sgml | 100 +-
 doc/src/sgml/ref/pg_controldata.sgml | 8 +-
 doc/src/sgml/ref/pg_ctl-ref.sgml | 50 +-
 doc/src/sgml/ref/pg_dump.sgml | 280 +-
 doc/src/sgml/ref/pg_dumpall.sgml | 102 +-
 doc/src/sgml/ref/pg_isready.sgml | 32 +-
 doc/src/sgml/ref/pg_receivewal.sgml | 34 +-
 doc/src/sgml/ref/pg_recvlogical.sgml | 26 +-
 doc/src/sgml/ref/pg_resetwal.sgml | 56 +-
 doc/src/sgml/ref/pg_restore.sgml | 150 +-
 doc/src/sgml/ref/pg_rewind.sgml | 48 +-
 doc/src/sgml/ref/pg_waldump.sgml | 16 +-
 doc/src/sgml/ref/pgarchivecleanup.sgml | 50 +-
 doc/src/sgml/ref/pgbench.sgml | 568 ++--
 doc/src/sgml/ref/pgtestfsync.sgml | 16 +-
 doc/src/sgml/ref/pgtesttiming.sgml | 10 +-
 doc/src/sgml/ref/pgupgrade.sgml | 232 +-
 doc/src/sgml/ref/postgres-ref.sgml | 76 +-
 doc/src/sgml/ref/postmaster.sgml | 2 +-
 doc/src/sgml/ref/prepare.sgml | 18 +-
 doc/src/sgml/ref/prepare_transaction.sgml | 30 +-
 doc/src/sgml/ref/psql-ref.sgml | 662 ++--
 doc/src/sgml/ref/reassign_owned.sgml | 2 +-
 .../sgml/ref/refresh_materialized_view.sgml | 4 +-
 doc/src/sgml/ref/reindex.sgml | 34 +-
 doc/src/sgml/ref/reindexdb.sgml | 80 +-
 doc/src/sgml/ref/release_savepoint.sgml | 2 +-
 doc/src/sgml/ref/reset.sgml | 10 +-
 doc/src/sgml/ref/revoke.sgml | 48 +-
 doc/src/sgml/ref/rollback.sgml | 2 +-
 doc/src/sgml/ref/rollback_prepared.sgml | 2 +-
 doc/src/sgml/ref/rollback_to.sgml | 28 +-
 doc/src/sgml/ref/savepoint.sgml | 8 +-
 doc/src/sgml/ref/security_label.sgml | 22 +-
 doc/src/sgml/ref/select.sgml | 598 ++--
 doc/src/sgml/ref/set.sgml | 44 +-
 doc/src/sgml/ref/set_constraints.sgml | 14 +-
 doc/src/sgml/ref/set_role.sgml | 36 +-
 doc/src/sgml/ref/set_session_auth.sgml | 16 +-
 doc/src/sgml/ref/set_transaction.sgml | 12 +-
 doc/src/sgml/ref/show.sgml | 2 +-
 doc/src/sgml/ref/start_transaction.sgml | 6 +-
 doc/src/sgml/ref/truncate.sgml | 38 +-
 doc/src/sgml/ref/unlisten.sgml | 2 +-
 doc/src/sgml/ref/update.sgml | 88 +-
 doc/src/sgml/ref/vacuum.sgml | 24 +-
 doc/src/sgml/ref/vacuumdb.sgml | 52 +-
 doc/src/sgml/ref/values.sgml | 60 +-
 doc/src/sgml/regress.sgml | 122 +-
 doc/src/sgml/release-10.sgml | 736 ++---
 doc/src/sgml/release-7.4.sgml | 700 ++---
 doc/src/sgml/release-8.0.sgml | 1266 ++++----
 doc/src/sgml/release-8.1.sgml | 1344 ++++----
 doc/src/sgml/release-8.2.sgml | 1598 +++++-----
 doc/src/sgml/release-8.3.sgml | 1682 +++++-----
 doc/src/sgml/release-8.4.sgml | 2468 +++++++--------
 doc/src/sgml/release-9.0.sgml | 2572 ++++++++--------
 doc/src/sgml/release-9.1.sgml | 2678 ++++++++--------
 doc/src/sgml/release-9.2.sgml | 2694 ++++++++---------
 doc/src/sgml/release-9.3.sgml | 2518 +++++++--------
 doc/src/sgml/release-9.4.sgml | 2142 ++++++-------
 doc/src/sgml/release-9.5.sgml | 1800 +++++------
 doc/src/sgml/release-9.6.sgml | 1516 +++++-----
 doc/src/sgml/release-old.sgml | 314 +-
 doc/src/sgml/release.sgml | 2 +-
 doc/src/sgml/rowtypes.sgml | 130 +-
 doc/src/sgml/rules.sgml | 326 +-
 doc/src/sgml/runtime.sgml | 610 ++--
 doc/src/sgml/seg.sgml | 70 +-
 doc/src/sgml/sepgsql.sgml | 190 +-
 doc/src/sgml/sourcerepo.sgml | 24 +-
 doc/src/sgml/sources.sgml | 170 +-
 doc/src/sgml/spgist.sgml | 468 +--
 doc/src/sgml/spi.sgml | 226 +-
 doc/src/sgml/sslinfo.sgml | 14 +-
 doc/src/sgml/start.sgml | 22 +-
 doc/src/sgml/storage.sgml | 328 +-
 doc/src/sgml/syntax.sgml | 362 +--
 doc/src/sgml/tablefunc.sgml | 168 +-
 doc/src/sgml/tablesample-method.sgml | 128 +-
 doc/src/sgml/tcn.sgml | 8 +-
 doc/src/sgml/test-decoding.sgml | 4 +-
 doc/src/sgml/textsearch.sgml | 734 ++---
 doc/src/sgml/trigger.sgml | 220 +-
 doc/src/sgml/tsm-system-rows.sgml | 8 +-
 doc/src/sgml/tsm-system-time.sgml | 8 +-
 doc/src/sgml/typeconv.sgml | 122 +-
 doc/src/sgml/unaccent.sgml | 44 +-
 doc/src/sgml/user-manag.sgml | 138 +-
 doc/src/sgml/uuid-ossp.sgml | 26 +-
 doc/src/sgml/vacuumlo.sgml | 42 +-
 doc/src/sgml/wal.sgml | 138 +-
 doc/src/sgml/xaggr.sgml | 160 +-
 doc/src/sgml/xfunc.sgml | 614 ++--
 doc/src/sgml/xindex.sgml | 192 +-
 doc/src/sgml/xml2.sgml | 58 +-
 doc/src/sgml/xoper.sgml | 142 +-
 doc/src/sgml/xplang.sgml | 26 +-
 doc/src/sgml/xtypes.sgml | 68 +-
 337 files changed, 31636 insertions(+), 31635 deletions(-)

diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile
index 164c00bb63..428eb569fc 100644
---
a/doc/src/sgml/Makefile +++ b/doc/src/sgml/Makefile @@ -66,10 +66,11 @@ ALLSGML := $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml) $(GENERATED_SGML) # Enable some extra warnings # -wfully-tagged needed to throw a warning on missing tags # for older tool chains, 2007-08-31 -override SPFLAGS += -wall -wno-unused-param -wno-empty -wfully-tagged +override SPFLAGS += -wall -wno-unused-param -wfully-tagged # Additional warnings for XML compatibility. The conditional is meant # to detect whether we are using OpenSP rather than the ancient # original SP. +override SPFLAGS += -wempty ifneq (,$(filter o%,$(notdir $(OSX)))) override SPFLAGS += -wdata-delim -winstance-ignore-ms -winstance-include-ms -winstance-param-entity endif diff --git a/doc/src/sgml/acronyms.sgml b/doc/src/sgml/acronyms.sgml index 29f85e0846..35514d4d9a 100644 --- a/doc/src/sgml/acronyms.sgml +++ b/doc/src/sgml/acronyms.sgml @@ -4,8 +4,8 @@ Acronyms - This is a list of acronyms commonly used in the PostgreSQL - documentation and in discussions about PostgreSQL. + This is a list of acronyms commonly used in the PostgreSQL + documentation and in discussions about PostgreSQL. @@ -153,7 +153,7 @@ Data Definition Language, SQL commands such as CREATE - TABLE, ALTER USER + TABLE, ALTER USER @@ -164,8 +164,8 @@ Data - Manipulation Language, SQL commands such as INSERT, - UPDATE, DELETE + Manipulation Language, SQL commands such as INSERT, + UPDATE, DELETE @@ -281,7 +281,7 @@ Grand Unified Configuration, - the PostgreSQL subsystem that handles server configuration + the PostgreSQL subsystem that handles server configuration @@ -384,7 +384,7 @@ LSN - Log Sequence Number, see pg_lsn + Log Sequence Number, see pg_lsn and WAL Internals. @@ -486,7 +486,7 @@ PGSQL - PostgreSQL + PostgreSQL @@ -495,7 +495,7 @@ PGXS - PostgreSQL Extension System + PostgreSQL Extension System diff --git a/doc/src/sgml/adminpack.sgml b/doc/src/sgml/adminpack.sgml index fddf90c4a5..b27a4a325d 100644 --- a/doc/src/sgml/adminpack.sgml +++ b/doc/src/sgml/adminpack.sgml @@ -8,8 +8,8 @@ - adminpack provides a number of support functions which - pgAdmin and other administration and management tools can + adminpack provides a number of support functions which + pgAdmin and other administration and management tools can use to provide additional functionality, such as remote management of server log files. Use of all these functions is restricted to superusers. @@ -25,7 +25,7 @@ - <filename>adminpack</> Functions + <filename>adminpack</filename> Functions Name Return Type Description @@ -58,7 +58,7 @@ pg_catalog.pg_logdir_ls() setof record - List the log files in the log_directory directory + List the log files in the log_directory directory @@ -69,9 +69,9 @@ pg_file_write - pg_file_write writes the specified data into - the file named by filename. If append is - false, the file must not already exist. If append is true, + pg_file_write writes the specified data into + the file named by filename. If append is + false, the file must not already exist. If append is true, the file can already exist, and will be appended to if so. Returns the number of bytes written. @@ -80,15 +80,15 @@ pg_file_rename - pg_file_rename renames a file. If archivename - is omitted or NULL, it simply renames oldname - to newname (which must not already exist). - If archivename is provided, it first - renames newname to archivename (which must - not already exist), and then renames oldname - to newname. 
In event of failure of the second rename step, - it will try to rename archivename back - to newname before reporting the error. + pg_file_rename renames a file. If archivename + is omitted or NULL, it simply renames oldname + to newname (which must not already exist). + If archivename is provided, it first + renames newname to archivename (which must + not already exist), and then renames oldname + to newname. In event of failure of the second rename step, + it will try to rename archivename back + to newname before reporting the error. Returns true on success, false if the source file(s) are not present or not writable; other cases throw errors. @@ -97,19 +97,19 @@ pg_file_unlink - pg_file_unlink removes the specified file. + pg_file_unlink removes the specified file. Returns true on success, false if the specified file is not present - or the unlink() call fails; other cases throw errors. + or the unlink() call fails; other cases throw errors. pg_logdir_ls - pg_logdir_ls returns the start timestamps and path + pg_logdir_ls returns the start timestamps and path names of all the log files in the directory. The parameter must have its - default setting (postgresql-%Y-%m-%d_%H%M%S.log) to use this + default setting (postgresql-%Y-%m-%d_%H%M%S.log) to use this function. @@ -119,12 +119,12 @@ and should not be used in new applications; instead use those shown in and . These functions are - provided in adminpack only for compatibility with old - versions of pgAdmin. + provided in adminpack only for compatibility with old + versions of pgAdmin.
- Deprecated <filename>adminpack</> Functions + Deprecated <filename>adminpack</filename> Functions Name Return Type Description @@ -136,22 +136,22 @@ pg_catalog.pg_file_read(filename text, offset bigint, nbytes bigint) text - Alternate name for pg_read_file() + Alternate name for pg_read_file() pg_catalog.pg_file_length(filename text) bigint - Same as size column returned - by pg_stat_file() + Same as size column returned + by pg_stat_file() pg_catalog.pg_logfile_rotate() integer - Alternate name for pg_rotate_logfile(), but note that it + Alternate name for pg_rotate_logfile(), but note that it returns integer 0 or 1 rather than boolean diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml index f47c01987b..bf87df4dcb 100644 --- a/doc/src/sgml/advanced.sgml +++ b/doc/src/sgml/advanced.sgml @@ -145,7 +145,7 @@ DETAIL: Key (city)=(Berkeley) is not present in table "cities". - Transactions are a fundamental concept of all database + Transactions are a fundamental concept of all database systems. The essential point of a transaction is that it bundles multiple steps into a single, all-or-nothing operation. The intermediate states between the steps are not visible to other concurrent transactions, @@ -182,8 +182,8 @@ UPDATE branches SET balance = balance + 100.00 remain a happy customer if she was debited without Bob being credited. We need a guarantee that if something goes wrong partway through the operation, none of the steps executed so far will take effect. Grouping - the updates into a transaction gives us this guarantee. - A transaction is said to be atomic: from the point of + the updates into a transaction gives us this guarantee. + A transaction is said to be atomic: from the point of view of other transactions, it either happens completely or not at all. @@ -216,9 +216,9 @@ UPDATE branches SET balance = balance + 100.00 - In PostgreSQL, a transaction is set up by surrounding + In PostgreSQL, a transaction is set up by surrounding the SQL commands of the transaction with - BEGIN and COMMIT commands. So our banking + BEGIN and COMMIT commands. So our banking transaction would actually look like: @@ -233,23 +233,23 @@ COMMIT; If, partway through the transaction, we decide we do not want to commit (perhaps we just noticed that Alice's balance went negative), - we can issue the command ROLLBACK instead of - COMMIT, and all our updates so far will be canceled. + we can issue the command ROLLBACK instead of + COMMIT, and all our updates so far will be canceled. - PostgreSQL actually treats every SQL statement as being - executed within a transaction. If you do not issue a BEGIN + PostgreSQL actually treats every SQL statement as being + executed within a transaction. If you do not issue a BEGIN command, - then each individual statement has an implicit BEGIN and - (if successful) COMMIT wrapped around it. A group of - statements surrounded by BEGIN and COMMIT - is sometimes called a transaction block. + then each individual statement has an implicit BEGIN and + (if successful) COMMIT wrapped around it. A group of + statements surrounded by BEGIN and COMMIT + is sometimes called a transaction block. - Some client libraries issue BEGIN and COMMIT + Some client libraries issue BEGIN and COMMIT commands automatically, so that you might get the effect of transaction blocks without asking. Check the documentation for the interface you are using. 
@@ -258,11 +258,11 @@ COMMIT; It's possible to control the statements in a transaction in a more - granular fashion through the use of savepoints. Savepoints + granular fashion through the use of savepoints. Savepoints allow you to selectively discard parts of the transaction, while committing the rest. After defining a savepoint with - SAVEPOINT, you can if needed roll back to the savepoint - with ROLLBACK TO. All the transaction's database changes + SAVEPOINT, you can if needed roll back to the savepoint + with ROLLBACK TO. All the transaction's database changes between defining the savepoint and rolling back to it are discarded, but changes earlier than the savepoint are kept. @@ -308,7 +308,7 @@ COMMIT; This example is, of course, oversimplified, but there's a lot of control possible in a transaction block through the use of savepoints. - Moreover, ROLLBACK TO is the only way to regain control of a + Moreover, ROLLBACK TO is the only way to regain control of a transaction block that was put in aborted state by the system due to an error, short of rolling it back completely and starting again. @@ -325,7 +325,7 @@ COMMIT; - A window function performs a calculation across a set of + A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. However, window functions do not cause rows to become grouped into a single @@ -360,31 +360,31 @@ SELECT depname, empno, salary, avg(salary) OVER (PARTITION BY depname) FROM emps The first three output columns come directly from the table - empsalary, and there is one output row for each row in the + empsalary, and there is one output row for each row in the table. The fourth column represents an average taken across all the table - rows that have the same depname value as the current row. - (This actually is the same function as the non-window avg - aggregate, but the OVER clause causes it to be + rows that have the same depname value as the current row. + (This actually is the same function as the non-window avg + aggregate, but the OVER clause causes it to be treated as a window function and computed across the window frame.) - A window function call always contains an OVER clause + A window function call always contains an OVER clause directly following the window function's name and argument(s). This is what syntactically distinguishes it from a normal function or non-window - aggregate. The OVER clause determines exactly how the + aggregate. The OVER clause determines exactly how the rows of the query are split up for processing by the window function. - The PARTITION BY clause within OVER + The PARTITION BY clause within OVER divides the rows into groups, or partitions, that share the same - values of the PARTITION BY expression(s). For each row, + values of the PARTITION BY expression(s). For each row, the window function is computed across the rows that fall into the same partition as the current row. You can also control the order in which rows are processed by - window functions using ORDER BY within OVER. - (The window ORDER BY does not even have to match the + window functions using ORDER BY within OVER. + (The window ORDER BY does not even have to match the order in which the rows are output.) 
Here is an example: @@ -409,39 +409,39 @@ FROM empsalary; (10 rows) - As shown here, the rank function produces a numerical rank - for each distinct ORDER BY value in the current row's - partition, using the order defined by the ORDER BY clause. - rank needs no explicit parameter, because its behavior - is entirely determined by the OVER clause. + As shown here, the rank function produces a numerical rank + for each distinct ORDER BY value in the current row's + partition, using the order defined by the ORDER BY clause. + rank needs no explicit parameter, because its behavior + is entirely determined by the OVER clause. The rows considered by a window function are those of the virtual - table produced by the query's FROM clause as filtered by its - WHERE, GROUP BY, and HAVING clauses + table produced by the query's FROM clause as filtered by its + WHERE, GROUP BY, and HAVING clauses if any. For example, a row removed because it does not meet the - WHERE condition is not seen by any window function. + WHERE condition is not seen by any window function. A query can contain multiple window functions that slice up the data - in different ways using different OVER clauses, but + in different ways using different OVER clauses, but they all act on the same collection of rows defined by this virtual table. - We already saw that ORDER BY can be omitted if the ordering + We already saw that ORDER BY can be omitted if the ordering of rows is not important. It is also possible to omit PARTITION - BY, in which case there is a single partition containing all rows. + BY, in which case there is a single partition containing all rows. There is another important concept associated with window functions: for each row, there is a set of rows within its partition called its - window frame. Some window functions act only + window frame. Some window functions act only on the rows of the window frame, rather than of the whole partition. - By default, if ORDER BY is supplied then the frame consists of + By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the - ORDER BY clause. When ORDER BY is omitted the + ORDER BY clause. When ORDER BY is omitted the default frame consists of all rows in the partition. @@ -450,7 +450,7 @@ FROM empsalary; for details. - Here is an example using sum: + Here is an example using sum: @@ -474,11 +474,11 @@ SELECT salary, sum(salary) OVER () FROM empsalary; - Above, since there is no ORDER BY in the OVER + Above, since there is no ORDER BY in the OVER clause, the window frame is the same as the partition, which for lack of - PARTITION BY is the whole table; in other words each sum is + PARTITION BY is the whole table; in other words each sum is taken over the whole table and so we get the same result for each output - row. But if we add an ORDER BY clause, we get very different + row. But if we add an ORDER BY clause, we get very different results: @@ -510,8 +510,8 @@ SELECT salary, sum(salary) OVER (ORDER BY salary) FROM empsalary; Window functions are permitted only in the SELECT list - and the ORDER BY clause of the query. They are forbidden - elsewhere, such as in GROUP BY, HAVING + and the ORDER BY clause of the query. They are forbidden + elsewhere, such as in GROUP BY, HAVING and WHERE clauses. This is because they logically execute after the processing of those clauses. 
Also, window functions execute after non-window aggregate functions. This means it is valid to @@ -534,15 +534,15 @@ WHERE pos < 3; The above query only shows the rows from the inner query having - rank less than 3. + rank less than 3. When a query involves multiple window functions, it is possible to write - out each one with a separate OVER clause, but this is + out each one with a separate OVER clause, but this is duplicative and error-prone if the same windowing behavior is wanted for several functions. Instead, each windowing behavior can be named - in a WINDOW clause and then referenced in OVER. + in a WINDOW clause and then referenced in OVER. For example: @@ -623,13 +623,13 @@ CREATE TABLE capitals ( In this case, a row of capitals - inherits all columns (name, - population, and altitude) from its + inherits all columns (name, + population, and altitude) from its parent, cities. The type of the column name is text, a native PostgreSQL type for variable length character strings. State capitals have - an extra column, state, that shows their state. In + an extra column, state, that shows their state. In PostgreSQL, a table can inherit from zero or more other tables. diff --git a/doc/src/sgml/amcheck.sgml b/doc/src/sgml/amcheck.sgml index dd71dbd679..0dd68f0ba1 100644 --- a/doc/src/sgml/amcheck.sgml +++ b/doc/src/sgml/amcheck.sgml @@ -8,19 +8,19 @@ - The amcheck module provides functions that allow you to + The amcheck module provides functions that allow you to verify the logical consistency of the structure of indexes. If the structure appears to be valid, no error is raised. - The functions verify various invariants in the + The functions verify various invariants in the structure of the representation of particular indexes. The correctness of the access method functions behind index scans and other important operations relies on these invariants always holding. For example, certain functions verify, among other things, - that all B-Tree pages have items in logical order (e.g., - for B-Tree indexes on text, index tuples should be in + that all B-Tree pages have items in logical order (e.g., + for B-Tree indexes on text, index tuples should be in collated lexical order). If that particular invariant somehow fails to hold, we can expect binary searches on the affected page to incorrectly guide index scans, resulting in wrong answers to SQL @@ -35,7 +35,7 @@ functions. - amcheck functions may be used only by superusers. + amcheck functions may be used only by superusers. @@ -82,7 +82,7 @@ ORDER BY c.relpages DESC LIMIT 10; (10 rows) This example shows a session that performs verification of every - catalog index in the database test. Details of just + catalog index in the database test. Details of just the 10 largest indexes verified are displayed. Since no error is raised, all indexes tested appear to be logically consistent. Naturally, this query could easily be changed to call @@ -90,10 +90,10 @@ ORDER BY c.relpages DESC LIMIT 10; database where verification is supported. - bt_index_check acquires an AccessShareLock + bt_index_check acquires an AccessShareLock on the target index and the heap relation it belongs to. This lock mode is the same lock mode acquired on relations by simple - SELECT statements. + SELECT statements. bt_index_check does not verify invariants that span child/parent relationships, nor does it verify that the target index is consistent with its heap relation. 
When a @@ -132,13 +132,13 @@ ORDER BY c.relpages DESC LIMIT 10; logical inconsistency or other problem. - A ShareLock is required on the target index by + A ShareLock is required on the target index by bt_index_parent_check (a - ShareLock is also acquired on the heap relation). + ShareLock is also acquired on the heap relation). These locks prevent concurrent data modification from - INSERT, UPDATE, and DELETE + INSERT, UPDATE, and DELETE commands. The locks also prevent the underlying relation from - being concurrently processed by VACUUM, as well as + being concurrently processed by VACUUM, as well as all other utility commands. Note that the function holds locks only while running, not for the entire transaction. @@ -159,13 +159,13 @@ ORDER BY c.relpages DESC LIMIT 10; - Using <filename>amcheck</> effectively + Using <filename>amcheck</filename> effectively - amcheck can be effective at detecting various types of + amcheck can be effective at detecting various types of failure modes that data page - checksums will always fail to catch. These include: + checksums will always fail to catch. These include: @@ -176,13 +176,13 @@ ORDER BY c.relpages DESC LIMIT 10; This includes issues caused by the comparison rules of operating system collations changing. Comparisons of datums of a collatable - type like text must be immutable (just as all + type like text must be immutable (just as all comparisons used for B-Tree index scans must be immutable), which implies that operating system collation rules must never change. Though rare, updates to operating system collation rules can cause these issues. More commonly, an inconsistency in the collation order between a master server and a standby server is - implicated, possibly because the major operating + implicated, possibly because the major operating system version in use is inconsistent. Such inconsistencies will generally only arise on standby servers, and so can generally only be detected on standby servers. @@ -190,25 +190,25 @@ ORDER BY c.relpages DESC LIMIT 10; If a problem like this arises, it may not affect each individual index that is ordered using an affected collation, simply because - indexed values might happen to have the same + indexed values might happen to have the same absolute ordering regardless of the behavioral inconsistency. See and for - further details about how PostgreSQL uses + further details about how PostgreSQL uses operating system locales and collations. Corruption caused by hypothetical undiscovered bugs in the - underlying PostgreSQL access method code or sort + underlying PostgreSQL access method code or sort code. Automatic verification of the structural integrity of indexes plays a role in the general testing of new or proposed - PostgreSQL features that could plausibly allow a + PostgreSQL features that could plausibly allow a logical inconsistency to be introduced. One obvious testing - strategy is to call amcheck functions continuously + strategy is to call amcheck functions continuously when running the standard regression tests. See for details on running the tests. @@ -219,12 +219,12 @@ ORDER BY c.relpages DESC LIMIT 10; simply not be enabled. - Note that amcheck examines a page as represented in some + Note that amcheck examines a page as represented in some shared memory buffer at the time of verification if there is only a shared buffer hit when accessing the block. 
Consequently, - amcheck does not necessarily examine data read from the + amcheck does not necessarily examine data read from the file system at the time of verification. Note that when checksums are - enabled, amcheck may raise an error due to a checksum + enabled, amcheck may raise an error due to a checksum failure when a corrupt block is read into a buffer. @@ -234,7 +234,7 @@ ORDER BY c.relpages DESC LIMIT 10; and operating system. - PostgreSQL does not protect against correctable + PostgreSQL does not protect against correctable memory errors and it is assumed you will operate using RAM that uses industry standard Error Correcting Codes (ECC) or better protection. However, ECC memory is typically only immune to @@ -244,7 +244,7 @@ ORDER BY c.relpages DESC LIMIT 10; - In general, amcheck can only prove the presence of + In general, amcheck can only prove the presence of corruption; it cannot prove its absence. @@ -252,19 +252,19 @@ ORDER BY c.relpages DESC LIMIT 10; Repairing corruption - No error concerning corruption raised by amcheck should - ever be a false positive. In practice, amcheck is more + No error concerning corruption raised by amcheck should + ever be a false positive. In practice, amcheck is more likely to find software bugs than problems with hardware. - amcheck raises errors in the event of conditions that, + amcheck raises errors in the event of conditions that, by definition, should never happen, and so careful analysis of - amcheck errors is often required. + amcheck errors is often required. There is no general method of repairing problems that - amcheck detects. An explanation for the root cause of + amcheck detects. An explanation for the root cause of an invariant violation should be sought. may play a useful role in diagnosing - corruption that amcheck detects. A REINDEX + corruption that amcheck detects. A REINDEX may not be effective in repairing corruption. diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml index c835e87215..5423aadb9c 100644 --- a/doc/src/sgml/arch-dev.sgml +++ b/doc/src/sgml/arch-dev.sgml @@ -118,7 +118,7 @@ PostgreSQL is implemented using a - simple process per user client/server model. In this model + simple process per user client/server model. In this model there is one client process connected to exactly one server process. As we do not know ahead of time how many connections will be made, we have to @@ -137,9 +137,9 @@ The client process can be any program that understands the PostgreSQL protocol described in . Many clients are based on the - C-language library libpq, but several independent + C-language library libpq, but several independent implementations of the protocol exist, such as the Java - JDBC driver. + JDBC driver. @@ -184,8 +184,8 @@ text) for valid syntax. If the syntax is correct a parse tree is built up and handed back; otherwise an error is returned. The parser and lexer are - implemented using the well-known Unix tools bison - and flex. + implemented using the well-known Unix tools bison + and flex. @@ -251,7 +251,7 @@ back by the parser as input and does the semantic interpretation needed to understand which tables, functions, and operators are referenced by the query. The data structure that is built to represent this - information is called the query tree. + information is called the query tree. @@ -259,10 +259,10 @@ system catalog lookups can only be done within a transaction, and we do not wish to start a transaction immediately upon receiving a query string. 
The raw parsing stage is sufficient to identify the transaction - control commands (BEGIN, ROLLBACK, etc), and + control commands (BEGIN, ROLLBACK, etc), and these can then be correctly executed without any further analysis. Once we know that we are dealing with an actual query (such as - SELECT or UPDATE), it is okay to + SELECT or UPDATE), it is okay to start a transaction if we're not already in one. Only then can the transformation process be invoked. @@ -270,10 +270,10 @@ The query tree created by the transformation process is structurally similar to the raw parse tree in most places, but it has many differences - in detail. For example, a FuncCall node in the + in detail. For example, a FuncCall node in the parse tree represents something that looks syntactically like a function - call. This might be transformed to either a FuncExpr - or Aggref node depending on whether the referenced + call. This might be transformed to either a FuncExpr + or Aggref node depending on whether the referenced name turns out to be an ordinary function or an aggregate function. Also, information about the actual data types of columns and expression results is added to the query tree. @@ -354,10 +354,10 @@ The planner's search procedure actually works with data structures - called paths, which are simply cut-down representations of + called paths, which are simply cut-down representations of plans containing only as much information as the planner needs to make its decisions. After the cheapest path is determined, a full-fledged - plan tree is built to pass to the executor. This represents + plan tree is built to pass to the executor. This represents the desired execution plan in sufficient detail for the executor to run it. In the rest of this section we'll ignore the distinction between paths and plans. @@ -378,12 +378,12 @@ relation.attribute OPR constant. If relation.attribute happens to match the key of the B-tree index and OPR is one of the operators listed in - the index's operator class, another plan is created using + the index's operator class, another plan is created using the B-tree index to scan the relation. If there are further indexes present and the restrictions in the query happen to match a key of an index, further plans will be considered. Index scan plans are also generated for indexes that have a sort ordering that can match the - query's ORDER BY clause (if any), or a sort ordering that + query's ORDER BY clause (if any), or a sort ordering that might be useful for merge joining (see below). @@ -462,9 +462,9 @@ the base relations, plus nested-loop, merge, or hash join nodes as needed, plus any auxiliary steps needed, such as sort nodes or aggregate-function calculation nodes. Most of these plan node - types have the additional ability to do selection + types have the additional ability to do selection (discarding rows that do not meet a specified Boolean condition) - and projection (computation of a derived column set + and projection (computation of a derived column set based on given column values, that is, evaluation of scalar expressions where needed). One of the responsibilities of the planner is to attach selection conditions from the @@ -496,7 +496,7 @@ subplan) is, let's say, a Sort node and again recursion is needed to obtain an input row. The child node of the Sort might - be a SeqScan node, representing actual reading of a table. + be a SeqScan node, representing actual reading of a table. 
Execution of this node causes the executor to fetch a row from the table and return it up to the calling node. The Sort node will repeatedly call its child to obtain all the rows to be sorted. @@ -529,24 +529,24 @@ The executor mechanism is used to evaluate all four basic SQL query types: - SELECT, INSERT, UPDATE, and - DELETE. For SELECT, the top-level executor + SELECT, INSERT, UPDATE, and + DELETE. For SELECT, the top-level executor code only needs to send each row returned by the query plan tree off - to the client. For INSERT, each returned row is inserted - into the target table specified for the INSERT. This is - done in a special top-level plan node called ModifyTable. + to the client. For INSERT, each returned row is inserted + into the target table specified for the INSERT. This is + done in a special top-level plan node called ModifyTable. (A simple - INSERT ... VALUES command creates a trivial plan tree - consisting of a single Result node, which computes just one - result row, and ModifyTable above it to perform the insertion. - But INSERT ... SELECT can demand the full power - of the executor mechanism.) For UPDATE, the planner arranges + INSERT ... VALUES command creates a trivial plan tree + consisting of a single Result node, which computes just one + result row, and ModifyTable above it to perform the insertion. + But INSERT ... SELECT can demand the full power + of the executor mechanism.) For UPDATE, the planner arranges that each computed row includes all the updated column values, plus - the TID (tuple ID, or row ID) of the original target row; - this data is fed into a ModifyTable node, which uses the + the TID (tuple ID, or row ID) of the original target row; + this data is fed into a ModifyTable node, which uses the information to create a new updated row and mark the old row deleted. - For DELETE, the only column that is actually returned by the - plan is the TID, and the ModifyTable node simply uses the TID + For DELETE, the only column that is actually returned by the + plan is the TID, and the ModifyTable node simply uses the TID to visit each target row and mark it deleted. diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 88eb4be04d..9187f6e02e 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -32,7 +32,7 @@ CREATE TABLE sal_emp ( ); As shown, an array data type is named by appending square brackets - ([]) to the data type name of the array elements. The + ([]) to the data type name of the array elements. The above command will create a table named sal_emp with a column of type text (name), a @@ -69,7 +69,7 @@ CREATE TABLE tictactoe ( An alternative syntax, which conforms to the SQL standard by using - the keyword ARRAY, can be used for one-dimensional arrays. + the keyword ARRAY, can be used for one-dimensional arrays. pay_by_quarter could have been defined as: @@ -79,7 +79,7 @@ CREATE TABLE tictactoe ( pay_by_quarter integer ARRAY, - As before, however, PostgreSQL does not enforce the + As before, however, PostgreSQL does not enforce the size restriction in any case. @@ -107,8 +107,8 @@ CREATE TABLE tictactoe ( for the type, as recorded in its pg_type entry. Among the standard data types provided in the PostgreSQL distribution, all use a comma - (,), except for type box which uses a semicolon - (;). Each val is + (,), except for type box which uses a semicolon + (;). Each val is either a constant of the array element type, or a subarray. 
An example of an array constant is: @@ -119,10 +119,10 @@ CREATE TABLE tictactoe ( - To set an element of an array constant to NULL, write NULL + To set an element of an array constant to NULL, write NULL for the element value. (Any upper- or lower-case variant of - NULL will do.) If you want an actual string value - NULL, you must put double quotes around it. + NULL will do.) If you want an actual string value + NULL, you must put double quotes around it. @@ -176,7 +176,7 @@ ERROR: multidimensional arrays must have array expressions with matching dimens - The ARRAY constructor syntax can also be used: + The ARRAY constructor syntax can also be used: INSERT INTO sal_emp VALUES ('Bill', @@ -190,7 +190,7 @@ INSERT INTO sal_emp Notice that the array elements are ordinary SQL constants or expressions; for instance, string literals are single quoted, instead of - double quoted as they would be in an array literal. The ARRAY + double quoted as they would be in an array literal. The ARRAY constructor syntax is discussed in more detail in . @@ -222,8 +222,8 @@ SELECT name FROM sal_emp WHERE pay_by_quarter[1] <> pay_by_quarter[2]; The array subscript numbers are written within square brackets. By default PostgreSQL uses a one-based numbering convention for arrays, that is, - an array of n elements starts with array[1] and - ends with array[n]. + an array of n elements starts with array[1] and + ends with array[n]. @@ -259,8 +259,8 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; If any dimension is written as a slice, i.e., contains a colon, then all dimensions are treated as slices. Any dimension that has only a single number (no colon) is treated as being from 1 - to the number specified. For example, [2] is treated as - [1:2], as in this example: + to the number specified. For example, [2] is treated as + [1:2], as in this example: SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill'; @@ -272,7 +272,7 @@ SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill'; To avoid confusion with the non-slice case, it's best to use slice syntax - for all dimensions, e.g., [1:2][1:1], not [2][1:1]. + for all dimensions, e.g., [1:2][1:1], not [2][1:1]. @@ -302,9 +302,9 @@ SELECT schedule[:][1:1] FROM sal_emp WHERE name = 'Bill'; An array subscript expression will return null if either the array itself or any of the subscript expressions are null. Also, null is returned if a subscript is outside the array bounds (this case does not raise an error). - For example, if schedule - currently has the dimensions [1:3][1:2] then referencing - schedule[3][3] yields NULL. Similarly, an array reference + For example, if schedule + currently has the dimensions [1:3][1:2] then referencing + schedule[3][3] yields NULL. Similarly, an array reference with the wrong number of subscripts yields a null rather than an error. @@ -423,16 +423,16 @@ UPDATE sal_emp SET pay_by_quarter[1:2] = '{27000,27000}' A stored array value can be enlarged by assigning to elements not already present. Any positions between those previously present and the newly assigned elements will be filled with nulls. For example, if array - myarray currently has 4 elements, it will have six - elements after an update that assigns to myarray[6]; - myarray[5] will contain null. + myarray currently has 4 elements, it will have six + elements after an update that assigns to myarray[6]; + myarray[5] will contain null. Currently, enlargement in this fashion is only allowed for one-dimensional arrays, not multidimensional arrays. 
Subscripted assignment allows creation of arrays that do not use one-based - subscripts. For example one might assign to myarray[-2:7] to + subscripts. For example one might assign to myarray[-2:7] to create an array with subscript values from -2 to 7. @@ -457,8 +457,8 @@ SELECT ARRAY[5,6] || ARRAY[[1,2],[3,4]]; The concatenation operator allows a single element to be pushed onto the beginning or end of a one-dimensional array. It also accepts two - N-dimensional arrays, or an N-dimensional - and an N+1-dimensional array. + N-dimensional arrays, or an N-dimensional + and an N+1-dimensional array. @@ -501,10 +501,10 @@ SELECT array_dims(ARRAY[[1,2],[3,4]] || ARRAY[[5,6],[7,8],[9,0]]); - When an N-dimensional array is pushed onto the beginning - or end of an N+1-dimensional array, the result is - analogous to the element-array case above. Each N-dimensional - sub-array is essentially an element of the N+1-dimensional + When an N-dimensional array is pushed onto the beginning + or end of an N+1-dimensional array, the result is + analogous to the element-array case above. Each N-dimensional + sub-array is essentially an element of the N+1-dimensional array's outer dimension. For example: SELECT array_dims(ARRAY[1,2] || ARRAY[[3,4],[5,6]]); @@ -587,9 +587,9 @@ SELECT array_append(ARRAY[1, 2], NULL); -- this might have been meant The heuristic it uses to resolve the constant's type is to assume it's of the same type as the operator's other input — in this case, integer array. So the concatenation operator is presumed to - represent array_cat, not array_append. When + represent array_cat, not array_append. When that's the wrong choice, it could be fixed by casting the constant to the - array's element type; but explicit use of array_append might + array's element type; but explicit use of array_append might be a preferable solution. @@ -633,7 +633,7 @@ SELECT * FROM sal_emp WHERE 10000 = ALL (pay_by_quarter); - Alternatively, the generate_subscripts function can be used. + Alternatively, the generate_subscripts function can be used. For example: @@ -648,7 +648,7 @@ SELECT * FROM - You can also search an array using the && operator, + You can also search an array using the && operator, which checks whether the left operand overlaps with the right operand. For instance: @@ -662,8 +662,8 @@ SELECT * FROM sal_emp WHERE pay_by_quarter && ARRAY[10000]; - You can also search for specific values in an array using the array_position - and array_positions functions. The former returns the subscript of + You can also search for specific values in an array using the array_position + and array_positions functions. The former returns the subscript of the first occurrence of a value in an array; the latter returns an array with the subscripts of all occurrences of the value in the array. For example: @@ -703,13 +703,13 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); The external text representation of an array value consists of items that are interpreted according to the I/O conversion rules for the array's element type, plus decoration that indicates the array structure. - The decoration consists of curly braces ({ and }) + The decoration consists of curly braces ({ and }) around the array value plus delimiter characters between adjacent items. 
- The delimiter character is usually a comma (,) but can be - something else: it is determined by the typdelim setting + The delimiter character is usually a comma (,) but can be + something else: it is determined by the typdelim setting for the array's element type. Among the standard data types provided in the PostgreSQL distribution, all use a comma, - except for type box, which uses a semicolon (;). + except for type box, which uses a semicolon (;). In a multidimensional array, each dimension (row, plane, cube, etc.) gets its own level of curly braces, and delimiters must be written between adjacent curly-braced entities of the same level. @@ -719,7 +719,7 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); The array output routine will put double quotes around element values if they are empty strings, contain curly braces, delimiter characters, double quotes, backslashes, or white space, or match the word - NULL. Double quotes and backslashes + NULL. Double quotes and backslashes embedded in element values will be backslash-escaped. For numeric data types it is safe to assume that double quotes will never appear, but for textual data types one should be prepared to cope with either the presence @@ -731,10 +731,10 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); set to one. To represent arrays with other lower bounds, the array subscript ranges can be specified explicitly before writing the array contents. - This decoration consists of square brackets ([]) + This decoration consists of square brackets ([]) around each array dimension's lower and upper bounds, with - a colon (:) delimiter character in between. The - array dimension decoration is followed by an equal sign (=). + a colon (:) delimiter character in between. The + array dimension decoration is followed by an equal sign (=). For example: SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 @@ -750,23 +750,23 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 - If the value written for an element is NULL (in any case + If the value written for an element is NULL (in any case variant), the element is taken to be NULL. The presence of any quotes or backslashes disables this and allows the literal string value - NULL to be entered. Also, for backward compatibility with - pre-8.2 versions of PostgreSQL, the NULL to be entered. Also, for backward compatibility with + pre-8.2 versions of PostgreSQL, the configuration parameter can be turned - off to suppress recognition of NULL as a NULL. + off to suppress recognition of NULL as a NULL. As shown previously, when writing an array value you can use double - quotes around any individual array element. You must do so + quotes around any individual array element. You must do so if the element value would otherwise confuse the array-value parser. For example, elements containing curly braces, commas (or the data type's delimiter character), double quotes, backslashes, or leading or trailing whitespace must be double-quoted. Empty strings and strings matching the - word NULL must be quoted, too. To put a double quote or + word NULL must be quoted, too. To put a double quote or backslash in a quoted array element value, use escape string syntax and precede it with a backslash. Alternatively, you can avoid quotes and use backslash-escaping to protect all data characters that would otherwise @@ -785,17 +785,17 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 Remember that what you write in an SQL command will first be interpreted as a string literal, and then as an array. 
This doubles the number of - backslashes you need. For example, to insert a text array + backslashes you need. For example, to insert a text array value containing a backslash and a double quote, you'd need to write: INSERT ... VALUES (E'{"\\\\","\\""}'); The escape string processor removes one level of backslashes, so that - what arrives at the array-value parser looks like {"\\","\""}. - In turn, the strings fed to the text data type's input routine - become \ and " respectively. (If we were working + what arrives at the array-value parser looks like {"\\","\""}. + In turn, the strings fed to the text data type's input routine + become \ and " respectively. (If we were working with a data type whose input routine also treated backslashes specially, - bytea for example, we might need as many as eight backslashes + bytea for example, we might need as many as eight backslashes in the command to get one backslash into the stored array element.) Dollar quoting (see ) can be used to avoid the need to double backslashes. @@ -804,10 +804,10 @@ INSERT ... VALUES (E'{"\\\\","\\""}'); - The ARRAY constructor syntax (see + The ARRAY constructor syntax (see ) is often easier to work with than the array-literal syntax when writing array values in SQL - commands. In ARRAY, individual element values are written the + commands. In ARRAY, individual element values are written the same way they would be written when not members of an array. diff --git a/doc/src/sgml/auth-delay.sgml b/doc/src/sgml/auth-delay.sgml index 9a6e3e9bb4..9221d2dfb6 100644 --- a/doc/src/sgml/auth-delay.sgml +++ b/doc/src/sgml/auth-delay.sgml @@ -18,7 +18,7 @@ In order to function, this module must be loaded via - in postgresql.conf. + in postgresql.conf. @@ -29,7 +29,7 @@ auth_delay.milliseconds (int) - auth_delay.milliseconds configuration parameter + auth_delay.milliseconds configuration parameter @@ -42,7 +42,7 @@ - These parameters must be set in postgresql.conf. + These parameters must be set in postgresql.conf. Typical usage might be: diff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml index 38e6f50c80..240098c82f 100644 --- a/doc/src/sgml/auto-explain.sgml +++ b/doc/src/sgml/auto-explain.sgml @@ -24,10 +24,10 @@ LOAD 'auto_explain'; (You must be superuser to do that.) More typical usage is to preload - it into some or all sessions by including auto_explain in + it into some or all sessions by including auto_explain in or in - postgresql.conf. Then you can track unexpectedly slow queries + postgresql.conf. Then you can track unexpectedly slow queries no matter when they happen. Of course there is a price in overhead for that. @@ -47,7 +47,7 @@ LOAD 'auto_explain'; auto_explain.log_min_duration (integer) - auto_explain.log_min_duration configuration parameter + auto_explain.log_min_duration configuration parameter @@ -66,13 +66,13 @@ LOAD 'auto_explain'; auto_explain.log_analyze (boolean) - auto_explain.log_analyze configuration parameter + auto_explain.log_analyze configuration parameter - auto_explain.log_analyze causes EXPLAIN ANALYZE - output, rather than just EXPLAIN output, to be printed + auto_explain.log_analyze causes EXPLAIN ANALYZE + output, rather than just EXPLAIN output, to be printed when an execution plan is logged. This parameter is off by default. Only superusers can change this setting. 
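Putting the two parameters above together, a hypothetical interactive session (superuser rights assumed; the threshold value is arbitrary) might look like:

psql <<'SQL'
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 250;   -- log plans for statements over 250 ms
SET auto_explain.log_analyze = on;         -- include EXPLAIN ANALYZE detail
SELECT count(*) FROM pg_class;             -- logged if it exceeds the threshold
SQL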
@@ -92,14 +92,14 @@ LOAD 'auto_explain'; auto_explain.log_buffers (boolean) - auto_explain.log_buffers configuration parameter + auto_explain.log_buffers configuration parameter auto_explain.log_buffers controls whether buffer usage statistics are printed when an execution plan is logged; it's - equivalent to the BUFFERS option of EXPLAIN. + equivalent to the BUFFERS option of EXPLAIN. This parameter has no effect unless auto_explain.log_analyze is enabled. This parameter is off by default. @@ -112,14 +112,14 @@ LOAD 'auto_explain'; auto_explain.log_timing (boolean) - auto_explain.log_timing configuration parameter + auto_explain.log_timing configuration parameter auto_explain.log_timing controls whether per-node timing information is printed when an execution plan is logged; it's - equivalent to the TIMING option of EXPLAIN. + equivalent to the TIMING option of EXPLAIN. The overhead of repeatedly reading the system clock can slow down queries significantly on some systems, so it may be useful to set this parameter to off when only actual row counts, and not exact times, are @@ -136,7 +136,7 @@ LOAD 'auto_explain'; auto_explain.log_triggers (boolean) - auto_explain.log_triggers configuration parameter + auto_explain.log_triggers configuration parameter @@ -155,14 +155,14 @@ LOAD 'auto_explain'; auto_explain.log_verbose (boolean) - auto_explain.log_verbose configuration parameter + auto_explain.log_verbose configuration parameter auto_explain.log_verbose controls whether verbose details are printed when an execution plan is logged; it's - equivalent to the VERBOSE option of EXPLAIN. + equivalent to the VERBOSE option of EXPLAIN. This parameter is off by default. Only superusers can change this setting. @@ -173,13 +173,13 @@ LOAD 'auto_explain'; auto_explain.log_format (enum) - auto_explain.log_format configuration parameter + auto_explain.log_format configuration parameter auto_explain.log_format selects the - EXPLAIN output format to be used. + EXPLAIN output format to be used. The allowed values are text, xml, json, and yaml. The default is text. Only superusers can change this setting. @@ -191,7 +191,7 @@ LOAD 'auto_explain'; auto_explain.log_nested_statements (boolean) - auto_explain.log_nested_statements configuration parameter + auto_explain.log_nested_statements configuration parameter @@ -208,7 +208,7 @@ LOAD 'auto_explain'; auto_explain.sample_rate (real) - auto_explain.sample_rate configuration parameter + auto_explain.sample_rate configuration parameter @@ -224,7 +224,7 @@ LOAD 'auto_explain'; In ordinary usage, these parameters are set - in postgresql.conf, although superusers can alter them + in postgresql.conf, although superusers can alter them on-the-fly within their own sessions. Typical usage might be: diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml index bd55e8bb77..dd9c1bff5b 100644 --- a/doc/src/sgml/backup.sgml +++ b/doc/src/sgml/backup.sgml @@ -3,10 +3,10 @@ Backup and Restore - backup + backup - As with everything that contains valuable data, PostgreSQL + As with everything that contains valuable data, PostgreSQL databases should be backed up regularly. While the procedure is essentially simple, it is important to have a clear understanding of the underlying techniques and assumptions. 
@@ -14,9 +14,9 @@ There are three fundamentally different approaches to backing up - PostgreSQL data: + PostgreSQL data: - SQL dump + SQL dump File system level backup Continuous archiving @@ -25,30 +25,30 @@ - <acronym>SQL</> Dump + <acronym>SQL</acronym> Dump The idea behind this dump method is to generate a file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. - PostgreSQL provides the utility program + PostgreSQL provides the utility program for this purpose. The basic usage of this command is: pg_dump dbname > outfile - As you see, pg_dump writes its result to the + As you see, pg_dump writes its result to the standard output. We will see below how this can be useful. - While the above command creates a text file, pg_dump + While the above command creates a text file, pg_dump can create files in other formats that allow for parallelism and more fine-grained control of object restoration. - pg_dump is a regular PostgreSQL + pg_dump is a regular PostgreSQL client application (albeit a particularly clever one). This means that you can perform this backup procedure from any remote host that has - access to the database. But remember that pg_dump + access to the database. But remember that pg_dump does not operate with special permissions. In particular, it must have read access to all tables that you want to back up, so in order to back up the entire database you almost always have to run it as a @@ -60,9 +60,9 @@ pg_dump dbname > - To specify which database server pg_dump should + To specify which database server pg_dump should contact, use the command line options ). psql + supports options similar to pg_dump for specifying the database server to connect to and the user name to use. See the reference page for more information. Non-text file dumps are restored using the dbname < - By default, the psql script will continue to + By default, the psql script will continue to execute after an SQL error is encountered. You might wish to run psql with - the ON_ERROR_STOP variable set to alter that + the ON_ERROR_STOP variable set to alter that behavior and have psql exit with an exit status of 3 if an SQL error occurs: @@ -147,8 +147,8 @@ psql --set ON_ERROR_STOP=on dbname < infile Alternatively, you can specify that the whole dump should be restored as a single transaction, so the restore is either fully completed or fully rolled back. This mode can be specified by - passing the - The ability of pg_dump and psql to + The ability of pg_dump and psql to write to or read from pipes makes it possible to dump a database directly from one server to another, for example: -pg_dump -h host1 dbname | psql -h host2 dbname +pg_dump -h host1 dbname | psql -h host2 dbname - The dumps produced by pg_dump are relative to - template0. This means that any languages, procedures, - etc. added via template1 will also be dumped by - pg_dump. As a result, when restoring, if you are - using a customized template1, you must create the - empty database from template0, as in the example + The dumps produced by pg_dump are relative to + template0. This means that any languages, procedures, + etc. added via template1 will also be dumped by + pg_dump. As a result, when restoring, if you are + using a customized template1, you must create the + empty database from template0, as in the example above. @@ -183,52 +183,52 @@ pg_dump -h host1 dbname | psql -h h see and for more information. 
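Combining the restore options above, a hypothetical dump-and-reload between two servers (host and database names are placeholders; the target database must already exist) could be run as:

pg_dump -h source-host mydb \
  | psql -h target-host -d mydb --set ON_ERROR_STOP=on --single-transaction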
For more advice on how to load large amounts of data - into PostgreSQL efficiently, refer to PostgreSQL efficiently, refer to . - Using <application>pg_dumpall</> + Using <application>pg_dumpall</application> - pg_dump dumps only a single database at a time, + pg_dump dumps only a single database at a time, and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). To support convenient dumping of the entire contents of a database cluster, the program is provided. - pg_dumpall backs up each database in a given + pg_dumpall backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is: -pg_dumpall > outfile +pg_dumpall > outfile - The resulting dump can be restored with psql: + The resulting dump can be restored with psql: psql -f infile postgres (Actually, you can specify any existing database name to start from, - but if you are loading into an empty cluster then postgres + but if you are loading into an empty cluster then postgres should usually be used.) It is always necessary to have - database superuser access when restoring a pg_dumpall + database superuser access when restoring a pg_dumpall dump, as that is required to restore the role and tablespace information. If you use tablespaces, make sure that the tablespace paths in the dump are appropriate for the new installation. - pg_dumpall works by emitting commands to re-create + pg_dumpall works by emitting commands to re-create roles, tablespaces, and empty databases, then invoking - pg_dump for each database. This means that while + pg_dump for each database. This means that while each database will be internally consistent, the snapshots of different databases are not synchronized. Cluster-wide data can be dumped alone using the - pg_dumpall option. This is necessary to fully backup the cluster if running the - pg_dump command on individual databases. + pg_dump command on individual databases. @@ -237,8 +237,8 @@ psql -f infile postgres Some operating systems have maximum file size limits that cause - problems when creating large pg_dump output files. - Fortunately, pg_dump can write to the standard + problems when creating large pg_dump output files. + Fortunately, pg_dump can write to the standard output, so you can use standard Unix tools to work around this potential problem. There are several possible methods: @@ -268,7 +268,7 @@ cat filename.gz | gunzip | psql - Use <command>split</>. + Use <command>split</command>. The split command allows you to split the output into smaller files that are @@ -288,10 +288,10 @@ cat filename* | psql - Use <application>pg_dump</>'s custom dump format. + Use <application>pg_dump</application>'s custom dump format. If PostgreSQL was built on a system with the - zlib compression library installed, the custom dump + zlib compression library installed, the custom dump format will compress data as it writes it to the output file. This will produce dump file sizes similar to using gzip, but it has the added advantage that tables can be restored selectively. 
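A minimal sketch of the custom format in use (file and database names are invented; the corresponding pg_restore invocation appears in the next hunk):

pg_dump -Fc mydb > mydb.dump    # compressed, custom-format archive
pg_restore -l mydb.dump         # list its table of contents without restoring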
The @@ -301,8 +301,8 @@ cat filename* | psql dbname > filename - A custom-format dump is not a script for psql, but - instead must be restored with pg_restore, for example: + A custom-format dump is not a script for psql, but + instead must be restored with pg_restore, for example: pg_restore -d dbname filename @@ -314,12 +314,12 @@ pg_restore -d dbname - For very large databases, you might need to combine split + For very large databases, you might need to combine split with one of the other two approaches. - Use <application>pg_dump</>'s parallel dump feature. + Use <application>pg_dump</application>'s parallel dump feature. To speed up the dump of a large database, you can use pg_dump's parallel mode. This will dump @@ -344,7 +344,7 @@ pg_dump -j num -F d -f An alternative backup strategy is to directly copy the files that - PostgreSQL uses to store the data in the database; + PostgreSQL uses to store the data in the database; explains where these files are located. You can use whatever method you prefer for doing file system backups; for example: @@ -356,13 +356,13 @@ tar -cf backup.tar /usr/local/pgsql/data There are two restrictions, however, which make this method - impractical, or at least inferior to the pg_dump + impractical, or at least inferior to the pg_dump method: - The database server must be shut down in order to + The database server must be shut down in order to get a usable backup. Half-way measures such as disallowing all connections will not work (in part because tar and similar tools do not take @@ -379,7 +379,7 @@ tar -cf backup.tar /usr/local/pgsql/data If you have dug into the details of the file system layout of the database, you might be tempted to try to back up or restore only certain individual tables or databases from their respective files or - directories. This will not work because the + directories. This will not work because the information contained in these files is not usable without the commit log files, pg_xact/*, which contain the commit status of @@ -399,7 +399,7 @@ tar -cf backup.tar /usr/local/pgsql/data consistent snapshot of the data directory, if the file system supports that functionality (and you are willing to trust that it is implemented correctly). The typical procedure is - to make a frozen snapshot of the volume containing the + to make a frozen snapshot of the volume containing the database, then copy the whole data directory (not just parts, see above) from the snapshot to a backup device, then release the frozen snapshot. This will work even while the database server is running. @@ -419,7 +419,7 @@ tar -cf backup.tar /usr/local/pgsql/data the volumes. For example, if your data files and WAL log are on different disks, or if tablespaces are on different file systems, it might not be possible to use snapshot backup because the snapshots - must be simultaneous. + must be simultaneous. Read your file system documentation very carefully before trusting the consistent-snapshot technique in such situations. @@ -435,13 +435,13 @@ tar -cf backup.tar /usr/local/pgsql/data - Another option is to use rsync to perform a file - system backup. This is done by first running rsync + Another option is to use rsync to perform a file + system backup. This is done by first running rsync while the database server is running, then shutting down the database - server long enough to do an rsync --checksum. - ( @@ -508,7 +508,7 @@ tar -cf backup.tar /usr/local/pgsql/data It is not necessary to replay the WAL entries all the way to the end. 
We could stop the replay at any point and have a consistent snapshot of the database as it was at that time. Thus, - this technique supports point-in-time recovery: it is + this technique supports point-in-time recovery: it is possible to restore the database to its state at any time since your base backup was taken. @@ -517,7 +517,7 @@ tar -cf backup.tar /usr/local/pgsql/data If we continuously feed the series of WAL files to another machine that has been loaded with the same base backup file, we - have a warm standby system: at any point we can bring up + have a warm standby system: at any point we can bring up the second machine and it will have a nearly-current copy of the database. @@ -530,7 +530,7 @@ tar -cf backup.tar /usr/local/pgsql/data pg_dump and pg_dumpall do not produce file-system-level backups and cannot be used as part of a continuous-archiving solution. - Such dumps are logical and do not contain enough + Such dumps are logical and do not contain enough information to be used by WAL replay. @@ -546,10 +546,10 @@ tar -cf backup.tar /usr/local/pgsql/data To recover successfully using continuous archiving (also called - online backup by many database vendors), you need a continuous + online backup by many database vendors), you need a continuous sequence of archived WAL files that extends back at least as far as the start time of your backup. So to get started, you should set up and test - your procedure for archiving WAL files before you take your + your procedure for archiving WAL files before you take your first base backup. Accordingly, we first discuss the mechanics of archiving WAL files. @@ -558,15 +558,15 @@ tar -cf backup.tar /usr/local/pgsql/data Setting Up WAL Archiving - In an abstract sense, a running PostgreSQL system + In an abstract sense, a running PostgreSQL system produces an indefinitely long sequence of WAL records. The system physically divides this sequence into WAL segment - files, which are normally 16MB apiece (although the segment size - can be altered during initdb). The segment + files, which are normally 16MB apiece (although the segment size + can be altered during initdb). The segment files are given numeric names that reflect their position in the abstract WAL sequence. When not using WAL archiving, the system normally creates just a few segment files and then - recycles them by renaming no-longer-needed segment files + recycles them by renaming no-longer-needed segment files to higher segment numbers. It's assumed that segment files whose contents precede the checkpoint-before-last are no longer of interest and can be recycled. @@ -577,33 +577,33 @@ tar -cf backup.tar /usr/local/pgsql/data file once it is filled, and save that data somewhere before the segment file is recycled for reuse. Depending on the application and the available hardware, there could be many different ways of saving - the data somewhere: we could copy the segment files to an NFS-mounted + the data somewhere: we could copy the segment files to an NFS-mounted directory on another machine, write them onto a tape drive (ensuring that you have a way of identifying the original name of each file), or batch them together and burn them onto CDs, or something else entirely. To provide the database administrator with flexibility, - PostgreSQL tries not to make any assumptions about how - the archiving will be done. Instead, PostgreSQL lets + PostgreSQL tries not to make any assumptions about how + the archiving will be done. 
Instead, PostgreSQL lets the administrator specify a shell command to be executed to copy a completed segment file to wherever it needs to go. The command could be - as simple as a cp, or it could invoke a complex shell + as simple as a cp, or it could invoke a complex shell script — it's all up to you. To enable WAL archiving, set the - configuration parameter to replica or higher, - to on, + configuration parameter to replica or higher, + to on, and specify the shell command to use in the configuration parameter. In practice these settings will always be placed in the postgresql.conf file. - In archive_command, - %p is replaced by the path name of the file to - archive, while %f is replaced by only the file name. + In archive_command, + %p is replaced by the path name of the file to + archive, while %f is replaced by only the file name. (The path name is relative to the current working directory, i.e., the cluster's data directory.) - Use %% if you need to embed an actual % + Use %% if you need to embed an actual % character in the command. The simplest useful command is something like: @@ -611,9 +611,9 @@ archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/ser archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows which will copy archivable WAL segments to the directory - /mnt/server/archivedir. (This is an example, not a + /mnt/server/archivedir. (This is an example, not a recommendation, and might not work on all platforms.) After the - %p and %f parameters have been replaced, + %p and %f parameters have been replaced, the actual command executed might look like this: test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065 @@ -623,7 +623,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 The archive command will be executed under the ownership of the same - user that the PostgreSQL server is running as. Since + user that the PostgreSQL server is running as. Since the series of WAL files being archived contains effectively everything in your database, you will want to be sure that the archived data is protected from prying eyes; for example, archive into a directory that @@ -633,9 +633,9 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 It is important that the archive command return zero exit status if and only if it succeeds. Upon getting a zero result, - PostgreSQL will assume that the file has been + PostgreSQL will assume that the file has been successfully archived, and will remove or recycle it. However, a nonzero - status tells PostgreSQL that the file was not archived; + status tells PostgreSQL that the file was not archived; it will try again periodically until it succeeds. @@ -650,14 +650,14 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 It is advisable to test your proposed archive command to ensure that it indeed does not overwrite an existing file, and that it returns - nonzero status in this case. + nonzero status in this case. The example command above for Unix ensures this by including a separate - test step. On some Unix platforms, cp has - switches such as that can be used to do the same thing less verbosely, but you should not rely on these without verifying that - the right exit status is returned. (In particular, GNU cp - will return status zero when @@ -668,10 +668,10 @@ test ! 
-f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 fills, nothing further can be archived until the tape is swapped. You should ensure that any error condition or request to a human operator is reported appropriately so that the situation can be - resolved reasonably quickly. The pg_wal/ directory will + resolved reasonably quickly. The pg_wal/ directory will continue to fill with WAL segment files until the situation is resolved. - (If the file system containing pg_wal/ fills up, - PostgreSQL will do a PANIC shutdown. No committed + (If the file system containing pg_wal/ fills up, + PostgreSQL will do a PANIC shutdown. No committed transactions will be lost, but the database will remain offline until you free some space.) @@ -682,7 +682,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 operation continues even if the archiving process falls a little behind. If archiving falls significantly behind, this will increase the amount of data that would be lost in the event of a disaster. It will also mean that - the pg_wal/ directory will contain large numbers of + the pg_wal/ directory will contain large numbers of not-yet-archived segment files, which could eventually exceed available disk space. You are advised to monitor the archiving process to ensure that it is working as you intend. @@ -692,16 +692,16 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 In writing your archive command, you should assume that the file names to be archived can be up to 64 characters long and can contain any combination of ASCII letters, digits, and dots. It is not necessary to - preserve the original relative path (%p) but it is necessary to - preserve the file name (%f). + preserve the original relative path (%p) but it is necessary to + preserve the file name (%f). Note that although WAL archiving will allow you to restore any - modifications made to the data in your PostgreSQL database, + modifications made to the data in your PostgreSQL database, it will not restore changes made to configuration files (that is, - postgresql.conf, pg_hba.conf and - pg_ident.conf), since those are edited manually rather + postgresql.conf, pg_hba.conf and + pg_ident.conf), since those are edited manually rather than through SQL operations. You might wish to keep the configuration files in a location that will be backed up by your regular file system backup procedures. See @@ -719,32 +719,32 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 to a new WAL segment file at least that often. Note that archived files that are archived early due to a forced switch are still the same length as completely full files. It is therefore unwise to set a very - short archive_timeout — it will bloat your archive - storage. archive_timeout settings of a minute or so are + short archive_timeout — it will bloat your archive + storage. archive_timeout settings of a minute or so are usually reasonable. Also, you can force a segment switch manually with - pg_switch_wal if you want to ensure that a + pg_switch_wal if you want to ensure that a just-finished transaction is archived as soon as possible. Other utility functions related to WAL management are listed in . - When wal_level is minimal some SQL commands + When wal_level is minimal some SQL commands are optimized to avoid WAL logging, as described in . 
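For example, a hypothetical script could force the current segment out to the archive right after an important batch completes:

psql -c "SELECT pg_switch_wal();"   # finish the current segment so it becomes archivable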
If archiving or streaming replication were turned on during execution of one of these statements, WAL would not contain enough information for archive recovery. (Crash recovery is - unaffected.) For this reason, wal_level can only be changed at - server start. However, archive_command can be changed with a + unaffected.) For this reason, wal_level can only be changed at + server start. However, archive_command can be changed with a configuration file reload. If you wish to temporarily stop archiving, - one way to do it is to set archive_command to the empty - string (''). - This will cause WAL files to accumulate in pg_wal/ until a - working archive_command is re-established. + one way to do it is to set archive_command to the empty + string (''). + This will cause WAL files to accumulate in pg_wal/ until a + working archive_command is re-established. @@ -763,8 +763,8 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 It is not necessary to be concerned about the amount of time it takes to make a base backup. However, if you normally run the - server with full_page_writes disabled, you might notice a drop - in performance while the backup runs since full_page_writes is + server with full_page_writes disabled, you might notice a drop + in performance while the backup runs since full_page_writes is effectively forced on during backup mode. @@ -772,13 +772,13 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 To make use of the backup, you will need to keep all the WAL segment files generated during and after the file system backup. To aid you in doing this, the base backup process - creates a backup history file that is immediately + creates a backup history file that is immediately stored into the WAL archive area. This file is named after the first WAL segment file that you need for the file system backup. For example, if the starting WAL file is - 0000000100001234000055CD the backup history file will be + 0000000100001234000055CD the backup history file will be named something like - 0000000100001234000055CD.007C9330.backup. (The second + 0000000100001234000055CD.007C9330.backup. (The second part of the file name stands for an exact position within the WAL file, and can ordinarily be ignored.) Once you have safely archived the file system backup and the WAL segment files used during the @@ -847,14 +847,14 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 SELECT pg_start_backup('label', false, false); - where label is any string you want to use to uniquely + where label is any string you want to use to uniquely identify this backup operation. The connection - calling pg_start_backup must be maintained until the end of + calling pg_start_backup must be maintained until the end of the backup, or the backup will be automatically aborted. - By default, pg_start_backup can take a long time to finish. + By default, pg_start_backup can take a long time to finish. This is because it performs a checkpoint, and the I/O required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval @@ -862,19 +862,19 @@ SELECT pg_start_backup('label', false, false); ). This is usually what you want, because it minimizes the impact on query processing. If you want to start the backup as soon as - possible, change the second parameter to true, which will + possible, change the second parameter to true, which will issue an immediate checkpoint using as much I/O as available. 
- The third parameter being false tells - pg_start_backup to initiate a non-exclusive base backup. + The third parameter being false tells + pg_start_backup to initiate a non-exclusive base backup. Perform the backup, using any convenient file-system-backup tool - such as tar or cpio (not + such as tar or cpio (not pg_dump or pg_dumpall). It is neither necessary nor desirable to stop normal operation of the database @@ -898,45 +898,45 @@ SELECT * FROM pg_stop_backup(false, true); ready to archive. - The pg_stop_backup will return one row with three + The pg_stop_backup will return one row with three values. The second of these fields should be written to a file named - backup_label in the root directory of the backup. The + backup_label in the root directory of the backup. The third field should be written to a file named - tablespace_map unless the field is empty. These files are + tablespace_map unless the field is empty. These files are vital to the backup working, and must be written without modification. Once the WAL segment files active during the backup are archived, you are - done. The file identified by pg_stop_backup's first return + done. The file identified by pg_stop_backup's first return value is the last segment that is required to form a complete set of - backup files. On a primary, if archive_mode is enabled and the - wait_for_archive parameter is true, - pg_stop_backup does not return until the last segment has + backup files. On a primary, if archive_mode is enabled and the + wait_for_archive parameter is true, + pg_stop_backup does not return until the last segment has been archived. - On a standby, archive_mode must be always in order - for pg_stop_backup to wait. + On a standby, archive_mode must be always in order + for pg_stop_backup to wait. Archiving of these files happens automatically since you have - already configured archive_command. In most cases this + already configured archive_command. In most cases this happens quickly, but you are advised to monitor your archive system to ensure there are no delays. If the archive process has fallen behind because of failures of the archive command, it will keep retrying until the archive succeeds and the backup is complete. If you wish to place a time limit on the execution of - pg_stop_backup, set an appropriate + pg_stop_backup, set an appropriate statement_timeout value, but make note that if - pg_stop_backup terminates because of this your backup + pg_stop_backup terminates because of this your backup may not be valid. If the backup process monitors and ensures that all WAL segment files required for the backup are successfully archived then the - wait_for_archive parameter (which defaults to true) can be set + wait_for_archive parameter (which defaults to true) can be set to false to have - pg_stop_backup return as soon as the stop backup record is - written to the WAL. By default, pg_stop_backup will wait + pg_stop_backup return as soon as the stop backup record is + written to the WAL. By default, pg_stop_backup will wait until all WAL has been archived, which can take some time. This option must be used with caution: if WAL archiving is not monitored correctly then the backup might not include all of the WAL files and will @@ -952,7 +952,7 @@ SELECT * FROM pg_stop_backup(false, true); The process for an exclusive backup is mostly the same as for a non-exclusive one, but it differs in a few key steps. This type of backup can only be taken on a primary and does not allow concurrent backups. 
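As a rough end-to-end sketch of the non-exclusive procedure above (the label, paths, and connection details are placeholders; the session issuing pg_start_backup must stay open for the whole copy):

psql <<'SQL'
SELECT pg_start_backup('nightly', false, false);
\! tar -cf /backups/base.tar /usr/local/pgsql/data
SELECT * FROM pg_stop_backup(false, true);
-- Store the second and third columns of the result as backup_label and
-- tablespace_map files inside the backup, as described above.
SQL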
- Prior to PostgreSQL 9.6, this + Prior to PostgreSQL 9.6, this was the only low-level method available, but it is now recommended that all users upgrade their scripts to use non-exclusive backups if possible. @@ -971,20 +971,20 @@ SELECT * FROM pg_stop_backup(false, true); SELECT pg_start_backup('label'); - where label is any string you want to use to uniquely + where label is any string you want to use to uniquely identify this backup operation. - pg_start_backup creates a backup label file, - called backup_label, in the cluster directory with + pg_start_backup creates a backup label file, + called backup_label, in the cluster directory with information about your backup, including the start time and label string. - The function also creates a tablespace map file, - called tablespace_map, in the cluster directory with - information about tablespace symbolic links in pg_tblspc/ if + The function also creates a tablespace map file, + called tablespace_map, in the cluster directory with + information about tablespace symbolic links in pg_tblspc/ if one or more such link is present. Both files are critical to the integrity of the backup, should you need to restore from it. - By default, pg_start_backup can take a long time to finish. + By default, pg_start_backup can take a long time to finish. This is because it performs a checkpoint, and the I/O required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval @@ -1002,7 +1002,7 @@ SELECT pg_start_backup('label', true); Perform the backup, using any convenient file-system-backup tool - such as tar or cpio (not + such as tar or cpio (not pg_dump or pg_dumpall). It is neither necessary nor desirable to stop normal operation of the database @@ -1012,7 +1012,7 @@ SELECT pg_start_backup('label', true); Note that if the server crashes during the backup it may not be - possible to restart until the backup_label file has been + possible to restart until the backup_label file has been manually deleted from the PGDATA directory. @@ -1033,22 +1033,22 @@ SELECT pg_stop_backup(); Once the WAL segment files active during the backup are archived, you are - done. The file identified by pg_stop_backup's result is + done. The file identified by pg_stop_backup's result is the last segment that is required to form a complete set of backup files. - If archive_mode is enabled, - pg_stop_backup does not return until the last segment has + If archive_mode is enabled, + pg_stop_backup does not return until the last segment has been archived. Archiving of these files happens automatically since you have - already configured archive_command. In most cases this + already configured archive_command. In most cases this happens quickly, but you are advised to monitor your archive system to ensure there are no delays. If the archive process has fallen behind because of failures of the archive command, it will keep retrying until the archive succeeds and the backup is complete. If you wish to place a time limit on the execution of - pg_stop_backup, set an appropriate + pg_stop_backup, set an appropriate statement_timeout value, but make note that if - pg_stop_backup terminates because of this your backup + pg_stop_backup terminates because of this your backup may not be valid. @@ -1063,21 +1063,21 @@ SELECT pg_stop_backup(); When taking a base backup of an active database, this situation is normal and not an error. However, you need to ensure that you can distinguish complaints of this sort from real errors. 
For example, some versions - of rsync return a separate exit code for - vanished source files, and you can write a driver script to + of rsync return a separate exit code for + vanished source files, and you can write a driver script to accept this exit code as a non-error case. Also, some versions of - GNU tar return an error code indistinguishable from - a fatal error if a file was truncated while tar was - copying it. Fortunately, GNU tar versions 1.16 and + GNU tar return an error code indistinguishable from + a fatal error if a file was truncated while tar was + copying it. Fortunately, GNU tar versions 1.16 and later exit with 1 if a file was changed during the backup, - and 2 for other errors. With GNU tar version 1.23 and + and 2 for other errors. With GNU tar version 1.23 and later, you can use the warning options --warning=no-file-changed --warning=no-file-removed to hide the related warning messages. Be certain that your backup includes all of the files under - the database cluster directory (e.g., /usr/local/pgsql/data). + the database cluster directory (e.g., /usr/local/pgsql/data). If you are using tablespaces that do not reside underneath this directory, be careful to include them as well (and be sure that your backup archives symbolic links as links, otherwise the restore will corrupt @@ -1086,21 +1086,21 @@ SELECT pg_stop_backup(); You should, however, omit from the backup the files within the - cluster's pg_wal/ subdirectory. This + cluster's pg_wal/ subdirectory. This slight adjustment is worthwhile because it reduces the risk of mistakes when restoring. This is easy to arrange if - pg_wal/ is a symbolic link pointing to someplace outside + pg_wal/ is a symbolic link pointing to someplace outside the cluster directory, which is a common setup anyway for performance - reasons. You might also want to exclude postmaster.pid - and postmaster.opts, which record information - about the running postmaster, not about the - postmaster which will eventually use this backup. - (These files can confuse pg_ctl.) + reasons. You might also want to exclude postmaster.pid + and postmaster.opts, which record information + about the running postmaster, not about the + postmaster which will eventually use this backup. + (These files can confuse pg_ctl.) It is often a good idea to also omit from the backup the files - within the cluster's pg_replslot/ directory, so that + within the cluster's pg_replslot/ directory, so that replication slots that exist on the master do not become part of the backup. Otherwise, the subsequent use of the backup to create a standby may result in indefinite retention of WAL files on the standby, and @@ -1114,10 +1114,10 @@ SELECT pg_stop_backup(); - The contents of the directories pg_dynshmem/, - pg_notify/, pg_serial/, - pg_snapshots/, pg_stat_tmp/, - and pg_subtrans/ (but not the directories themselves) can be + The contents of the directories pg_dynshmem/, + pg_notify/, pg_serial/, + pg_snapshots/, pg_stat_tmp/, + and pg_subtrans/ (but not the directories themselves) can be omitted from the backup as they will be initialized on postmaster startup. If is set and is under the data directory then the contents of that directory can also be omitted. 
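For illustration, a copy that skips the files and directories discussed above might look like this (the data directory location is a placeholder, and exclude-pattern syntax varies between tar implementations):

cd /usr/local/pgsql
tar -cf /backups/base.tar \
    --exclude='data/pg_wal/*' \
    --exclude='data/pg_replslot/*' \
    --exclude='data/postmaster.pid' \
    --exclude='data/postmaster.opts' \
    data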
@@ -1131,13 +1131,13 @@ SELECT pg_stop_backup(); The backup label - file includes the label string you gave to pg_start_backup, - as well as the time at which pg_start_backup was run, and + file includes the label string you gave to pg_start_backup, + as well as the time at which pg_start_backup was run, and the name of the starting WAL file. In case of confusion it is therefore possible to look inside a backup file and determine exactly which backup session the dump file came from. The tablespace map file includes the symbolic link names as they exist in the directory - pg_tblspc/ and the full path of each symbolic link. + pg_tblspc/ and the full path of each symbolic link. These files are not merely for your information; their presence and contents are critical to the proper operation of the system's recovery process. @@ -1146,7 +1146,7 @@ SELECT pg_stop_backup(); It is also possible to make a backup while the server is stopped. In this case, you obviously cannot use - pg_start_backup or pg_stop_backup, and + pg_start_backup or pg_stop_backup, and you will therefore be left to your own devices to keep track of which backup is which and how far back the associated WAL files go. It is generally better to follow the continuous archiving procedure above. @@ -1173,7 +1173,7 @@ SELECT pg_stop_backup(); location in case you need them later. Note that this precaution will require that you have enough free space on your system to hold two copies of your existing database. If you do not have enough space, - you should at least save the contents of the cluster's pg_wal + you should at least save the contents of the cluster's pg_wal subdirectory, as it might contain logs which were not archived before the system went down. @@ -1188,17 +1188,17 @@ SELECT pg_stop_backup(); Restore the database files from your file system backup. Be sure that they are restored with the right ownership (the database system user, not - root!) and with the right permissions. If you are using + root!) and with the right permissions. If you are using tablespaces, - you should verify that the symbolic links in pg_tblspc/ + you should verify that the symbolic links in pg_tblspc/ were correctly restored. - Remove any files present in pg_wal/; these came from the + Remove any files present in pg_wal/; these came from the file system backup and are therefore probably obsolete rather than current. - If you didn't archive pg_wal/ at all, then recreate + If you didn't archive pg_wal/ at all, then recreate it with proper permissions, being careful to ensure that you re-establish it as a symbolic link if you had it set up that way before. @@ -1207,16 +1207,16 @@ SELECT pg_stop_backup(); If you have unarchived WAL segment files that you saved in step 2, - copy them into pg_wal/. (It is best to copy them, + copy them into pg_wal/. (It is best to copy them, not move them, so you still have the unmodified files if a problem occurs and you have to start over.) - Create a recovery command file recovery.conf in the cluster + Create a recovery command file recovery.conf in the cluster data directory (see ). You might - also want to temporarily modify pg_hba.conf to prevent + also want to temporarily modify pg_hba.conf to prevent ordinary users from connecting until you are sure the recovery was successful. @@ -1227,7 +1227,7 @@ SELECT pg_stop_backup(); recovery be terminated because of an external error, the server can simply be restarted and it will continue recovery. 
Upon completion of the recovery process, the server will rename - recovery.conf to recovery.done (to prevent + recovery.conf to recovery.done (to prevent accidentally re-entering recovery mode later) and then commence normal database operations. @@ -1236,7 +1236,7 @@ SELECT pg_stop_backup(); Inspect the contents of the database to ensure you have recovered to the desired state. If not, return to step 1. If all is well, - allow your users to connect by restoring pg_hba.conf to normal. + allow your users to connect by restoring pg_hba.conf to normal. @@ -1245,32 +1245,32 @@ SELECT pg_stop_backup(); The key part of all this is to set up a recovery configuration file that describes how you want to recover and how far the recovery should - run. You can use recovery.conf.sample (normally - located in the installation's share/ directory) as a + run. You can use recovery.conf.sample (normally + located in the installation's share/ directory) as a prototype. The one thing that you absolutely must specify in - recovery.conf is the restore_command, - which tells PostgreSQL how to retrieve archived - WAL file segments. Like the archive_command, this is - a shell command string. It can contain %f, which is - replaced by the name of the desired log file, and %p, + recovery.conf is the restore_command, + which tells PostgreSQL how to retrieve archived + WAL file segments. Like the archive_command, this is + a shell command string. It can contain %f, which is + replaced by the name of the desired log file, and %p, which is replaced by the path name to copy the log file to. (The path name is relative to the current working directory, i.e., the cluster's data directory.) - Write %% if you need to embed an actual % + Write %% if you need to embed an actual % character in the command. The simplest useful command is something like: restore_command = 'cp /mnt/server/archivedir/%f %p' which will copy previously archived WAL segments from the directory - /mnt/server/archivedir. Of course, you can use something + /mnt/server/archivedir. Of course, you can use something much more complicated, perhaps even a shell script that requests the operator to mount an appropriate tape. It is important that the command return nonzero exit status on failure. - The command will be called requesting files that are not + The command will be called requesting files that are not present in the archive; it must return nonzero when so asked. This is not an error condition. An exception is that if the command was terminated by a signal (other than SIGTERM, which is used as @@ -1282,27 +1282,27 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' Not all of the requested files will be WAL segment files; you should also expect requests for files with a suffix of - .backup or .history. Also be aware that - the base name of the %p path will be different from - %f; do not expect them to be interchangeable. + .backup or .history. Also be aware that + the base name of the %p path will be different from + %f; do not expect them to be interchangeable. WAL segments that cannot be found in the archive will be sought in - pg_wal/; this allows use of recent un-archived segments. + pg_wal/; this allows use of recent un-archived segments. However, segments that are available from the archive will be used in - preference to files in pg_wal/. + preference to files in pg_wal/. 
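A minimal recovery.conf matching the description above could be created as follows (assuming $PGDATA points at the restored cluster directory; the archive path reuses the example location from the text):

cat > "$PGDATA/recovery.conf" <<'EOF'
restore_command = 'cp /mnt/server/archivedir/%f %p'
# recovery_target_time = '2017-10-08 17:14:00'   # optional stop point, see below
EOF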
Normally, recovery will proceed through all available WAL segments, thereby restoring the database to the current point in time (or as close as possible given the available WAL segments). Therefore, a normal - recovery will end with a file not found message, the exact text + recovery will end with a file not found message, the exact text of the error message depending upon your choice of - restore_command. You may also see an error message + restore_command. You may also see an error message at the start of recovery for a file named something like - 00000001.history. This is also normal and does not + 00000001.history. This is also normal and does not indicate a problem in simple recovery situations; see for discussion. @@ -1310,8 +1310,8 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' If you want to recover to some previous point in time (say, right before the junior DBA dropped your main transaction table), just specify the - required stopping point in recovery.conf. You can specify - the stop point, known as the recovery target, either by + required stopping point in recovery.conf. You can specify + the stop point, known as the recovery target, either by date/time, named restore point or by completion of a specific transaction ID. As of this writing only the date/time and named restore point options are very usable, since there are no tools to help you identify with any @@ -1321,7 +1321,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' The stop point must be after the ending time of the base backup, i.e., - the end time of pg_stop_backup. You cannot use a base backup + the end time of pg_stop_backup. You cannot use a base backup to recover to a time when that backup was in progress. (To recover to such a time, you must go back to your previous base backup and roll forward from there.) @@ -1332,14 +1332,14 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' If recovery finds corrupted WAL data, recovery will halt at that point and the server will not start. In such a case the recovery process could be re-run from the beginning, specifying a - recovery target before the point of corruption so that recovery + recovery target before the point of corruption so that recovery can complete normally. If recovery fails for an external reason, such as a system crash or if the WAL archive has become inaccessible, then the recovery can simply be restarted and it will restart almost from where it failed. Recovery restart works much like checkpointing in normal operation: the server periodically forces all its state to disk, and then updates - the pg_control file to indicate that the already-processed + the pg_control file to indicate that the already-processed WAL data need not be scanned again. @@ -1359,7 +1359,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' suppose you dropped a critical table at 5:15PM on Tuesday evening, but didn't realize your mistake until Wednesday noon. Unfazed, you get out your backup, restore to the point-in-time 5:14PM - Tuesday evening, and are up and running. In this history of + Tuesday evening, and are up and running. In this history of the database universe, you never dropped the table. But suppose you later realize this wasn't such a great idea, and would like to return to sometime Wednesday morning in the original history. @@ -1372,8 +1372,8 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' - To deal with this problem, PostgreSQL has a notion - of timelines. 
Whenever an archive recovery completes, + To deal with this problem, PostgreSQL has a notion + of timelines. Whenever an archive recovery completes, a new timeline is created to identify the series of WAL records generated after that recovery. The timeline ID number is part of WAL segment file names so a new timeline does @@ -1384,13 +1384,13 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' and so have to do several point-in-time recoveries by trial and error until you find the best place to branch off from the old history. Without timelines this process would soon generate an unmanageable mess. With - timelines, you can recover to any prior state, including + timelines, you can recover to any prior state, including states in timeline branches that you abandoned earlier. - Every time a new timeline is created, PostgreSQL creates - a timeline history file that shows which timeline it branched + Every time a new timeline is created, PostgreSQL creates + a timeline history file that shows which timeline it branched off from and when. These history files are necessary to allow the system to pick the right WAL segment files when recovering from an archive that contains multiple timelines. Therefore, they are archived into the WAL @@ -1408,7 +1408,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' that was current when the base backup was taken. If you wish to recover into some child timeline (that is, you want to return to some state that was itself generated after a recovery attempt), you need to specify the - target timeline ID in recovery.conf. You cannot recover into + target timeline ID in recovery.conf. You cannot recover into timelines that branched off earlier than the base backup. @@ -1424,18 +1424,18 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' Standalone Hot Backups - It is possible to use PostgreSQL's backup facilities to + It is possible to use PostgreSQL's backup facilities to produce standalone hot backups. These are backups that cannot be used for point-in-time recovery, yet are typically much faster to backup and - restore than pg_dump dumps. (They are also much larger - than pg_dump dumps, so in some cases the speed advantage + restore than pg_dump dumps. (They are also much larger + than pg_dump dumps, so in some cases the speed advantage might be negated.) As with base backups, the easiest way to produce a standalone hot backup is to use the - tool. If you include the -X parameter when calling + tool. If you include the -X parameter when calling it, all the write-ahead log required to use the backup will be included in the backup automatically, and no special action is required to restore the backup. @@ -1445,16 +1445,16 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' If more flexibility in copying the backup files is needed, a lower level process can be used for standalone hot backups as well. To prepare for low level standalone hot backups, make sure - wal_level is set to - replica or higher, archive_mode to - on, and set up an archive_command that performs - archiving only when a switch file exists. For example: + wal_level is set to + replica or higher, archive_mode to + on, and set up an archive_command that performs + archiving only when a switch file exists. For example: archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || (test ! 
-f /var/lib/pgsql/archive/%f && cp %p /var/lib/pgsql/archive/%f)' This command will perform archiving when - /var/lib/pgsql/backup_in_progress exists, and otherwise - silently return zero exit status (allowing PostgreSQL + /var/lib/pgsql/backup_in_progress exists, and otherwise + silently return zero exit status (allowing PostgreSQL to recycle the unwanted WAL file). @@ -1469,11 +1469,11 @@ psql -c "select pg_stop_backup();" rm /var/lib/pgsql/backup_in_progress tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/ - The switch file /var/lib/pgsql/backup_in_progress is + The switch file /var/lib/pgsql/backup_in_progress is created first, enabling archiving of completed WAL files to occur. After the backup the switch file is removed. Archived WAL files are then added to the backup so that both base backup and all required - WAL files are part of the same tar file. + WAL files are part of the same tar file. Please remember to add error handling to your backup scripts. @@ -1488,7 +1488,7 @@ tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/ archive_command = 'gzip < %p > /var/lib/pgsql/archive/%f' - You will then need to use gunzip during recovery: + You will then need to use gunzip during recovery: restore_command = 'gunzip < /mnt/server/archivedir/%f > %p' @@ -1501,7 +1501,7 @@ restore_command = 'gunzip < /mnt/server/archivedir/%f > %p' Many people choose to use scripts to define their archive_command, so that their - postgresql.conf entry looks very simple: + postgresql.conf entry looks very simple: archive_command = 'local_backup_script.sh "%p" "%f"' @@ -1509,7 +1509,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' more than a single command in the archiving process. This allows all complexity to be managed within the script, which can be written in a popular scripting language such as - bash or perl. + bash or perl. @@ -1543,7 +1543,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' When using an archive_command script, it's desirable to enable . - Any messages written to stderr from the script will then + Any messages written to stderr from the script will then appear in the database server log, allowing complex configurations to be diagnosed easily if they fail. @@ -1563,7 +1563,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' If a command is executed while a base backup is being taken, and then - the template database that the CREATE DATABASE copied + the template database that the CREATE DATABASE copied is modified while the base backup is still in progress, it is possible that recovery will cause those modifications to be propagated into the created database as well. This is of course @@ -1602,7 +1602,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' before you do so.) Turning off page snapshots does not prevent use of the logs for PITR operations. An area for future development is to compress archived WAL data by removing - unnecessary page copies even when full_page_writes is + unnecessary page copies even when full_page_writes is on. In the meantime, administrators might wish to reduce the number of page snapshots included in WAL by increasing the checkpoint interval parameters as much as feasible. diff --git a/doc/src/sgml/bgworker.sgml b/doc/src/sgml/bgworker.sgml index ea1b5c0c8e..0b092f6e49 100644 --- a/doc/src/sgml/bgworker.sgml +++ b/doc/src/sgml/bgworker.sgml @@ -11,17 +11,17 @@ PostgreSQL can be extended to run user-supplied code in separate processes. 
Such processes are started, stopped and monitored by postgres, which permits them to have a lifetime closely linked to the server's status. - These processes have the option to attach to PostgreSQL's + These processes have the option to attach to PostgreSQL's shared memory area and to connect to databases internally; they can also run multiple transactions serially, just like a regular client-connected server - process. Also, by linking to libpq they can connect to the + process. Also, by linking to libpq they can connect to the server and behave like a regular client application. There are considerable robustness and security risks in using background - worker processes because, being written in the C language, + worker processes because, being written in the C language, they have unrestricted access to data. Administrators wishing to enable modules that include background worker process should exercise extreme caution. Only carefully audited modules should be permitted to run @@ -31,15 +31,15 @@ Background workers can be initialized at the time that - PostgreSQL is started by including the module name in - shared_preload_libraries. A module wishing to run a background + PostgreSQL is started by including the module name in + shared_preload_libraries. A module wishing to run a background worker can register it by calling RegisterBackgroundWorker(BackgroundWorker *worker) - from its _PG_init(). Background workers can also be started + from its _PG_init(). Background workers can also be started after the system is up and running by calling the function RegisterDynamicBackgroundWorker(BackgroundWorker *worker, BackgroundWorkerHandle **handle). Unlike - RegisterBackgroundWorker, which can only be called from within + RegisterBackgroundWorker, which can only be called from within the postmaster, RegisterDynamicBackgroundWorker must be called from a regular backend. @@ -65,7 +65,7 @@ typedef struct BackgroundWorker - bgw_name and bgw_type are + bgw_name and bgw_type are strings to be used in log messages, process listings and similar contexts. bgw_type should be the same for all background workers of the same type, so that it is possible to group such workers in a @@ -76,7 +76,7 @@ typedef struct BackgroundWorker - bgw_flags is a bitwise-or'd bit mask indicating the + bgw_flags is a bitwise-or'd bit mask indicating the capabilities that the module wants. Possible values are: @@ -114,14 +114,14 @@ typedef struct BackgroundWorker bgw_start_time is the server state during which - postgres should start the process; it can be one of - BgWorkerStart_PostmasterStart (start as soon as - postgres itself has finished its own initialization; processes + postgres should start the process; it can be one of + BgWorkerStart_PostmasterStart (start as soon as + postgres itself has finished its own initialization; processes requesting this are not eligible for database connections), - BgWorkerStart_ConsistentState (start as soon as a consistent state + BgWorkerStart_ConsistentState (start as soon as a consistent state has been reached in a hot standby, allowing processes to connect to databases and run read-only queries), and - BgWorkerStart_RecoveryFinished (start as soon as the system has + BgWorkerStart_RecoveryFinished (start as soon as the system has entered normal read-write state). Note the last two values are equivalent in a server that's not a hot standby. 
Note that this setting only indicates when the processes are to be started; they do not stop when a different state @@ -152,9 +152,9 @@ typedef struct BackgroundWorker - bgw_main_arg is the Datum argument + bgw_main_arg is the Datum argument to the background worker main function. This main function should take a - single argument of type Datum and return void. + single argument of type Datum and return void. bgw_main_arg will be passed as the argument. In addition, the global variable MyBgworkerEntry points to a copy of the BackgroundWorker structure @@ -165,39 +165,39 @@ typedef struct BackgroundWorker On Windows (and anywhere else where EXEC_BACKEND is defined) or in dynamic background workers it is not safe to pass a - Datum by reference, only by value. If an argument is required, it + Datum by reference, only by value. If an argument is required, it is safest to pass an int32 or other small value and use that as an index - into an array allocated in shared memory. If a value like a cstring + into an array allocated in shared memory. If a value like a cstring or text is passed then the pointer won't be valid from the new background worker process. bgw_extra can contain extra data to be passed - to the background worker. Unlike bgw_main_arg, this data + to the background worker. Unlike bgw_main_arg, this data is not passed as an argument to the worker's main function, but it can be accessed via MyBgworkerEntry, as discussed above. bgw_notify_pid is the PID of a PostgreSQL - backend process to which the postmaster should send SIGUSR1 + backend process to which the postmaster should send SIGUSR1 when the process is started or exits. It should be 0 for workers registered at postmaster startup time, or when the backend registering the worker does not wish to wait for the worker to start up. Otherwise, it should be - initialized to MyProcPid. + initialized to MyProcPid. Once running, the process can connect to a database by calling BackgroundWorkerInitializeConnection(char *dbname, char *username) or BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid). This allows the process to run transactions and queries using the - SPI interface. If dbname is NULL or - dboid is InvalidOid, the session is not connected + SPI interface. If dbname is NULL or + dboid is InvalidOid, the session is not connected to any particular database, but shared catalogs can be accessed. - If username is NULL or useroid is - InvalidOid, the process will run as the superuser created - during initdb. + If username is NULL or useroid is + InvalidOid, the process will run as the superuser created + during initdb. A background worker can only call one of these two functions, and only once. It is not possible to switch databases. @@ -207,24 +207,24 @@ typedef struct BackgroundWorker background worker's main function, and must be unblocked by it; this is to allow the process to customize its signal handlers, if necessary. Signals can be unblocked in the new process by calling - BackgroundWorkerUnblockSignals and blocked by calling - BackgroundWorkerBlockSignals. + BackgroundWorkerUnblockSignals and blocked by calling + BackgroundWorkerBlockSignals. If bgw_restart_time for a background worker is - configured as BGW_NEVER_RESTART, or if it exits with an exit - code of 0 or is terminated by TerminateBackgroundWorker, + configured as BGW_NEVER_RESTART, or if it exits with an exit + code of 0 or is terminated by TerminateBackgroundWorker, it will be automatically unregistered by the postmaster on exit. 
Otherwise, it will be restarted after the time period configured via - bgw_restart_time, or immediately if the postmaster + bgw_restart_time, or immediately if the postmaster reinitializes the cluster due to a backend failure. Backends which need to suspend execution only temporarily should use an interruptible sleep rather than exiting; this can be achieved by calling WaitLatch(). Make sure the - WL_POSTMASTER_DEATH flag is set when calling that function, and + WL_POSTMASTER_DEATH flag is set when calling that function, and verify the return code for a prompt exit in the emergency case that - postgres itself has terminated. + postgres itself has terminated. @@ -238,29 +238,29 @@ typedef struct BackgroundWorker opaque handle that can subsequently be passed to GetBackgroundWorkerPid(BackgroundWorkerHandle *, pid_t *) or TerminateBackgroundWorker(BackgroundWorkerHandle *). - GetBackgroundWorkerPid can be used to poll the status of the - worker: a return value of BGWH_NOT_YET_STARTED indicates that + GetBackgroundWorkerPid can be used to poll the status of the + worker: a return value of BGWH_NOT_YET_STARTED indicates that the worker has not yet been started by the postmaster; BGWH_STOPPED indicates that it has been started but is no longer running; and BGWH_STARTED indicates that it is currently running. In this last case, the PID will also be returned via the second argument. - TerminateBackgroundWorker causes the postmaster to send - SIGTERM to the worker if it is running, and to unregister it + TerminateBackgroundWorker causes the postmaster to send + SIGTERM to the worker if it is running, and to unregister it as soon as it is not. In some cases, a process which registers a background worker may wish to wait for the worker to start up. This can be accomplished by initializing - bgw_notify_pid to MyProcPid and + bgw_notify_pid to MyProcPid and then passing the BackgroundWorkerHandle * obtained at registration time to WaitForBackgroundWorkerStartup(BackgroundWorkerHandle *handle, pid_t *) function. This function will block until the postmaster has attempted to start the background worker, or until the postmaster dies. If the background worker - is running, the return value will be BGWH_STARTED, and + is running, the return value will be BGWH_STARTED, and the PID will be written to the provided address. Otherwise, the return value will be BGWH_STOPPED or BGWH_POSTMASTER_DIED. @@ -279,7 +279,7 @@ typedef struct BackgroundWorker - The src/test/modules/worker_spi module + The src/test/modules/worker_spi module contains a working example, which demonstrates some useful techniques. diff --git a/doc/src/sgml/biblio.sgml b/doc/src/sgml/biblio.sgml index 5462bc38e4..d7547e6e92 100644 --- a/doc/src/sgml/biblio.sgml +++ b/doc/src/sgml/biblio.sgml @@ -171,7 +171,7 @@ ssimkovi@ag.or.at Discusses SQL history and syntax, and describes the addition of - INTERSECT and EXCEPT constructs into + INTERSECT and EXCEPT constructs into PostgreSQL. Prepared as a Master's Thesis with the support of O. Univ. Prof. Dr. Georg Gottlob and Univ. Ass. Mag. Katrin Seyr at Vienna University of Technology.
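As a minimal sketch of how the background worker interface described above can be exercised without writing new C code: assuming the src/test/modules/worker_spi sample module mentioned there has been built and installed (it is not part of a default installation), a dynamic worker can be launched and observed from SQL. The exact backend_type string shown depends on the bgw_type the module registers.

CREATE EXTENSION worker_spi;
SELECT worker_spi_launch(1);   -- starts a dynamic background worker and returns its PID
SELECT pid, backend_type, state
  FROM pg_stat_activity
 WHERE backend_type <> 'client backend';   -- background workers appear alongside other auxiliary processes

When the module is preloaded via shared_preload_libraries, it also registers workers from its _PG_init() at postmaster start, following the RegisterBackgroundWorker path described above.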
diff --git a/doc/src/sgml/bki.sgml b/doc/src/sgml/bki.sgml index af6d8d1d2a..33378b46ea 100644 --- a/doc/src/sgml/bki.sgml +++ b/doc/src/sgml/bki.sgml @@ -21,7 +21,7 @@ input file used by initdb is created as part of building and installing PostgreSQL by a program named genbki.pl, which reads some - specially formatted C header files in the src/include/catalog/ + specially formatted C header files in the src/include/catalog/ directory of the source tree. The created BKI file is called postgres.bki and is normally installed in the @@ -67,13 +67,13 @@ - create + create tablename tableoid - bootstrap - shared_relation - without_oids - rowtype_oid oid + bootstrap + shared_relation + without_oids + rowtype_oid oid (name1 = type1 FORCE NOT NULL | FORCE NULL , @@ -93,7 +93,7 @@ The following column types are supported directly by - bootstrap.c: bool, + bootstrap.c: bool, bytea, char (1 byte), name, int2, int4, regproc, regclass, @@ -104,31 +104,31 @@ _oid (array), _char (array), _aclitem (array). Although it is possible to create tables containing columns of other types, this cannot be done until - after pg_type has been created and filled with + after pg_type has been created and filled with appropriate entries. (That effectively means that only these column types can be used in bootstrapped tables, but non-bootstrap catalogs can contain any built-in type.) - When bootstrap is specified, + When bootstrap is specified, the table will only be created on disk; nothing is entered into pg_class, pg_attribute, etc, for it. Thus the table will not be accessible by ordinary SQL operations until - such entries are made the hard way (with insert + such entries are made the hard way (with insert commands). This option is used for creating pg_class etc themselves. - The table is created as shared if shared_relation is + The table is created as shared if shared_relation is specified. - It will have OIDs unless without_oids is specified. - The table's row type OID (pg_type OID) can optionally - be specified via the rowtype_oid clause; if not specified, - an OID is automatically generated for it. (The rowtype_oid - clause is useless if bootstrap is specified, but it can be + It will have OIDs unless without_oids is specified. + The table's row type OID (pg_type OID) can optionally + be specified via the rowtype_oid clause; if not specified, + an OID is automatically generated for it. (The rowtype_oid + clause is useless if bootstrap is specified, but it can be provided anyway for documentation.) @@ -136,7 +136,7 @@ - open tablename + open tablename @@ -150,7 +150,7 @@ - close tablename + close tablename @@ -163,7 +163,7 @@ - insert OID = oid_value ( value1 value2 ... ) + insert OID = oid_value ( value1 value2 ... ) @@ -188,14 +188,14 @@ - declare unique - index indexname + declare unique + index indexname indexoid - on tablename - using amname - ( opclass1 + on tablename + using amname + ( opclass1 name1 - , ... ) + , ... ) @@ -220,10 +220,10 @@ - declare toast + declare toast toasttableoid toastindexoid - on tablename + on tablename @@ -234,14 +234,14 @@ toasttableoid and its index is assigned OID toastindexoid. - As with declare index, filling of the index + As with declare index, filling of the index is postponed. - build indices + build indices @@ -257,17 +257,17 @@ Structure of the Bootstrap <acronym>BKI</acronym> File - The open command cannot be used until the tables it uses + The open command cannot be used until the tables it uses exist and have entries for the table that is to be opened. 
- (These minimum tables are pg_class, - pg_attribute, pg_proc, and - pg_type.) To allow those tables themselves to be filled, - create with the bootstrap option implicitly opens + (These minimum tables are pg_class, + pg_attribute, pg_proc, and + pg_type.) To allow those tables themselves to be filled, + create with the bootstrap option implicitly opens the created table for data insertion. - Also, the declare index and declare toast + Also, the declare index and declare toast commands cannot be used until the system catalogs they need have been created and filled in. @@ -278,17 +278,17 @@ - create bootstrap one of the critical tables + create bootstrap one of the critical tables - insert data describing at least the critical tables + insert data describing at least the critical tables - close + close @@ -298,22 +298,22 @@ - create (without bootstrap) a noncritical table + create (without bootstrap) a noncritical table - open + open - insert desired data + insert desired data - close + close @@ -328,7 +328,7 @@ - build indices + build indices diff --git a/doc/src/sgml/bloom.sgml b/doc/src/sgml/bloom.sgml index 396348c523..e13ebf80fd 100644 --- a/doc/src/sgml/bloom.sgml +++ b/doc/src/sgml/bloom.sgml @@ -8,7 +8,7 @@ - bloom provides an index access method based on + bloom provides an index access method based on Bloom filters. @@ -42,29 +42,29 @@ Parameters - A bloom index accepts the following parameters in its - WITH clause: + A bloom index accepts the following parameters in its + WITH clause: - length + length Length of each signature (index entry) in bits. The default - is 80 bits and maximum is 4096. + is 80 bits and maximum is 4096. - col1 — col32 + col1 — col32 Number of bits generated for each index column. Each parameter's name refers to the number of the index column that it controls. The default - is 2 bits and maximum is 4095. Parameters for + is 2 bits and maximum is 4095. Parameters for index columns not actually used are ignored. @@ -87,8 +87,8 @@ CREATE INDEX bloomidx ON tbloom USING bloom (i1,i2,i3) The index is created with a signature length of 80 bits, with attributes i1 and i2 mapped to 2 bits, and attribute i3 mapped to 4 bits. We could - have omitted the length, col1, - and col2 specifications since those have the default values. + have omitted the length, col1, + and col2 specifications since those have the default values. @@ -175,7 +175,7 @@ CREATE INDEX Note the relatively large number of false positives: 2439 rows were selected to be visited in the heap, but none actually matched the query. We could reduce that by specifying a larger signature length. - In this example, creating the index with length=200 + In this example, creating the index with length=200 reduced the number of false positives to 55; but it doubled the index size (to 306 MB) and ended up being slower for this query (125 ms overall). @@ -213,7 +213,7 @@ CREATE INDEX An operator class for bloom indexes requires only a hash function for the indexed data type and an equality operator for searching. This example - shows the operator class definition for the text data type: + shows the operator class definition for the text data type: @@ -230,7 +230,7 @@ DEFAULT FOR TYPE text USING bloom AS - Only operator classes for int4 and text are + Only operator classes for int4 and text are included with the module. 
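To make the parameters above concrete, here is a brief sketch based on the tbloom example used in this section; the column list and the WITH clause repeat the signature length and per-column bit counts discussed there (all values shown are the defaults except col3).

CREATE EXTENSION bloom;
CREATE TABLE tbloom (i1 int, i2 int, i3 int);
CREATE INDEX bloomidx ON tbloom USING bloom (i1, i2, i3)
  WITH (length = 80, col1 = 2, col2 = 2, col3 = 4);
-- equality conditions on the indexed columns can then be answered by a bitmap scan
SELECT * FROM tbloom WHERE i2 = 42 AND i3 = 7;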
diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml index 8dcc29925b..91c01700ed 100644 --- a/doc/src/sgml/brin.sgml +++ b/doc/src/sgml/brin.sgml @@ -16,7 +16,7 @@ BRIN is designed for handling very large tables in which certain columns have some natural correlation with their physical location within the table. - A block range is a group of pages that are physically + A block range is a group of pages that are physically adjacent in the table; for each block range, some summary info is stored by the index. For example, a table storing a store's sale orders might have @@ -29,7 +29,7 @@ BRIN indexes can satisfy queries via regular bitmap index scans, and will return all tuples in all pages within each range if - the summary info stored by the index is consistent with the + the summary info stored by the index is consistent with the query conditions. The query executor is in charge of rechecking these tuples and discarding those that do not match the query conditions — in other words, these @@ -51,9 +51,9 @@ The size of the block range is determined at index creation time by - the pages_per_range storage parameter. The number of index + the pages_per_range storage parameter. The number of index entries will be equal to the size of the relation in pages divided by - the selected value for pages_per_range. Therefore, the smaller + the selected value for pages_per_range. Therefore, the smaller the number, the larger the index becomes (because of the need to store more index entries), but at the same time the summary data stored can be more precise and more data blocks can be skipped during an index scan. @@ -99,9 +99,9 @@ - The minmax + The minmax operator classes store the minimum and the maximum values appearing - in the indexed column within the range. The inclusion + in the indexed column within the range. The inclusion operator classes store a value which includes the values in the indexed column within the range. @@ -162,21 +162,21 @@ - box_inclusion_ops + box_inclusion_ops box - << - &< - && - &> - >> - ~= - @> - <@ - &<| - <<| + << + &< + && + &> + >> + ~= + @> + <@ + &<| + <<| |>> - |&> + |&> @@ -249,11 +249,11 @@ network_inclusion_ops inet - && - >>= + && + >>= <<= = - >> + >> << @@ -346,18 +346,18 @@ - range_inclusion_ops + range_inclusion_ops any range type - << - &< - && - &> - >> - @> - <@ - -|- - = + << + &< + && + &> + >> + @> + <@ + -|- + = < <= = @@ -505,11 +505,11 @@ - BrinOpcInfo *opcInfo(Oid type_oid) + BrinOpcInfo *opcInfo(Oid type_oid) Returns internal information about the indexed columns' summary data. - The return value must point to a palloc'd BrinOpcInfo, + The return value must point to a palloc'd BrinOpcInfo, which has this definition: typedef struct BrinOpcInfo @@ -524,7 +524,7 @@ typedef struct BrinOpcInfo TypeCacheEntry *oi_typcache[FLEXIBLE_ARRAY_MEMBER]; } BrinOpcInfo; - BrinOpcInfo.oi_opaque can be used by the + BrinOpcInfo.oi_opaque can be used by the operator class routines to pass information between support procedures during an index scan. @@ -797,8 +797,8 @@ typedef struct BrinOpcInfo It should accept two arguments with the same data type as the operator class, and return the union of them. The inclusion operator class can store union values with different data types if it is defined with the - STORAGE parameter. The return value of the union - function should match the STORAGE data type. + STORAGE parameter. The return value of the union + function should match the STORAGE data type. 
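As a usage-level sketch of the behavior described in this section (the table and column names are hypothetical): a BRIN index on a column that correlates with physical row order, with the pages_per_range storage parameter discussed earlier made explicit.

CREATE TABLE measurements (created_at timestamptz, reading numeric);
CREATE INDEX measurements_created_brin ON measurements
  USING brin (created_at) WITH (pages_per_range = 32);
-- range predicates on created_at can then be satisfied by a bitmap scan over the summarized block ranges
SELECT count(*) FROM measurements WHERE created_at >= now() - interval '1 day';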
@@ -823,11 +823,11 @@ typedef struct BrinOpcInfo on another operator strategy as shown in , or the same operator strategy as themselves. They require the dependency - operator to be defined with the STORAGE data type as the + operator to be defined with the STORAGE data type as the left-hand-side argument and the other supported data type to be the right-hand-side argument of the supported operator. See - float4_minmax_ops as an example of minmax, and - box_inclusion_ops as an example of inclusion. + float4_minmax_ops as an example of minmax, and + box_inclusion_ops as an example of inclusion. diff --git a/doc/src/sgml/btree-gin.sgml b/doc/src/sgml/btree-gin.sgml index 375e7ec4be..e491fa76e7 100644 --- a/doc/src/sgml/btree-gin.sgml +++ b/doc/src/sgml/btree-gin.sgml @@ -8,16 +8,16 @@ - btree_gin provides sample GIN operator classes that + btree_gin provides sample GIN operator classes that implement B-tree equivalent behavior for the data types - int2, int4, int8, float4, - float8, timestamp with time zone, - timestamp without time zone, time with time zone, - time without time zone, date, interval, - oid, money, "char", - varchar, text, bytea, bit, - varbit, macaddr, macaddr8, inet, - cidr, and all enum types. + int2, int4, int8, float4, + float8, timestamp with time zone, + timestamp without time zone, time with time zone, + time without time zone, date, interval, + oid, money, "char", + varchar, text, bytea, bit, + varbit, macaddr, macaddr8, inet, + cidr, and all enum types. diff --git a/doc/src/sgml/btree-gist.sgml b/doc/src/sgml/btree-gist.sgml index f3c639c2f3..dcb939f1fb 100644 --- a/doc/src/sgml/btree-gist.sgml +++ b/doc/src/sgml/btree-gist.sgml @@ -8,16 +8,16 @@ - btree_gist provides GiST index operator classes that + btree_gist provides GiST index operator classes that implement B-tree equivalent behavior for the data types - int2, int4, int8, float4, - float8, numeric, timestamp with time zone, - timestamp without time zone, time with time zone, - time without time zone, date, interval, - oid, money, char, - varchar, text, bytea, bit, - varbit, macaddr, macaddr8, inet, - cidr, uuid, and all enum types. + int2, int4, int8, float4, + float8, numeric, timestamp with time zone, + timestamp without time zone, time with time zone, + time without time zone, date, interval, + oid, money, char, + varchar, text, bytea, bit, + varbit, macaddr, macaddr8, inet, + cidr, uuid, and all enum types. @@ -33,7 +33,7 @@ - In addition to the typical B-tree search operators, btree_gist + In addition to the typical B-tree search operators, btree_gist also provides index support for <> (not equals). This may be useful in combination with an exclusion constraint, @@ -42,14 +42,14 @@ Also, for data types for which there is a natural distance metric, - btree_gist defines a distance operator <->, + btree_gist defines a distance operator <->, and provides GiST index support for nearest-neighbor searches using this operator. Distance operators are provided for - int2, int4, int8, float4, - float8, timestamp with time zone, - timestamp without time zone, - time without time zone, date, interval, - oid, and money. + int2, int4, int8, float4, + float8, timestamp with time zone, + timestamp without time zone, + time without time zone, date, interval, + oid, and money. diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index cfec2465d2..ef60a58631 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -387,7 +387,7 @@
- <structname>pg_aggregate</> Columns + <structname>pg_aggregate</structname> Columns @@ -410,9 +410,9 @@ charAggregate kind: - n for normal aggregates, - o for ordered-set aggregates, or - h for hypothetical-set aggregates + n for normal aggregates, + o for ordered-set aggregates, or + h for hypothetical-set aggregates @@ -421,7 +421,7 @@ Number of direct (non-aggregated) arguments of an ordered-set or hypothetical-set aggregate, counting a variadic array as one argument. - If equal to pronargs, the aggregate must be variadic + If equal to pronargs, the aggregate must be variadic and the variadic array describes the aggregated arguments as well as the final direct arguments. Always zero for normal aggregates. @@ -592,7 +592,7 @@
- <structname>pg_am</> Columns + <structname>pg_am</structname> Columns @@ -644,7 +644,7 @@ - Before PostgreSQL 9.6, pg_am + Before PostgreSQL 9.6, pg_am contained many additional columns representing properties of index access methods. That data is now only directly visible at the C code level. However, pg_index_column_has_property() and related @@ -667,8 +667,8 @@ The catalog pg_amop stores information about operators associated with access method operator families. There is one row for each operator that is a member of an operator family. A family - member can be either a search operator or an - ordering operator. An operator + member can be either a search operator or an + ordering operator. An operator can appear in more than one family, but cannot appear in more than one search position nor more than one ordering position within a family. (It is allowed, though unlikely, for an operator to be used for both @@ -676,7 +676,7 @@
- <structname>pg_amop</> Columns + <structname>pg_amop</structname> Columns @@ -728,8 +728,8 @@ amoppurposechar - Operator purpose, either s for search or - o for ordering + Operator purpose, either s for search or + o for ordering @@ -759,26 +759,26 @@
- A search operator entry indicates that an index of this operator + A search operator entry indicates that an index of this operator family can be searched to find all rows satisfying - WHERE - indexed_column - operator - constant. + WHERE + indexed_column + operator + constant. Obviously, such an operator must return boolean, and its left-hand input type must match the index's column data type. - An ordering operator entry indicates that an index of this + An ordering operator entry indicates that an index of this operator family can be scanned to return rows in the order represented by - ORDER BY - indexed_column - operator - constant. + ORDER BY + indexed_column + operator + constant. Such an operator could return any sortable data type, though again its left-hand input type must match the index's column data type. - The exact semantics of the ORDER BY are specified by the + The exact semantics of the ORDER BY are specified by the amopsortfamily column, which must reference a B-tree operator family for the operator's result type. @@ -787,19 +787,19 @@ At present, it's assumed that the sort order for an ordering operator is the default for the referenced operator family, i.e., ASC NULLS - LAST. This might someday be relaxed by adding additional columns + LAST. This might someday be relaxed by adding additional columns to specify sort options explicitly. - An entry's amopmethod must match the - opfmethod of its containing operator family (including - amopmethod here is an intentional denormalization of the + An entry's amopmethod must match the + opfmethod of its containing operator family (including + amopmethod here is an intentional denormalization of the catalog structure for performance reasons). Also, - amoplefttype and amoprighttype must match - the oprleft and oprright fields of the - referenced pg_operator entry. + amoplefttype and amoprighttype must match + the oprleft and oprright fields of the + referenced pg_operator entry. @@ -880,14 +880,14 @@ The usual interpretation of the - amproclefttype and amprocrighttype fields + amproclefttype and amprocrighttype fields is that they identify the left and right input types of the operator(s) that a particular support procedure supports. For some access methods these match the input data type(s) of the support procedure itself, for - others not. There is a notion of default support procedures for - an index, which are those with amproclefttype and - amprocrighttype both equal to the index operator class's - opcintype. + others not. There is a notion of default support procedures for + an index, which are those with amproclefttype and + amprocrighttype both equal to the index operator class's + opcintype. @@ -909,7 +909,7 @@ - <structname>pg_attrdef</> Columns + <structname>pg_attrdef</structname> Columns @@ -964,7 +964,7 @@ The adsrc field is historical, and is best not used, because it does not track outside changes that might affect the representation of the default value. Reverse-compiling the - adbin field (with pg_get_expr for + adbin field (with pg_get_expr for example) is a better way to display the default value. @@ -993,7 +993,7 @@
- <structname>pg_attribute</> Columns + <structname>pg_attribute</structname> Columns @@ -1072,7 +1072,7 @@ Number of dimensions, if the column is an array type; otherwise 0. (Presently, the number of dimensions of an array is not enforced, - so any nonzero value effectively means it's an array.) + so any nonzero value effectively means it's an array.) @@ -1096,7 +1096,7 @@ supplied at table creation time (for example, the maximum length of a varchar column). It is passed to type-specific input functions and length coercion functions. - The value will generally be -1 for types that do not need atttypmod. + The value will generally be -1 for types that do not need atttypmod. @@ -1105,7 +1105,7 @@ bool - A copy of pg_type.typbyval of this column's type + A copy of pg_type.typbyval of this column's type @@ -1114,7 +1114,7 @@ char - Normally a copy of pg_type.typstorage of this + Normally a copy of pg_type.typstorage of this column's type. For TOAST-able data types, this can be altered after column creation to control storage policy. @@ -1125,7 +1125,7 @@ char - A copy of pg_type.typalign of this column's type + A copy of pg_type.typalign of this column's type @@ -1216,7 +1216,7 @@ text[] - Attribute-level options, as keyword=value strings + Attribute-level options, as keyword=value strings @@ -1225,7 +1225,7 @@ text[] - Attribute-level foreign data wrapper options, as keyword=value strings + Attribute-level foreign data wrapper options, as keyword=value strings @@ -1237,9 +1237,9 @@ In a dropped column's pg_attribute entry, atttypid is reset to zero, but attlen and the other fields copied from - pg_type are still valid. This arrangement is needed + pg_type are still valid. This arrangement is needed to cope with the situation where the dropped column's data type was - later dropped, and so there is no pg_type row anymore. + later dropped, and so there is no pg_type row anymore. attlen and the other fields can be used to interpret the contents of a row of the table. @@ -1256,9 +1256,9 @@ The catalog pg_authid contains information about database authorization identifiers (roles). A role subsumes the concepts - of users and groups. A user is essentially just a - role with the rolcanlogin flag set. Any role (with or - without rolcanlogin) can have other roles as members; see + of users and groups. A user is essentially just a + role with the rolcanlogin flag set. Any role (with or + without rolcanlogin) can have other roles as members; see pg_auth_members. @@ -1283,7 +1283,7 @@
- <structname>pg_authid</> Columns + <structname>pg_authid</structname> Columns @@ -1390,20 +1390,20 @@ For an MD5 encrypted password, rolpassword - column will begin with the string md5 followed by a + column will begin with the string md5 followed by a 32-character hexadecimal MD5 hash. The MD5 hash will be of the user's password concatenated to their user name. For example, if user - joe has password xyzzy, PostgreSQL - will store the md5 hash of xyzzyjoe. + joe has password xyzzy, PostgreSQL + will store the md5 hash of xyzzyjoe. If the password is encrypted with SCRAM-SHA-256, it has the format: -SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey> +SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey> - where salt, StoredKey and - ServerKey are in Base64 encoded format. This format is + where salt, StoredKey and + ServerKey are in Base64 encoded format. This format is the same as that specified by RFC 5803. @@ -1435,7 +1435,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_auth_members</> Columns + <structname>pg_auth_members</structname> Columns @@ -1459,7 +1459,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< member oid pg_authid.oid - ID of a role that is a member of roleid + ID of a role that is a member of roleid @@ -1473,8 +1473,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< admin_option bool - True if member can grant membership in - roleid to others + True if member can grant membership in + roleid to others @@ -1501,14 +1501,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< cannot be deduced from some generic rule. For example, casting between a domain and its base type is not explicitly represented in pg_cast. Another important exception is that - automatic I/O conversion casts, those performed using a data - type's own I/O functions to convert to or from text or other + automatic I/O conversion casts, those performed using a data + type's own I/O functions to convert to or from text or other string types, are not explicitly represented in pg_cast.
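A small query sketch that resolves the OID columns of pg_cast to readable type names; the castcontext and castmethod codes are the ones documented in the table that follows.

SELECT castsource::regtype AS source,
       casttarget::regtype AS target,
       castcontext, castmethod
  FROM pg_cast
 ORDER BY castsource, casttarget;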
- <structname>pg_cast</> Columns + <structname>pg_cast</structname> Columns @@ -1558,11 +1558,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< Indicates what contexts the cast can be invoked in. - e means only as an explicit cast (using - CAST or :: syntax). - a means implicitly in assignment + e means only as an explicit cast (using + CAST or :: syntax). + a means implicitly in assignment to a target column, as well as explicitly. - i means implicitly in expressions, as well as the + i means implicitly in expressions, as well as the other cases. @@ -1572,9 +1572,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< Indicates how the cast is performed. - f means that the function specified in the castfunc field is used. - i means that the input/output functions are used. - b means that the types are binary-coercible, thus no conversion is required. + f means that the function specified in the castfunc field is used. + i means that the input/output functions are used. + b means that the types are binary-coercible, thus no conversion is required. @@ -1586,18 +1586,18 @@ SCRAM-SHA-256$<iteration count>:<salt>< always take the cast source type as their first argument type, and return the cast destination type as their result type. A cast function can have up to three arguments. The second argument, - if present, must be type integer; it receives the type + if present, must be type integer; it receives the type modifier associated with the destination type, or -1 if there is none. The third argument, - if present, must be type boolean; it receives true - if the cast is an explicit cast, false otherwise. + if present, must be type boolean; it receives true + if the cast is an explicit cast, false otherwise. It is legitimate to create a pg_cast entry in which the source and target types are the same, if the associated function takes more than one argument. Such entries represent - length coercion functions that coerce values of the type + length coercion functions that coerce values of the type to be legal for a particular type modifier value. @@ -1624,14 +1624,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< table. This includes indexes (but see also pg_index), sequences (but see also pg_sequence), views, materialized - views, composite types, and TOAST tables; see relkind. + views, composite types, and TOAST tables; see relkind. Below, when we mean all of these kinds of objects we speak of relations. Not all columns are meaningful for all relation types.
- <structname>pg_class</> Columns + <structname>pg_class</structname> Columns @@ -1673,7 +1673,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_type.oid The OID of the data type that corresponds to this table's row type, - if any (zero for indexes, which have no pg_type entry) + if any (zero for indexes, which have no pg_type entry) @@ -1706,7 +1706,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< oid Name of the on-disk file of this relation; zero means this - is a mapped relation whose disk file name is determined + is a mapped relation whose disk file name is determined by low-level state @@ -1795,8 +1795,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - p = permanent table, u = unlogged table, - t = temporary table + p = permanent table, u = unlogged table, + t = temporary table @@ -1805,15 +1805,15 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - r = ordinary table, - i = index, - S = sequence, - t = TOAST table, - v = view, - m = materialized view, - c = composite type, - f = foreign table, - p = partitioned table + r = ordinary table, + i = index, + S = sequence, + t = TOAST table, + v = view, + m = materialized view, + c = composite type, + f = foreign table, + p = partitioned table @@ -1834,7 +1834,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< int2 - Number of CHECK constraints on the table; see + Number of CHECK constraints on the table; see pg_constraint catalog @@ -1917,11 +1917,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - Columns used to form replica identity for rows: - d = default (primary key, if any), - n = nothing, - f = all columns - i = index with indisreplident set, or default + Columns used to form replica identity for rows: + d = default (primary key, if any), + n = nothing, + f = all columns + i = index with indisreplident set, or default @@ -1938,9 +1938,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< All transaction IDs before this one have been replaced with a permanent - (frozen) transaction ID in this table. This is used to track + (frozen) transaction ID in this table. This is used to track whether the table needs to be vacuumed in order to prevent transaction - ID wraparound or to allow pg_xact to be shrunk. Zero + ID wraparound or to allow pg_xact to be shrunk. Zero (InvalidTransactionId) if the relation is not a table. @@ -1953,7 +1953,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< All multixact IDs before this one have been replaced by a transaction ID in this table. This is used to track whether the table needs to be vacuumed in order to prevent multixact ID - wraparound or to allow pg_multixact to be shrunk. Zero + wraparound or to allow pg_multixact to be shrunk. Zero (InvalidMultiXactId) if the relation is not a table. @@ -1975,7 +1975,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Access-method-specific options, as keyword=value strings + Access-method-specific options, as keyword=value strings @@ -1993,13 +1993,13 @@ SCRAM-SHA-256$<iteration count>:<salt><
- Several of the Boolean flags in pg_class are maintained + Several of the Boolean flags in pg_class are maintained lazily: they are guaranteed to be true if that's the correct state, but may not be reset to false immediately when the condition is no longer - true. For example, relhasindex is set by + true. For example, relhasindex is set by CREATE INDEX, but it is never cleared by DROP INDEX. Instead, VACUUM clears - relhasindex if it finds the table has no indexes. This + relhasindex if it finds the table has no indexes. This arrangement avoids race conditions and improves concurrency. @@ -2019,7 +2019,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_collation</> Columns + <structname>pg_collation</structname> Columns @@ -2082,14 +2082,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< collcollate name - LC_COLLATE for this collation object + LC_COLLATE for this collation object collctype name - LC_CTYPE for this collation object + LC_CTYPE for this collation object @@ -2107,27 +2107,27 @@ SCRAM-SHA-256$<iteration count>:<salt><
- Note that the unique key on this catalog is (collname, - collencoding, collnamespace) not just - (collname, collnamespace). + Note that the unique key on this catalog is (collname, + collencoding, collnamespace) not just + (collname, collnamespace). PostgreSQL generally ignores all - collations that do not have collencoding equal to + collations that do not have collencoding equal to either the current database's encoding or -1, and creation of new entries - with the same name as an entry with collencoding = -1 + with the same name as an entry with collencoding = -1 is forbidden. Therefore it is sufficient to use a qualified SQL name - (schema.name) to identify a collation, + (schema.name) to identify a collation, even though this is not unique according to the catalog definition. The reason for defining the catalog this way is that - initdb fills it in at cluster initialization time with + initdb fills it in at cluster initialization time with entries for all locales available on the system, so it must be able to hold entries for all encodings that might ever be used in the cluster. - In the template0 database, it could be useful to create + In the template0 database, it could be useful to create collations whose encoding does not match the database encoding, since they could match the encodings of databases later cloned from - template0. This would currently have to be done manually. + template0. This would currently have to be done manually. @@ -2143,13 +2143,13 @@ SCRAM-SHA-256$<iteration count>:<salt>< key, unique, foreign key, and exclusion constraints on tables. (Column constraints are not treated specially. Every column constraint is equivalent to some table constraint.) - Not-null constraints are represented in the pg_attribute + Not-null constraints are represented in the pg_attribute catalog, not here. User-defined constraint triggers (created with CREATE CONSTRAINT - TRIGGER) also give rise to an entry in this table. + TRIGGER) also give rise to an entry in this table. 
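As an illustrative sketch (the table name orders is hypothetical), the constraints recorded here for a given table can be listed together with their reconstructed definitions:

SELECT conname, contype, pg_get_constraintdef(oid) AS definition
  FROM pg_constraint
 WHERE conrelid = 'orders'::regclass
 ORDER BY conname;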
@@ -2157,7 +2157,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_constraint</> Columns + <structname>pg_constraint</structname> Columns @@ -2198,12 +2198,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - c = check constraint, - f = foreign key constraint, - p = primary key constraint, - u = unique constraint, - t = constraint trigger, - x = exclusion constraint + c = check constraint, + f = foreign key constraint, + p = primary key constraint, + u = unique constraint, + t = constraint trigger, + x = exclusion constraint @@ -2263,11 +2263,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< char Foreign key update action code: - a = no action, - r = restrict, - c = cascade, - n = set null, - d = set default + a = no action, + r = restrict, + c = cascade, + n = set null, + d = set default @@ -2276,11 +2276,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< char Foreign key deletion action code: - a = no action, - r = restrict, - c = cascade, - n = set null, - d = set default + a = no action, + r = restrict, + c = cascade, + n = set null, + d = set default @@ -2289,9 +2289,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< char Foreign key match type: - f = full, - p = partial, - s = simple + f = full, + p = partial, + s = simple @@ -2329,7 +2329,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< conkey int2[] - pg_attribute.attnum + pg_attribute.attnum If a table constraint (including foreign keys, but not constraint triggers), list of the constrained columns @@ -2337,35 +2337,35 @@ SCRAM-SHA-256$<iteration count>:<salt>< confkey int2[] - pg_attribute.attnum + pg_attribute.attnum If a foreign key, list of the referenced columns conpfeqop oid[] - pg_operator.oid + pg_operator.oid If a foreign key, list of the equality operators for PK = FK comparisons conppeqop oid[] - pg_operator.oid + pg_operator.oid If a foreign key, list of the equality operators for PK = PK comparisons conffeqop oid[] - pg_operator.oid + pg_operator.oid If a foreign key, list of the equality operators for FK = FK comparisons conexclop oid[] - pg_operator.oid + pg_operator.oid If an exclusion constraint, list of the per-column exclusion operators @@ -2392,7 +2392,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For other cases, a zero appears in conkey and the associated index must be consulted to discover the expression that is constrained. (conkey thus has the - same contents as pg_index.indkey for the + same contents as pg_index.indkey for the index.) @@ -2400,7 +2400,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< consrc is not updated when referenced objects change; for example, it won't track renaming of columns. Rather than - relying on this field, it's best to use pg_get_constraintdef() + relying on this field, it's best to use pg_get_constraintdef() to extract the definition of a check constraint. @@ -2429,7 +2429,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_conversion</> Columns + <structname>pg_conversion</structname> Columns @@ -2529,7 +2529,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_database</> Columns + <structname>pg_database</structname> Columns @@ -2592,7 +2592,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If true, then this database can be cloned by - any user with CREATEDB privileges; + any user with CREATEDB privileges; if false, then only superusers or the owner of the database can clone it. @@ -2604,7 +2604,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If false then no one can connect to this database. This is - used to protect the template0 database from being altered. + used to protect the template0 database from being altered. @@ -2634,11 +2634,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< All transaction IDs before this one have been replaced with a permanent - (frozen) transaction ID in this database. This is used to + (frozen) transaction ID in this database. This is used to track whether the database needs to be vacuumed in order to prevent - transaction ID wraparound or to allow pg_xact to be shrunk. + transaction ID wraparound or to allow pg_xact to be shrunk. It is the minimum of the per-table - pg_class.relfrozenxid values. + pg_class.relfrozenxid values. @@ -2650,9 +2650,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< All multixact IDs before this one have been replaced with a transaction ID in this database. This is used to track whether the database needs to be vacuumed in order to prevent - multixact ID wraparound or to allow pg_multixact to be shrunk. + multixact ID wraparound or to allow pg_multixact to be shrunk. It is the minimum of the per-table - pg_class.relminmxid values. + pg_class.relminmxid values. @@ -2663,7 +2663,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< The default tablespace for the database. Within this database, all tables for which - pg_class.reltablespace is zero + pg_class.reltablespace is zero will be stored in this tablespace; in particular, all the non-shared system catalogs will be there. @@ -2707,7 +2707,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_db_role_setting</> Columns + <structname>pg_db_role_setting</structname> Columns @@ -2754,12 +2754,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_default_acl stores initial + The catalog pg_default_acl stores initial privileges to be assigned to newly created objects.
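A hedged example of how rows get into this catalog (the schema and role names are hypothetical): ALTER DEFAULT PRIVILEGES creates or updates an entry, which can then be inspected with its OID columns resolved to names.

ALTER DEFAULT PRIVILEGES IN SCHEMA app
  GRANT SELECT ON TABLES TO reporting;
SELECT defaclrole::regrole AS role,
       defaclnamespace::regnamespace AS schema,
       defaclobjtype, defaclacl
  FROM pg_default_acl;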
- <structname>pg_default_acl</> Columns + <structname>pg_default_acl</structname> Columns @@ -2800,10 +2800,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< Type of object this entry is for: - r = relation (table, view), - S = sequence, - f = function, - T = type + r = relation (table, view), + S = sequence, + f = function, + T = type @@ -2820,21 +2820,21 @@ SCRAM-SHA-256$<iteration count>:<salt><
- A pg_default_acl entry shows the initial privileges to + A pg_default_acl entry shows the initial privileges to be assigned to an object belonging to the indicated user. There are - currently two types of entry: global entries with - defaclnamespace = 0, and per-schema entries + currently two types of entry: global entries with + defaclnamespace = 0, and per-schema entries that reference a particular schema. If a global entry is present then - it overrides the normal hard-wired default privileges + it overrides the normal hard-wired default privileges for the object type. A per-schema entry, if present, represents privileges - to be added to the global or hard-wired default privileges. + to be added to the global or hard-wired default privileges. Note that when an ACL entry in another catalog is null, it is taken to represent the hard-wired default privileges for its object, - not whatever might be in pg_default_acl - at the moment. pg_default_acl is only consulted during + not whatever might be in pg_default_acl + at the moment. pg_default_acl is only consulted during object creation. @@ -2851,9 +2851,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_depend records the dependency relationships between database objects. This information allows - DROP commands to find which other objects must be dropped - by DROP CASCADE or prevent dropping in the DROP - RESTRICT case. + DROP commands to find which other objects must be dropped + by DROP CASCADE or prevent dropping in the DROP + RESTRICT case. @@ -2863,7 +2863,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_depend</> Columns + <structname>pg_depend</structname> Columns @@ -2896,7 +2896,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a table column, this is the column number (the - objid and classid refer to the + objid and classid refer to the table itself). For all other object types, this column is zero. @@ -2922,7 +2922,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a table column, this is the column number (the - refobjid and refclassid refer + refobjid and refclassid refer to the table itself). For all other object types, this column is zero. @@ -2945,17 +2945,17 @@ SCRAM-SHA-256$<iteration count>:<salt>< In all cases, a pg_depend entry indicates that the referenced object cannot be dropped without also dropping the dependent object. However, there are several subflavors identified by - deptype: + deptype: - DEPENDENCY_NORMAL (n) + DEPENDENCY_NORMAL (n) A normal relationship between separately-created objects. The dependent object can be dropped without affecting the referenced object. The referenced object can only be dropped - by specifying CASCADE, in which case the dependent + by specifying CASCADE, in which case the dependent object is dropped, too. Example: a table column has a normal dependency on its data type. @@ -2963,12 +2963,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< - DEPENDENCY_AUTO (a) + DEPENDENCY_AUTO (a) The dependent object can be dropped separately from the referenced object, and should be automatically dropped - (regardless of RESTRICT or CASCADE + (regardless of RESTRICT or CASCADE mode) if the referenced object is dropped. Example: a named constraint on a table is made autodependent on the table, so that it will go away if the table is dropped. 
@@ -2977,41 +2977,41 @@ SCRAM-SHA-256$<iteration count>:<salt>< - DEPENDENCY_INTERNAL (i) + DEPENDENCY_INTERNAL (i) The dependent object was created as part of creation of the referenced object, and is really just a part of its internal - implementation. A DROP of the dependent object + implementation. A DROP of the dependent object will be disallowed outright (we'll tell the user to issue a - DROP against the referenced object, instead). A - DROP of the referenced object will be propagated + DROP against the referenced object, instead). A + DROP of the referenced object will be propagated through to drop the dependent object whether - CASCADE is specified or not. Example: a trigger + CASCADE is specified or not. Example: a trigger that's created to enforce a foreign-key constraint is made internally dependent on the constraint's - pg_constraint entry. + pg_constraint entry. - DEPENDENCY_EXTENSION (e) + DEPENDENCY_EXTENSION (e) - The dependent object is a member of the extension that is + The dependent object is a member of the extension that is the referenced object (see pg_extension). The dependent object can be dropped only via - DROP EXTENSION on the referenced object. Functionally + DROP EXTENSION on the referenced object. Functionally this dependency type acts the same as an internal dependency, but - it's kept separate for clarity and to simplify pg_dump. + it's kept separate for clarity and to simplify pg_dump. - DEPENDENCY_AUTO_EXTENSION (x) + DEPENDENCY_AUTO_EXTENSION (x) The dependent object is not a member of the extension that is the @@ -3024,7 +3024,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - DEPENDENCY_PIN (p) + DEPENDENCY_PIN (p) There is no dependent object; this type of entry is a signal @@ -3051,7 +3051,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_description stores optional descriptions + The catalog pg_description stores optional descriptions (comments) for each database object. Descriptions can be manipulated with the command and viewed with psql's \d commands. @@ -3066,7 +3066,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_description</> Columns + <structname>pg_description</structname> Columns @@ -3099,7 +3099,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a comment on a table column, this is the column number (the - objoid and classoid refer to + objoid and classoid refer to the table itself). For all other object types, this column is zero. @@ -3133,7 +3133,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_enum</> Columns + <structname>pg_enum</structname> Columns @@ -3157,7 +3157,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< enumtypid oid pg_type.oid - The OID of the pg_type entry owning this enum value + The OID of the pg_type entry owning this enum value @@ -3191,7 +3191,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< When an enum type is created, its members are assigned sort-order - positions 1..n. But members added later might be given + positions 1..n. But members added later might be given negative or fractional values of enumsortorder. The only requirement on these values is that they be correctly ordered and unique within each enum type. @@ -3212,7 +3212,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_event_trigger</> Columns + <structname>pg_event_trigger</structname> Columns @@ -3260,10 +3260,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< Controls in which modes the event trigger fires. - O = trigger fires in origin and local modes, - D = trigger is disabled, - R = trigger fires in replica mode, - A = trigger fires always. + O = trigger fires in origin and local modes, + D = trigger is disabled, + R = trigger fires in replica mode, + A = trigger fires always. @@ -3296,7 +3296,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_extension</> Columns + <structname>pg_extension</structname> Columns @@ -3355,16 +3355,16 @@ SCRAM-SHA-256$<iteration count>:<salt>< extconfig oid[] pg_class.oid - Array of regclass OIDs for the extension's configuration - table(s), or NULL if none + Array of regclass OIDs for the extension's configuration + table(s), or NULL if none extcondition text[] - Array of WHERE-clause filter conditions for the - extension's configuration table(s), or NULL if none + Array of WHERE-clause filter conditions for the + extension's configuration table(s), or NULL if none @@ -3372,7 +3372,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- Note that unlike most catalogs with a namespace column, + Note that unlike most catalogs with a namespace column, extnamespace is not meant to imply that the extension belongs to that schema. Extension names are never schema-qualified. Rather, extnamespace @@ -3399,7 +3399,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_foreign_data_wrapper</> Columns + <structname>pg_foreign_data_wrapper</structname> Columns @@ -3474,7 +3474,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Foreign-data wrapper specific options, as keyword=value strings + Foreign-data wrapper specific options, as keyword=value strings @@ -3498,7 +3498,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_foreign_server</> Columns + <structname>pg_foreign_server</structname> Columns @@ -3570,7 +3570,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Foreign server specific options, as keyword=value strings + Foreign server specific options, as keyword=value strings @@ -3596,7 +3596,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_foreign_table</> Columns + <structname>pg_foreign_table</structname> Columns @@ -3613,7 +3613,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< ftrelid oid pg_class.oid - OID of the pg_class entry for this foreign table + OID of the pg_class entry for this foreign table @@ -3628,7 +3628,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Foreign table options, as keyword=value strings + Foreign table options, as keyword=value strings @@ -3651,7 +3651,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_index</> Columns + <structname>pg_index</structname> Columns @@ -3668,14 +3668,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< indexrelid oid pg_class.oid - The OID of the pg_class entry for this index + The OID of the pg_class entry for this index indrelid oid pg_class.oid - The OID of the pg_class entry for the table this index is for + The OID of the pg_class entry for the table this index is for @@ -3698,7 +3698,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< bool If true, this index represents the primary key of the table - (indisunique should always be true when this is true) + (indisunique should always be true when this is true) @@ -3714,7 +3714,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If true, the uniqueness check is enforced immediately on insertion - (irrelevant if indisunique is not true) + (irrelevant if indisunique is not true) @@ -3731,7 +3731,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If true, the index is currently valid for queries. False means the index is possibly incomplete: it must still be modified by - INSERT/UPDATE operations, but it cannot safely + INSERT/UPDATE operations, but it cannot safely be used for queries. If it is unique, the uniqueness property is not guaranteed true either. @@ -3742,8 +3742,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< bool - If true, queries must not use the index until the xmin - of this pg_index row is below their TransactionXmin + If true, queries must not use the index until the xmin + of this pg_index row is below their TransactionXmin event horizon, because the table may contain broken HOT chains with incompatible rows that they can see @@ -3755,7 +3755,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If true, the index is currently ready for inserts. False means the - index must be ignored by INSERT/UPDATE + index must be ignored by INSERT/UPDATE operations. @@ -3775,9 +3775,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< bool - If true this index has been chosen as replica identity + If true this index has been chosen as replica identity using ALTER TABLE ... REPLICA IDENTITY USING INDEX - ... + ... @@ -3836,7 +3836,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Expression trees (in nodeToString() representation) for index attributes that are not simple column references. This is a list with one element for each zero - entry in indkey. Null if all index attributes + entry in indkey. Null if all index attributes are simple references. @@ -3866,14 +3866,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_inherits records information about + The catalog pg_inherits records information about table inheritance hierarchies. There is one entry for each direct parent-child table relationship in the database. (Indirect inheritance can be determined by following chains of entries.)
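For example, all indirect descendants of a table can be collected by following these chains with a recursive query. A minimal sketch, assuming a hypothetical parent table named parent_tbl:

WITH RECURSIVE inh AS (
    SELECT inhrelid
    FROM pg_inherits
    WHERE inhparent = 'parent_tbl'::regclass   -- hypothetical parent table
  UNION ALL
    SELECT i.inhrelid
    FROM pg_inherits i
    JOIN inh ON i.inhparent = inh.inhrelid
)
SELECT inhrelid::regclass AS child
FROM inh;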
- <structname>pg_inherits</> Columns + <structname>pg_inherits</structname> Columns @@ -3928,7 +3928,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_init_privs records information about + The catalog pg_init_privs records information about the initial privileges of objects in the system. There is one entry for each object in the database which has a non-default (non-NULL) initial set of privileges. @@ -3936,7 +3936,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Objects can have initial privileges either by having those privileges set - when the system is initialized (by initdb) or when the + when the system is initialized (by initdb) or when the object is created during a CREATE EXTENSION and the extension script sets initial privileges using the GRANT system. Note that the system will automatically handle recording of the @@ -3944,12 +3944,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< only use the GRANT and REVOKE statements in their script to have the privileges recorded. The privtype column indicates if the initial privilege was - set by initdb or during a + set by initdb or during a CREATE EXTENSION command. - Objects which have initial privileges set by initdb will + Objects which have initial privileges set by initdb will have entries where privtype is 'i', while objects which have initial privileges set by CREATE EXTENSION will have entries where @@ -3957,7 +3957,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_init_privs</> Columns + <structname>pg_init_privs</structname> Columns @@ -3990,7 +3990,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a table column, this is the column number (the - objoid and classoid refer to the + objoid and classoid refer to the table itself). For all other object types, this column is zero. @@ -4039,7 +4039,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_language</> Columns + <structname>pg_language</structname> Columns @@ -4116,7 +4116,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_proc.oid This references a function that is responsible for executing - inline anonymous code blocks + inline anonymous code blocks ( blocks). Zero if inline blocks are not supported. @@ -4162,24 +4162,24 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_largeobject holds the data making up large objects. A large object is identified by an OID assigned when it is created. Each large object is broken into - segments or pages small enough to be conveniently stored as rows + segments or pages small enough to be conveniently stored as rows in pg_largeobject. - The amount of data per page is defined to be LOBLKSIZE (which is currently - BLCKSZ/4, or typically 2 kB). + The amount of data per page is defined to be LOBLKSIZE (which is currently + BLCKSZ/4, or typically 2 kB). - Prior to PostgreSQL 9.0, there was no permission structure + Prior to PostgreSQL 9.0, there was no permission structure associated with large objects. As a result, pg_largeobject was publicly readable and could be used to obtain the OIDs (and contents) of all large objects in the system. This is no longer the case; use - pg_largeobject_metadata + pg_largeobject_metadata to obtain a list of large object OIDs.
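As an illustrative sketch (not part of the catalog definitions), the large object OIDs and owners can be read from pg_largeobject_metadata, and the pages actually stored per object can be counted from pg_largeobject; reading pg_largeobject directly typically requires superuser privileges:

SELECT oid AS loid, lomowner::regrole AS owner
FROM pg_largeobject_metadata;

SELECT loid, count(*) AS pages, sum(octet_length(data)) AS stored_bytes
FROM pg_largeobject
GROUP BY loid;

Because storage can be sparse, stored_bytes reflects only the bytes actually present, not necessarily the logical length of the object.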
- <structname>pg_largeobject</> Columns + <structname>pg_largeobject</structname> Columns @@ -4213,7 +4213,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Actual data stored in the large object. - This will never be more than LOBLKSIZE bytes and might be less. + This will never be more than LOBLKSIZE bytes and might be less. @@ -4223,9 +4223,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< Each row of pg_largeobject holds data for one page of a large object, beginning at - byte offset (pageno * LOBLKSIZE) within the object. The implementation + byte offset (pageno * LOBLKSIZE) within the object. The implementation allows sparse storage: pages might be missing, and might be shorter than - LOBLKSIZE bytes even if they are not the last page of the object. + LOBLKSIZE bytes even if they are not the last page of the object. Missing regions within a large object read as zeroes. @@ -4242,11 +4242,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_largeobject_metadata holds metadata associated with large objects. The actual large object data is stored in - pg_largeobject. + pg_largeobject.
- <structname>pg_largeobject_metadata</> Columns + <structname>pg_largeobject_metadata</structname> Columns @@ -4299,14 +4299,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_namespace stores namespaces. + The catalog pg_namespace stores namespaces. A namespace is the structure underlying SQL schemas: each namespace can have a separate collection of relations, types, etc. without name conflicts.
- <structname>pg_namespace</> Columns + <structname>pg_namespace</structname> Columns @@ -4381,7 +4381,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_opclass</> Columns + <structname>pg_opclass</structname> Columns @@ -4447,14 +4447,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< opcdefault bool - True if this operator class is the default for opcintype + True if this operator class is the default for opcintype opckeytype oid pg_type.oid - Type of data stored in index, or zero if same as opcintype + Type of data stored in index, or zero if same as opcintype @@ -4462,11 +4462,11 @@ SCRAM-SHA-256$<iteration count>:<salt><
- An operator class's opcmethod must match the - opfmethod of its containing operator family. + An operator class's opcmethod must match the + opfmethod of its containing operator family. Also, there must be no more than one pg_opclass - row having opcdefault true for any given combination of - opcmethod and opcintype. + row having opcdefault true for any given combination of + opcmethod and opcintype. @@ -4480,13 +4480,13 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_operator stores information about operators. + The catalog pg_operator stores information about operators. See and for more information. - <structname>pg_operator</> Columns + <structname>pg_operator</structname> Columns @@ -4534,8 +4534,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - b = infix (both), l = prefix - (left), r = postfix (right) + b = infix (both), l = prefix + (left), r = postfix (right) @@ -4632,7 +4632,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Each operator family is a collection of operators and associated support routines that implement the semantics specified for a particular index access method. Furthermore, the operators in a family are all - compatible, in a way that is specified by the access method. + compatible, in a way that is specified by the access method. The operator family concept allows cross-data-type operators to be used with indexes and to be reasoned about using knowledge of access method semantics. @@ -4643,7 +4643,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_opfamily</> Columns + <structname>pg_opfamily</structname> Columns @@ -4720,7 +4720,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_partitioned_table</> Columns + <structname>pg_partitioned_table</structname> Columns @@ -4738,7 +4738,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< partrelid oid pg_class.oid - The OID of the pg_class entry for this partitioned table + The OID of the pg_class entry for this partitioned table @@ -4746,8 +4746,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - Partitioning strategy; l = list partitioned table, - r = range partitioned table + Partitioning strategy; l = list partitioned table, + r = range partitioned table @@ -4763,7 +4763,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< oid pg_class.oid - The OID of the pg_class entry for the default partition + The OID of the pg_class entry for the default partition of this partitioned table, or zero if this partitioned table does not have a default partition. @@ -4813,7 +4813,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Expression trees (in nodeToString() representation) for partition key columns that are not simple column references. This is a list with one element for each zero - entry in partattrs. Null if all partition key columns + entry in partattrs. Null if all partition key columns are simple references. @@ -4833,9 +4833,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_pltemplate stores - template information for procedural languages. + template information for procedural languages. A template for a language allows the language to be created in a - particular database by a simple CREATE LANGUAGE command, + particular database by a simple CREATE LANGUAGE command, with no need to specify implementation details. @@ -4848,7 +4848,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_pltemplate</> Columns + <structname>pg_pltemplate</structname> Columns @@ -4921,7 +4921,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - It is likely that pg_pltemplate will be removed in some + It is likely that pg_pltemplate will be removed in some future release of PostgreSQL, in favor of keeping this knowledge about procedural languages in their respective extension installation scripts. @@ -4944,7 +4944,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< command that it applies to (possibly all commands), the roles that it applies to, the expression to be added as a security-barrier qualification to queries that include the table, and the expression - to be added as a WITH CHECK option for queries that attempt to + to be added as a WITH CHECK option for queries that attempt to add new records to the table. @@ -4982,11 +4982,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< char The command type to which the policy is applied: - r for SELECT, - a for INSERT, - w for UPDATE, - d for DELETE, - or * for all + r for SELECT, + a for INSERT, + w for UPDATE, + d for DELETE, + or * for all @@ -5023,8 +5023,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< - Policies stored in pg_policy are applied only when - pg_class.relrowsecurity is set for + Policies stored in pg_policy are applied only when + pg_class.relrowsecurity is set for their table. @@ -5039,7 +5039,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_proc stores information about functions (or procedures). + The catalog pg_proc stores information about functions (or procedures). See and for more information. @@ -5051,7 +5051,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_proc</> Columns + <structname>pg_proc</structname> Columns @@ -5106,7 +5106,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< float4 Estimated execution cost (in units of - ); if proretset, + ); if proretset, this is cost per row returned @@ -5114,7 +5114,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< prorows float4 - Estimated number of result rows (zero if not proretset) + Estimated number of result rows (zero if not proretset) @@ -5151,7 +5151,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< prosecdef bool - Function is a security definer (i.e., a setuid + Function is a security definer (i.e., a setuid function) @@ -5195,11 +5195,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< provolatile tells whether the function's result depends only on its input arguments, or is affected by outside factors. - It is i for immutable functions, + It is i for immutable functions, which always deliver the same result for the same inputs. - It is s for stable functions, + It is s for stable functions, whose results (for fixed inputs) do not change within a scan. - It is v for volatile functions, + It is v for volatile functions, whose results might change at any time. (Use v also for functions with side-effects, so that calls to them cannot get optimized away.) @@ -5251,7 +5251,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< An array with the data types of the function arguments. This includes only input arguments (including INOUT and - VARIADIC arguments), and thus represents + VARIADIC arguments), and thus represents the call signature of the function. @@ -5266,7 +5266,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< INOUT arguments); however, if all the arguments are IN arguments, this field will be null. Note that subscripting is 1-based, whereas for historical reasons - proargtypes is subscripted from 0. + proargtypes is subscripted from 0. @@ -5276,15 +5276,15 @@ SCRAM-SHA-256$<iteration count>:<salt>< An array with the modes of the function arguments, encoded as - i for IN arguments, - o for OUT arguments, - b for INOUT arguments, - v for VARIADIC arguments, - t for TABLE arguments. + i for IN arguments, + o for OUT arguments, + b for INOUT arguments, + v for VARIADIC arguments, + t for TABLE arguments. If all the arguments are IN arguments, this field will be null. Note that subscripts correspond to positions of - proallargtypes not proargtypes. + proallargtypes not proargtypes. @@ -5297,7 +5297,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Arguments without a name are set to empty strings in the array. If none of the arguments have a name, this field will be null. Note that subscripts correspond to positions of - proallargtypes not proargtypes. + proallargtypes not proargtypes. @@ -5308,9 +5308,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< Expression trees (in nodeToString() representation) for default values. This is a list with - pronargdefaults elements, corresponding to the last - N input arguments (i.e., the last - N proargtypes positions). + pronargdefaults elements, corresponding to the last + N input arguments (i.e., the last + N proargtypes positions). If none of the arguments have defaults, this field will be null. @@ -5525,7 +5525,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_range</> Columns + <structname>pg_range</structname> Columns @@ -5586,10 +5586,10 @@ SCRAM-SHA-256$<iteration count>:<salt><
- rngsubopc (plus rngcollation, if the + rngsubopc (plus rngcollation, if the element type is collatable) determines the sort ordering used by the range - type. rngcanonical is used when the element type is - discrete. rngsubdiff is optional but should be supplied to + type. rngcanonical is used when the element type is + discrete. rngsubdiff is optional but should be supplied to improve performance of GiST indexes on the range type. @@ -5655,7 +5655,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_rewrite</> Columns + <structname>pg_rewrite</structname> Columns @@ -5694,9 +5694,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - Event type that the rule is for: 1 = SELECT, 2 = - UPDATE, 3 = INSERT, 4 = - DELETE + Event type that the rule is for: 1 = SELECT, 2 = + UPDATE, 3 = INSERT, 4 = + DELETE @@ -5707,10 +5707,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< Controls in which modes the rule fires. - O = rule fires in origin and local modes, - D = rule is disabled, - R = rule fires in replica mode, - A = rule fires always. + O = rule fires in origin and local modes, + D = rule is disabled, + R = rule fires in replica mode, + A = rule fires always. @@ -5809,7 +5809,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a security label on a table column, this is the column number (the - objoid and classoid refer to + objoid and classoid refer to the table itself). For all other object types, this column is zero. @@ -5847,7 +5847,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_sequence</> Columns + <structname>pg_sequence</structname> Columns @@ -5864,7 +5864,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< seqrelid oid pg_class.oid - The OID of the pg_class entry for this sequence + The OID of the pg_class entry for this sequence @@ -5949,7 +5949,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_shdepend</> Columns + <structname>pg_shdepend</structname> Columns @@ -5990,7 +5990,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a table column, this is the column number (the - objid and classid refer to the + objid and classid refer to the table itself). For all other object types, this column is zero. @@ -6027,11 +6027,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< In all cases, a pg_shdepend entry indicates that the referenced object cannot be dropped without also dropping the dependent object. However, there are several subflavors identified by - deptype: + deptype: - SHARED_DEPENDENCY_OWNER (o) + SHARED_DEPENDENCY_OWNER (o) The referenced object (which must be a role) is the owner of the @@ -6041,20 +6041,20 @@ SCRAM-SHA-256$<iteration count>:<salt>< - SHARED_DEPENDENCY_ACL (a) + SHARED_DEPENDENCY_ACL (a) The referenced object (which must be a role) is mentioned in the ACL (access control list, i.e., privileges list) of the - dependent object. (A SHARED_DEPENDENCY_ACL entry is + dependent object. (A SHARED_DEPENDENCY_ACL entry is not made for the owner of the object, since the owner will have - a SHARED_DEPENDENCY_OWNER entry anyway.) + a SHARED_DEPENDENCY_OWNER entry anyway.) - SHARED_DEPENDENCY_POLICY (r) + SHARED_DEPENDENCY_POLICY (r) The referenced object (which must be a role) is mentioned as the @@ -6064,7 +6064,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - SHARED_DEPENDENCY_PIN (p) + SHARED_DEPENDENCY_PIN (p) There is no dependent object; this type of entry is a signal @@ -6111,7 +6111,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_shdescription</> Columns + <structname>pg_shdescription</structname> Columns @@ -6235,16 +6235,16 @@ SCRAM-SHA-256$<iteration count>:<salt>< - Normally there is one entry, with stainherit = - false, for each table column that has been analyzed. + Normally there is one entry, with stainherit = + false, for each table column that has been analyzed. If the table has inheritance children, a second entry with - stainherit = true is also created. This row + stainherit = true is also created. This row represents the column's statistics over the inheritance tree, i.e., statistics for the data you'd see with - SELECT column FROM table*, - whereas the stainherit = false row represents + SELECT column FROM table*, + whereas the stainherit = false row represents the results of - SELECT column FROM ONLY table. + SELECT column FROM ONLY table. @@ -6254,7 +6254,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< references the index. No entry is made for an ordinary non-expression index column, however, since it would be redundant with the entry for the underlying table column. Currently, entries for index expressions - always have stainherit = false. + always have stainherit = false. @@ -6281,7 +6281,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_statistic</> Columns + <structname>pg_statistic</structname> Columns @@ -6339,56 +6339,56 @@ SCRAM-SHA-256$<iteration count>:<salt>< A value less than zero is the negative of a multiplier for the number of rows in the table; for example, a column in which about 80% of the values are nonnull and each nonnull value appears about twice on - average could be represented by stadistinct = -0.4. + average could be represented by stadistinct = -0.4. A zero value means the number of distinct values is unknown. - stakindN + stakindN int2 A code number indicating the kind of statistics stored in the - Nth slot of the + Nth slot of the pg_statistic row. - staopN + staopN oid pg_operator.oid An operator used to derive the statistics stored in the - Nth slot. For example, a + Nth slot. For example, a histogram slot would show the < operator that defines the sort order of the data. - stanumbersN + stanumbersN float4[] Numerical statistics of the appropriate kind for the - Nth slot, or null if the slot + Nth slot, or null if the slot kind does not involve numerical values - stavaluesN + stavaluesN anyarray Column data values of the appropriate kind for the - Nth slot, or null if the slot + Nth slot, or null if the slot kind does not store any data values. Each array's element values are actually of the specific column's data type, or a related type such as an array's element type, so there is no way to define - these columns' type more specifically than anyarray. + these columns' type more specifically than anyarray. @@ -6407,12 +6407,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_statistic_ext holds extended planner statistics. - Each row in this catalog corresponds to a statistics object + Each row in this catalog corresponds to a statistics object created with .
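A brief sketch of how such a row is created and inspected, using a hypothetical table t with correlated columns a and b:

CREATE STATISTICS s1 (dependencies) ON a, b FROM t;   -- hypothetical table and columns
ANALYZE t;

SELECT stxname, stxkeys, stxkind, stxdependencies
FROM pg_statistic_ext
WHERE stxname = 's1';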
- <structname>pg_statistic_ext</> Columns + <structname>pg_statistic_ext</structname> Columns @@ -6485,7 +6485,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_ndistinct - N-distinct counts, serialized as pg_ndistinct type + N-distinct counts, serialized as pg_ndistinct type @@ -6495,7 +6495,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Functional dependency statistics, serialized - as pg_dependencies type + as pg_dependencies type @@ -6507,7 +6507,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< The stxkind field is filled at creation of the statistics object, indicating which statistic type(s) are desired. The fields after it are initially NULL and are filled only when the - corresponding statistic has been computed by ANALYZE. + corresponding statistic has been computed by ANALYZE. @@ -6677,10 +6677,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< State code: - i = initialize, - d = data is being copied, - s = synchronized, - r = ready (normal replication) + i = initialize, + d = data is being copied, + s = synchronized, + r = ready (normal replication) @@ -6689,7 +6689,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_lsn - End LSN for s and r states. + End LSN for s and r states. @@ -6718,7 +6718,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_tablespace</> Columns + <structname>pg_tablespace</structname> Columns @@ -6769,7 +6769,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Tablespace-level options, as keyword=value strings + Tablespace-level options, as keyword=value strings @@ -6792,7 +6792,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_transform</> Columns + <structname>pg_transform</structname> Columns @@ -6861,7 +6861,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_trigger</> Columns + <structname>pg_trigger</structname> Columns @@ -6916,10 +6916,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< Controls in which modes the trigger fires. - O = trigger fires in origin and local modes, - D = trigger is disabled, - R = trigger fires in replica mode, - A = trigger fires always. + O = trigger fires in origin and local modes, + D = trigger is disabled, + R = trigger fires in replica mode, + A = trigger fires always. @@ -6928,7 +6928,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< bool True if trigger is internally generated (usually, to enforce - the constraint identified by tgconstraint) + the constraint identified by tgconstraint) @@ -6950,7 +6950,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< tgconstraint oid pg_constraint.oid - The pg_constraint entry associated with the trigger, if any + The pg_constraint entry associated with the trigger, if any @@ -6994,7 +6994,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_node_tree Expression tree (in nodeToString() - representation) for the trigger's WHEN condition, or null + representation) for the trigger's WHEN condition, or null if none @@ -7002,7 +7002,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< tgoldtable name - REFERENCING clause name for OLD TABLE, + REFERENCING clause name for OLD TABLE, or null if none @@ -7010,7 +7010,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< tgnewtable name - REFERENCING clause name for NEW TABLE, + REFERENCING clause name for NEW TABLE, or null if none @@ -7019,18 +7019,18 @@ SCRAM-SHA-256$<iteration count>:<salt>< Currently, column-specific triggering is supported only for - UPDATE events, and so tgattr is relevant + UPDATE events, and so tgattr is relevant only for that event type. tgtype might contain bits for other event types as well, but those are presumed - to be table-wide regardless of what is in tgattr. + to be table-wide regardless of what is in tgattr. - When tgconstraint is nonzero, - tgconstrrelid, tgconstrindid, - tgdeferrable, and tginitdeferred are - largely redundant with the referenced pg_constraint entry. + When tgconstraint is nonzero, + tgconstrrelid, tgconstrindid, + tgdeferrable, and tginitdeferred are + largely redundant with the referenced pg_constraint entry. However, it is possible for a non-deferrable trigger to be associated with a deferrable constraint: foreign key constraints can have some deferrable and some non-deferrable triggers. @@ -7070,7 +7070,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_ts_config</> Columns + <structname>pg_ts_config</structname> Columns @@ -7145,7 +7145,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_ts_config_map</> Columns + <structname>pg_ts_config_map</structname> Columns @@ -7162,7 +7162,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< mapcfg oid pg_ts_config.oid - The OID of the pg_ts_config entry owning this map entry + The OID of the pg_ts_config entry owning this map entry @@ -7177,7 +7177,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< integer Order in which to consult this entry (lower - mapseqnos first) + mapseqnos first) @@ -7206,7 +7206,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< needed; the dictionary itself provides values for the user-settable parameters supported by the template. This division of labor allows dictionaries to be created by unprivileged users. The parameters - are specified by a text string dictinitoption, + are specified by a text string dictinitoption, whose format and meaning vary depending on the template. @@ -7216,7 +7216,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
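A short sketch showing how dictinitoption is populated (the dictionary name here is arbitrary):

CREATE TEXT SEARCH DICTIONARY english_stem_test (
    TEMPLATE = snowball,
    Language = english
);

SELECT dictname, dictinitoption
FROM pg_ts_dict
WHERE dictname = 'english_stem_test';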
- <structname>pg_ts_dict</> Columns + <structname>pg_ts_dict</structname> Columns @@ -7299,7 +7299,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_ts_parser</> Columns + <structname>pg_ts_parser</structname> Columns @@ -7396,7 +7396,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_ts_template</> Columns + <structname>pg_ts_template</structname> Columns @@ -7470,7 +7470,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_type</> Columns + <structname>pg_type</structname> Columns @@ -7521,7 +7521,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a fixed-size type, typlen is the number of bytes in the internal representation of the type. But for a variable-length type, typlen is negative. - -1 indicates a varlena type (one that has a length word), + -1 indicates a varlena type (one that has a length word), -2 indicates a null-terminated C string. @@ -7566,7 +7566,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< typcategory is an arbitrary classification of data types that is used by the parser to determine which implicit - casts should be preferred. + casts should be preferred. See . @@ -7711,7 +7711,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< typalign is the alignment required when storing a value of this type. It applies to storage on disk as well as most representations of the value inside - PostgreSQL. + PostgreSQL. When multiple values are stored consecutively, such as in the representation of a complete row on disk, padding is inserted before a datum of this type so that it begins on the @@ -7723,16 +7723,16 @@ SCRAM-SHA-256$<iteration count>:<salt>< Possible values are: - c = char alignment, i.e., no alignment needed. + c = char alignment, i.e., no alignment needed. - s = short alignment (2 bytes on most machines). + s = short alignment (2 bytes on most machines). - i = int alignment (4 bytes on most machines). + i = int alignment (4 bytes on most machines). - d = double alignment (8 bytes on many machines, but by no means all). + d = double alignment (8 bytes on many machines, but by no means all). @@ -7757,24 +7757,24 @@ SCRAM-SHA-256$<iteration count>:<salt>< Possible values are - p: Value must always be stored plain. + p: Value must always be stored plain. - e: Value can be stored in a secondary + e: Value can be stored in a secondary relation (if relation has one, see pg_class.reltoastrelid). - m: Value can be stored compressed inline. + m: Value can be stored compressed inline. - x: Value can be stored compressed inline or stored in secondary storage. + x: Value can be stored compressed inline or stored in secondary storage. - Note that m columns can also be moved out to secondary - storage, but only as a last resort (e and x columns are + Note that m columns can also be moved out to secondary + storage, but only as a last resort (e and x columns are moved first). @@ -7805,9 +7805,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< int4 - Domains use typtypmod to record the typmod + Domains use typtypmod to record the typmod to be applied to their base type (-1 if base type does not use a - typmod). -1 if this type is not a domain. + typmod). -1 if this type is not a domain. @@ -7817,7 +7817,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< typndims is the number of array dimensions - for a domain over an array (that is, typbasetype is + for a domain over an array (that is, typbasetype is an array type). Zero for types other than domains over array types. @@ -7842,7 +7842,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_node_tree - If typdefaultbin is not null, it is the + If typdefaultbin is not null, it is the nodeToString() representation of a default expression for the type. This is only used for domains. @@ -7854,12 +7854,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< text - typdefault is null if the type has no associated - default value. If typdefaultbin is not null, - typdefault must contain a human-readable version of the - default expression represented by typdefaultbin. 
If - typdefaultbin is null and typdefault is - not, then typdefault is the external representation of + typdefault is null if the type has no associated + default value. If typdefaultbin is not null, + typdefault must contain a human-readable version of the + default expression represented by typdefaultbin. If + typdefaultbin is null and typdefault is + not, then typdefault is the external representation of the type's default value, which can be fed to the type's input converter to produce a constant. @@ -7882,13 +7882,13 @@ SCRAM-SHA-256$<iteration count>:<salt>< lists the system-defined values - of typcategory. Any future additions to this list will + of typcategory. Any future additions to this list will also be upper-case ASCII letters. All other ASCII characters are reserved for user-defined categories.
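For example, the category assignments can be inspected directly in pg_type; a minimal sketch listing the numeric category:

SELECT typname, typcategory, typispreferred
FROM pg_type
WHERE typcategory = 'N'
ORDER BY typname;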
- <structfield>typcategory</> Codes + <structfield>typcategory</structfield> Codes @@ -7957,7 +7957,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< X - unknown type + unknown type @@ -7982,7 +7982,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_user_mapping</> Columns + <structname>pg_user_mapping</structname> Columns @@ -8023,7 +8023,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - User mapping specific options, as keyword=value strings + User mapping specific options, as keyword=value strings @@ -8241,7 +8241,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_available_extensions</> Columns + <structname>pg_available_extensions</structname> Columns @@ -8303,7 +8303,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_available_extension_versions</> Columns + <structname>pg_available_extension_versions</structname> Columns @@ -8385,11 +8385,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< The view pg_config describes the compile-time configuration parameters of the currently installed - version of PostgreSQL. It is intended, for example, to + version of PostgreSQL. It is intended, for example, to be used by software packages that want to interface to - PostgreSQL to facilitate finding the required header + PostgreSQL to facilitate finding the required header files and libraries. It provides the same basic information as the - PostgreSQL client + PostgreSQL client application. @@ -8399,7 +8399,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_config</> Columns + <structname>pg_config</structname> Columns @@ -8470,15 +8470,15 @@ SCRAM-SHA-256$<iteration count>:<salt>< Cursors are used internally to implement some of the components - of PostgreSQL, such as procedural languages. - Therefore, the pg_cursors view might include cursors + of PostgreSQL, such as procedural languages. + Therefore, the pg_cursors view might include cursors that have not been explicitly created by the user.
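A minimal sketch showing an explicitly declared cursor appearing in the view (the cursor name is arbitrary):

BEGIN;
DECLARE c_demo CURSOR FOR SELECT * FROM pg_class;

SELECT name, statement, is_holdable, is_scrollable
FROM pg_cursors;

COMMIT;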
- <structname>pg_cursors</> Columns + <structname>pg_cursors</structname> Columns @@ -8526,7 +8526,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< is_scrollable boolean - true if the cursor is scrollable (that is, it + true if the cursor is scrollable (that is, it allows rows to be retrieved in a nonsequential manner); false otherwise @@ -8557,16 +8557,16 @@ SCRAM-SHA-256$<iteration count>:<salt>< The view pg_file_settings provides a summary of the contents of the server's configuration file(s). A row appears in - this view for each name = value entry appearing in the files, + this view for each name = value entry appearing in the files, with annotations indicating whether the value could be applied successfully. Additional row(s) may appear for problems not linked to - a name = value entry, such as syntax errors in the files. + a name = value entry, such as syntax errors in the files. This view is helpful for checking whether planned changes in the configuration files will work, or for diagnosing a previous failure. - Note that this view reports on the current contents of the + Note that this view reports on the current contents of the files, not on what was last applied by the server. (The pg_settings view is usually sufficient to determine that.) @@ -8578,7 +8578,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_file_settings</> Columns + <structname>pg_file_settings</structname> Columns @@ -8604,7 +8604,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< seqno integer - Order in which the entries are processed (1..n) + Order in which the entries are processed (1..n) name @@ -8634,14 +8634,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< If the configuration file contains syntax errors or invalid parameter names, the server will not attempt to apply any settings from it, and - therefore all the applied fields will read as false. + therefore all the applied fields will read as false. In such a case there will be one or more rows with non-null error fields indicating the problem(s). Otherwise, individual settings will be applied if possible. If an individual setting cannot be applied (e.g., invalid value, or the setting cannot be changed after server start) it will have an appropriate message in the error field. Another way that - an entry might have applied = false is that it is + an entry might have applied = false is that it is overridden by a later entry for the same parameter name; this case is not considered an error so nothing appears in the error field. @@ -8666,12 +8666,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< compatibility: it emulates a catalog that existed in PostgreSQL before version 8.1. It shows the names and members of all roles that are marked as not - rolcanlogin, which is an approximation to the set + rolcanlogin, which is an approximation to the set of roles that are being used as groups.
- <structname>pg_group</> Columns + <structname>pg_group</structname> Columns @@ -8720,7 +8720,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< The view pg_hba_file_rules provides a summary of the contents of the client authentication configuration - file, pg_hba.conf. A row appears in this view for each + file, pg_hba.conf. A row appears in this view for each non-empty, non-comment line in the file, with annotations indicating whether the rule could be applied successfully. @@ -8728,7 +8728,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< This view can be helpful for checking whether planned changes in the authentication configuration file will work, or for diagnosing a previous - failure. Note that this view reports on the current contents + failure. Note that this view reports on the current contents of the file, not on what was last loaded by the server. @@ -8738,7 +8738,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_hba_file_rules</> Columns + <structname>pg_hba_file_rules</structname> Columns @@ -8753,7 +8753,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< line_number integer - Line number of this rule in pg_hba.conf + Line number of this rule in pg_hba.conf @@ -8809,7 +8809,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Usually, a row reflecting an incorrect entry will have values for only - the line_number and error fields. + the line_number and error fields. @@ -8831,7 +8831,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_indexes</> Columns + <structname>pg_indexes</structname> Columns @@ -8912,12 +8912,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< in the same way as in pg_description or pg_depend). Also, the right to extend a relation is represented as a separate lockable object. - Also, advisory locks can be taken on numbers that have + Also, advisory locks can be taken on numbers that have user-defined meanings.
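For example (a sketch; the key values are arbitrary), an advisory lock taken in the current session can be observed like this:

SELECT pg_advisory_lock(12345);        -- single bigint key
SELECT pg_advisory_lock(1, 2);         -- two-integer key

SELECT locktype, classid, objid, objsubid, mode, granted
FROM pg_locks
WHERE locktype = 'advisory';

SELECT pg_advisory_unlock_all();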
- <structname>pg_locks</> Columns + <structname>pg_locks</structname> Columns @@ -8935,15 +8935,15 @@ SCRAM-SHA-256$<iteration count>:<salt>< Type of the lockable object: - relation, - extend, - page, - tuple, - transactionid, - virtualxid, - object, - userlock, or - advisory + relation, + extend, + page, + tuple, + transactionid, + virtualxid, + object, + userlock, or + advisory @@ -9025,7 +9025,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Column number targeted by the lock (the - classid and objid refer to the + classid and objid refer to the table itself), or zero if the target is some other general database object, or null if the target is not a general database object @@ -9107,23 +9107,23 @@ SCRAM-SHA-256$<iteration count>:<salt>< Advisory locks can be acquired on keys consisting of either a single bigint value or two integer values. A bigint key is displayed with its - high-order half in the classid column, its low-order half - in the objid column, and objsubid equal + high-order half in the classid column, its low-order half + in the objid column, and objsubid equal to 1. The original bigint value can be reassembled with the expression (classid::bigint << 32) | objid::bigint. Integer keys are displayed with the first key in the - classid column, the second key in the objid - column, and objsubid equal to 2. The actual meaning of + classid column, the second key in the objid + column, and objsubid equal to 2. The actual meaning of the keys is up to the user. Advisory locks are local to each database, - so the database column is meaningful for an advisory lock. + so the database column is meaningful for an advisory lock. pg_locks provides a global view of all locks in the database cluster, not only those relevant to the current database. Although its relation column can be joined - against pg_class.oid to identify locked + against pg_class.oid to identify locked relations, this will only work correctly for relations in the current database (those for which the database column is either the current database's OID or zero). @@ -9141,7 +9141,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid; Also, if you are using prepared transactions, the - virtualtransaction column can be joined to the + virtualtransaction column can be joined to the transaction column of the pg_prepared_xacts view to get more information on prepared transactions that hold locks. @@ -9163,7 +9163,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx information about which processes are ahead of which others in lock wait queues, nor information about which processes are parallel workers running on behalf of which other client sessions. It is better to use - the pg_blocking_pids() function + the pg_blocking_pids() function (see ) to identify which process(es) a waiting process is blocked behind. @@ -9172,10 +9172,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The pg_locks view displays data from both the regular lock manager and the predicate lock manager, which are separate systems; in addition, the regular lock manager subdivides its - locks into regular and fast-path locks. + locks into regular and fast-path locks. This data is not guaranteed to be entirely consistent. 
When the view is queried, - data on fast-path locks (with fastpath = true) + data on fast-path locks (with fastpath = true) is gathered from each backend one at a time, without freezing the state of the entire lock manager, so it is possible for locks to be taken or released while information is gathered. Note, however, that these locks are @@ -9218,7 +9218,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_matviews</> Columns + <structname>pg_matviews</structname> Columns @@ -9291,7 +9291,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_policies</> Columns + <structname>pg_policies</structname> Columns @@ -9381,7 +9381,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_prepared_statements</> Columns + <structname>pg_prepared_statements</structname> Columns @@ -9467,7 +9467,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_prepared_xacts</> Columns + <structname>pg_prepared_xacts</structname> Columns @@ -9706,7 +9706,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx slot_typetext - The slot type - physical or logical + The slot type - physical or logical @@ -9787,7 +9787,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The address (LSN) up to which the logical slot's consumer has confirmed receiving data. Data older than this is - not available anymore. NULL for physical slots. + not available anymore. NULL for physical slots. @@ -9817,7 +9817,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_roles</> Columns + <structname>pg_roles</structname> Columns @@ -9900,7 +9900,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx rolpasswordtext - Not the password (always reads as ********) + Not the password (always reads as ********) @@ -9953,7 +9953,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_rules</> Columns + <structname>pg_rules</structname> Columns @@ -9994,9 +9994,9 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- The pg_rules view excludes the ON SELECT rules + The pg_rules view excludes the ON SELECT rules of views and materialized views; those can be seen in - pg_views and pg_matviews. + pg_views and pg_matviews. @@ -10011,11 +10011,11 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The view pg_seclabels provides information about security labels. It is an easier-to-query version of the - pg_seclabel catalog. + pg_seclabel catalog. - <structname>pg_seclabels</> Columns + <structname>pg_seclabels</structname> Columns @@ -10045,7 +10045,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx For a security label on a table column, this is the column number (the - objoid and classoid refer to + objoid and classoid refer to the table itself). For all other object types, this column is zero. @@ -10105,7 +10105,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_sequences</> Columns + <structname>pg_sequences</structname> Columns @@ -10206,12 +10206,12 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx interface to the and commands. It also provides access to some facts about each parameter that are - not directly available from SHOW, such as minimum and + not directly available from SHOW, such as minimum and maximum values.
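A minimal sketch retrieving those extra facts for a single parameter (work_mem is just an example):

SELECT name, setting, unit, vartype, context, min_val, max_val
FROM pg_settings
WHERE name = 'work_mem';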
- <structname>pg_settings</> Columns + <structname>pg_settings</structname> Columns @@ -10260,8 +10260,8 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx vartype text - Parameter type (bool, enum, - integer, real, or string) + Parameter type (bool, enum, + integer, real, or string) @@ -10306,7 +10306,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx values set from sources other than configuration files, or when examined by a user who is neither a superuser or a member of pg_read_all_settings); helpful when using - include directives in configuration files + include directives in configuration files sourceline @@ -10384,7 +10384,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx Changes to these settings can be made in postgresql.conf without restarting the server. They can also be set for a particular session in the connection request - packet (for example, via libpq's PGOPTIONS + packet (for example, via libpq's PGOPTIONS environment variable), but only if the connecting user is a superuser. However, these settings never change in a session after it is started. If you change them in postgresql.conf, send a @@ -10402,7 +10402,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx Changes to these settings can be made in postgresql.conf without restarting the server. They can also be set for a particular session in the connection request - packet (for example, via libpq's PGOPTIONS + packet (for example, via libpq's PGOPTIONS environment variable); any user can make such a change for their session. However, these settings never change in a session after it is started. If you change them in postgresql.conf, send a @@ -10418,10 +10418,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx These settings can be set from postgresql.conf, - or within a session via the SET command; but only superusers - can change them via SET. Changes in + or within a session via the SET command; but only superusers + can change them via SET. Changes in postgresql.conf will affect existing sessions - only if no session-local value has been established with SET. + only if no session-local value has been established with SET. @@ -10431,10 +10431,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx These settings can be set from postgresql.conf, - or within a session via the SET command. Any user is + or within a session via the SET command. Any user is allowed to change their session-local value. Changes in postgresql.conf will affect existing sessions - only if no session-local value has been established with SET. + only if no session-local value has been established with SET. @@ -10473,7 +10473,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx compatibility: it emulates a catalog that existed in PostgreSQL before version 8.1. It shows properties of all roles that are marked as - rolcanlogin in + rolcanlogin in pg_authid. @@ -10486,7 +10486,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_shadow</> Columns + <structname>pg_shadow</structname> Columns @@ -10600,7 +10600,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_stats</> Columns + <structname>pg_stats</structname> Columns @@ -10663,7 +10663,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx If greater than zero, the estimated number of distinct values in the column. If less than zero, the negative of the number of distinct values divided by the number of rows. (The negated form is used when - ANALYZE believes that the number of distinct values is + ANALYZE believes that the number of distinct values is likely to increase as the table grows; the positive form is used when the column seems to have a fixed number of possible values.) For example, -1 indicates a unique column in which the number of distinct @@ -10699,10 +10699,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx A list of values that divide the column's values into groups of approximately equal population. The values in - most_common_vals, if present, are omitted from this + most_common_vals, if present, are omitted from this histogram calculation. (This column is null if the column data type - does not have a < operator or if the - most_common_vals list accounts for the entire + does not have a < operator or if the + most_common_vals list accounts for the entire population.) @@ -10717,7 +10717,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx When the value is near -1 or +1, an index scan on the column will be estimated to be cheaper than when it is near zero, due to reduction of random access to the disk. (This column is null if the column data - type does not have a < operator.) + type does not have a < operator.) @@ -10761,7 +10761,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The maximum number of entries in the array fields can be controlled on a - column-by-column basis using the ALTER TABLE SET STATISTICS + column-by-column basis using the ALTER TABLE SET STATISTICS command, or globally by setting the run-time parameter. @@ -10781,7 +10781,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_tables</> Columns + <structname>pg_tables</structname> Columns @@ -10862,7 +10862,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_timezone_abbrevs</> Columns + <structname>pg_timezone_abbrevs</structname> Columns @@ -10910,7 +10910,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The view pg_timezone_names provides a list - of time zone names that are recognized by SET TIMEZONE, + of time zone names that are recognized by SET TIMEZONE, along with their associated abbreviations, UTC offsets, and daylight-savings status. (Technically, PostgreSQL does not use UTC because leap @@ -10919,11 +10919,11 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx linkend="view-pg-timezone-abbrevs">pg_timezone_abbrevs, many of these names imply a set of daylight-savings transition date rules. Therefore, the associated information changes across local DST boundaries. The displayed information is computed based on the current - value of CURRENT_TIMESTAMP. + value of CURRENT_TIMESTAMP.
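For example (a sketch; the name pattern is arbitrary):

SELECT name, abbrev, utc_offset, is_dst
FROM pg_timezone_names
WHERE name LIKE 'Europe/%'
ORDER BY name
LIMIT 5;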
- <structname>pg_timezone_names</> Columns + <structname>pg_timezone_names</structname> Columns @@ -10976,7 +10976,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_user</> Columns + <structname>pg_user</structname> Columns @@ -11032,7 +11032,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx passwd text - Not the password (always reads as ********) + Not the password (always reads as ********) @@ -11069,7 +11069,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_user_mappings</> Columns + <structname>pg_user_mappings</structname> Columns @@ -11126,7 +11126,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx text[] - User mapping specific options, as keyword=value strings + User mapping specific options, as keyword=value strings @@ -11141,12 +11141,12 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx current user is the user being mapped, and owns the server or - holds USAGE privilege on it + holds USAGE privilege on it - current user is the server owner and mapping is for PUBLIC + current user is the server owner and mapping is for PUBLIC @@ -11173,7 +11173,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_views</> Columns + <structname>pg_views</structname> Columns diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index 63f7de5b43..3874a3f1ea 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -35,12 +35,12 @@ Locale Support - locale + locale - Locale support refers to an application respecting + Locale support refers to an application respecting cultural preferences regarding alphabets, sorting, number - formatting, etc. PostgreSQL uses the standard ISO + formatting, etc. PostgreSQL uses the standard ISO C and POSIX locale facilities provided by the server operating system. For additional information refer to the documentation of your system. @@ -67,14 +67,14 @@ initdb --locale=sv_SE This example for Unix systems sets the locale to Swedish - (sv) as spoken - in Sweden (SE). Other possibilities might include - en_US (U.S. English) and fr_CA (French + (sv) as spoken + in Sweden (SE). Other possibilities might include + en_US (U.S. English) and fr_CA (French Canadian). If more than one character set can be used for a locale then the specifications can take the form - language_territory.codeset. For example, - fr_BE.UTF-8 represents the French language (fr) as - spoken in Belgium (BE), with a UTF-8 character set + language_territory.codeset. For example, + fr_BE.UTF-8 represents the French language (fr) as + spoken in Belgium (BE), with a UTF-8 character set encoding. @@ -82,9 +82,9 @@ initdb --locale=sv_SE What locales are available on your system under what names depends on what was provided by the operating system vendor and what was installed. On most Unix systems, the command - locale -a will provide a list of available locales. - Windows uses more verbose locale names, such as German_Germany - or Swedish_Sweden.1252, but the principles are the same. + locale -a will provide a list of available locales. + Windows uses more verbose locale names, such as German_Germany + or Swedish_Sweden.1252, but the principles are the same. @@ -97,28 +97,28 @@ initdb --locale=sv_SE - LC_COLLATE - String sort order + LC_COLLATE + String sort order - LC_CTYPE - Character classification (What is a letter? Its upper-case equivalent?) + LC_CTYPE + Character classification (What is a letter? Its upper-case equivalent?) - LC_MESSAGES - Language of messages + LC_MESSAGES + Language of messages - LC_MONETARY - Formatting of currency amounts + LC_MONETARY + Formatting of currency amounts - LC_NUMERIC - Formatting of numbers + LC_NUMERIC + Formatting of numbers - LC_TIME - Formatting of dates and times + LC_TIME + Formatting of dates and times @@ -133,8 +133,8 @@ initdb --locale=sv_SE If you want the system to behave as if it had no locale support, - use the special locale name C, or equivalently - POSIX. + use the special locale name C, or equivalently + POSIX. @@ -192,14 +192,14 @@ initdb --locale=sv_SE settings for the purpose of setting the language of messages. If in doubt, please refer to the documentation of your operating system, in particular the documentation about - gettext. + gettext. To enable messages to be translated to the user's preferred language, NLS must have been selected at build time - (configure --enable-nls). All other locale support is + (configure --enable-nls). All other locale support is built in automatically. 
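The per-category settings described above can also be inspected from SQL; a minimal sketch:

SHOW lc_collate;

SELECT name, setting, context
FROM pg_settings
WHERE name ~ '^lc_';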
@@ -213,63 +213,63 @@ initdb --locale=sv_SE - Sort order in queries using ORDER BY or the standard + Sort order in queries using ORDER BY or the standard comparison operators on textual data - ORDER BYand locales + ORDER BYand locales - The upper, lower, and initcap + The upper, lower, and initcap functions - upperand locales - lowerand locales + upperand locales + lowerand locales - Pattern matching operators (LIKE, SIMILAR TO, + Pattern matching operators (LIKE, SIMILAR TO, and POSIX-style regular expressions); locales affect both case insensitive matching and the classification of characters by character-class regular expressions - LIKEand locales - regular expressionsand locales + LIKEand locales + regular expressionsand locales - The to_char family of functions - to_charand locales + The to_char family of functions + to_charand locales - The ability to use indexes with LIKE clauses + The ability to use indexes with LIKE clauses - The drawback of using locales other than C or - POSIX in PostgreSQL is its performance + The drawback of using locales other than C or + POSIX in PostgreSQL is its performance impact. It slows character handling and prevents ordinary indexes - from being used by LIKE. For this reason use locales + from being used by LIKE. For this reason use locales only if you actually need them. - As a workaround to allow PostgreSQL to use indexes - with LIKE clauses under a non-C locale, several custom + As a workaround to allow PostgreSQL to use indexes + with LIKE clauses under a non-C locale, several custom operator classes exist. These allow the creation of an index that performs a strict character-by-character comparison, ignoring locale comparison rules. Refer to for more information. Another approach is to create indexes using - the C collation, as discussed in + the C collation, as discussed in . @@ -286,20 +286,20 @@ initdb --locale=sv_SE - Check that PostgreSQL is actually using the locale - that you think it is. The LC_COLLATE and LC_CTYPE + Check that PostgreSQL is actually using the locale + that you think it is. The LC_COLLATE and LC_CTYPE settings are determined when a database is created, and cannot be changed except by creating a new database. Other locale - settings including LC_MESSAGES and LC_MONETARY + settings including LC_MESSAGES and LC_MONETARY are initially determined by the environment the server is started in, but can be changed on-the-fly. You can check the active locale - settings using the SHOW command. + settings using the SHOW command. - The directory src/test/locale in the source + The directory src/test/locale in the source distribution contains a test suite for - PostgreSQL's locale support. + PostgreSQL's locale support. @@ -313,7 +313,7 @@ initdb --locale=sv_SE Maintaining catalogs of message translations requires the on-going efforts of many volunteers that want to see - PostgreSQL speak their preferred language well. + PostgreSQL speak their preferred language well. If messages in your language are currently not available or not fully translated, your assistance would be appreciated. If you want to help, refer to or write to the developers' @@ -326,7 +326,7 @@ initdb --locale=sv_SE Collation Support - collation + collation The collation feature allows specifying the sort order and character @@ -370,9 +370,9 @@ initdb --locale=sv_SE function or operator call is derived from the arguments, as described below. 
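Returning to the LIKE workaround mentioned above, a minimal sketch of the two index styles that let LIKE use an index under a non-C locale; the column a and table test1 follow the nearby examples, and the index names are arbitrary:

    CREATE INDEX test1_a_c_idx       ON test1 (a COLLATE "C");
    CREATE INDEX test1_a_pattern_idx ON test1 (a text_pattern_ops);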
In addition to comparison operators, collations are taken into account by functions that convert between lower and upper case - letters, such as lower, upper, and - initcap; by pattern matching operators; and by - to_char and related functions. + letters, such as lower, upper, and + initcap; by pattern matching operators; and by + to_char and related functions. @@ -452,7 +452,7 @@ SELECT a < ('foo' COLLATE "fr_FR") FROM test1; SELECT a < b FROM test1; the parser cannot determine which collation to apply, since the - a and b columns have conflicting + a and b columns have conflicting implicit collations. Since the < operator does need to know which collation to use, this will result in an error. The error can be resolved by attaching an explicit collation @@ -468,7 +468,7 @@ SELECT a COLLATE "de_DE" < b FROM test1; SELECT a || b FROM test1; - does not result in an error, because the || operator + does not result in an error, because the || operator does not care about collations: its result is the same regardless of the collation. @@ -486,8 +486,8 @@ SELECT * FROM test1 ORDER BY a || 'foo'; SELECT * FROM test1 ORDER BY a || b; - results in an error, because even though the || operator - doesn't need to know a collation, the ORDER BY clause does. + results in an error, because even though the || operator + doesn't need to know a collation, the ORDER BY clause does. As before, the conflict can be resolved with an explicit collation specifier: @@ -508,7 +508,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; operating system C library. These are the locales that most tools provided by the operating system use. Another provider is icu, which uses the external - ICUICU library. ICU locales can only be + ICUICU library. ICU locales can only be used if support for ICU was configured when PostgreSQL was built. @@ -541,14 +541,14 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; Standard Collations - On all platforms, the collations named default, - C, and POSIX are available. Additional + On all platforms, the collations named default, + C, and POSIX are available. Additional collations may be available depending on operating system support. - The default collation selects the LC_COLLATE + The default collation selects the LC_COLLATE and LC_CTYPE values specified at database creation time. - The C and POSIX collations both specify - traditional C behavior, in which only the ASCII letters - A through Z + The C and POSIX collations both specify + traditional C behavior, in which only the ASCII letters + A through Z are treated as letters, and sorting is done strictly by character code byte values. @@ -565,7 +565,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; If the operating system provides support for using multiple locales - within a single program (newlocale and related functions), + within a single program (newlocale and related functions), or if support for ICU is configured, then when a database cluster is initialized, initdb populates the system catalog pg_collation with @@ -618,8 +618,8 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; within a given database even though it would not be unique globally. Use of the stripped collation names is recommended, since it will make one less thing you need to change if you decide to change to - another database encoding. Note however that the default, - C, and POSIX collations can be used regardless of + another database encoding. 
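The collations that initdb created, including the stripped names discussed here, can be listed from the catalog; a simple query (the exact column set may differ slightly between versions):

    SELECT collname, collprovider, collcollate
    FROM pg_collation
    ORDER BY collname;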
Note however that the default, + C, and POSIX collations can be used regardless of the database encoding. @@ -630,7 +630,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; - will draw an error even though the C and POSIX + will draw an error even though the C and POSIX collations have identical behaviors. Mixing stripped and non-stripped collation names is therefore not recommended. @@ -691,7 +691,7 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; database encoding is one of these, ICU collation entries in pg_collation are ignored. Attempting to use one will draw an error along the lines of collation "de-x-icu" for - encoding "WIN874" does not exist. + encoding "WIN874" does not exist. @@ -889,30 +889,30 @@ CREATE COLLATION french FROM "fr-x-icu"; Character Set Support - character set + character set The character set support in PostgreSQL allows you to store text in a variety of character sets (also called encodings), including single-byte character sets such as the ISO 8859 series and - multiple-byte character sets such as EUC (Extended Unix + multiple-byte character sets such as EUC (Extended Unix Code), UTF-8, and Mule internal code. All supported character sets can be used transparently by clients, but a few are not supported for use within the server (that is, as a server-side encoding). The default character set is selected while initializing your PostgreSQL database - cluster using initdb. It can be overridden when you + cluster using initdb. It can be overridden when you create a database, so you can have multiple databases each with a different character set. An important restriction, however, is that each database's character set - must be compatible with the database's LC_CTYPE (character - classification) and LC_COLLATE (string sort order) locale - settings. For C or - POSIX locale, any character set is allowed, but for other + must be compatible with the database's LC_CTYPE (character + classification) and LC_COLLATE (string sort order) locale + settings. For C or + POSIX locale, any character set is allowed, but for other libc-provided locales there is only one character set that will work correctly. (On Windows, however, UTF-8 encoding can be used with any locale.) 
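The compatibility requirement between a database's encoding and its LC_COLLATE/LC_CTYPE settings can be checked per database; for example:

    SELECT datname,
           pg_encoding_to_char(encoding) AS encoding,
           datcollate, datctype
    FROM pg_database;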
@@ -954,7 +954,7 @@ CREATE COLLATION french FROM "fr-x-icu"; No No 1-2 - WIN950, Windows950 + WIN950, Windows950 EUC_CN @@ -1017,11 +1017,11 @@ CREATE COLLATION french FROM "fr-x-icu"; No No 1-2 - WIN936, Windows936 + WIN936, Windows936 ISO_8859_5 - ISO 8859-5, ECMA 113 + ISO 8859-5, ECMA 113 Latin/Cyrillic Yes Yes @@ -1030,7 +1030,7 @@ CREATE COLLATION french FROM "fr-x-icu"; ISO_8859_6 - ISO 8859-6, ECMA 114 + ISO 8859-6, ECMA 114 Latin/Arabic Yes Yes @@ -1039,7 +1039,7 @@ CREATE COLLATION french FROM "fr-x-icu"; ISO_8859_7 - ISO 8859-7, ECMA 118 + ISO 8859-7, ECMA 118 Latin/Greek Yes Yes @@ -1048,7 +1048,7 @@ CREATE COLLATION french FROM "fr-x-icu"; ISO_8859_8 - ISO 8859-8, ECMA 121 + ISO 8859-8, ECMA 121 Latin/Hebrew Yes Yes @@ -1057,7 +1057,7 @@ CREATE COLLATION french FROM "fr-x-icu"; JOHAB - JOHAB + JOHAB Korean (Hangul) No No @@ -1071,7 +1071,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - KOI8 + KOI8 KOI8U @@ -1084,57 +1084,57 @@ CREATE COLLATION french FROM "fr-x-icu"; LATIN1 - ISO 8859-1, ECMA 94 + ISO 8859-1, ECMA 94 Western European Yes Yes 1 - ISO88591 + ISO88591 LATIN2 - ISO 8859-2, ECMA 94 + ISO 8859-2, ECMA 94 Central European Yes Yes 1 - ISO88592 + ISO88592 LATIN3 - ISO 8859-3, ECMA 94 + ISO 8859-3, ECMA 94 South European Yes Yes 1 - ISO88593 + ISO88593 LATIN4 - ISO 8859-4, ECMA 94 + ISO 8859-4, ECMA 94 North European Yes Yes 1 - ISO88594 + ISO88594 LATIN5 - ISO 8859-9, ECMA 128 + ISO 8859-9, ECMA 128 Turkish Yes Yes 1 - ISO88599 + ISO88599 LATIN6 - ISO 8859-10, ECMA 144 + ISO 8859-10, ECMA 144 Nordic Yes Yes 1 - ISO885910 + ISO885910 LATIN7 @@ -1143,7 +1143,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ISO885913 + ISO885913 LATIN8 @@ -1152,7 +1152,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ISO885914 + ISO885914 LATIN9 @@ -1161,16 +1161,16 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ISO885915 + ISO885915 LATIN10 - ISO 8859-16, ASRO SR 14111 + ISO 8859-16, ASRO SR 14111 Romanian Yes No 1 - ISO885916 + ISO885916 MULE_INTERNAL @@ -1188,7 +1188,7 @@ CREATE COLLATION french FROM "fr-x-icu"; No No 1-2 - Mskanji, ShiftJIS, WIN932, Windows932 + Mskanji, ShiftJIS, WIN932, Windows932 SHIFT_JIS_2004 @@ -1202,7 +1202,7 @@ CREATE COLLATION french FROM "fr-x-icu"; SQL_ASCII unspecified (see text) - any + any Yes No 1 @@ -1215,16 +1215,16 @@ CREATE COLLATION french FROM "fr-x-icu"; No No 1-2 - WIN949, Windows949 + WIN949, Windows949 UTF8 Unicode, 8-bit - all + all Yes Yes 1-4 - Unicode + Unicode WIN866 @@ -1233,7 +1233,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ALT + ALT WIN874 @@ -1260,7 +1260,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - WIN + WIN WIN1252 @@ -1323,30 +1323,30 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ABC, TCVN, TCVN5712, VSCII + ABC, TCVN, TCVN5712, VSCII
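The encoding names and aliases in the table above are also what the encoding conversion functions accept; a small round-trip example (the sample string is arbitrary):

    SELECT convert_from(convert_to('Grüße', 'LATIN1'), 'LATIN1');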
- Not all client APIs support all the listed character sets. For example, the - PostgreSQL - JDBC driver does not support MULE_INTERNAL, LATIN6, - LATIN8, and LATIN10. + Not all client APIs support all the listed character sets. For example, the + PostgreSQL + JDBC driver does not support MULE_INTERNAL, LATIN6, + LATIN8, and LATIN10. - The SQL_ASCII setting behaves considerably differently + The SQL_ASCII setting behaves considerably differently from the other settings. When the server character set is - SQL_ASCII, the server interprets byte values 0-127 + SQL_ASCII, the server interprets byte values 0-127 according to the ASCII standard, while byte values 128-255 are taken as uninterpreted characters. No encoding conversion will be done when - the setting is SQL_ASCII. Thus, this setting is not so + the setting is SQL_ASCII. Thus, this setting is not so much a declaration that a specific encoding is in use, as a declaration of ignorance about the encoding. In most cases, if you are working with any non-ASCII data, it is unwise to use the - SQL_ASCII setting because + SQL_ASCII setting because PostgreSQL will be unable to help you by converting or validating non-ASCII characters. @@ -1356,7 +1356,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Setting the Character Set - initdb defines the default character set (encoding) + initdb defines the default character set (encoding) for a PostgreSQL cluster. For example, @@ -1367,8 +1367,8 @@ initdb -E EUC_JP EUC_JP (Extended Unix Code for Japanese). You can use instead of if you prefer longer option strings. - If no option is - given, initdb attempts to determine the appropriate + If no or option is + given, initdb attempts to determine the appropriate encoding to use based on the specified or default locale. @@ -1388,7 +1388,7 @@ createdb -E EUC_KR -T template0 --lc-collate=ko_KR.euckr --lc-ctype=ko_KR.euckr CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE='ko_KR.euckr' TEMPLATE=template0; - Notice that the above commands specify copying the template0 + Notice that the above commands specify copying the template0 database. When copying any other database, the encoding and locale settings cannot be changed from those of the source database, because that might result in corrupt data. For more information see @@ -1420,7 +1420,7 @@ $ psql -l On most modern operating systems, PostgreSQL - can determine which character set is implied by the LC_CTYPE + can determine which character set is implied by the LC_CTYPE setting, and it will enforce that only the matching database encoding is used. On older systems it is your responsibility to ensure that you use the encoding expected by the locale you have selected. A mistake in @@ -1430,9 +1430,9 @@ $ psql -l PostgreSQL will allow superusers to create - databases with SQL_ASCII encoding even when - LC_CTYPE is not C or POSIX. As noted - above, SQL_ASCII does not enforce that the data stored in + databases with SQL_ASCII encoding even when + LC_CTYPE is not C or POSIX. As noted + above, SQL_ASCII does not enforce that the data stored in the database has any particular encoding, and so this choice poses risks of locale-dependent misbehavior. Using this combination of settings is deprecated and may someday be forbidden altogether. @@ -1447,7 +1447,7 @@ $ psql -l PostgreSQL supports automatic character set conversion between server and client for certain character set combinations. The conversion information is stored in the - pg_conversion system catalog. 
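Following the createdb/CREATE DATABASE commands shown above, a similar hedged example for Japanese; the locale name is platform dependent and is only an assumption here:

    CREATE DATABASE japanese
        WITH ENCODING   'EUC_JP'
             LC_COLLATE 'ja_JP.eucJP'
             LC_CTYPE   'ja_JP.eucJP'
             TEMPLATE    template0;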
PostgreSQL + pg_conversion system catalog. PostgreSQL comes with some predefined conversions, as shown in . You can create a new conversion using the SQL command CREATE CONVERSION. @@ -1763,7 +1763,7 @@ $ psql -l - libpq () has functions to control the client encoding. + libpq () has functions to control the client encoding. @@ -1774,14 +1774,14 @@ $ psql -l Setting the client encoding can be done with this SQL command: -SET CLIENT_ENCODING TO 'value'; +SET CLIENT_ENCODING TO 'value'; Also you can use the standard SQL syntax SET NAMES for this purpose: -SET NAMES 'value'; +SET NAMES 'value'; To query the current client encoding: @@ -1813,7 +1813,7 @@ RESET client_encoding; Using the configuration variable . If the - client_encoding variable is set, that client + client_encoding variable is set, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.) @@ -1832,9 +1832,9 @@ RESET client_encoding; - If the client character set is defined as SQL_ASCII, + If the client character set is defined as SQL_ASCII, encoding conversion is disabled, regardless of the server's character - set. Just as for the server, use of SQL_ASCII is unwise + set. Just as for the server, use of SQL_ASCII is unwise unless you are working with all-ASCII data. diff --git a/doc/src/sgml/citext.sgml b/doc/src/sgml/citext.sgml index 9b4c68f7d4..82251de852 100644 --- a/doc/src/sgml/citext.sgml +++ b/doc/src/sgml/citext.sgml @@ -8,10 +8,10 @@ - The citext module provides a case-insensitive - character string type, citext. Essentially, it internally calls - lower when comparing values. Otherwise, it behaves almost - exactly like text. + The citext module provides a case-insensitive + character string type, citext. Essentially, it internally calls + lower when comparing values. Otherwise, it behaves almost + exactly like text. @@ -19,7 +19,7 @@ The standard approach to doing case-insensitive matches - in PostgreSQL has been to use the lower + in PostgreSQL has been to use the lower function when comparing values, for example @@ -35,19 +35,19 @@ SELECT * FROM tab WHERE lower(col) = LOWER(?); It makes your SQL statements verbose, and you always have to remember to - use lower on both the column and the query value. + use lower on both the column and the query value. It won't use an index, unless you create a functional index using - lower. + lower. - If you declare a column as UNIQUE or PRIMARY - KEY, the implicitly generated index is case-sensitive. So it's + If you declare a column as UNIQUE or PRIMARY + KEY, the implicitly generated index is case-sensitive. So it's useless for case-insensitive searches, and it won't enforce uniqueness case-insensitively. @@ -55,13 +55,13 @@ SELECT * FROM tab WHERE lower(col) = LOWER(?); - The citext data type allows you to eliminate calls - to lower in SQL queries, and allows a primary key to - be case-insensitive. citext is locale-aware, just - like text, which means that the matching of upper case and + The citext data type allows you to eliminate calls + to lower in SQL queries, and allows a primary key to + be case-insensitive. citext is locale-aware, just + like text, which means that the matching of upper case and lower case characters is dependent on the rules of - the database's LC_CTYPE setting. Again, this behavior is - identical to the use of lower in queries. But because it's + the database's LC_CTYPE setting. 
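A minimal demonstration of the case-insensitive comparison just described, using only literals so no table is needed:

    CREATE EXTENSION IF NOT EXISTS citext;
    SELECT 'Larry'::citext = 'larry';   -- true: citext folds case before comparing
    SELECT 'Larry'::text   = 'larry';   -- false: plain text compares case-sensitively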
Again, this behavior is + identical to the use of lower in queries. But because it's done transparently by the data type, you don't have to remember to do anything special in your queries. @@ -89,9 +89,9 @@ INSERT INTO users VALUES ( 'Bjørn', md5(random()::text) ); SELECT * FROM users WHERE nick = 'Larry'; - The SELECT statement will return one tuple, even though - the nick column was set to larry and the query - was for Larry. + The SELECT statement will return one tuple, even though + the nick column was set to larry and the query + was for Larry. @@ -99,82 +99,82 @@ SELECT * FROM users WHERE nick = 'Larry'; String Comparison Behavior - citext performs comparisons by converting each string to lower - case (as though lower were called) and then comparing the + citext performs comparisons by converting each string to lower + case (as though lower were called) and then comparing the results normally. Thus, for example, two strings are considered equal - if lower would produce identical results for them. + if lower would produce identical results for them. In order to emulate a case-insensitive collation as closely as possible, - there are citext-specific versions of a number of string-processing + there are citext-specific versions of a number of string-processing operators and functions. So, for example, the regular expression - operators ~ and ~* exhibit the same behavior when - applied to citext: they both match case-insensitively. + operators ~ and ~* exhibit the same behavior when + applied to citext: they both match case-insensitively. The same is true - for !~ and !~*, as well as for the - LIKE operators ~~ and ~~*, and - !~~ and !~~*. If you'd like to match - case-sensitively, you can cast the operator's arguments to text. + for !~ and !~*, as well as for the + LIKE operators ~~ and ~~*, and + !~~ and !~~*. If you'd like to match + case-sensitively, you can cast the operator's arguments to text. Similarly, all of the following functions perform matching - case-insensitively if their arguments are citext: + case-insensitively if their arguments are citext: - regexp_match() + regexp_match() - regexp_matches() + regexp_matches() - regexp_replace() + regexp_replace() - regexp_split_to_array() + regexp_split_to_array() - regexp_split_to_table() + regexp_split_to_table() - replace() + replace() - split_part() + split_part() - strpos() + strpos() - translate() + translate() For the regexp functions, if you want to match case-sensitively, you can - specify the c flag to force a case-sensitive match. Otherwise, - you must cast to text before using one of these functions if + specify the c flag to force a case-sensitive match. Otherwise, + you must cast to text before using one of these functions if you want case-sensitive behavior. @@ -186,13 +186,13 @@ SELECT * FROM users WHERE nick = 'Larry'; - citext's case-folding behavior depends on - the LC_CTYPE setting of your database. How it compares + citext's case-folding behavior depends on + the LC_CTYPE setting of your database. How it compares values is therefore determined when the database is created. It is not truly case-insensitive in the terms defined by the Unicode standard. Effectively, what this means is that, as long as you're happy with your - collation, you should be happy with citext's comparisons. But + collation, you should be happy with citext's comparisons. 
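To illustrate the operator and regexp behavior listed above, again with plain literals (illustrative only):

    SELECT 'FooBar'::citext ~ 'foobar';                      -- true: ~ matches case-insensitively on citext
    SELECT 'FooBar'::text   ~ 'foobar';                      -- false: cast to text for case-sensitive matching
    SELECT regexp_matches('FooBar'::citext, 'foobar', 'c');  -- no row: the 'c' flag forces a case-sensitive match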
But if you have data in different languages stored in your database, users of one language may find their query results are not as expected if the collation is for another language. @@ -201,38 +201,38 @@ SELECT * FROM users WHERE nick = 'Larry'; - As of PostgreSQL 9.1, you can attach a - COLLATE specification to citext columns or data - values. Currently, citext operators will honor a non-default - COLLATE specification while comparing case-folded strings, + As of PostgreSQL 9.1, you can attach a + COLLATE specification to citext columns or data + values. Currently, citext operators will honor a non-default + COLLATE specification while comparing case-folded strings, but the initial folding to lower case is always done according to the - database's LC_CTYPE setting (that is, as though - COLLATE "default" were given). This may be changed in a - future release so that both steps follow the input COLLATE + database's LC_CTYPE setting (that is, as though + COLLATE "default" were given). This may be changed in a + future release so that both steps follow the input COLLATE specification. - citext is not as efficient as text because the + citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. It is, - however, slightly more efficient than using lower to get + however, slightly more efficient than using lower to get case-insensitive matching. - citext doesn't help much if you need data to compare + citext doesn't help much if you need data to compare case-sensitively in some contexts and case-insensitively in other - contexts. The standard answer is to use the text type and - manually use the lower function when you need to compare + contexts. The standard answer is to use the text type and + manually use the lower function when you need to compare case-insensitively; this works all right if case-insensitive comparison is needed only infrequently. If you need case-insensitive behavior most of the time and case-sensitive infrequently, consider storing the data - as citext and explicitly casting the column to text + as citext and explicitly casting the column to text when you want case-sensitive comparison. In either situation, you will need two indexes if you want both types of searches to be fast. @@ -240,9 +240,9 @@ SELECT * FROM users WHERE nick = 'Larry'; - The schema containing the citext operators must be - in the current search_path (typically public); - if it is not, the normal case-sensitive text operators + The schema containing the citext operators must be + in the current search_path (typically public); + if it is not, the normal case-sensitive text operators will be invoked instead. @@ -257,7 +257,7 @@ SELECT * FROM users WHERE nick = 'Larry'; - Inspired by the original citext module by Donald Fraser. + Inspired by the original citext module by Donald Fraser. diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 78c594bbba..722f3da813 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -21,9 +21,9 @@ As explained in , PostgreSQL actually does privilege - management in terms of roles. In this chapter, we - consistently use database user to mean role with the - LOGIN privilege. + management in terms of roles. In this chapter, we + consistently use database user to mean role with the + LOGIN privilege. 
@@ -66,7 +66,7 @@ which traditionally is named pg_hba.conf and is stored in the database cluster's data directory. - (HBA stands for host-based authentication.) A default + (HBA stands for host-based authentication.) A default pg_hba.conf file is installed when the data directory is initialized by initdb. It is possible to place the authentication configuration file elsewhere, @@ -82,7 +82,7 @@ up of a number of fields which are separated by spaces and/or tabs. Fields can contain white space if the field value is double-quoted. Quoting one of the keywords in a database, user, or address field (e.g., - all or replication) makes the word lose its special + all or replication) makes the word lose its special meaning, and just match a database, user, or host with that name. @@ -92,8 +92,8 @@ and the authentication method to be used for connections matching these parameters. The first record with a matching connection type, client address, requested database, and user name is used to perform - authentication. There is no fall-through or - backup: if one record is chosen and the authentication + authentication. There is no fall-through or + backup: if one record is chosen and the authentication fails, subsequent records are not considered. If no record matches, access is denied. @@ -138,7 +138,7 @@ hostnossl database user the server is started with an appropriate value for the configuration parameter, since the default behavior is to listen for TCP/IP connections - only on the local loopback address localhost. + only on the local loopback address localhost. @@ -169,7 +169,7 @@ hostnossl database user hostnossl - This record type has the opposite behavior of hostssl; + This record type has the opposite behavior of hostssl; it only matches connection attempts made over TCP/IP that do not use SSL. @@ -182,24 +182,24 @@ hostnossl database user Specifies which database name(s) this record matches. The value all specifies that it matches all databases. - The value sameuser specifies that the record + The value sameuser specifies that the record matches if the requested database has the same name as the - requested user. The value samerole specifies that + requested user. The value samerole specifies that the requested user must be a member of the role with the same - name as the requested database. (samegroup is an - obsolete but still accepted spelling of samerole.) + name as the requested database. (samegroup is an + obsolete but still accepted spelling of samerole.) Superusers are not considered to be members of a role for the - purposes of samerole unless they are explicitly + purposes of samerole unless they are explicitly members of the role, directly or indirectly, and not just by virtue of being a superuser. - The value replication specifies that the record + The value replication specifies that the record matches if a physical replication connection is requested (note that replication connections do not specify any particular database). Otherwise, this is the name of a specific PostgreSQL database. Multiple database names can be supplied by separating them with commas. A separate file containing database names can be specified by - preceding the file name with @. + preceding the file name with @. @@ -211,18 +211,18 @@ hostnossl database user Specifies which database user name(s) this record matches. The value all specifies that it matches all users. Otherwise, this is either the name of a specific - database user, or a group name preceded by +. + database user, or a group name preceded by +. 
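On PostgreSQL 10 and later, the parsed form of these records, including the keyword values in the database and user fields, can be inspected from SQL; a sketch:

    SELECT line_number, type, database, user_name, address, auth_method
    FROM pg_hba_file_rules;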
(Recall that there is no real distinction between users and groups - in PostgreSQL; a + mark really means + in PostgreSQL; a + mark really means match any of the roles that are directly or indirectly members - of this role, while a name without a + mark matches + of this role, while a name without a + mark matches only that specific role.) For this purpose, a superuser is only considered to be a member of a role if they are explicitly a member of the role, directly or indirectly, and not just by virtue of being a superuser. Multiple user names can be supplied by separating them with commas. A separate file containing user names can be specified by preceding the - file name with @. + file name with @. @@ -239,7 +239,7 @@ hostnossl database user An IP address range is specified using standard numeric notation for the range's starting address, then a slash (/) - and a CIDR mask length. The mask + and a CIDR mask length. The mask length indicates the number of high-order bits of the client IP address that must match. Bits to the right of this should be zero in the given IP address. @@ -317,7 +317,7 @@ hostnossl database user This field only applies to host, - hostssl, and hostnossl records. + hostssl, and hostnossl records. @@ -360,17 +360,17 @@ hostnossl database user These two fields can be used as an alternative to the - IP-address/mask-length + IP-address/mask-length notation. Instead of specifying the mask length, the actual mask is specified in a - separate column. For example, 255.0.0.0 represents an IPv4 - CIDR mask length of 8, and 255.255.255.255 represents a + separate column. For example, 255.0.0.0 represents an IPv4 + CIDR mask length of 8, and 255.255.255.255 represents a CIDR mask length of 32. These fields only apply to host, - hostssl, and hostnossl records. + hostssl, and hostnossl records. @@ -385,7 +385,7 @@ hostnossl database user - trust + trust Allow the connection unconditionally. This method @@ -399,12 +399,12 @@ hostnossl database user - reject + reject Reject the connection unconditionally. This is useful for - filtering out certain hosts from a group, for example a - reject line could block a specific host from connecting, + filtering out certain hosts from a group, for example a + reject line could block a specific host from connecting, while a later line allows the remaining hosts in a specific network to connect. @@ -412,7 +412,7 @@ hostnossl database user - scram-sha-256 + scram-sha-256 Perform SCRAM-SHA-256 authentication to verify the user's @@ -422,7 +422,7 @@ hostnossl database user - md5 + md5 Perform SCRAM-SHA-256 or MD5 authentication to verify the @@ -433,7 +433,7 @@ hostnossl database user - password + password Require the client to supply an unencrypted password for @@ -446,7 +446,7 @@ hostnossl database user - gss + gss Use GSSAPI to authenticate the user. This is only @@ -457,7 +457,7 @@ hostnossl database user - sspi + sspi Use SSPI to authenticate the user. This is only @@ -468,7 +468,7 @@ hostnossl database user - ident + ident Obtain the operating system user name of the client @@ -483,7 +483,7 @@ hostnossl database user - peer + peer Obtain the client's operating system user name from the operating @@ -495,17 +495,17 @@ hostnossl database user - ldap + ldap - Authenticate using an LDAP server. See LDAP server. See for details. - radius + radius Authenticate using a RADIUS server. See database
user - cert + cert Authenticate using SSL client certificates. See @@ -525,7 +525,7 @@ hostnossl database user - pam + pam Authenticate using the Pluggable Authentication Modules @@ -536,7 +536,7 @@ hostnossl database user - bsd + bsd Authenticate using the BSD Authentication service provided by the @@ -554,17 +554,17 @@ hostnossl database user auth-options - After the auth-method field, there can be field(s) of - the form name=value that + After the auth-method field, there can be field(s) of + the form name=value that specify options for the authentication method. Details about which options are available for which authentication methods appear below. In addition to the method-specific options listed below, there is one - method-independent authentication option clientcert, which - can be specified in any hostssl record. When set - to 1, this option requires the client to present a valid + method-independent authentication option clientcert, which + can be specified in any hostssl record. When set + to 1, this option requires the client to present a valid (trusted) SSL certificate, in addition to the other requirements of the authentication method. @@ -574,11 +574,11 @@ hostnossl database user - Files included by @ constructs are read as lists of names, + Files included by @ constructs are read as lists of names, which can be separated by either whitespace or commas. Comments are introduced by #, just as in - pg_hba.conf, and nested @ constructs are - allowed. Unless the file name following @ is an absolute + pg_hba.conf, and nested @ constructs are + allowed. Unless the file name following @ is an absolute path, it is taken to be relative to the directory containing the referencing file. @@ -589,10 +589,10 @@ hostnossl database user significant. Typically, earlier records will have tight connection match parameters and weaker authentication methods, while later records will have looser match parameters and stronger authentication - methods. For example, one might wish to use trust + methods. For example, one might wish to use trust authentication for local TCP/IP connections but require a password for remote TCP/IP connections. In this case a record specifying - trust authentication for connections from 127.0.0.1 would + trust authentication for connections from 127.0.0.1 would appear before a record specifying password authentication for a wider range of allowed client IP addresses. @@ -603,7 +603,7 @@ hostnossl database user SIGHUPSIGHUP signal. If you edit the file on an active system, you will need to signal the postmaster - (using pg_ctl reload or kill -HUP) to make it + (using pg_ctl reload or kill -HUP) to make it re-read the file. @@ -618,7 +618,7 @@ hostnossl database user The system view pg_hba_file_rules - can be helpful for pre-testing changes to the pg_hba.conf + can be helpful for pre-testing changes to the pg_hba.conf file, or for diagnosing problems if loading of the file did not have the desired effects. Rows in the view with non-null error fields indicate problems in the @@ -629,9 +629,9 @@ hostnossl database user To connect to a particular database, a user must not only pass the pg_hba.conf checks, but must have the - CONNECT privilege for the database. If you wish to + CONNECT privilege for the database. If you wish to restrict which users can connect to which databases, it's usually - easier to control this by granting/revoking CONNECT privilege + easier to control this by granting/revoking CONNECT privilege than to put the rules in pg_hba.conf entries. 
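Tying the last two paragraphs together, a hedged sketch (db1 and app_user are only example names) of reloading the file from SQL and of controlling access with the CONNECT privilege rather than extra pg_hba.conf entries:

    SELECT pg_reload_conf();                      -- same effect as pg_ctl reload or kill -HUP
    REVOKE CONNECT ON DATABASE db1 FROM PUBLIC;
    GRANT  CONNECT ON DATABASE db1 TO app_user;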
@@ -760,21 +760,21 @@ local db1,db2,@demodbs all md5 User name maps are defined in the ident map file, which by default is named - pg_ident.confpg_ident.conf + pg_ident.confpg_ident.conf and is stored in the cluster's data directory. (It is possible to place the map file elsewhere, however; see the configuration parameter.) The ident map file contains lines of the general form: -map-name system-username database-username +map-name system-username database-username Comments and whitespace are handled in the same way as in - pg_hba.conf. The - map-name is an arbitrary name that will be used to + pg_hba.conf. The + map-name is an arbitrary name that will be used to refer to this mapping in pg_hba.conf. The other two fields specify an operating system user name and a matching - database user name. The same map-name can be + database user name. The same map-name can be used repeatedly to specify multiple user-mappings within a single map. @@ -788,13 +788,13 @@ local db1,db2,@demodbs all md5 user has requested to connect as. - If the system-username field starts with a slash (/), + If the system-username field starts with a slash (/), the remainder of the field is treated as a regular expression. (See for details of - PostgreSQL's regular expression syntax.) The regular + PostgreSQL's regular expression syntax.) The regular expression can include a single capture, or parenthesized subexpression, - which can then be referenced in the database-username - field as \1 (backslash-one). This allows the mapping of + which can then be referenced in the database-username + field as \1 (backslash-one). This allows the mapping of multiple user names in a single line, which is particularly useful for simple syntax substitutions. For example, these entries @@ -802,14 +802,14 @@ mymap /^(.*)@mydomain\.com$ \1 mymap /^(.*)@otherdomain\.com$ guest will remove the domain part for users with system user names that end with - @mydomain.com, and allow any user whose system name ends with - @otherdomain.com to log in as guest. + @mydomain.com, and allow any user whose system name ends with + @otherdomain.com to log in as guest. Keep in mind that by default, a regular expression can match just part of - a string. It's usually wise to use ^ and $, as + a string. It's usually wise to use ^ and $, as shown in the above example, to force the match to be to the entire system user name. @@ -821,28 +821,28 @@ mymap /^(.*)@otherdomain\.com$ guest SIGHUPSIGHUP signal. If you edit the file on an active system, you will need to signal the postmaster - (using pg_ctl reload or kill -HUP) to make it + (using pg_ctl reload or kill -HUP) to make it re-read the file. A pg_ident.conf file that could be used in - conjunction with the pg_hba.conf file in pg_hba.conf file in is shown in . In this example, anyone logged in to a machine on the 192.168 network that does not have the - operating system user name bryanh, ann, or - robert would not be granted access. Unix user - robert would only be allowed access when he tries to - connect as PostgreSQL user bob, not - as robert or anyone else. ann would - only be allowed to connect as ann. User - bryanh would be allowed to connect as either - bryanh or as guest1. + operating system user name bryanh, ann, or + robert would not be granted access. Unix user + robert would only be allowed access when he tries to + connect as PostgreSQL user bob, not + as robert or anyone else. ann would + only be allowed to connect as ann. 
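The capture-and-backreference behavior used in the mymap example above can be tried directly with an ordinary SQL regexp call (purely illustrative):

    SELECT regexp_replace('alice@mydomain.com', '^(.*)@mydomain\.com$', '\1');   -- returns alice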
User + bryanh would be allowed to connect as either + bryanh or as guest1. - An Example <filename>pg_ident.conf</> File + An Example <filename>pg_ident.conf</filename> File # MAPNAME SYSTEM-USERNAME PG-USERNAME @@ -866,21 +866,21 @@ omicron bryanh guest1 Trust Authentication - When trust authentication is specified, + When trust authentication is specified, PostgreSQL assumes that anyone who can connect to the server is authorized to access the database with whatever database user name they specify (even superuser names). - Of course, restrictions made in the database and - user columns still apply. + Of course, restrictions made in the database and + user columns still apply. This method should only be used when there is adequate operating-system-level protection on connections to the server. - trust authentication is appropriate and very + trust authentication is appropriate and very convenient for local connections on a single-user workstation. It - is usually not appropriate by itself on a multiuser - machine. However, you might be able to use trust even + is usually not appropriate by itself on a multiuser + machine. However, you might be able to use trust even on a multiuser machine, if you restrict access to the server's Unix-domain socket file using file-system permissions. To do this, set the unix_socket_permissions (and possibly @@ -895,17 +895,17 @@ omicron bryanh guest1 Setting file-system permissions only helps for Unix-socket connections. Local TCP/IP connections are not restricted by file-system permissions. Therefore, if you want to use file-system permissions for local security, - remove the host ... 127.0.0.1 ... line from - pg_hba.conf, or change it to a - non-trust authentication method. + remove the host ... 127.0.0.1 ... line from + pg_hba.conf, or change it to a + non-trust authentication method. - trust authentication is only suitable for TCP/IP connections + trust authentication is only suitable for TCP/IP connections if you trust every user on every machine that is allowed to connect - to the server by the pg_hba.conf lines that specify - trust. It is seldom reasonable to use trust - for any TCP/IP connections other than those from localhost (127.0.0.1). + to the server by the pg_hba.conf lines that specify + trust. It is seldom reasonable to use trust + for any TCP/IP connections other than those from localhost (127.0.0.1). @@ -914,10 +914,10 @@ omicron bryanh guest1 Password Authentication - MD5 + MD5 - SCRAM + SCRAM password @@ -936,7 +936,7 @@ omicron bryanh guest1 scram-sha-256 - The method scram-sha-256 performs SCRAM-SHA-256 + The method scram-sha-256 performs SCRAM-SHA-256 authentication, as described in RFC 7677. It is a challenge-response scheme that prevents password sniffing on @@ -955,7 +955,7 @@ omicron bryanh guest1 md5 - The method md5 uses a custom less secure challenge-response + The method md5 uses a custom less secure challenge-response mechanism. It prevents password sniffing and avoids storing passwords on the server in plain text but provides no protection if an attacker manages to steal the password hash from the server. Also, the MD5 hash @@ -982,10 +982,10 @@ omicron bryanh guest1 password - The method password sends the password in clear-text and is - therefore vulnerable to password sniffing attacks. It should + The method password sends the password in clear-text and is + therefore vulnerable to password sniffing attacks. It should always be avoided if possible. 
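A hedged sketch of creating a password-based login role and checking how its verifier is stored; app_user is an example name, and pg_authid is readable only by superusers:

    SET password_encryption = 'scram-sha-256';
    CREATE ROLE app_user LOGIN PASSWORD 'change-me';
    SELECT rolname, left(rolpassword, 16) AS stored_as
    FROM pg_authid
    WHERE rolname = 'app_user';   -- SCRAM verifiers start with SCRAM-SHA-256$, MD5 hashes with md5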
If the connection is protected by SSL - encryption then password can be used safely, though. + encryption then password can be used safely, though. (Though SSL certificate authentication might be a better choice if one is depending on using SSL). @@ -996,7 +996,7 @@ omicron bryanh guest1 PostgreSQL database passwords are separate from operating system user passwords. The password for - each database user is stored in the pg_authid system + each database user is stored in the pg_authid system catalog. Passwords can be managed with the SQL commands and , @@ -1060,7 +1060,7 @@ omicron bryanh guest1 - GSSAPI support has to be enabled when PostgreSQL is built; + GSSAPI support has to be enabled when PostgreSQL is built; see for more information. @@ -1068,13 +1068,13 @@ omicron bryanh guest1 When GSSAPI uses Kerberos, it uses a standard principal in the format - servicename/hostname@realm. + servicename/hostname@realm. The PostgreSQL server will accept any principal that is included in the keytab used by the server, but care needs to be taken to specify the correct principal details when - making the connection from the client using the krbsrvname connection parameter. (See + making the connection from the client using the krbsrvname connection parameter. (See also .) The installation default can be changed from the default postgres at build time using - ./configure --with-krb-srvnam=whatever. + ./configure --with-krb-srvnam=whatever. In most environments, this parameter never needs to be changed. Some Kerberos implementations might require a different service name, @@ -1082,31 +1082,31 @@ omicron bryanh guest1 to be in upper case (POSTGRES). - hostname is the fully qualified host name of the + hostname is the fully qualified host name of the server machine. The service principal's realm is the preferred realm of the server machine. - Client principals can be mapped to different PostgreSQL - database user names with pg_ident.conf. For example, - pgusername@realm could be mapped to just pgusername. - Alternatively, you can use the full username@realm principal as - the role name in PostgreSQL without any mapping. + Client principals can be mapped to different PostgreSQL + database user names with pg_ident.conf. For example, + pgusername@realm could be mapped to just pgusername. + Alternatively, you can use the full username@realm principal as + the role name in PostgreSQL without any mapping. - PostgreSQL also supports a parameter to strip the realm from + PostgreSQL also supports a parameter to strip the realm from the principal. This method is supported for backwards compatibility and is strongly discouraged as it is then impossible to distinguish different users with the same user name but coming from different realms. To enable this, - set include_realm to 0. For simple single-realm + set include_realm to 0. For simple single-realm installations, doing that combined with setting the - krb_realm parameter (which checks that the principal's realm + krb_realm parameter (which checks that the principal's realm matches exactly what is in the krb_realm parameter) is still secure; but this is a less capable approach compared to specifying an explicit mapping in - pg_ident.conf. + pg_ident.conf. @@ -1116,8 +1116,8 @@ omicron bryanh guest1 of the key file is specified by the configuration parameter. The default is - /usr/local/pgsql/etc/krb5.keytab (or whatever - directory was specified as sysconfdir at build time). 
+ /usr/local/pgsql/etc/krb5.keytab (or whatever + directory was specified as sysconfdir at build time). For security reasons, it is recommended to use a separate keytab just for the PostgreSQL server rather than opening up permissions on the system keytab file. @@ -1127,17 +1127,17 @@ omicron bryanh guest1 Kerberos documentation for details. The following example is for MIT-compatible Kerberos 5 implementations: -kadmin% ank -randkey postgres/server.my.domain.org -kadmin% ktadd -k krb5.keytab postgres/server.my.domain.org +kadmin% ank -randkey postgres/server.my.domain.org +kadmin% ktadd -k krb5.keytab postgres/server.my.domain.org When connecting to the database make sure you have a ticket for a principal matching the requested database user name. For example, for - database user name fred, principal - fred@EXAMPLE.COM would be able to connect. To also allow - principal fred/users.example.com@EXAMPLE.COM, use a user name + database user name fred, principal + fred@EXAMPLE.COM would be able to connect. To also allow + principal fred/users.example.com@EXAMPLE.COM, use a user name map, as described in . @@ -1155,8 +1155,8 @@ omicron bryanh guest1 in multi-realm environments unless krb_realm is also used. It is recommended to leave include_realm set to the default (1) and to - provide an explicit mapping in pg_ident.conf to convert - principal names to PostgreSQL user names. + provide an explicit mapping in pg_ident.conf to convert + principal names to PostgreSQL user names. @@ -1236,8 +1236,8 @@ omicron bryanh guest1 in multi-realm environments unless krb_realm is also used. It is recommended to leave include_realm set to the default (1) and to - provide an explicit mapping in pg_ident.conf to convert - principal names to PostgreSQL user names. + provide an explicit mapping in pg_ident.conf to convert + principal names to PostgreSQL user names. @@ -1270,9 +1270,9 @@ omicron bryanh guest1 By default, these two names are identical for new user accounts. - Note that libpq uses the SAM-compatible name if no + Note that libpq uses the SAM-compatible name if no explicit user name is specified. If you use - libpq or a driver based on it, you should + libpq or a driver based on it, you should leave this option disabled or explicitly specify user name in the connection string. @@ -1357,8 +1357,8 @@ omicron bryanh guest1 is to answer questions like What user initiated the connection that goes out of your port X and connects to my port Y?. - Since PostgreSQL knows both X and - Y when a physical connection is established, it + Since PostgreSQL knows both X and + Y when a physical connection is established, it can interrogate the ident server on the host of the connecting client and can theoretically determine the operating system user for any given connection. @@ -1386,9 +1386,9 @@ omicron bryanh guest1 Some ident servers have a nonstandard option that causes the returned user name to be encrypted, using a key that only the originating - machine's administrator knows. This option must not be - used when using the ident server with PostgreSQL, - since PostgreSQL does not have any way to decrypt the + machine's administrator knows. This option must not be + used when using the ident server with PostgreSQL, + since PostgreSQL does not have any way to decrypt the returned string to determine the actual user name. 
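After authenticating by any of the external methods above, it can be useful to confirm from inside the session which role the connection was mapped to and where it came from; a simple check:

    SELECT current_user, session_user, inet_client_addr(), inet_client_port();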
@@ -1424,11 +1424,11 @@ omicron bryanh guest1 Peer authentication is only available on operating systems providing - the getpeereid() function, the SO_PEERCRED + the getpeereid() function, the SO_PEERCRED socket parameter, or similar mechanisms. Currently that includes - Linux, - most flavors of BSD including - macOS, + Linux, + most flavors of BSD including + macOS, and Solaris. @@ -1454,23 +1454,23 @@ omicron bryanh guest1 LDAP authentication can operate in two modes. In the first mode, which we will call the simple bind mode, the server will bind to the distinguished name constructed as - prefix username suffix. - Typically, the prefix parameter is used to specify - cn=, or DOMAIN\ in an Active - Directory environment. suffix is used to specify the + prefix username suffix. + Typically, the prefix parameter is used to specify + cn=, or DOMAIN\ in an Active + Directory environment. suffix is used to specify the remaining part of the DN in a non-Active Directory environment. In the second mode, which we will call the search+bind mode, the server first binds to the LDAP directory with - a fixed user name and password, specified with ldapbinddn - and ldapbindpasswd, and performs a search for the user trying + a fixed user name and password, specified with ldapbinddn + and ldapbindpasswd, and performs a search for the user trying to log in to the database. If no user and password is configured, an anonymous bind will be attempted to the directory. The search will be - performed over the subtree at ldapbasedn, and will try to + performed over the subtree at ldapbasedn, and will try to do an exact match of the attribute specified in - ldapsearchattribute. + ldapsearchattribute. Once the user has been found in this search, the server disconnects and re-binds to the directory as this user, using the password specified by the client, to verify that the @@ -1572,7 +1572,7 @@ omicron bryanh guest1 Attribute to match against the user name in the search when doing search+bind authentication. If no attribute is specified, the - uid attribute will be used. + uid attribute will be used. @@ -1719,11 +1719,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse When using RADIUS authentication, an Access Request message will be sent to the configured RADIUS server. This request will be of type Authenticate Only, and include parameters for - user name, password (encrypted) and - NAS Identifier. The request will be encrypted using + user name, password (encrypted) and + NAS Identifier. The request will be encrypted using a secret shared with the server. The RADIUS server will respond to - this server with either Access Accept or - Access Reject. There is no support for RADIUS accounting. + this server with either Access Accept or + Access Reject. There is no support for RADIUS accounting. @@ -1762,8 +1762,8 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse The encryption vector used will only be cryptographically - strong if PostgreSQL is built with support for - OpenSSL. In other cases, the transmission to the + strong if PostgreSQL is built with support for + OpenSSL. In other cases, the transmission to the RADIUS server should only be considered obfuscated, not secured, and external security measures should be applied if necessary. @@ -1777,7 +1777,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse The port number on the RADIUS servers to connect to. 
If no port - is specified, the default port 1812 will be used. + is specified, the default port 1812 will be used. @@ -1786,12 +1786,12 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse radiusidentifiers - The string used as NAS Identifier in the RADIUS + The string used as NAS Identifier in the RADIUS requests. This parameter can be used as a second parameter identifying for example which database user the user is attempting to authenticate as, which can be used for policy matching on the RADIUS server. If no identifier is specified, the default - postgresql will be used. + postgresql will be used. @@ -1836,11 +1836,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse - In a pg_hba.conf record specifying certificate - authentication, the authentication option clientcert is - assumed to be 1, and it cannot be turned off since a client - certificate is necessary for this method. What the cert - method adds to the basic clientcert certificate validity test + In a pg_hba.conf record specifying certificate + authentication, the authentication option clientcert is + assumed to be 1, and it cannot be turned off since a client + certificate is necessary for this method. What the cert + method adds to the basic clientcert certificate validity test is a check that the cn attribute matches the database user name. @@ -1863,7 +1863,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse exist in the database before PAM can be used for authentication. For more information about PAM, please read the - Linux-PAM Page. + Linux-PAM Page. @@ -1896,7 +1896,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse - If PAM is set up to read /etc/shadow, authentication + If PAM is set up to read /etc/shadow, authentication will fail because the PostgreSQL server is started by a non-root user. However, this is not an issue when PAM is configured to use LDAP or other authentication methods. @@ -1922,11 +1922,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse - BSD Authentication in PostgreSQL uses + BSD Authentication in PostgreSQL uses the auth-postgresql login type and authenticates with the postgresql login class if that's defined in login.conf. By default that login class does not - exist, and PostgreSQL will use the default login class. + exist, and PostgreSQL will use the default login class. diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index b012a26991..aeda826d87 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -70,9 +70,9 @@ (typically eight kilobytes), milliseconds, seconds, or minutes. An unadorned numeric value for one of these settings will use the setting's default unit, which can be learned from - pg_settings.unit. + pg_settings.unit. For convenience, settings can be given with a unit specified explicitly, - for example '120 ms' for a time value, and they will be + for example '120 ms' for a time value, and they will be converted to whatever the parameter's actual unit is. Note that the value must be written as a string (with quotes) to use this feature. The unit name is case-sensitive, and there can be whitespace between @@ -105,7 +105,7 @@ Enumerated-type parameters are written in the same way as string parameters, but are restricted to have one of a limited set of values. The values allowable for such a parameter can be found from - pg_settings.enumvals. + pg_settings.enumvals. 
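The units and allowed values mentioned here are visible per parameter; for example:

    SELECT name, setting, unit, vartype, enumvals
    FROM pg_settings
    WHERE name IN ('statement_timeout', 'wal_level');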
Enum parameter values are case-insensitive. @@ -117,7 +117,7 @@ The most fundamental way to set these parameters is to edit the file - postgresql.confpostgresql.conf, + postgresql.confpostgresql.conf, which is normally kept in the data directory. A default copy is installed when the database cluster directory is initialized. An example of what this file might look like is: @@ -150,8 +150,8 @@ shared_buffers = 128MB SIGHUP The configuration file is reread whenever the main server process - receives a SIGHUP signal; this signal is most easily - sent by running pg_ctl reload from the command line or by + receives a SIGHUP signal; this signal is most easily + sent by running pg_ctl reload from the command line or by calling the SQL function pg_reload_conf(). The main server process also propagates this signal to all currently running server processes, so that existing sessions also adopt the new values @@ -161,26 +161,26 @@ shared_buffers = 128MB can only be set at server start; any changes to their entries in the configuration file will be ignored until the server is restarted. Invalid parameter settings in the configuration file are likewise - ignored (but logged) during SIGHUP processing. + ignored (but logged) during SIGHUP processing. - In addition to postgresql.conf, + In addition to postgresql.conf, a PostgreSQL data directory contains a file - postgresql.auto.confpostgresql.auto.conf, - which has the same format as postgresql.conf but should + postgresql.auto.confpostgresql.auto.conf, + which has the same format as postgresql.conf but should never be edited manually. This file holds settings provided through the command. This file is automatically - read whenever postgresql.conf is, and its settings take - effect in the same way. Settings in postgresql.auto.conf - override those in postgresql.conf. + read whenever postgresql.conf is, and its settings take + effect in the same way. Settings in postgresql.auto.conf + override those in postgresql.conf. The system view pg_file_settings can be helpful for pre-testing changes to the configuration file, or for - diagnosing problems if a SIGHUP signal did not have the + diagnosing problems if a SIGHUP signal did not have the desired effects. @@ -193,7 +193,7 @@ shared_buffers = 128MB commands to establish configuration defaults. The already-mentioned command provides a SQL-accessible means of changing global defaults; it is - functionally equivalent to editing postgresql.conf. + functionally equivalent to editing postgresql.conf. In addition, there are two commands that allow setting of defaults on a per-database or per-role basis: @@ -215,7 +215,7 @@ shared_buffers = 128MB - Values set with ALTER DATABASE and ALTER ROLE + Values set with ALTER DATABASE and ALTER ROLE are applied only when starting a fresh database session. They override values obtained from the configuration files or server command line, and constitute defaults for the rest of the session. @@ -224,7 +224,7 @@ shared_buffers = 128MB - Once a client is connected to the database, PostgreSQL + Once a client is connected to the database, PostgreSQL provides two additional SQL commands (and equivalent functions) to interact with session-local configuration settings: @@ -251,14 +251,14 @@ shared_buffers = 128MB In addition, the system view pg_settings can be + linkend="view-pg-settings">pg_settings can be used to view and change session-local values: - Querying this view is similar to using SHOW ALL but + Querying this view is similar to using SHOW ALL but provides more detail. 
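Putting these pieces together, a minimal illustrative sequence (the database name mydb and the chosen parameters are placeholders) might be:

    ALTER SYSTEM SET log_connections = on;        -- recorded in postgresql.auto.conf
    SELECT pg_reload_conf();                      -- same effect as sending SIGHUP
    ALTER DATABASE mydb SET work_mem = '32MB';    -- default for future sessions in mydb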
It is also more flexible, since it's possible to specify filter conditions or join against other relations. @@ -267,8 +267,8 @@ shared_buffers = 128MB Using on this view, specifically - updating the setting column, is the equivalent - of issuing SET commands. For example, the equivalent of + updating the setting column, is the equivalent + of issuing SET commands. For example, the equivalent of SET configuration_parameter TO DEFAULT; @@ -289,7 +289,7 @@ UPDATE pg_settings SET setting = reset_val WHERE name = 'configuration_parameter In addition to setting global defaults or attaching overrides at the database or role level, you can pass settings to PostgreSQL via shell facilities. - Both the server and libpq client library + Both the server and libpq client library accept parameter values via the shell. @@ -298,26 +298,26 @@ UPDATE pg_settings SET setting = reset_val WHERE name = 'configuration_parameter During server startup, parameter settings can be passed to the postgres command via the - command-line parameter. For example, postgres -c log_connections=yes -c log_destination='syslog' Settings provided in this way override those set via - postgresql.conf or ALTER SYSTEM, + postgresql.conf or ALTER SYSTEM, so they cannot be changed globally without restarting the server. - When starting a client session via libpq, + When starting a client session via libpq, parameter settings can be specified using the PGOPTIONS environment variable. Settings established in this way constitute defaults for the life of the session, but do not affect other sessions. For historical reasons, the format of PGOPTIONS is similar to that used when launching the postgres - command; specifically, the flag must be specified. For example, env PGOPTIONS="-c geqo=off -c statement_timeout=5min" psql @@ -338,20 +338,20 @@ env PGOPTIONS="-c geqo=off -c statement_timeout=5min" psql Managing Configuration File Contents - PostgreSQL provides several features for breaking - down complex postgresql.conf files into sub-files. + PostgreSQL provides several features for breaking + down complex postgresql.conf files into sub-files. These features are especially useful when managing multiple servers with related, but not identical, configurations. - include + include in configuration file In addition to individual parameter settings, - the postgresql.conf file can contain include - directives, which specify another file to read and process as if + the postgresql.conf file can contain include + directives, which specify another file to read and process as if it were inserted into the configuration file at this point. This feature allows a configuration file to be divided into physically separate parts. Include directives simply look like: @@ -365,23 +365,23 @@ include 'filename' - include_if_exists + include_if_exists in configuration file - There is also an include_if_exists directive, which acts - the same as the include directive, except + There is also an include_if_exists directive, which acts + the same as the include directive, except when the referenced file does not exist or cannot be read. A regular - include will consider this an error condition, but - include_if_exists merely logs a message and continues + include will consider this an error condition, but + include_if_exists merely logs a message and continues processing the referencing configuration file. 
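For example, a postgresql.conf fragment using both directives might look like this (the file names are placeholders):

    include 'shared.conf'             # a missing file is reported as an error
    include_if_exists 'local.conf'    # a missing file only logs a message and is skipped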
- include_dir + include_dir in configuration file - The postgresql.conf file can also contain + The postgresql.conf file can also contain include_dir directives, which specify an entire directory of configuration files to include. These look like @@ -401,36 +401,36 @@ include_dir 'directory' Include files or directories can be used to logically separate portions of the database configuration, rather than having a single large - postgresql.conf file. Consider a company that has two + postgresql.conf file. Consider a company that has two database servers, each with a different amount of memory. There are likely elements of the configuration both will share, for things such as logging. But memory-related parameters on the server will vary between the two. And there might be server specific customizations, too. One way to manage this situation is to break the custom configuration changes for your site into three files. You could add - this to the end of your postgresql.conf file to include + this to the end of your postgresql.conf file to include them: include 'shared.conf' include 'memory.conf' include 'server.conf' - All systems would have the same shared.conf. Each + All systems would have the same shared.conf. Each server with a particular amount of memory could share the - same memory.conf; you might have one for all servers + same memory.conf; you might have one for all servers with 8GB of RAM, another for those having 16GB. And - finally server.conf could have truly server-specific + finally server.conf could have truly server-specific configuration information in it. Another possibility is to create a configuration file directory and - put this information into files there. For example, a conf.d - directory could be referenced at the end of postgresql.conf: + put this information into files there. For example, a conf.d + directory could be referenced at the end of postgresql.conf: include_dir 'conf.d' - Then you could name the files in the conf.d directory + Then you could name the files in the conf.d directory like this: 00shared.conf @@ -441,8 +441,8 @@ include_dir 'conf.d' files will be loaded. This is important because only the last setting encountered for a particular parameter while the server is reading configuration files will be used. In this example, - something set in conf.d/02server.conf would override a - value set in conf.d/01memory.conf. + something set in conf.d/02server.conf would override a + value set in conf.d/01memory.conf. @@ -483,7 +483,7 @@ include_dir 'conf.d' data_directory (string) - data_directory configuration parameter + data_directory configuration parameter @@ -497,13 +497,13 @@ include_dir 'conf.d' config_file (string) - config_file configuration parameter + config_file configuration parameter Specifies the main server configuration file - (customarily called postgresql.conf). + (customarily called postgresql.conf). This parameter can only be set on the postgres command line. @@ -512,13 +512,13 @@ include_dir 'conf.d' hba_file (string) - hba_file configuration parameter + hba_file configuration parameter Specifies the configuration file for host-based authentication - (customarily called pg_hba.conf). + (customarily called pg_hba.conf). This parameter can only be set at server start. @@ -527,13 +527,13 @@ include_dir 'conf.d' ident_file (string) - ident_file configuration parameter + ident_file configuration parameter Specifies the configuration file for user name mapping - (customarily called pg_ident.conf). + (customarily called pg_ident.conf). 
This parameter can only be set at server start. See also . @@ -543,7 +543,7 @@ include_dir 'conf.d' external_pid_file (string) - external_pid_file configuration parameter + external_pid_file configuration parameter @@ -569,10 +569,10 @@ include_dir 'conf.d' data directory, the postgres command-line option or PGDATA environment variable must point to the directory containing the configuration files, - and the data_directory parameter must be set in + and the data_directory parameter must be set in postgresql.conf (or on the command line) to show where the data directory is actually located. Notice that - data_directory overrides and + data_directory overrides and PGDATA for the location of the data directory, but not for the location of the configuration files. @@ -580,12 +580,12 @@ include_dir 'conf.d' If you wish, you can specify the configuration file names and locations - individually using the parameters config_file, - hba_file and/or ident_file. - config_file can only be specified on the + individually using the parameters config_file, + hba_file and/or ident_file. + config_file can only be specified on the postgres command line, but the others can be set within the main configuration file. If all three parameters plus - data_directory are explicitly set, then it is not necessary + data_directory are explicitly set, then it is not necessary to specify or PGDATA. @@ -607,7 +607,7 @@ include_dir 'conf.d' listen_addresses (string) - listen_addresses configuration parameter + listen_addresses configuration parameter @@ -615,15 +615,15 @@ include_dir 'conf.d' Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications. The value takes the form of a comma-separated list of host names - and/or numeric IP addresses. The special entry * + and/or numeric IP addresses. The special entry * corresponds to all available IP interfaces. The entry - 0.0.0.0 allows listening for all IPv4 addresses and - :: allows listening for all IPv6 addresses. + 0.0.0.0 allows listening for all IPv4 addresses and + :: allows listening for all IPv6 addresses. If the list is empty, the server does not listen on any IP interface at all, in which case only Unix-domain sockets can be used to connect to it. - The default value is localhost, - which allows only local TCP/IP loopback connections to be + The default value is localhost, + which allows only local TCP/IP loopback connections to be made. While client authentication () allows fine-grained control over who can access the server, listen_addresses @@ -638,7 +638,7 @@ include_dir 'conf.d' port (integer) - port configuration parameter + port configuration parameter @@ -653,7 +653,7 @@ include_dir 'conf.d' max_connections (integer) - max_connections configuration parameter + max_connections configuration parameter @@ -661,7 +661,7 @@ include_dir 'conf.d' Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections, but might be less if your kernel settings will not support it (as - determined during initdb). This parameter can + determined during initdb). This parameter can only be set at server start. @@ -678,17 +678,17 @@ include_dir 'conf.d' superuser_reserved_connections (integer) - superuser_reserved_connections configuration parameter + superuser_reserved_connections configuration parameter Determines the number of connection slots that - are reserved for connections by PostgreSQL + are reserved for connections by PostgreSQL superusers. 
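As a sketch of the split layout described above (all paths and addresses are hypothetical), the server could be started with only the configuration file named on the command line, with the remaining locations given inside that file:

    # started as: postgres -c config_file=/etc/postgresql/postgresql.conf
    data_directory   = '/var/lib/postgresql/data'
    hba_file         = '/etc/postgresql/pg_hba.conf'
    ident_file       = '/etc/postgresql/pg_ident.conf'
    listen_addresses = 'localhost,192.168.1.10'    # or '*' for all interfaces
    port             = 5432                        # the default
    max_connections  = 100                         # the typical default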
At most connections can ever be active simultaneously. Whenever the number of active concurrent connections is at least - max_connections minus + max_connections minus superuser_reserved_connections, new connections will be accepted only for superusers, and no new replication connections will be accepted. @@ -705,7 +705,7 @@ include_dir 'conf.d' unix_socket_directories (string) - unix_socket_directories configuration parameter + unix_socket_directories configuration parameter @@ -726,10 +726,10 @@ include_dir 'conf.d' In addition to the socket file itself, which is named - .s.PGSQL.nnnn where - nnnn is the server's port number, an ordinary file - named .s.PGSQL.nnnn.lock will be - created in each of the unix_socket_directories directories. + .s.PGSQL.nnnn where + nnnn is the server's port number, an ordinary file + named .s.PGSQL.nnnn.lock will be + created in each of the unix_socket_directories directories. Neither file should ever be removed manually. @@ -743,7 +743,7 @@ include_dir 'conf.d' unix_socket_group (string) - unix_socket_group configuration parameter + unix_socket_group configuration parameter @@ -768,7 +768,7 @@ include_dir 'conf.d' unix_socket_permissions (integer) - unix_socket_permissions configuration parameter + unix_socket_permissions configuration parameter @@ -804,7 +804,7 @@ include_dir 'conf.d' This parameter is irrelevant on systems, notably Solaris as of Solaris 10, that ignore socket permissions entirely. There, one can achieve a - similar effect by pointing unix_socket_directories to a + similar effect by pointing unix_socket_directories to a directory having search permission limited to the desired audience. This parameter is also irrelevant on Windows, which does not have Unix-domain sockets. @@ -815,7 +815,7 @@ include_dir 'conf.d' bonjour (boolean) - bonjour configuration parameter + bonjour configuration parameter @@ -830,14 +830,14 @@ include_dir 'conf.d' bonjour_name (string) - bonjour_name configuration parameter + bonjour_name configuration parameter Specifies the Bonjour service name. The computer name is used if this parameter is set to the - empty string '' (which is the default). This parameter is + empty string '' (which is the default). This parameter is ignored if the server was not compiled with Bonjour support. This parameter can only be set at server start. @@ -848,7 +848,7 @@ include_dir 'conf.d' tcp_keepalives_idle (integer) - tcp_keepalives_idle configuration parameter + tcp_keepalives_idle configuration parameter @@ -857,7 +857,7 @@ include_dir 'conf.d' should send a keepalive message to the client. A value of 0 uses the system default. This parameter is supported only on systems that support - TCP_KEEPIDLE or an equivalent socket option, and on + TCP_KEEPIDLE or an equivalent socket option, and on Windows; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -874,7 +874,7 @@ include_dir 'conf.d' tcp_keepalives_interval (integer) - tcp_keepalives_interval configuration parameter + tcp_keepalives_interval configuration parameter @@ -883,7 +883,7 @@ include_dir 'conf.d' that is not acknowledged by the client should be retransmitted. A value of 0 uses the system default. This parameter is supported only on systems that support - TCP_KEEPINTVL or an equivalent socket option, and on + TCP_KEEPINTVL or an equivalent socket option, and on Windows; on other systems, it must be zero. 
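A hypothetical setup restricting Unix-domain connections to one group, combined with a modest keepalive, might look like this (the group name and the values are placeholders):

    unix_socket_directories = '/var/run/postgresql, /tmp'
    unix_socket_group       = 'pgusers'
    unix_socket_permissions = 0770   # only the owner and the named group may connect
    tcp_keepalives_idle     = 60     # seconds before the first keepalive probe; 0 = system default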
In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -900,7 +900,7 @@ include_dir 'conf.d' tcp_keepalives_count (integer) - tcp_keepalives_count configuration parameter + tcp_keepalives_count configuration parameter @@ -909,7 +909,7 @@ include_dir 'conf.d' the server's connection to the client is considered dead. A value of 0 uses the system default. This parameter is supported only on systems that support - TCP_KEEPCNT or an equivalent socket option; + TCP_KEEPCNT or an equivalent socket option; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -930,10 +930,10 @@ include_dir 'conf.d' authentication_timeout (integer) - timeoutclient authentication - client authenticationtimeout during + timeoutclient authentication + client authenticationtimeout during - authentication_timeout configuration parameter + authentication_timeout configuration parameter @@ -943,8 +943,8 @@ include_dir 'conf.d' would-be client has not completed the authentication protocol in this much time, the server closes the connection. This prevents hung clients from occupying a connection indefinitely. - The default is one minute (1m). - This parameter can only be set in the postgresql.conf + The default is one minute (1m). + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -953,16 +953,16 @@ include_dir 'conf.d' ssl (boolean) - ssl configuration parameter + ssl configuration parameter - Enables SSL connections. Please read + Enables SSL connections. Please read before using this. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default is off. + The default is off. @@ -970,7 +970,7 @@ include_dir 'conf.d' ssl_ca_file (string) - ssl_ca_file configuration parameter + ssl_ca_file configuration parameter @@ -978,7 +978,7 @@ include_dir 'conf.d' Specifies the name of the file containing the SSL server certificate authority (CA). Relative paths are relative to the data directory. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is empty, meaning no CA file is loaded, and client certificate verification is not performed. @@ -989,14 +989,14 @@ include_dir 'conf.d' ssl_cert_file (string) - ssl_cert_file configuration parameter + ssl_cert_file configuration parameter Specifies the name of the file containing the SSL server certificate. Relative paths are relative to the data directory. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is server.crt. @@ -1006,7 +1006,7 @@ include_dir 'conf.d' ssl_crl_file (string) - ssl_crl_file configuration parameter + ssl_crl_file configuration parameter @@ -1014,7 +1014,7 @@ include_dir 'conf.d' Specifies the name of the file containing the SSL server certificate revocation list (CRL). Relative paths are relative to the data directory. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is empty, meaning no CRL file is loaded. 
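A minimal server-side SSL sketch, using the default file names mentioned in this section and a placeholder CA file name:

    ssl           = on
    ssl_cert_file = 'server.crt'   # the default
    ssl_key_file  = 'server.key'   # the default
    ssl_ca_file   = 'root.crt'     # optional; needed if client certificates are to be verified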
@@ -1024,14 +1024,14 @@ include_dir 'conf.d' ssl_key_file (string) - ssl_key_file configuration parameter + ssl_key_file configuration parameter Specifies the name of the file containing the SSL server private key. Relative paths are relative to the data directory. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is server.key. @@ -1041,19 +1041,19 @@ include_dir 'conf.d' ssl_ciphers (string) - ssl_ciphers configuration parameter + ssl_ciphers configuration parameter - Specifies a list of SSL cipher suites that are allowed to be + Specifies a list of SSL cipher suites that are allowed to be used on secure connections. See - the ciphers manual page - in the OpenSSL package for the syntax of this setting + the ciphers manual page + in the OpenSSL package for the syntax of this setting and a list of supported values. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default value is HIGH:MEDIUM:+3DES:!aNULL. The + The default value is HIGH:MEDIUM:+3DES:!aNULL. The default is usually a reasonable choice unless you have specific security requirements. @@ -1065,7 +1065,7 @@ include_dir 'conf.d' HIGH - Cipher suites that use ciphers from HIGH group (e.g., + Cipher suites that use ciphers from HIGH group (e.g., AES, Camellia, 3DES) @@ -1075,7 +1075,7 @@ include_dir 'conf.d' MEDIUM - Cipher suites that use ciphers from MEDIUM group + Cipher suites that use ciphers from MEDIUM group (e.g., RC4, SEED) @@ -1085,11 +1085,11 @@ include_dir 'conf.d' +3DES - The OpenSSL default order for HIGH is problematic + The OpenSSL default order for HIGH is problematic because it orders 3DES higher than AES128. This is wrong because 3DES offers less security than AES128, and it is also much - slower. +3DES reorders it after all other - HIGH and MEDIUM ciphers. + slower. +3DES reorders it after all other + HIGH and MEDIUM ciphers. @@ -1111,7 +1111,7 @@ include_dir 'conf.d' Available cipher suite details will vary across OpenSSL versions. Use the command openssl ciphers -v 'HIGH:MEDIUM:+3DES:!aNULL' to - see actual details for the currently installed OpenSSL + see actual details for the currently installed OpenSSL version. Note that this list is filtered at run time based on the server key type. @@ -1121,16 +1121,16 @@ include_dir 'conf.d' ssl_prefer_server_ciphers (boolean) - ssl_prefer_server_ciphers configuration parameter + ssl_prefer_server_ciphers configuration parameter Specifies whether to use the server's SSL cipher preferences, rather than the client's. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default is true. + The default is true. @@ -1146,28 +1146,28 @@ include_dir 'conf.d' ssl_ecdh_curve (string) - ssl_ecdh_curve configuration parameter + ssl_ecdh_curve configuration parameter - Specifies the name of the curve to use in ECDH key + Specifies the name of the curve to use in ECDH key exchange. It needs to be supported by all clients that connect. It does not need to be the same curve used by the server's Elliptic Curve key. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default is prime256v1. + The default is prime256v1. 
OpenSSL names for the most common curves are: - prime256v1 (NIST P-256), - secp384r1 (NIST P-384), - secp521r1 (NIST P-521). + prime256v1 (NIST P-256), + secp384r1 (NIST P-384), + secp521r1 (NIST P-521). The full list of available curves can be shown with the command openssl ecparam -list_curves. Not all of them - are usable in TLS though. + are usable in TLS though. @@ -1175,17 +1175,17 @@ include_dir 'conf.d' password_encryption (enum) - password_encryption configuration parameter + password_encryption configuration parameter When a password is specified in or , this parameter determines the algorithm - to use to encrypt the password. The default value is md5, - which stores the password as an MD5 hash (on is also - accepted, as alias for md5). Setting this parameter to - scram-sha-256 will encrypt the password with SCRAM-SHA-256. + to use to encrypt the password. The default value is md5, + which stores the password as an MD5 hash (on is also + accepted, as alias for md5). Setting this parameter to + scram-sha-256 will encrypt the password with SCRAM-SHA-256. Note that older clients might lack support for the SCRAM authentication @@ -1198,7 +1198,7 @@ include_dir 'conf.d' ssl_dh_params_file (string) - ssl_dh_params_file configuration parameter + ssl_dh_params_file configuration parameter @@ -1213,7 +1213,7 @@ include_dir 'conf.d' - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1222,7 +1222,7 @@ include_dir 'conf.d' krb_server_keyfile (string) - krb_server_keyfile configuration parameter + krb_server_keyfile configuration parameter @@ -1230,7 +1230,7 @@ include_dir 'conf.d' Sets the location of the Kerberos server key file. See for details. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -1245,8 +1245,8 @@ include_dir 'conf.d' Sets whether GSSAPI user names should be treated case-insensitively. - The default is off (case sensitive). This parameter can only be - set in the postgresql.conf file or on the server command line. + The default is off (case sensitive). This parameter can only be + set in the postgresql.conf file or on the server command line. @@ -1254,43 +1254,43 @@ include_dir 'conf.d' db_user_namespace (boolean) - db_user_namespace configuration parameter + db_user_namespace configuration parameter This parameter enables per-database user names. It is off by default. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - If this is on, you should create users as username@dbname. - When username is passed by a connecting client, - @ and the database name are appended to the user + If this is on, you should create users as username@dbname. + When username is passed by a connecting client, + @ and the database name are appended to the user name and that database-specific user name is looked up by the server. Note that when you create users with names containing - @ within the SQL environment, you will need to + @ within the SQL environment, you will need to quote the user name. With this parameter enabled, you can still create ordinary global - users. Simply append @ when specifying the user - name in the client, e.g. joe@. The @ + users. Simply append @ when specifying the user + name in the client, e.g. joe@. The @ will be stripped off before the user name is looked up by the server. 
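For instance, to have newly set passwords stored with SCRAM rather than MD5 (the role name and password are placeholders):

    SET password_encryption = 'scram-sha-256';
    CREATE ROLE app_user LOGIN PASSWORD 'change-me';   -- stored as a SCRAM-SHA-256 verifier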
- db_user_namespace causes the client's and + db_user_namespace causes the client's and server's user name representation to differ. Authentication checks are always done with the server's user name so authentication methods must be configured for the server's user name, not the client's. Because - md5 uses the user name as salt on both the - client and server, md5 cannot be used with - db_user_namespace. + md5 uses the user name as salt on both the + client and server, md5 cannot be used with + db_user_namespace. @@ -1317,15 +1317,15 @@ include_dir 'conf.d' shared_buffers (integer) - shared_buffers configuration parameter + shared_buffers configuration parameter Sets the amount of memory the database server uses for shared memory buffers. The default is typically 128 megabytes - (128MB), but might be less if your kernel settings will - not support it (as determined during initdb). + (128MB), but might be less if your kernel settings will + not support it (as determined during initdb). This setting must be at least 128 kilobytes. (Non-default values of BLCKSZ change the minimum.) However, settings significantly higher than the minimum are usually needed @@ -1358,7 +1358,7 @@ include_dir 'conf.d' huge_pages (enum) - huge_pages configuration parameter + huge_pages configuration parameter @@ -1392,7 +1392,7 @@ include_dir 'conf.d' temp_buffers (integer) - temp_buffers configuration parameter + temp_buffers configuration parameter @@ -1400,7 +1400,7 @@ include_dir 'conf.d' Sets the maximum number of temporary buffers used by each database session. These are session-local buffers used only for access to temporary tables. The default is eight megabytes - (8MB). The setting can be changed within individual + (8MB). The setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session. @@ -1408,10 +1408,10 @@ include_dir 'conf.d' A session will allocate temporary buffers as needed up to the limit - given by temp_buffers. The cost of setting a large + given by temp_buffers. The cost of setting a large value in sessions that do not actually need many temporary buffers is only a buffer descriptor, or about 64 bytes, per - increment in temp_buffers. However if a buffer is + increment in temp_buffers. However if a buffer is actually used an additional 8192 bytes will be consumed for it (or in general, BLCKSZ bytes). @@ -1421,13 +1421,13 @@ include_dir 'conf.d' max_prepared_transactions (integer) - max_prepared_transactions configuration parameter + max_prepared_transactions configuration parameter Sets the maximum number of transactions that can be in the - prepared state simultaneously (see prepared state simultaneously (see ). Setting this parameter to zero (which is the default) disables the prepared-transaction feature. @@ -1454,14 +1454,14 @@ include_dir 'conf.d' work_mem (integer) - work_mem configuration parameter + work_mem configuration parameter Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The value - defaults to four megabytes (4MB). + defaults to four megabytes (4MB). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary @@ -1469,10 +1469,10 @@ include_dir 'conf.d' concurrently. 
Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are - used for ORDER BY, DISTINCT, and + used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and - hash-based processing of IN subqueries. + hash-based processing of IN subqueries. @@ -1480,15 +1480,15 @@ include_dir 'conf.d' maintenance_work_mem (integer) - maintenance_work_mem configuration parameter + maintenance_work_mem configuration parameter Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE - INDEX, and ALTER TABLE ADD FOREIGN KEY. It defaults - to 64 megabytes (64MB). Since only one of these + INDEX, and ALTER TABLE ADD FOREIGN KEY. It defaults + to 64 megabytes (64MB). Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger @@ -1508,7 +1508,7 @@ include_dir 'conf.d' autovacuum_work_mem (integer) - autovacuum_work_mem configuration parameter + autovacuum_work_mem configuration parameter @@ -1525,26 +1525,26 @@ include_dir 'conf.d' max_stack_depth (integer) - max_stack_depth configuration parameter + max_stack_depth configuration parameter Specifies the maximum safe depth of the server's execution stack. The ideal setting for this parameter is the actual stack size limit - enforced by the kernel (as set by ulimit -s or local + enforced by the kernel (as set by ulimit -s or local equivalent), less a safety margin of a megabyte or so. The safety margin is needed because the stack depth is not checked in every routine in the server, but only in key potentially-recursive routines such as expression evaluation. The default setting is two - megabytes (2MB), which is conservatively small and + megabytes (2MB), which is conservatively small and unlikely to risk crashes. However, it might be too small to allow execution of complex functions. Only superusers can change this setting. - Setting max_stack_depth higher than + Setting max_stack_depth higher than the actual kernel limit will mean that a runaway recursive function can crash an individual backend process. On platforms where PostgreSQL can determine the kernel limit, @@ -1558,25 +1558,25 @@ include_dir 'conf.d' dynamic_shared_memory_type (enum) - dynamic_shared_memory_type configuration parameter + dynamic_shared_memory_type configuration parameter Specifies the dynamic shared memory implementation that the server - should use. Possible values are posix (for POSIX shared - memory allocated using shm_open), sysv - (for System V shared memory allocated via shmget), - windows (for Windows shared memory), mmap + should use. Possible values are posix (for POSIX shared + memory allocated using shm_open), sysv + (for System V shared memory allocated via shmget), + windows (for Windows shared memory), mmap (to simulate shared memory using memory-mapped files stored in the - data directory), and none (to disable this feature). + data directory), and none (to disable this feature). Not all values are supported on all platforms; the first supported option is the default for that platform. 
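An illustrative grouping of the memory settings discussed here for a dedicated server (the values are placeholders, not recommendations; shared_buffers additionally requires a server restart):

    shared_buffers       = 2GB     # often started at roughly a quarter of RAM on a dedicated host
    work_mem             = 16MB    # per sort/hash operation, so total use can be far higher
    maintenance_work_mem = 256MB   # used by VACUUM, CREATE INDEX, and similar maintenance work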
The use of the - mmap option, which is not the default on any platform, + mmap option, which is not the default on any platform, is generally discouraged because the operating system may write modified pages back to disk repeatedly, increasing system I/O load; however, it may be useful for debugging, when the - pg_dynshmem directory is stored on a RAM disk, or when + pg_dynshmem directory is stored on a RAM disk, or when other shared memory facilities are not available. @@ -1592,7 +1592,7 @@ include_dir 'conf.d' temp_file_limit (integer) - temp_file_limit configuration parameter + temp_file_limit configuration parameter @@ -1601,13 +1601,13 @@ include_dir 'conf.d' for temporary files, such as sort and hash temporary files, or the storage file for a held cursor. A transaction attempting to exceed this limit will be canceled. - The value is specified in kilobytes, and -1 (the + The value is specified in kilobytes, and -1 (the default) means no limit. Only superusers can change this setting. This setting constrains the total space used at any instant by all - temporary files used by a given PostgreSQL process. + temporary files used by a given PostgreSQL process. It should be noted that disk space used for explicit temporary tables, as opposed to temporary files used behind-the-scenes in query execution, does not count against this limit. @@ -1625,7 +1625,7 @@ include_dir 'conf.d' max_files_per_process (integer) - max_files_per_process configuration parameter + max_files_per_process configuration parameter @@ -1637,7 +1637,7 @@ include_dir 'conf.d' allow individual processes to open many more files than the system can actually support if many processes all try to open that many files. If you find yourself seeing Too many open - files failures, try reducing this setting. + files failures, try reducing this setting. This parameter can only be set at server start. @@ -1684,7 +1684,7 @@ include_dir 'conf.d' vacuum_cost_delay (integer) - vacuum_cost_delay configuration parameter + vacuum_cost_delay configuration parameter @@ -1702,7 +1702,7 @@ include_dir 'conf.d' When using cost-based vacuuming, appropriate values for - vacuum_cost_delay are usually quite small, perhaps + vacuum_cost_delay are usually quite small, perhaps 10 or 20 milliseconds. Adjusting vacuum's resource consumption is best done by changing the other vacuum cost parameters. @@ -1712,7 +1712,7 @@ include_dir 'conf.d' vacuum_cost_page_hit (integer) - vacuum_cost_page_hit configuration parameter + vacuum_cost_page_hit configuration parameter @@ -1728,7 +1728,7 @@ include_dir 'conf.d' vacuum_cost_page_miss (integer) - vacuum_cost_page_miss configuration parameter + vacuum_cost_page_miss configuration parameter @@ -1744,7 +1744,7 @@ include_dir 'conf.d' vacuum_cost_page_dirty (integer) - vacuum_cost_page_dirty configuration parameter + vacuum_cost_page_dirty configuration parameter @@ -1760,7 +1760,7 @@ include_dir 'conf.d' vacuum_cost_limit (integer) - vacuum_cost_limit configuration parameter + vacuum_cost_limit configuration parameter @@ -1792,8 +1792,8 @@ include_dir 'conf.d' There is a separate server - process called the background writer, whose function - is to issue writes of dirty (new or modified) shared + process called the background writer, whose function + is to issue writes of dirty (new or modified) shared buffers. It writes shared buffers so server processes handling user queries seldom or never need to wait for a write to occur. 
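For example, to enable cost-based delays for manual VACUUM with mild settings (values illustrative; the defaults are noted in the comments):

    vacuum_cost_delay = 10    # milliseconds; the default of 0 disables cost-based vacuuming
    vacuum_cost_limit = 200   # the default; accumulated cost at which the process sleeps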
However, the background writer does cause a net overall @@ -1808,7 +1808,7 @@ include_dir 'conf.d' bgwriter_delay (integer) - bgwriter_delay configuration parameter + bgwriter_delay configuration parameter @@ -1816,16 +1816,16 @@ include_dir 'conf.d' Specifies the delay between activity rounds for the background writer. In each round the writer issues writes for some number of dirty buffers (controllable by the - following parameters). It then sleeps for bgwriter_delay + following parameters). It then sleeps for bgwriter_delay milliseconds, and repeats. When there are no dirty buffers in the buffer pool, though, it goes into a longer sleep regardless of - bgwriter_delay. The default value is 200 - milliseconds (200ms). Note that on many systems, the + bgwriter_delay. The default value is 200 + milliseconds (200ms). Note that on many systems, the effective resolution of sleep delays is 10 milliseconds; setting - bgwriter_delay to a value that is not a multiple of 10 + bgwriter_delay to a value that is not a multiple of 10 might have the same results as setting it to the next higher multiple of 10. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -1833,7 +1833,7 @@ include_dir 'conf.d' bgwriter_lru_maxpages (integer) - bgwriter_lru_maxpages configuration parameter + bgwriter_lru_maxpages configuration parameter @@ -1843,7 +1843,7 @@ include_dir 'conf.d' background writing. (Note that checkpoints, which are managed by a separate, dedicated auxiliary process, are unaffected.) The default value is 100 buffers. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1852,7 +1852,7 @@ include_dir 'conf.d' bgwriter_lru_multiplier (floating point) - bgwriter_lru_multiplier configuration parameter + bgwriter_lru_multiplier configuration parameter @@ -1860,18 +1860,18 @@ include_dir 'conf.d' The number of dirty buffers written in each round is based on the number of new buffers that have been needed by server processes during recent rounds. The average recent need is multiplied by - bgwriter_lru_multiplier to arrive at an estimate of the + bgwriter_lru_multiplier to arrive at an estimate of the number of buffers that will be needed during the next round. Dirty buffers are written until there are that many clean, reusable buffers - available. (However, no more than bgwriter_lru_maxpages + available. (However, no more than bgwriter_lru_maxpages buffers will be written per round.) - Thus, a setting of 1.0 represents a just in time policy + Thus, a setting of 1.0 represents a just in time policy of writing exactly the number of buffers predicted to be needed. Larger values provide some cushion against spikes in demand, while smaller values intentionally leave writes to be done by server processes. The default is 2.0. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1880,7 +1880,7 @@ include_dir 'conf.d' bgwriter_flush_after (integer) - bgwriter_flush_after configuration parameter + bgwriter_flush_after configuration parameter @@ -1897,10 +1897,10 @@ include_dir 'conf.d' cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, and - 2MB. The default is 512kB on Linux, - 0 elsewhere. (If BLCKSZ is not 8kB, + 2MB. 
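A background-writer sketch that simply restates the defaults described above (tuning away from them should be measured, not assumed):

    bgwriter_delay          = 200ms   # time between rounds
    bgwriter_lru_maxpages   = 100     # cap per round; 0 disables background writing
    bgwriter_lru_multiplier = 2.0     # 1.0 would be the "just in time" policy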
The default is 512kB on Linux, + 0 elsewhere. (If BLCKSZ is not 8kB, the default and maximum values scale proportionally to it.) - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1923,15 +1923,15 @@ include_dir 'conf.d' effective_io_concurrency (integer) - effective_io_concurrency configuration parameter + effective_io_concurrency configuration parameter Sets the number of concurrent disk I/O operations that - PostgreSQL expects can be executed + PostgreSQL expects can be executed simultaneously. Raising this value will increase the number of I/O - operations that any individual PostgreSQL session + operations that any individual PostgreSQL session attempts to initiate in parallel. The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans. @@ -1951,7 +1951,7 @@ include_dir 'conf.d' - Asynchronous I/O depends on an effective posix_fadvise + Asynchronous I/O depends on an effective posix_fadvise function, which some operating systems lack. If the function is not present then setting this parameter to anything but zero will result in an error. On some operating systems (e.g., Solaris), the function @@ -1970,7 +1970,7 @@ include_dir 'conf.d' max_worker_processes (integer) - max_worker_processes configuration parameter + max_worker_processes configuration parameter @@ -1997,7 +1997,7 @@ include_dir 'conf.d' max_parallel_workers_per_gather (integer) - max_parallel_workers_per_gather configuration parameter + max_parallel_workers_per_gather configuration parameter @@ -2021,7 +2021,7 @@ include_dir 'conf.d' account when choosing a value for this setting, as well as when configuring other settings that control resource utilization, such as . Resource limits such as - work_mem are applied individually to each worker, + work_mem are applied individually to each worker, which means the total utilization may be much higher across all processes than it would normally be for any single process. For example, a parallel query using 4 workers may use up to 5 times @@ -2039,7 +2039,7 @@ include_dir 'conf.d' max_parallel_workers (integer) - max_parallel_workers configuration parameter + max_parallel_workers configuration parameter @@ -2059,7 +2059,7 @@ include_dir 'conf.d' backend_flush_after (integer) - backend_flush_after configuration parameter + backend_flush_after configuration parameter @@ -2076,7 +2076,7 @@ include_dir 'conf.d' than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, - and 2MB. The default is 0, i.e., no + and 2MB. The default is 0, i.e., no forced writeback. (If BLCKSZ is not 8kB, the maximum value scales proportionally to it.) @@ -2086,13 +2086,13 @@ include_dir 'conf.d' old_snapshot_threshold (integer) - old_snapshot_threshold configuration parameter + old_snapshot_threshold configuration parameter Sets the minimum time that a snapshot can be used without risk of a - snapshot too old error occurring when using the snapshot. + snapshot too old error occurring when using the snapshot. This parameter can only be set at server start. @@ -2107,12 +2107,12 @@ include_dir 'conf.d' - A value of -1 disables this feature, and is the default. + A value of -1 disables this feature, and is the default. Useful values for production work probably range from a small number of hours to a few days. 
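The worker-related limits discussed here fit together roughly as follows (the numbers are the usual defaults, shown only for illustration):

    max_worker_processes            = 8   # cluster-wide pool, set at server start
    max_parallel_workers            = 8   # parallel-query share of that pool
    max_parallel_workers_per_gather = 2   # per Gather node; these workers count against both limits above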
The setting will be coerced to a granularity - of minutes, and small numbers (such as 0 or - 1min) are only allowed because they may sometimes be - useful for testing. While a setting as high as 60d is + of minutes, and small numbers (such as 0 or + 1min) are only allowed because they may sometimes be + useful for testing. While a setting as high as 60d is allowed, please note that in many workloads extreme bloat or transaction ID wraparound may occur in much shorter time frames. @@ -2120,10 +2120,10 @@ include_dir 'conf.d' When this feature is enabled, freed space at the end of a relation cannot be released to the operating system, since that could remove - information needed to detect the snapshot too old + information needed to detect the snapshot too old condition. All space allocated to a relation remains associated with that relation for reuse only within that relation unless explicitly - freed (for example, with VACUUM FULL). + freed (for example, with VACUUM FULL). @@ -2135,7 +2135,7 @@ include_dir 'conf.d' Some tables cannot safely be vacuumed early, and so will not be affected by this setting, such as system catalogs. For such tables this setting will neither reduce bloat nor create a possibility - of a snapshot too old error on scanning. + of a snapshot too old error on scanning. @@ -2158,45 +2158,45 @@ include_dir 'conf.d' wal_level (enum) - wal_level configuration parameter + wal_level configuration parameter - wal_level determines how much information is written to - the WAL. The default value is replica, which writes enough + wal_level determines how much information is written to + the WAL. The default value is replica, which writes enough data to support WAL archiving and replication, including running - read-only queries on a standby server. minimal removes all + read-only queries on a standby server. minimal removes all logging except the information required to recover from a crash or immediate shutdown. Finally, - logical adds information necessary to support logical + logical adds information necessary to support logical decoding. Each level includes the information logged at all lower levels. This parameter can only be set at server start. - In minimal level, WAL-logging of some bulk + In minimal level, WAL-logging of some bulk operations can be safely skipped, which can make those operations much faster (see ). Operations in which this optimization can be applied include: - CREATE TABLE AS - CREATE INDEX - CLUSTER - COPY into tables that were created or truncated in the same + CREATE TABLE AS + CREATE INDEX + CLUSTER + COPY into tables that were created or truncated in the same transaction But minimal WAL does not contain enough information to reconstruct the - data from a base backup and the WAL logs, so replica or + data from a base backup and the WAL logs, so replica or higher must be used to enable WAL archiving () and streaming replication. - In logical level, the same information is logged as - with replica, plus information needed to allow + In logical level, the same information is logged as + with replica, plus information needed to allow extracting logical change sets from the WAL. Using a level of - logical will increase the WAL volume, particularly if many + logical will increase the WAL volume, particularly if many tables are configured for REPLICA IDENTITY FULL and - many UPDATE and DELETE statements are + many UPDATE and DELETE statements are executed. 
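In postgresql.conf this is simply, for example:

    wal_level = replica    # the default; replica or higher is required for archiving and streaming replication
    #wal_level = logical   # additionally records what logical decoding needs, producing more WAL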
@@ -2210,14 +2210,14 @@ include_dir 'conf.d' fsync (boolean) - fsync configuration parameter + fsync configuration parameter - If this parameter is on, the PostgreSQL server + If this parameter is on, the PostgreSQL server will try to make sure that updates are physically written to - disk, by issuing fsync() system calls or various + disk, by issuing fsync() system calls or various equivalent methods (see ). This ensures that the database cluster can recover to a consistent state after an operating system or hardware crash. @@ -2249,7 +2249,7 @@ include_dir 'conf.d' off to on, it is necessary to force all modified buffers in the kernel to durable storage. This can be done while the cluster is shutdown or while fsync is on by running initdb - --sync-only, running sync, unmounting the + --sync-only, running sync, unmounting the file system, or rebooting the server. @@ -2261,7 +2261,7 @@ include_dir 'conf.d' - fsync can only be set in the postgresql.conf + fsync can only be set in the postgresql.conf file or on the server command line. If you turn this parameter off, also consider turning off . @@ -2272,26 +2272,26 @@ include_dir 'conf.d' synchronous_commit (enum) - synchronous_commit configuration parameter + synchronous_commit configuration parameter Specifies whether transaction commit will wait for WAL records - to be written to disk before the command returns a success - indication to the client. Valid values are on, - remote_apply, remote_write, local, - and off. The default, and safe, setting - is on. When off, there can be a delay between + to be written to disk before the command returns a success + indication to the client. Valid values are on, + remote_apply, remote_write, local, + and off. The default, and safe, setting + is on. When off, there can be a delay between when success is reported to the client and when the transaction is really guaranteed to be safe against a server crash. (The maximum delay is three times .) Unlike - , setting this parameter to off + , setting this parameter to off does not create any risk of database inconsistency: an operating system or database crash might result in some recent allegedly-committed transactions being lost, but the database state will be just the same as if those transactions had - been aborted cleanly. So, turning synchronous_commit off + been aborted cleanly. So, turning synchronous_commit off can be a useful alternative when performance is more important than exact certainty about the durability of a transaction. For more discussion see . @@ -2300,32 +2300,32 @@ include_dir 'conf.d' If is non-empty, this parameter also controls whether or not transaction commits will wait for their WAL records to be replicated to the standby server(s). - When set to on, commits will wait until replies + When set to on, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and flushed it to disk. This ensures the transaction will not be lost unless both the primary and all synchronous standbys suffer corruption of their database storage. - When set to remote_apply, commits will wait until replies + When set to remote_apply, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and applied it, so that it has become visible to queries on the standby(s). 
- When set to remote_write, commits will wait until replies + When set to remote_write, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and written it out to their operating system. This setting is sufficient to ensure data preservation even if a standby instance of - PostgreSQL were to crash, but not if the standby + PostgreSQL were to crash, but not if the standby suffers an operating-system-level crash, since the data has not necessarily reached stable storage on the standby. - Finally, the setting local causes commits to wait for + Finally, the setting local causes commits to wait for local flush to disk, but not for replication. This is not usually desirable when synchronous replication is in use, but is provided for completeness. - If synchronous_standby_names is empty, the settings - on, remote_apply, remote_write - and local all provide the same synchronization level: + If synchronous_standby_names is empty, the settings + on, remote_apply, remote_write + and local all provide the same synchronization level: transaction commits only wait for local flush to disk. @@ -2335,7 +2335,7 @@ include_dir 'conf.d' transactions commit synchronously and others asynchronously. For example, to make a single multistatement transaction commit asynchronously when the default is the opposite, issue SET - LOCAL synchronous_commit TO OFF within the transaction. + LOCAL synchronous_commit TO OFF within the transaction. @@ -2343,7 +2343,7 @@ include_dir 'conf.d' wal_sync_method (enum) - wal_sync_method configuration parameter + wal_sync_method configuration parameter @@ -2356,41 +2356,41 @@ include_dir 'conf.d' - open_datasync (write WAL files with open() option O_DSYNC) + open_datasync (write WAL files with open() option O_DSYNC) - fdatasync (call fdatasync() at each commit) + fdatasync (call fdatasync() at each commit) - fsync (call fsync() at each commit) + fsync (call fsync() at each commit) - fsync_writethrough (call fsync() at each commit, forcing write-through of any disk write cache) + fsync_writethrough (call fsync() at each commit, forcing write-through of any disk write cache) - open_sync (write WAL files with open() option O_SYNC) + open_sync (write WAL files with open() option O_SYNC) - The open_* options also use O_DIRECT if available. + The open_* options also use O_DIRECT if available. Not all of these choices are available on all platforms. The default is the first method in the above list that is supported - by the platform, except that fdatasync is the default on + by the platform, except that fdatasync is the default on Linux. The default is not necessarily ideal; it might be necessary to change this setting or other aspects of your system configuration in order to create a crash-safe configuration or achieve optimal performance. These aspects are discussed in . - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2399,12 +2399,12 @@ include_dir 'conf.d' full_page_writes (boolean) - full_page_writes configuration parameter + full_page_writes configuration parameter - When this parameter is on, the PostgreSQL server + When this parameter is on, the PostgreSQL server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint. 
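A sketch of the per-transaction override mentioned above (audit_log is a hypothetical table):

    BEGIN;
    SET LOCAL synchronous_commit TO OFF;             -- affects only this transaction
    INSERT INTO audit_log(message) VALUES ('noncritical row');
    COMMIT;                                          -- returns without waiting for the WAL flush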
This is needed because @@ -2436,9 +2436,9 @@ include_dir 'conf.d' - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default is on. + The default is on. @@ -2446,12 +2446,12 @@ include_dir 'conf.d' wal_log_hints (boolean) - wal_log_hints configuration parameter + wal_log_hints configuration parameter - When this parameter is on, the PostgreSQL + When this parameter is on, the PostgreSQL server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint, even for non-critical modifications of so-called hint bits. @@ -2465,7 +2465,7 @@ include_dir 'conf.d' - This parameter can only be set at server start. The default value is off. + This parameter can only be set at server start. The default value is off. @@ -2473,16 +2473,16 @@ include_dir 'conf.d' wal_compression (boolean) - wal_compression configuration parameter + wal_compression configuration parameter - When this parameter is on, the PostgreSQL + When this parameter is on, the PostgreSQL server compresses a full page image written to WAL when is on or during a base backup. A compressed page image will be decompressed during WAL replay. - The default value is off. + The default value is off. Only superusers can change this setting. @@ -2498,7 +2498,7 @@ include_dir 'conf.d' wal_buffers (integer) - wal_buffers configuration parameter + wal_buffers configuration parameter @@ -2530,24 +2530,24 @@ include_dir 'conf.d' wal_writer_delay (integer) - wal_writer_delay configuration parameter + wal_writer_delay configuration parameter Specifies how often the WAL writer flushes WAL. After flushing WAL it - sleeps for wal_writer_delay milliseconds, unless woken up + sleeps for wal_writer_delay milliseconds, unless woken up by an asynchronously committing transaction. If the last flush - happened less than wal_writer_delay milliseconds ago and - less than wal_writer_flush_after bytes of WAL have been + happened less than wal_writer_delay milliseconds ago and + less than wal_writer_flush_after bytes of WAL have been produced since, then WAL is only written to the operating system, not flushed to disk. - The default value is 200 milliseconds (200ms). Note that + The default value is 200 milliseconds (200ms). Note that on many systems, the effective resolution of sleep delays is 10 - milliseconds; setting wal_writer_delay to a value that is + milliseconds; setting wal_writer_delay to a value that is not a multiple of 10 might have the same results as setting it to the next higher multiple of 10. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -2555,19 +2555,19 @@ include_dir 'conf.d' wal_writer_flush_after (integer) - wal_writer_flush_after configuration parameter + wal_writer_flush_after configuration parameter Specifies how often the WAL writer flushes WAL. If the last flush - happened less than wal_writer_delay milliseconds ago and - less than wal_writer_flush_after bytes of WAL have been + happened less than wal_writer_delay milliseconds ago and + less than wal_writer_flush_after bytes of WAL have been produced since, then WAL is only written to the operating system, not - flushed to disk. If wal_writer_flush_after is set - to 0 then WAL data is flushed immediately. The default is + flushed to disk. If wal_writer_flush_after is set + to 0 then WAL data is flushed immediately. The default is 1MB. 
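An illustrative grouping of the WAL-volume parameters from this part of the section (values are placeholders; defaults are noted in the comments):

    full_page_writes = on       # the default; disable only if the storage rules out partial page writes
    wal_compression  = on       # default is off; trades CPU for smaller full-page images
    wal_writer_delay = 200ms    # the default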
This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -2575,7 +2575,7 @@ include_dir 'conf.d' commit_delay (integer) - commit_delay configuration parameter + commit_delay configuration parameter @@ -2592,15 +2592,15 @@ include_dir 'conf.d' commit_siblings other transactions are active when a flush is about to be initiated. Also, no delays are performed if fsync is disabled. - The default commit_delay is zero (no delay). + The default commit_delay is zero (no delay). Only superusers can change this setting. - In PostgreSQL releases prior to 9.3, + In PostgreSQL releases prior to 9.3, commit_delay behaved differently and was much less effective: it affected only commits, rather than all WAL flushes, and waited for the entire configured delay even if the WAL flush - was completed sooner. Beginning in PostgreSQL 9.3, + was completed sooner. Beginning in PostgreSQL 9.3, the first process that becomes ready to flush waits for the configured interval, while subsequent processes wait only until the leader completes the flush operation. @@ -2611,13 +2611,13 @@ include_dir 'conf.d' commit_siblings (integer) - commit_siblings configuration parameter + commit_siblings configuration parameter Minimum number of concurrent open transactions to require - before performing the commit_delay delay. A larger + before performing the commit_delay delay. A larger value makes it more probable that at least one other transaction will become ready to commit during the delay interval. The default is five transactions. @@ -2634,17 +2634,17 @@ include_dir 'conf.d' checkpoint_timeout (integer) - checkpoint_timeout configuration parameter + checkpoint_timeout configuration parameter Maximum time between automatic WAL checkpoints, in seconds. The valid range is between 30 seconds and one day. - The default is five minutes (5min). + The default is five minutes (5min). Increasing this parameter can increase the amount of time needed for crash recovery. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2653,14 +2653,14 @@ include_dir 'conf.d' checkpoint_completion_target (floating point) - checkpoint_completion_target configuration parameter + checkpoint_completion_target configuration parameter Specifies the target of checkpoint completion, as a fraction of total time between checkpoints. The default is 0.5. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2669,7 +2669,7 @@ include_dir 'conf.d' checkpoint_flush_after (integer) - checkpoint_flush_after configuration parameter + checkpoint_flush_after configuration parameter @@ -2686,10 +2686,10 @@ include_dir 'conf.d' than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, - and 2MB. The default is 256kB on - Linux, 0 elsewhere. (If BLCKSZ is not + and 2MB. The default is 256kB on + Linux, 0 elsewhere. (If BLCKSZ is not 8kB, the default and maximum values scale proportionally to it.) - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. 
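As an illustration, the checkpoint and group-commit parameters described above could appear in postgresql.conf as follows; the values are simply the defaults given in the text:

commit_delay = 0                      # default: no delay before a WAL flush
commit_siblings = 5                   # default: concurrent transactions required for the delay
checkpoint_timeout = 5min             # default; valid range is 30 seconds to one day
checkpoint_completion_target = 0.5    # default: spread writes over half the checkpoint interval
checkpoint_flush_after = 256kB        # default on Linux; 0 disables forced writeback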
@@ -2698,7 +2698,7 @@ include_dir 'conf.d' checkpoint_warning (integer) - checkpoint_warning configuration parameter + checkpoint_warning configuration parameter @@ -2706,11 +2706,11 @@ include_dir 'conf.d' Write a message to the server log if checkpoints caused by the filling of checkpoint segment files happen closer together than this many seconds (which suggests that - max_wal_size ought to be raised). The default is - 30 seconds (30s). Zero disables the warning. + max_wal_size ought to be raised). The default is + 30 seconds (30s). Zero disables the warning. No warnings will be generated if checkpoint_timeout is less than checkpoint_warning. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2719,19 +2719,19 @@ include_dir 'conf.d' max_wal_size (integer) - max_wal_size configuration parameter + max_wal_size configuration parameter Maximum size to let the WAL grow to between automatic WAL checkpoints. This is a soft limit; WAL size can exceed - max_wal_size under special circumstances, like - under heavy load, a failing archive_command, or a high - wal_keep_segments setting. The default is 1 GB. + max_wal_size under special circumstances, like + under heavy load, a failing archive_command, or a high + wal_keep_segments setting. The default is 1 GB. Increasing this parameter can increase the amount of time needed for crash recovery. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2740,7 +2740,7 @@ include_dir 'conf.d' min_wal_size (integer) - min_wal_size configuration parameter + min_wal_size configuration parameter @@ -2750,7 +2750,7 @@ include_dir 'conf.d' This can be used to ensure that enough WAL space is reserved to handle spikes in WAL usage, for example when running large batch jobs. The default is 80 MB. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2765,29 +2765,29 @@ include_dir 'conf.d' archive_mode (enum) - archive_mode configuration parameter + archive_mode configuration parameter - When archive_mode is enabled, completed WAL segments + When archive_mode is enabled, completed WAL segments are sent to archive storage by setting - . In addition to off, - to disable, there are two modes: on, and - always. During normal operation, there is no - difference between the two modes, but when set to always + . In addition to off, + to disable, there are two modes: on, and + always. During normal operation, there is no + difference between the two modes, but when set to always the WAL archiver is enabled also during archive recovery or standby - mode. In always mode, all files restored from the archive + mode. In always mode, all files restored from the archive or streamed with streaming replication will be archived (again). See for details. - archive_mode and archive_command are - separate variables so that archive_command can be + archive_mode and archive_command are + separate variables so that archive_command can be changed without leaving archiving mode. This parameter can only be set at server start. - archive_mode cannot be enabled when - wal_level is set to minimal. + archive_mode cannot be enabled when + wal_level is set to minimal. 
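A minimal postgresql.conf sketch of the WAL sizing and archiving switches just described (defaults except where noted; enabling archive_mode is shown only as an example and requires wal_level above minimal and a server restart):

checkpoint_warning = 30s    # default; warn if segment-driven checkpoints come closer together
max_wal_size = 1GB          # default; a soft limit that can be exceeded under heavy load
min_wal_size = 80MB         # default
archive_mode = on           # example; off is the default, always also archives during recovery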
@@ -2795,32 +2795,32 @@ include_dir 'conf.d' archive_command (string) - archive_command configuration parameter + archive_command configuration parameter The local shell command to execute to archive a completed WAL file - segment. Any %p in the string is + segment. Any %p in the string is replaced by the path name of the file to archive, and any - %f is replaced by only the file name. + %f is replaced by only the file name. (The path name is relative to the working directory of the server, i.e., the cluster's data directory.) - Use %% to embed an actual % character in the + Use %% to embed an actual % character in the command. It is important for the command to return a zero exit status only if it succeeds. For more information see . - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. It is ignored unless - archive_mode was enabled at server start. - If archive_command is an empty string (the default) while - archive_mode is enabled, WAL archiving is temporarily + archive_mode was enabled at server start. + If archive_command is an empty string (the default) while + archive_mode is enabled, WAL archiving is temporarily disabled, but the server continues to accumulate WAL segment files in the expectation that a command will soon be provided. Setting - archive_command to a command that does nothing but - return true, e.g. /bin/true (REM on + archive_command to a command that does nothing but + return true, e.g. /bin/true (REM on Windows), effectively disables archiving, but also breaks the chain of WAL files needed for archive recovery, so it should only be used in unusual circumstances. @@ -2831,7 +2831,7 @@ include_dir 'conf.d' archive_timeout (integer) - archive_timeout configuration parameter + archive_timeout configuration parameter @@ -2841,7 +2841,7 @@ include_dir 'conf.d' traffic (or has slack periods where it does so), there could be a long delay between the completion of a transaction and its safe recording in archive storage. To limit how old unarchived - data can be, you can set archive_timeout to force the + data can be, you can set archive_timeout to force the server to switch to a new WAL segment file periodically. When this parameter is greater than zero, the server will switch to a new segment file whenever this many seconds have elapsed since the last @@ -2850,13 +2850,13 @@ include_dir 'conf.d' no database activity). Note that archived files that are closed early due to a forced switch are still the same length as completely full files. Therefore, it is unwise to use a very short - archive_timeout — it will bloat your archive - storage. archive_timeout settings of a minute or so are + archive_timeout — it will bloat your archive + storage. archive_timeout settings of a minute or so are usually reasonable. You should consider using streaming replication, instead of archiving, if you want data to be copied off the master server more quickly than that. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -2871,7 +2871,7 @@ include_dir 'conf.d' These settings control the behavior of the built-in - streaming replication feature (see + streaming replication feature (see ). Servers will be either a Master or a Standby server. Masters can send data, while Standby(s) are always receivers of replicated data. 
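To make the %p/%f substitution concrete, here is a sketch of an archiving setup; the target directory /mnt/server/archivedir is a placeholder and the copy command is only a simple Unix example, not a requirement of this patch:

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
                            # %p = path of the finished segment, %f = its file name
archive_timeout = 60        # seconds; force a segment switch at least once a minute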
When cascading replication @@ -2898,7 +2898,7 @@ include_dir 'conf.d' max_wal_senders (integer) - max_wal_senders configuration parameter + max_wal_senders configuration parameter @@ -2914,8 +2914,8 @@ include_dir 'conf.d' a timeout is reached, so this parameter should be set slightly higher than the maximum number of expected clients so disconnected clients can immediately reconnect. This parameter can only - be set at server start. wal_level must be set to - replica or higher to allow connections from standby + be set at server start. wal_level must be set to + replica or higher to allow connections from standby servers. @@ -2924,7 +2924,7 @@ include_dir 'conf.d' max_replication_slots (integer) - max_replication_slots configuration parameter + max_replication_slots configuration parameter @@ -2944,17 +2944,17 @@ include_dir 'conf.d' wal_keep_segments (integer) - wal_keep_segments configuration parameter + wal_keep_segments configuration parameter Specifies the minimum number of past log file segments kept in the - pg_wal + pg_wal directory, in case a standby server needs to fetch them for streaming replication. Each segment is normally 16 megabytes. If a standby server connected to the sending server falls behind by more than - wal_keep_segments segments, the sending server might remove + wal_keep_segments segments, the sending server might remove a WAL segment still needed by the standby, in which case the replication connection will be terminated. Downstream connections will also eventually fail as a result. (However, the standby @@ -2964,15 +2964,15 @@ include_dir 'conf.d' This sets only the minimum number of segments retained in - pg_wal; the system might need to retain more segments + pg_wal; the system might need to retain more segments for WAL archival or to recover from a checkpoint. If - wal_keep_segments is zero (the default), the system + wal_keep_segments is zero (the default), the system doesn't keep any extra segments for standby purposes, so the number of old WAL segments available to standby servers is a function of the location of the previous checkpoint and status of WAL archiving. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -2980,7 +2980,7 @@ include_dir 'conf.d' wal_sender_timeout (integer) - wal_sender_timeout configuration parameter + wal_sender_timeout configuration parameter @@ -2990,7 +2990,7 @@ include_dir 'conf.d' the sending server to detect a standby crash or network outage. A value of zero disables the timeout mechanism. This parameter can only be set in - the postgresql.conf file or on the server command line. + the postgresql.conf file or on the server command line. The default value is 60 seconds. @@ -2999,13 +2999,13 @@ include_dir 'conf.d' track_commit_timestamp (boolean) - track_commit_timestamp configuration parameter + track_commit_timestamp configuration parameter Record commit time of transactions. This parameter - can only be set in postgresql.conf file or on the server + can only be set in postgresql.conf file or on the server command line. The default value is off. @@ -3034,13 +3034,13 @@ include_dir 'conf.d' synchronous_standby_names (string) - synchronous_standby_names configuration parameter + synchronous_standby_names configuration parameter Specifies a list of standby servers that can support - synchronous replication, as described in + synchronous replication, as described in . 
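As a hedged illustration of a sending server's replication settings (the counts are arbitrary examples, not recommendations from this patch):

wal_level = replica          # must be replica or higher for standbys to connect
max_wal_senders = 10         # example; set a bit above the expected number of standby connections
max_replication_slots = 10   # example
wal_keep_segments = 32       # roughly 32 x 16 MB = 512 MB of extra WAL kept; default 0 keeps none
wal_sender_timeout = 60s     # default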
There will be one or more active synchronous standbys; transactions waiting for commit will be allowed to proceed after @@ -3050,15 +3050,15 @@ include_dir 'conf.d' that are both currently connected and streaming data in real-time (as shown by a state of streaming in the - pg_stat_replication view). + pg_stat_replication view). Specifying more than one synchronous standby can allow for very high availability and protection against data loss. The name of a standby server for this purpose is the - application_name setting of the standby, as set in the + application_name setting of the standby, as set in the standby's connection information. In case of a physical replication - standby, this should be set in the primary_conninfo + standby, this should be set in the primary_conninfo setting in recovery.conf; the default is walreceiver. For logical replication, this can be set in the connection information of the subscription, and it @@ -3078,54 +3078,54 @@ ANY num_sync ( standby_name is the name of a standby server. - FIRST and ANY specify the method to choose + FIRST and ANY specify the method to choose synchronous standbys from the listed servers. - The keyword FIRST, coupled with + The keyword FIRST, coupled with num_sync, specifies a priority-based synchronous replication and makes transaction commits wait until their WAL records are replicated to num_sync synchronous standbys chosen based on their priorities. For example, a setting of - FIRST 3 (s1, s2, s3, s4) will cause each commit to wait for + FIRST 3 (s1, s2, s3, s4) will cause each commit to wait for replies from three higher-priority standbys chosen from standby servers - s1, s2, s3 and s4. + s1, s2, s3 and s4. The standbys whose names appear earlier in the list are given higher priority and will be considered as synchronous. Other standby servers appearing later in this list represent potential synchronous standbys. If any of the current synchronous standbys disconnects for whatever reason, it will be replaced immediately with the next-highest-priority - standby. The keyword FIRST is optional. + standby. The keyword FIRST is optional. - The keyword ANY, coupled with + The keyword ANY, coupled with num_sync, specifies a quorum-based synchronous replication and makes transaction commits - wait until their WAL records are replicated to at least + wait until their WAL records are replicated to at least num_sync listed standbys. - For example, a setting of ANY 3 (s1, s2, s3, s4) will cause + For example, a setting of ANY 3 (s1, s2, s3, s4) will cause each commit to proceed as soon as at least any three standbys of - s1, s2, s3 and s4 + s1, s2, s3 and s4 reply. - FIRST and ANY are case-insensitive. If these + FIRST and ANY are case-insensitive. If these keywords are used as the name of a standby server, its standby_name must be double-quoted. - The third syntax was used before PostgreSQL + The third syntax was used before PostgreSQL version 9.6 and is still supported. It's the same as the first syntax - with FIRST and + with FIRST and num_sync equal to 1. - For example, FIRST 1 (s1, s2) and s1, s2 have - the same meaning: either s1 or s2 is chosen + For example, FIRST 1 (s1, s2) and s1, s2 have + the same meaning: either s1 or s2 is chosen as a synchronous standby. - The special entry * matches any standby name. + The special entry * matches any standby name. There is no mechanism to enforce uniqueness of standby names. 
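The two syntaxes described above can be written in postgresql.conf as follows, reusing the s1 through s4 names from the text (each name is the standby's application_name; pick one form, the second line is shown commented out):

synchronous_standby_names = 'FIRST 3 (s1, s2, s3, s4)'   # priority-based: the three highest-priority standbys
#synchronous_standby_names = 'ANY 3 (s1, s2, s3, s4)'    # quorum-based: any three of the four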
In case @@ -3136,7 +3136,7 @@ ANY num_sync ( standby_name should have the form of a valid SQL identifier, unless it - is *. You can use double-quoting if necessary. But note + is *. You can use double-quoting if necessary. But note that standby_names are compared to standby application names case-insensitively, whether double-quoted or not. @@ -3149,10 +3149,10 @@ ANY num_sync ( parameter to - local or off. + local or off. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -3161,13 +3161,13 @@ ANY num_sync ( vacuum_defer_cleanup_age (integer) - vacuum_defer_cleanup_age configuration parameter + vacuum_defer_cleanup_age configuration parameter - Specifies the number of transactions by which VACUUM and - HOT updates will defer cleanup of dead row versions. The + Specifies the number of transactions by which VACUUM and + HOT updates will defer cleanup of dead row versions. The default is zero transactions, meaning that dead row versions can be removed as soon as possible, that is, as soon as they are no longer visible to any open transaction. You may wish to set this to a @@ -3178,16 +3178,16 @@ ANY num_sync ( num_sync ( hot_standby (boolean) - hot_standby configuration parameter + hot_standby configuration parameter @@ -3226,7 +3226,7 @@ ANY num_sync ( max_standby_archive_delay (integer) - max_standby_archive_delay configuration parameter + max_standby_archive_delay configuration parameter @@ -3235,16 +3235,16 @@ ANY num_sync ( . - max_standby_archive_delay applies when WAL data is + max_standby_archive_delay applies when WAL data is being read from WAL archive (and is therefore not current). The default is 30 seconds. Units are milliseconds if not specified. A value of -1 allows the standby to wait forever for conflicting queries to complete. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - Note that max_standby_archive_delay is not the same as the + Note that max_standby_archive_delay is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply any one WAL segment's data. Thus, if one query has resulted in significant delay earlier in the @@ -3257,7 +3257,7 @@ ANY num_sync ( max_standby_streaming_delay (integer) - max_standby_streaming_delay configuration parameter + max_standby_streaming_delay configuration parameter @@ -3266,16 +3266,16 @@ ANY num_sync ( . - max_standby_streaming_delay applies when WAL data is + max_standby_streaming_delay applies when WAL data is being received via streaming replication. The default is 30 seconds. Units are milliseconds if not specified. A value of -1 allows the standby to wait forever for conflicting queries to complete. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - Note that max_standby_streaming_delay is not the same as + Note that max_standby_streaming_delay is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply WAL data once it has been received from the primary server. 
Thus, if one query has @@ -3289,7 +3289,7 @@ ANY num_sync ( wal_receiver_status_interval (integer) - wal_receiver_status_interval configuration parameter + wal_receiver_status_interval configuration parameter @@ -3298,7 +3298,7 @@ ANY num_sync ( - pg_stat_replication view. The standby will report + pg_stat_replication view. The standby will report the last write-ahead log location it has written, the last position it has flushed to disk, and the last position it has applied. This parameter's @@ -3307,7 +3307,7 @@ ANY num_sync ( num_sync ( hot_standby_feedback (boolean) - hot_standby_feedback configuration parameter + hot_standby_feedback configuration parameter @@ -3327,9 +3327,9 @@ ANY num_sync ( ( num_sync ( wal_receiver_timeout (integer) - wal_receiver_timeout configuration parameter + wal_receiver_timeout configuration parameter @@ -3363,7 +3363,7 @@ ANY num_sync ( num_sync ( wal_retrieve_retry_interval (integer) - wal_retrieve_retry_interval configuration parameter + wal_retrieve_retry_interval configuration parameter Specify how long the standby server should wait when WAL data is not available from any sources (streaming replication, - local pg_wal or WAL archive) before retrying to + local pg_wal or WAL archive) before retrying to retrieve WAL data. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. The default value is 5 seconds. Units are milliseconds if not specified. @@ -3420,7 +3420,7 @@ ANY num_sync ( max_logical_replication_workers (int) - max_logical_replication_workers configuration parameter + max_logical_replication_workers configuration parameter @@ -3441,7 +3441,7 @@ ANY num_sync ( max_sync_workers_per_subscription (integer) - max_sync_workers_per_subscription configuration parameter + max_sync_workers_per_subscription configuration parameter @@ -3478,7 +3478,7 @@ ANY num_sync ( num_sync ( num_sync ( enable_gathermerge (boolean) - enable_gathermerge configuration parameter + enable_gathermerge configuration parameter Enables or disables the query planner's use of gather - merge plan types. The default is on. + merge plan types. The default is on. @@ -3527,13 +3527,13 @@ ANY num_sync ( enable_hashagg (boolean) - enable_hashagg configuration parameter + enable_hashagg configuration parameter Enables or disables the query planner's use of hashed - aggregation plan types. The default is on. + aggregation plan types. The default is on. @@ -3541,13 +3541,13 @@ ANY num_sync ( enable_hashjoin (boolean) - enable_hashjoin configuration parameter + enable_hashjoin configuration parameter Enables or disables the query planner's use of hash-join plan - types. The default is on. + types. The default is on. @@ -3558,13 +3558,13 @@ ANY num_sync ( num_sync ( enable_indexonlyscan (boolean) - enable_indexonlyscan configuration parameter + enable_indexonlyscan configuration parameter Enables or disables the query planner's use of index-only-scan plan types (see ). - The default is on. + The default is on. @@ -3587,7 +3587,7 @@ ANY num_sync ( enable_material (boolean) - enable_material configuration parameter + enable_material configuration parameter @@ -3596,7 +3596,7 @@ ANY num_sync ( num_sync ( enable_mergejoin (boolean) - enable_mergejoin configuration parameter + enable_mergejoin configuration parameter Enables or disables the query planner's use of merge-join plan - types. The default is on. + types. The default is on. 
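A minimal sketch of the plan-type switches discussed here; all of them default to on, and turning one off, as in the first line, is mainly useful for investigating why the planner chose a particular plan:

enable_mergejoin = off      # example: test how a query performs without merge joins
enable_hashjoin = on        # default
enable_indexonlyscan = on   # default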
@@ -3618,7 +3618,7 @@ ANY num_sync ( enable_nestloop (boolean) - enable_nestloop configuration parameter + enable_nestloop configuration parameter @@ -3627,7 +3627,7 @@ ANY num_sync ( num_sync ( enable_partition_wise_join (boolean) - enable_partition_wise_join configuration parameter + enable_partition_wise_join configuration parameter @@ -3647,7 +3647,7 @@ ANY num_sync ( num_sync ( num_sync ( num_sync ( enable_sort (boolean) - enable_sort configuration parameter + enable_sort configuration parameter @@ -3684,7 +3684,7 @@ ANY num_sync ( num_sync ( enable_tidscan (boolean) - enable_tidscan configuration parameter + enable_tidscan configuration parameter - Enables or disables the query planner's use of TID - scan plan types. The default is on. + Enables or disables the query planner's use of TID + scan plan types. The default is on. @@ -3709,12 +3709,12 @@ ANY num_sync ( num_sync ( seq_page_cost (floating point) - seq_page_cost configuration parameter + seq_page_cost configuration parameter @@ -3752,7 +3752,7 @@ ANY num_sync ( random_page_cost (floating point) - random_page_cost configuration parameter + random_page_cost configuration parameter @@ -3765,7 +3765,7 @@ ANY num_sync ( num_sync ( num_sync ( cpu_tuple_cost (floating point) - cpu_tuple_cost configuration parameter + cpu_tuple_cost configuration parameter @@ -3826,7 +3826,7 @@ ANY num_sync ( cpu_index_tuple_cost (floating point) - cpu_index_tuple_cost configuration parameter + cpu_index_tuple_cost configuration parameter @@ -3841,7 +3841,7 @@ ANY num_sync ( cpu_operator_cost (floating point) - cpu_operator_cost configuration parameter + cpu_operator_cost configuration parameter @@ -3856,7 +3856,7 @@ ANY num_sync ( parallel_setup_cost (floating point) - parallel_setup_cost configuration parameter + parallel_setup_cost configuration parameter @@ -3871,7 +3871,7 @@ ANY num_sync ( parallel_tuple_cost (floating point) - parallel_tuple_cost configuration parameter + parallel_tuple_cost configuration parameter @@ -3886,7 +3886,7 @@ ANY num_sync ( min_parallel_table_scan_size (integer) - min_parallel_table_scan_size configuration parameter + min_parallel_table_scan_size configuration parameter @@ -3896,7 +3896,7 @@ ANY num_sync ( num_sync ( min_parallel_index_scan_size (integer) - min_parallel_index_scan_size configuration parameter + min_parallel_index_scan_size configuration parameter @@ -3913,7 +3913,7 @@ ANY num_sync ( num_sync ( effective_cache_size (integer) - effective_cache_size configuration parameter + effective_cache_size configuration parameter @@ -3942,7 +3942,7 @@ ANY num_sync ( num_sync ( num_sync ( geqo_threshold (integer) - geqo_threshold configuration parameter + geqo_threshold configuration parameter Use genetic query optimization to plan queries with at least - this many FROM items involved. (Note that a - FULL OUTER JOIN construct counts as only one FROM + this many FROM items involved. (Note that a + FULL OUTER JOIN construct counts as only one FROM item.) The default is 12. 
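For illustration only, a postgresql.conf sketch of the planner cost constants; the specific numbers are assumptions for a machine with fast random-access storage and a large cache, not values taken from this patch:

seq_page_cost = 1.0           # usual baseline for a sequential page fetch
random_page_cost = 1.1        # illustrative: closer to seq_page_cost favors index scans
effective_cache_size = 4GB    # illustrative estimate of cache available to a single query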
For simpler queries it is usually best to use the regular, exhaustive-search planner, but for queries with many tables the exhaustive search takes too long, often @@ -4011,7 +4011,7 @@ ANY num_sync ( geqo_effort (integer) - geqo_effort configuration parameter + geqo_effort configuration parameter @@ -4037,7 +4037,7 @@ ANY num_sync ( geqo_pool_size (integer) - geqo_pool_size configuration parameter + geqo_pool_size configuration parameter @@ -4055,7 +4055,7 @@ ANY num_sync ( geqo_generations (integer) - geqo_generations configuration parameter + geqo_generations configuration parameter @@ -4073,7 +4073,7 @@ ANY num_sync ( geqo_selection_bias (floating point) - geqo_selection_bias configuration parameter + geqo_selection_bias configuration parameter @@ -4088,7 +4088,7 @@ ANY num_sync ( geqo_seed (floating point) - geqo_seed configuration parameter + geqo_seed configuration parameter @@ -4112,17 +4112,17 @@ ANY num_sync ( default_statistics_target (integer) - default_statistics_target configuration parameter + default_statistics_target configuration parameter Sets the default statistics target for table columns without a column-specific target set via ALTER TABLE - SET STATISTICS. Larger values increase the time needed to - do ANALYZE, but might improve the quality of the + SET STATISTICS. Larger values increase the time needed to + do ANALYZE, but might improve the quality of the planner's estimates. The default is 100. For more information - on the use of statistics by the PostgreSQL + on the use of statistics by the PostgreSQL query planner, refer to . @@ -4134,26 +4134,26 @@ ANY num_sync ( cursor_tuple_fraction (floating point) - cursor_tuple_fraction configuration parameter + cursor_tuple_fraction configuration parameter Sets the planner's estimate of the fraction of a cursor's rows that will be retrieved. The default is 0.1. Smaller values of this - setting bias the planner towards using fast start plans + setting bias the planner towards using fast start plans for cursors, which will retrieve the first few rows quickly while perhaps taking a long time to fetch all rows. Larger values put more emphasis on the total estimated time. At the maximum @@ -4209,7 +4209,7 @@ SELECT * FROM parent WHERE key = 2400; from_collapse_limit (integer) - from_collapse_limit configuration parameter + from_collapse_limit configuration parameter @@ -4232,14 +4232,14 @@ SELECT * FROM parent WHERE key = 2400; join_collapse_limit (integer) - join_collapse_limit configuration parameter + join_collapse_limit configuration parameter - The planner will rewrite explicit JOIN - constructs (except FULL JOINs) into lists of - FROM items whenever a list of no more than this many items + The planner will rewrite explicit JOIN + constructs (except FULL JOINs) into lists of + FROM items whenever a list of no more than this many items would result. Smaller values reduce planning time but might yield inferior query plans. @@ -4248,7 +4248,7 @@ SELECT * FROM parent WHERE key = 2400; By default, this variable is set the same as from_collapse_limit, which is appropriate for most uses. Setting it to 1 prevents any reordering of - explicit JOINs. Thus, the explicit join order + explicit JOINs. Thus, the explicit join order specified in the query will be the actual order in which the relations are joined. 
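The collapse limits and GEQO threshold described above might be combined like this; setting join_collapse_limit to 1 is the example case from the text (from_collapse_limit is left at its default here):

default_statistics_target = 100   # default; larger values slow ANALYZE but can improve estimates
geqo_threshold = 12               # default; GEQO takes over at 12 or more FROM items
join_collapse_limit = 1           # honor the explicit JOIN order written in the query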
Because the query planner does not always choose the optimal join order, advanced users can elect to @@ -4268,24 +4268,24 @@ SELECT * FROM parent WHERE key = 2400; force_parallel_mode (enum) - force_parallel_mode configuration parameter + force_parallel_mode configuration parameter Allows the use of parallel queries for testing purposes even in cases where no performance benefit is expected. - The allowed values of force_parallel_mode are - off (use parallel mode only when it is expected to improve - performance), on (force parallel query for all queries - for which it is thought to be safe), and regress (like - on, but with additional behavior changes as explained + The allowed values of force_parallel_mode are + off (use parallel mode only when it is expected to improve + performance), on (force parallel query for all queries + for which it is thought to be safe), and regress (like + on, but with additional behavior changes as explained below). - More specifically, setting this value to on will add - a Gather node to the top of any query plan for which this + More specifically, setting this value to on will add + a Gather node to the top of any query plan for which this appears to be safe, so that the query runs inside of a parallel worker. Even when a parallel worker is not available or cannot be used, operations such as starting a subtransaction that would be prohibited @@ -4297,15 +4297,15 @@ SELECT * FROM parent WHERE key = 2400; - Setting this value to regress has all of the same effects - as setting it to on plus some additional effects that are + Setting this value to regress has all of the same effects + as setting it to on plus some additional effects that are intended to facilitate automated regression testing. Normally, messages from a parallel worker include a context line indicating that, - but a setting of regress suppresses this line so that the + but a setting of regress suppresses this line so that the output is the same as in non-parallel execution. Also, - the Gather nodes added to plans by this setting are hidden - in EXPLAIN output so that the output matches what - would be obtained if this setting were turned off. + the Gather nodes added to plans by this setting are hidden + in EXPLAIN output so that the output matches what + would be obtained if this setting were turned off. @@ -4338,7 +4338,7 @@ SELECT * FROM parent WHERE key = 2400; log_destination (string) - log_destination configuration parameter + log_destination configuration parameter @@ -4351,13 +4351,13 @@ SELECT * FROM parent WHERE key = 2400; parameter to a list of desired log destinations separated by commas. The default is to log to stderr only. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - If csvlog is included in log_destination, + If csvlog is included in log_destination, log entries are output in comma separated - value (CSV) format, which is convenient for + value (CSV) format, which is convenient for loading logs into programs. See for details. must be enabled to generate @@ -4366,7 +4366,7 @@ SELECT * FROM parent WHERE key = 2400; When either stderr or csvlog are included, the file - current_logfiles is created to record the location + current_logfiles is created to record the location of the log file(s) currently in use by the logging collector and the associated logging destination. This provides a convenient way to find the logs currently in use by the instance. 
Here is an example of @@ -4378,10 +4378,10 @@ csvlog log/postgresql.csv current_logfiles is recreated when a new log file is created as an effect of rotation, and - when log_destination is reloaded. It is removed when + when log_destination is reloaded. It is removed when neither stderr nor csvlog are included - in log_destination, and when the logging collector is + in log_destination, and when the logging collector is disabled. @@ -4390,9 +4390,9 @@ csvlog log/postgresql.csv On most Unix systems, you will need to alter the configuration of your system's syslog daemon in order to make use of the syslog option for - log_destination. PostgreSQL + log_destination. PostgreSQL can log to syslog facilities - LOCAL0 through LOCAL7 (see LOCAL0 through LOCAL7 (see ), but the default syslog configuration on most platforms will discard all such messages. You will need to add something like: @@ -4404,7 +4404,7 @@ local0.* /var/log/postgresql On Windows, when you use the eventlog - option for log_destination, you should + option for log_destination, you should register an event source and its library with the operating system so that the Windows Event Viewer can display event log messages cleanly. @@ -4417,27 +4417,27 @@ local0.* /var/log/postgresql logging_collector (boolean) - logging_collector configuration parameter + logging_collector configuration parameter - This parameter enables the logging collector, which + This parameter enables the logging collector, which is a background process that captures log messages - sent to stderr and redirects them into log files. + sent to stderr and redirects them into log files. This approach is often more useful than - logging to syslog, since some types of messages - might not appear in syslog output. (One common + logging to syslog, since some types of messages + might not appear in syslog output. (One common example is dynamic-linker failure messages; another is error messages - produced by scripts such as archive_command.) + produced by scripts such as archive_command.) This parameter can only be set at server start. - It is possible to log to stderr without using the + It is possible to log to stderr without using the logging collector; the log messages will just go to wherever the - server's stderr is directed. However, that method is + server's stderr is directed. However, that method is only suitable for low log volumes, since it provides no convenient way to rotate log files. Also, on some platforms not using the logging collector can result in lost or garbled log output, because @@ -4451,7 +4451,7 @@ local0.* /var/log/postgresql The logging collector is designed to never lose messages. This means that in case of extremely high load, server processes could be blocked while trying to send additional log messages when the - collector has fallen behind. In contrast, syslog + collector has fallen behind. In contrast, syslog prefers to drop messages if it cannot write them, which means it may fail to log some messages in such cases but it will not block the rest of the system. @@ -4464,16 +4464,16 @@ local0.* /var/log/postgresql log_directory (string) - log_directory configuration parameter + log_directory configuration parameter - When logging_collector is enabled, + When logging_collector is enabled, this parameter determines the directory in which log files will be created. It can be specified as an absolute path, or relative to the cluster data directory. 
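A small sketch of the basic logging-collector setup described above, using the stated defaults:

log_destination = 'stderr'   # default; 'csvlog' and 'syslog' can be added, comma-separated
logging_collector = on       # capture stderr output into files under log_directory
log_directory = 'log'        # default; relative to the cluster data directory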
- This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is log. @@ -4483,7 +4483,7 @@ local0.* /var/log/postgresql log_filename (string) - log_filename configuration parameter + log_filename configuration parameter @@ -4514,14 +4514,14 @@ local0.* /var/log/postgresql longer the case. - If CSV-format output is enabled in log_destination, - .csv will be appended to the timestamped + If CSV-format output is enabled in log_destination, + .csv will be appended to the timestamped log file name to create the file name for CSV-format output. - (If log_filename ends in .log, the suffix is + (If log_filename ends in .log, the suffix is replaced instead.) - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4530,7 +4530,7 @@ local0.* /var/log/postgresql log_file_mode (integer) - log_file_mode configuration parameter + log_file_mode configuration parameter @@ -4545,9 +4545,9 @@ local0.* /var/log/postgresql must start with a 0 (zero).) - The default permissions are 0600, meaning only the + The default permissions are 0600, meaning only the server owner can read or write the log files. The other commonly - useful setting is 0640, allowing members of the owner's + useful setting is 0640, allowing members of the owner's group to read the files. Note however that to make use of such a setting, you'll need to alter to store the files somewhere outside the cluster data directory. In @@ -4555,7 +4555,7 @@ local0.* /var/log/postgresql they might contain sensitive data. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4564,7 +4564,7 @@ local0.* /var/log/postgresql log_rotation_age (integer) - log_rotation_age configuration parameter + log_rotation_age configuration parameter @@ -4574,7 +4574,7 @@ local0.* /var/log/postgresql After this many minutes have elapsed, a new log file will be created. Set to zero to disable time-based creation of new log files. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4583,7 +4583,7 @@ local0.* /var/log/postgresql log_rotation_size (integer) - log_rotation_size configuration parameter + log_rotation_size configuration parameter @@ -4593,7 +4593,7 @@ local0.* /var/log/postgresql After this many kilobytes have been emitted into a log file, a new log file will be created. Set to zero to disable size-based creation of new log files. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4602,7 +4602,7 @@ local0.* /var/log/postgresql log_truncate_on_rotation (boolean) - log_truncate_on_rotation configuration parameter + log_truncate_on_rotation configuration parameter @@ -4617,7 +4617,7 @@ local0.* /var/log/postgresql a log_filename like postgresql-%H.log would result in generating twenty-four hourly log files and then cyclically overwriting them. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4635,7 +4635,7 @@ local0.* /var/log/postgresql log_truncate_on_rotation to on, log_rotation_age to 60, and log_rotation_size to 1000000. 
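Written out as postgresql.conf settings, the hourly-rotation example just described looks like this:

log_filename = 'postgresql-%H.log'   # one file per hour, reused cyclically over 24 hours
log_truncate_on_rotation = on
log_rotation_age = 60                # minutes
log_rotation_size = 1000000          # kilobytes, roughly 1 GB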
- Including %M in log_filename allows + Including %M in log_filename allows any size-driven rotations that might occur to select a file name different from the hour's initial file name. @@ -4645,21 +4645,21 @@ local0.* /var/log/postgresql syslog_facility (enum) - syslog_facility configuration parameter + syslog_facility configuration parameter - When logging to syslog is enabled, this parameter + When logging to syslog is enabled, this parameter determines the syslog facility to be used. You can choose - from LOCAL0, LOCAL1, - LOCAL2, LOCAL3, LOCAL4, - LOCAL5, LOCAL6, LOCAL7; - the default is LOCAL0. See also the + from LOCAL0, LOCAL1, + LOCAL2, LOCAL3, LOCAL4, + LOCAL5, LOCAL6, LOCAL7; + the default is LOCAL0. See also the documentation of your system's syslog daemon. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4668,17 +4668,17 @@ local0.* /var/log/postgresql syslog_ident (string) - syslog_ident configuration parameter + syslog_ident configuration parameter - When logging to syslog is enabled, this parameter + When logging to syslog is enabled, this parameter determines the program name used to identify PostgreSQL messages in syslog logs. The default is postgres. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4687,7 +4687,7 @@ local0.* /var/log/postgresql syslog_sequence_numbers (boolean) - syslog_sequence_numbers configuration parameter + syslog_sequence_numbers configuration parameter @@ -4706,7 +4706,7 @@ local0.* /var/log/postgresql - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4715,12 +4715,12 @@ local0.* /var/log/postgresql syslog_split_messages (boolean) - syslog_split_messages configuration parameter + syslog_split_messages configuration parameter - When logging to syslog is enabled, this parameter + When logging to syslog is enabled, this parameter determines how messages are delivered to syslog. When on (the default), messages are split by lines, and long lines are split so that they will fit into 1024 bytes, which is a typical size limit for @@ -4739,7 +4739,7 @@ local0.* /var/log/postgresql - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4748,16 +4748,16 @@ local0.* /var/log/postgresql event_source (string) - event_source configuration parameter + event_source configuration parameter - When logging to event log is enabled, this parameter + When logging to event log is enabled, this parameter determines the program name used to identify PostgreSQL messages in the log. The default is PostgreSQL. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4773,21 +4773,21 @@ local0.* /var/log/postgresql client_min_messages (enum) - client_min_messages configuration parameter + client_min_messages configuration parameter Controls which message levels are sent to the client. - Valid values are DEBUG5, - DEBUG4, DEBUG3, DEBUG2, - DEBUG1, LOG, NOTICE, - WARNING, ERROR, FATAL, - and PANIC. Each level + Valid values are DEBUG5, + DEBUG4, DEBUG3, DEBUG2, + DEBUG1, LOG, NOTICE, + WARNING, ERROR, FATAL, + and PANIC. Each level includes all the levels that follow it. 
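As an illustration of the syslog-related parameters above (defaults shown; the syslog daemon must still route the chosen facility, for example with the local0.* line given earlier):

log_destination = 'syslog'
syslog_facility = 'LOCAL0'   # default; LOCAL0 through LOCAL7 are accepted
syslog_ident = 'postgres'    # default program name used in syslog entries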
The later the level, the fewer messages are sent. The default is - NOTICE. Note that LOG has a different - rank here than in log_min_messages. + NOTICE. Note that LOG has a different + rank here than in log_min_messages. @@ -4795,21 +4795,21 @@ local0.* /var/log/postgresql log_min_messages (enum) - log_min_messages configuration parameter + log_min_messages configuration parameter Controls which message levels are written to the server log. - Valid values are DEBUG5, DEBUG4, - DEBUG3, DEBUG2, DEBUG1, - INFO, NOTICE, WARNING, - ERROR, LOG, FATAL, and - PANIC. Each level includes all the levels that + Valid values are DEBUG5, DEBUG4, + DEBUG3, DEBUG2, DEBUG1, + INFO, NOTICE, WARNING, + ERROR, LOG, FATAL, and + PANIC. Each level includes all the levels that follow it. The later the level, the fewer messages are sent - to the log. The default is WARNING. Note that - LOG has a different rank here than in - client_min_messages. + to the log. The default is WARNING. Note that + LOG has a different rank here than in + client_min_messages. Only superusers can change this setting. @@ -4818,7 +4818,7 @@ local0.* /var/log/postgresql log_min_error_statement (enum) - log_min_error_statement configuration parameter + log_min_error_statement configuration parameter @@ -4846,7 +4846,7 @@ local0.* /var/log/postgresql log_min_duration_statement (integer) - log_min_duration_statement configuration parameter + log_min_duration_statement configuration parameter @@ -4872,9 +4872,9 @@ local0.* /var/log/postgresql When using this option together with , the text of statements that are logged because of - log_statement will not be repeated in the + log_statement will not be repeated in the duration log message. - If you are not using syslog, it is recommended + If you are not using syslog, it is recommended that you log the PID or session ID using so that you can link the statement message to the later @@ -4888,7 +4888,7 @@ local0.* /var/log/postgresql explains the message - severity levels used by PostgreSQL. If logging output + severity levels used by PostgreSQL. If logging output is sent to syslog or Windows' eventlog, the severity levels are translated as shown in the table. @@ -4901,73 +4901,73 @@ local0.* /var/log/postgresql Severity Usage - syslog - eventlog + syslog + eventlog - DEBUG1..DEBUG5 + DEBUG1..DEBUG5 Provides successively-more-detailed information for use by developers. - DEBUG - INFORMATION + DEBUG + INFORMATION - INFO + INFO Provides information implicitly requested by the user, - e.g., output from VACUUM VERBOSE. - INFO - INFORMATION + e.g., output from VACUUM VERBOSE. + INFO + INFORMATION - NOTICE + NOTICE Provides information that might be helpful to users, e.g., notice of truncation of long identifiers. - NOTICE - INFORMATION + NOTICE + INFORMATION - WARNING - Provides warnings of likely problems, e.g., COMMIT + WARNING + Provides warnings of likely problems, e.g., COMMIT outside a transaction block. - NOTICE - WARNING + NOTICE + WARNING - ERROR + ERROR Reports an error that caused the current command to abort. - WARNING - ERROR + WARNING + ERROR - LOG + LOG Reports information of interest to administrators, e.g., checkpoint activity. - INFO - INFORMATION + INFO + INFORMATION - FATAL + FATAL Reports an error that caused the current session to abort. - ERR - ERROR + ERR + ERROR - PANIC + PANIC Reports an error that caused all database sessions to abort. 
- CRIT - ERROR + CRIT + ERROR @@ -4982,15 +4982,15 @@ local0.* /var/log/postgresql application_name (string) - application_name configuration parameter + application_name configuration parameter The application_name can be any string of less than - NAMEDATALEN characters (64 characters in a standard build). + NAMEDATALEN characters (64 characters in a standard build). It is typically set by an application upon connection to the server. - The name will be displayed in the pg_stat_activity view + The name will be displayed in the pg_stat_activity view and included in CSV log entries. It can also be included in regular log entries via the parameter. Only printable ASCII characters may be used in the @@ -5003,17 +5003,17 @@ local0.* /var/log/postgresql debug_print_parse (boolean) - debug_print_parse configuration parameter + debug_print_parse configuration parameter debug_print_rewritten (boolean) - debug_print_rewritten configuration parameter + debug_print_rewritten configuration parameter debug_print_plan (boolean) - debug_print_plan configuration parameter + debug_print_plan configuration parameter @@ -5021,7 +5021,7 @@ local0.* /var/log/postgresql These parameters enable various debugging output to be emitted. When set, they print the resulting parse tree, the query rewriter output, or the execution plan for each executed query. - These messages are emitted at LOG message level, so by + These messages are emitted at LOG message level, so by default they will appear in the server log but will not be sent to the client. You can change that by adjusting and/or @@ -5034,7 +5034,7 @@ local0.* /var/log/postgresql debug_pretty_print (boolean) - debug_pretty_print configuration parameter + debug_pretty_print configuration parameter @@ -5043,7 +5043,7 @@ local0.* /var/log/postgresql produced by debug_print_parse, debug_print_rewritten, or debug_print_plan. This results in more readable - but much longer output than the compact format used when + but much longer output than the compact format used when it is off. It is on by default. @@ -5052,7 +5052,7 @@ local0.* /var/log/postgresql log_checkpoints (boolean) - log_checkpoints configuration parameter + log_checkpoints configuration parameter @@ -5060,7 +5060,7 @@ local0.* /var/log/postgresql Causes checkpoints and restartpoints to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is off. @@ -5069,7 +5069,7 @@ local0.* /var/log/postgresql log_connections (boolean) - log_connections configuration parameter + log_connections configuration parameter @@ -5078,14 +5078,14 @@ local0.* /var/log/postgresql as well as successful completion of client authentication. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. - The default is off. + The default is off. - Some client programs, like psql, attempt + Some client programs, like psql, attempt to connect twice while determining if a password is required, so - duplicate connection received messages do not + duplicate connection received messages do not necessarily indicate a problem. 
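A sketch combining the message-level thresholds and when-to-log switches discussed above; the duration threshold is an arbitrary illustrative value, the rest are the defaults or simple example overrides:

client_min_messages = notice          # default
log_min_messages = warning            # default
log_min_duration_statement = 250ms    # illustrative threshold; statements at least this long are logged
log_checkpoints = on                  # default is off
log_connections = on                  # default is off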
@@ -5095,7 +5095,7 @@ local0.* /var/log/postgresql log_disconnections (boolean) - log_disconnections configuration parameter + log_disconnections configuration parameter @@ -5105,7 +5105,7 @@ local0.* /var/log/postgresql plus the duration of the session. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. - The default is off. + The default is off. @@ -5114,13 +5114,13 @@ local0.* /var/log/postgresql log_duration (boolean) - log_duration configuration parameter + log_duration configuration parameter Causes the duration of every completed statement to be logged. - The default is off. + The default is off. Only superusers can change this setting. @@ -5133,10 +5133,10 @@ local0.* /var/log/postgresql The difference between setting this option and setting to zero is that - exceeding log_min_duration_statement forces the text of + exceeding log_min_duration_statement forces the text of the query to be logged, but this option doesn't. Thus, if - log_duration is on and - log_min_duration_statement has a positive value, all + log_duration is on and + log_min_duration_statement has a positive value, all durations are logged but the query text is included only for statements exceeding the threshold. This behavior can be useful for gathering statistics in high-load installations. @@ -5148,18 +5148,18 @@ local0.* /var/log/postgresql log_error_verbosity (enum) - log_error_verbosity configuration parameter + log_error_verbosity configuration parameter Controls the amount of detail written in the server log for each - message that is logged. Valid values are TERSE, - DEFAULT, and VERBOSE, each adding more - fields to displayed messages. TERSE excludes - the logging of DETAIL, HINT, - QUERY, and CONTEXT error information. - VERBOSE output includes the SQLSTATE error + message that is logged. Valid values are TERSE, + DEFAULT, and VERBOSE, each adding more + fields to displayed messages. TERSE excludes + the logging of DETAIL, HINT, + QUERY, and CONTEXT error information. + VERBOSE output includes the SQLSTATE error code (see also ) and the source code file name, function name, and line number that generated the error. Only superusers can change this setting. @@ -5170,7 +5170,7 @@ local0.* /var/log/postgresql log_hostname (boolean) - log_hostname configuration parameter + log_hostname configuration parameter @@ -5179,7 +5179,7 @@ local0.* /var/log/postgresql connecting host. Turning this parameter on causes logging of the host name as well. Note that depending on your host name resolution setup this might impose a non-negligible performance penalty. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5188,14 +5188,14 @@ local0.* /var/log/postgresql log_line_prefix (string) - log_line_prefix configuration parameter + log_line_prefix configuration parameter - This is a printf-style string that is output at the + This is a printf-style string that is output at the beginning of each log line. - % characters begin escape sequences + % characters begin escape sequences that are replaced with status information as outlined below. Unrecognized escapes are ignored. Other characters are copied straight to the log line. Some escapes are @@ -5207,9 +5207,9 @@ local0.* /var/log/postgresql right with spaces to give it a minimum width, whereas a positive value will pad on the left. Padding can be useful to aid human readability in log files. 
- This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is - '%m [%p] ' which logs a time stamp and the process ID. + '%m [%p] ' which logs a time stamp and the process ID. @@ -5310,19 +5310,19 @@ local0.* /var/log/postgresql %% - Literal % + Literal % no - The %c escape prints a quasi-unique session identifier, + The %c escape prints a quasi-unique session identifier, consisting of two 4-byte hexadecimal numbers (without leading zeros) separated by a dot. The numbers are the process start time and the - process ID, so %c can also be used as a space saving way + process ID, so %c can also be used as a space saving way of printing those items. For example, to generate the session - identifier from pg_stat_activity, use this query: + identifier from pg_stat_activity, use this query: SELECT to_hex(trunc(EXTRACT(EPOCH FROM backend_start))::integer) || '.' || to_hex(pid) @@ -5333,7 +5333,7 @@ FROM pg_stat_activity; - If you set a nonempty value for log_line_prefix, + If you set a nonempty value for log_line_prefix, you should usually make its last character be a space, to provide visual separation from the rest of the log line. A punctuation character can be used too. @@ -5342,15 +5342,15 @@ FROM pg_stat_activity; - Syslog produces its own + Syslog produces its own time stamp and process ID information, so you probably do not want to - include those escapes if you are logging to syslog. + include those escapes if you are logging to syslog. - The %q escape is useful when including information that is + The %q escape is useful when including information that is only available in session (backend) context like user or database name. For example: @@ -5364,7 +5364,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_lock_waits (boolean) - log_lock_waits configuration parameter + log_lock_waits configuration parameter @@ -5372,7 +5372,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Controls whether a log message is produced when a session waits longer than to acquire a lock. This is useful in determining if lock waits are causing - poor performance. The default is off. + poor performance. The default is off. Only superusers can change this setting. @@ -5381,22 +5381,22 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_statement (enum) - log_statement configuration parameter + log_statement configuration parameter Controls which SQL statements are logged. Valid values are - none (off), ddl, mod, and - all (all statements). ddl logs all data definition - statements, such as CREATE, ALTER, and - DROP statements. mod logs all - ddl statements, plus data-modifying statements - such as INSERT, - UPDATE, DELETE, TRUNCATE, - and COPY FROM. - PREPARE, EXECUTE, and - EXPLAIN ANALYZE statements are also logged if their + none (off), ddl, mod, and + all (all statements). ddl logs all data definition + statements, such as CREATE, ALTER, and + DROP statements. mod logs all + ddl statements, plus data-modifying statements + such as INSERT, + UPDATE, DELETE, TRUNCATE, + and COPY FROM. + PREPARE, EXECUTE, and + EXPLAIN ANALYZE statements are also logged if their contained command is of an appropriate type. For clients using extended query protocol, logging occurs when an Execute message is received, and values of the Bind parameters are included @@ -5404,20 +5404,20 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' - The default is none. Only superusers can change this + The default is none. Only superusers can change this setting. 
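Pulling the statement-logging options together, a postgresql.conf sketch (the prefix reuses the example string shown above):

log_statement = 'ddl'                      # log CREATE, ALTER, DROP, etc.; default is 'none'
log_lock_waits = on                        # report waits longer than deadlock_timeout; default off
log_line_prefix = '%m [%p] %q%u@%d/%a '    # the example prefix discussed earlier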
Statements that contain simple syntax errors are not logged - even by the log_statement = all setting, + even by the log_statement = all setting, because the log message is emitted only after basic parsing has been done to determine the statement type. In the case of extended query protocol, this setting likewise does not log statements that fail before the Execute phase (i.e., during parse analysis or - planning). Set log_min_error_statement to - ERROR (or lower) to log such statements. + planning). Set log_min_error_statement to + ERROR (or lower) to log such statements. @@ -5426,14 +5426,14 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_replication_commands (boolean) - log_replication_commands configuration parameter + log_replication_commands configuration parameter Causes each replication command to be logged in the server log. See for more information about - replication command. The default value is off. + replication command. The default value is off. Only superusers can change this setting. @@ -5442,7 +5442,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_temp_files (integer) - log_temp_files configuration parameter + log_temp_files configuration parameter @@ -5463,7 +5463,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_timezone (string) - log_timezone configuration parameter + log_timezone configuration parameter @@ -5471,11 +5471,11 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Sets the time zone used for timestamps written in the server log. Unlike , this value is cluster-wide, so that all sessions will report timestamps consistently. - The built-in default is GMT, but that is typically - overridden in postgresql.conf; initdb + The built-in default is GMT, but that is typically + overridden in postgresql.conf; initdb will install a setting there corresponding to its system environment. See for more information. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5487,10 +5487,10 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Using CSV-Format Log Output - Including csvlog in the log_destination list + Including csvlog in the log_destination list provides a convenient way to import log files into a database table. This option emits log lines in comma-separated-values - (CSV) format, + (CSV) format, with these columns: time stamp with milliseconds, user name, @@ -5512,10 +5512,10 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' character count of the error position therein, error context, user query that led to the error (if any and enabled by - log_min_error_statement), + log_min_error_statement), character count of the error position therein, location of the error in the PostgreSQL source code - (if log_error_verbosity is set to verbose), + (if log_error_verbosity is set to verbose), and application name. Here is a sample table definition for storing CSV-format log output: @@ -5551,7 +5551,7 @@ CREATE TABLE postgres_log - To import a log file into this table, use the COPY FROM + To import a log file into this table, use the COPY FROM command: @@ -5567,7 +5567,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Set log_filename and - log_rotation_age to provide a consistent, + log_rotation_age to provide a consistent, predictable naming scheme for your log files. This lets you predict what the file name will be and know when an individual log file is complete and therefore ready to be imported. 
@@ -5584,7 +5584,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - Set log_truncate_on_rotation to on so + Set log_truncate_on_rotation to on so that old log data isn't mixed with the new in the same file. @@ -5593,14 +5593,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The table definition above includes a primary key specification. This is useful to protect against accidentally importing the same - information twice. The COPY command commits all of the + information twice. The COPY command commits all of the data it imports at one time, so any error will cause the entire import to fail. If you import a partial log file and later import the file again when it is complete, the primary key violation will cause the import to fail. Wait until the log is complete and closed before importing. This procedure will also protect against accidentally importing a partial line that hasn't been completely - written, which would also cause COPY to fail. + written, which would also cause COPY to fail. @@ -5613,7 +5613,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; These settings control how process titles of server processes are modified. Process titles are typically viewed using programs like - ps or, on Windows, Process Explorer. + ps or, on Windows, Process Explorer. See for details. @@ -5621,18 +5621,18 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; cluster_name (string) - cluster_name configuration parameter + cluster_name configuration parameter Sets the cluster name that appears in the process title for all server processes in this cluster. The name can be any string of less - than NAMEDATALEN characters (64 characters in a standard + than NAMEDATALEN characters (64 characters in a standard build). Only printable ASCII characters may be used in the cluster_name value. Other characters will be replaced with question marks (?). No name is shown - if this parameter is set to the empty string '' (which is + if this parameter is set to the empty string '' (which is the default). This parameter can only be set at server start. @@ -5641,15 +5641,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; update_process_title (boolean) - update_process_title configuration parameter + update_process_title configuration parameter Enables updating of the process title every time a new SQL command is received by the server. - This setting defaults to on on most platforms, but it - defaults to off on Windows due to that platform's larger + This setting defaults to on on most platforms, but it + defaults to off on Windows due to that platform's larger overhead for updating the process title. Only superusers can change this setting. @@ -5678,7 +5678,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_activities (boolean) - track_activities configuration parameter + track_activities configuration parameter @@ -5698,14 +5698,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_activity_query_size (integer) - track_activity_query_size configuration parameter + track_activity_query_size configuration parameter Specifies the number of bytes reserved to track the currently executing command for each active session, for the - pg_stat_activity.query field. + pg_stat_activity.query field. The default value is 1024. This parameter can only be set at server start. 
@@ -5715,7 +5715,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_counts (boolean) - track_counts configuration parameter + track_counts configuration parameter @@ -5731,7 +5731,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_io_timing (boolean) - track_io_timing configuration parameter + track_io_timing configuration parameter @@ -5743,7 +5743,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; measure the overhead of timing on your system. I/O timing information is displayed in , in the output of - when the BUFFERS option is + when the BUFFERS option is used, and by . Only superusers can change this setting. @@ -5753,7 +5753,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_functions (enum) - track_functions configuration parameter + track_functions configuration parameter @@ -5767,7 +5767,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - SQL-language functions that are simple enough to be inlined + SQL-language functions that are simple enough to be inlined into the calling query will not be tracked, regardless of this setting. @@ -5778,7 +5778,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; stats_temp_directory (string) - stats_temp_directory configuration parameter + stats_temp_directory configuration parameter @@ -5788,7 +5788,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; is pg_stat_tmp. Pointing this at a RAM-based file system will decrease physical I/O requirements and can lead to improved performance. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5804,29 +5804,29 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; log_statement_stats (boolean) - log_statement_stats configuration parameter + log_statement_stats configuration parameter log_parser_stats (boolean) - log_parser_stats configuration parameter + log_parser_stats configuration parameter log_planner_stats (boolean) - log_planner_stats configuration parameter + log_planner_stats configuration parameter log_executor_stats (boolean) - log_executor_stats configuration parameter + log_executor_stats configuration parameter For each query, output performance statistics of the respective module to the server log. This is a crude profiling - instrument, similar to the Unix getrusage() operating + instrument, similar to the Unix getrusage() operating system facility. log_statement_stats reports total statement statistics, while the others report per-module statistics. log_statement_stats cannot be enabled together with @@ -5850,7 +5850,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - These settings control the behavior of the autovacuum + These settings control the behavior of the autovacuum feature. Refer to for more information. Note that many of these settings can be overridden on a per-table basis; see autovacuum (boolean) - autovacuum configuration parameter + autovacuum configuration parameter @@ -5871,7 +5871,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum launcher daemon. This is on by default; however, must also be enabled for autovacuum to work. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; however, autovacuuming can be disabled for individual tables by changing table storage parameters. 
@@ -5887,7 +5887,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; log_autovacuum_min_duration (integer) - log_autovacuum_min_duration configuration parameter + log_autovacuum_min_duration configuration parameter @@ -5902,7 +5902,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; logged if an autovacuum action is skipped due to the existence of a conflicting lock. Enabling this parameter can be helpful in tracking autovacuum activity. This parameter can only be set in - the postgresql.conf file or on the server command line; + the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -5912,7 +5912,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_max_workers (integer) - autovacuum_max_workers configuration parameter + autovacuum_max_workers configuration parameter @@ -5927,17 +5927,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_naptime (integer) - autovacuum_naptime configuration parameter + autovacuum_naptime configuration parameter Specifies the minimum delay between autovacuum runs on any given database. In each round the daemon examines the - database and issues VACUUM and ANALYZE commands + database and issues VACUUM and ANALYZE commands as needed for tables in that database. The delay is measured - in seconds, and the default is one minute (1min). - This parameter can only be set in the postgresql.conf + in seconds, and the default is one minute (1min). + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5946,15 +5946,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_vacuum_threshold (integer) - autovacuum_vacuum_threshold configuration parameter + autovacuum_vacuum_threshold configuration parameter Specifies the minimum number of updated or deleted tuples needed - to trigger a VACUUM in any one table. + to trigger a VACUUM in any one table. The default is 50 tuples. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -5965,15 +5965,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_analyze_threshold (integer) - autovacuum_analyze_threshold configuration parameter + autovacuum_analyze_threshold configuration parameter Specifies the minimum number of inserted, updated or deleted tuples - needed to trigger an ANALYZE in any one table. + needed to trigger an ANALYZE in any one table. The default is 50 tuples. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -5984,16 +5984,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_vacuum_scale_factor (floating point) - autovacuum_vacuum_scale_factor configuration parameter + autovacuum_vacuum_scale_factor configuration parameter Specifies a fraction of the table size to add to autovacuum_vacuum_threshold - when deciding whether to trigger a VACUUM. + when deciding whether to trigger a VACUUM. The default is 0.2 (20% of table size). 
- This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6004,16 +6004,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_analyze_scale_factor (floating point) - autovacuum_analyze_scale_factor configuration parameter + autovacuum_analyze_scale_factor configuration parameter Specifies a fraction of the table size to add to autovacuum_analyze_threshold - when deciding whether to trigger an ANALYZE. + when deciding whether to trigger an ANALYZE. The default is 0.1 (10% of table size). - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6024,14 +6024,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_freeze_max_age (integer) - autovacuum_freeze_max_age configuration parameter + autovacuum_freeze_max_age configuration parameter Specifies the maximum age (in transactions) that a table's - pg_class.relfrozenxid field can - attain before a VACUUM operation is forced + pg_class.relfrozenxid field can + attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. @@ -6039,7 +6039,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Vacuum also allows removal of old files from the - pg_xact subdirectory, which is why the default + pg_xact subdirectory, which is why the default is a relatively low 200 million transactions. This parameter can only be set at server start, but the setting can be reduced for individual tables by @@ -6058,8 +6058,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Specifies the maximum age (in multixacts) that a table's - pg_class.relminmxid field can - attain before a VACUUM operation is forced to + pg_class.relminmxid field can + attain before a VACUUM operation is forced to prevent multixact ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. @@ -6067,7 +6067,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Vacuuming multixacts also allows removal of old files from the - pg_multixact/members and pg_multixact/offsets + pg_multixact/members and pg_multixact/offsets subdirectories, which is why the default is a relatively low 400 million multixacts. This parameter can only be set at server start, but the setting can @@ -6080,16 +6080,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_vacuum_cost_delay (integer) - autovacuum_vacuum_cost_delay configuration parameter + autovacuum_vacuum_cost_delay configuration parameter Specifies the cost delay value that will be used in automatic - VACUUM operations. If -1 is specified, the regular + VACUUM operations. If -1 is specified, the regular value will be used. The default value is 20 milliseconds. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. 
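To illustrate how the threshold and scale factor combine: with the defaults, a table of 1,000,000 tuples is vacuumed once about 50 + 0.2 * 1,000,000 = 200,050 tuples have been updated or deleted. A sketch of per-table overrides for a large, frequently updated table (the table name events is only a placeholder):

-- trigger autovacuum on the hypothetical events table after roughly 1% churn
-- instead of 20%, and shorten its cost-based delay
ALTER TABLE events SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold    = 1000,
    autovacuum_vacuum_cost_delay   = 10
);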
@@ -6100,19 +6100,19 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_vacuum_cost_limit (integer) - autovacuum_vacuum_cost_limit configuration parameter + autovacuum_vacuum_cost_limit configuration parameter Specifies the cost limit value that will be used in automatic - VACUUM operations. If -1 is specified (which is the + VACUUM operations. If -1 is specified (which is the default), the regular value will be used. Note that the value is distributed proportionally among the running autovacuum workers, if there is more than one, so that the sum of the limits for each worker does not exceed the value of this variable. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6133,9 +6133,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; search_path (string) - search_path configuration parameter + search_path configuration parameter - pathfor schemas + pathfor schemas @@ -6151,32 +6151,32 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The value for search_path must be a comma-separated list of schema names. Any name that is not an existing schema, or is - a schema for which the user does not have USAGE + a schema for which the user does not have USAGE permission, is silently ignored. If one of the list items is the special name $user, then the schema having the name returned by - SESSION_USER is substituted, if there is such a schema - and the user has USAGE permission for it. + SESSION_USER is substituted, if there is such a schema + and the user has USAGE permission for it. (If not, $user is ignored.) - The system catalog schema, pg_catalog, is always + The system catalog schema, pg_catalog, is always searched, whether it is mentioned in the path or not. If it is mentioned in the path then it will be searched in the specified - order. If pg_catalog is not in the path then it will - be searched before searching any of the path items. + order. If pg_catalog is not in the path then it will + be searched before searching any of the path items. Likewise, the current session's temporary-table schema, - pg_temp_nnn, is always searched if it + pg_temp_nnn, is always searched if it exists. It can be explicitly listed in the path by using the - alias pg_temppg_temp. If it is not listed in the path then - it is searched first (even before pg_catalog). However, + alias pg_temppg_temp. If it is not listed in the path then + it is searched first (even before pg_catalog). However, the temporary schema is only searched for relation (table, view, sequence, etc) and data type names. It is never searched for function or operator names. @@ -6193,7 +6193,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The default value for this parameter is "$user", public. This setting supports shared use of a database (where no users - have private schemas, and all share use of public), + have private schemas, and all share use of public), private per-user schemas, and combinations of these. Other effects can be obtained by altering the default search path setting, either globally or per-user. @@ -6202,11 +6202,11 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The current effective value of the search path can be examined via the SQL function - current_schemas + current_schemas (see ). 
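For example:

-- the raw parameter value versus the schemas it actually resolves to
SHOW search_path;
SELECT current_schemas(true);   -- true includes implicitly searched schemas such as pg_catalog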
This is not quite the same as examining the value of search_path, since - current_schemas shows how the items + current_schemas shows how the items appearing in search_path were resolved. @@ -6219,20 +6219,20 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; row_security (boolean) - row_security configuration parameter + row_security configuration parameter This variable controls whether to raise an error in lieu of applying a - row security policy. When set to on, policies apply - normally. When set to off, queries fail which would - otherwise apply at least one policy. The default is on. - Change to off where limited row visibility could cause - incorrect results; for example, pg_dump makes that + row security policy. When set to on, policies apply + normally. When set to off, queries fail which would + otherwise apply at least one policy. The default is on. + Change to off where limited row visibility could cause + incorrect results; for example, pg_dump makes that change by default. This variable has no effect on roles which bypass every row security policy, to wit, superusers and roles with - the BYPASSRLS attribute. + the BYPASSRLS attribute. @@ -6245,14 +6245,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; default_tablespace (string) - default_tablespace configuration parameter + default_tablespace configuration parameter - tablespacedefault + tablespacedefault This variable specifies the default tablespace in which to create - objects (tables and indexes) when a CREATE command does + objects (tables and indexes) when a CREATE command does not explicitly specify a tablespace. @@ -6260,9 +6260,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The value is either the name of a tablespace, or an empty string to specify using the default tablespace of the current database. If the value does not match the name of any existing tablespace, - PostgreSQL will automatically use the default + PostgreSQL will automatically use the default tablespace of the current database. If a nondefault tablespace - is specified, the user must have CREATE privilege + is specified, the user must have CREATE privilege for it, or creation attempts will fail. @@ -6287,38 +6287,38 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; temp_tablespaces (string) - temp_tablespaces configuration parameter + temp_tablespaces configuration parameter - tablespacetemporary + tablespacetemporary This variable specifies tablespaces in which to create temporary objects (temp tables and indexes on temp tables) when a - CREATE command does not explicitly specify a tablespace. + CREATE command does not explicitly specify a tablespace. Temporary files for purposes such as sorting large data sets are also created in these tablespaces. The value is a list of names of tablespaces. When there is more than - one name in the list, PostgreSQL chooses a random + one name in the list, PostgreSQL chooses a random member of the list each time a temporary object is to be created; except that within a transaction, successively created temporary objects are placed in successive tablespaces from the list. If the selected element of the list is an empty string, - PostgreSQL will automatically use the default + PostgreSQL will automatically use the default tablespace of the current database instead. 
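For example, temporary objects and sort files could be spread over two dedicated tablespaces (the names temp_a and temp_b are placeholders for tablespaces that must already exist):

-- use two tablespaces, chosen as described above, for this session's temporary objects
SET temp_tablespaces TO temp_a, temp_b;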
- When temp_tablespaces is set interactively, specifying a + When temp_tablespaces is set interactively, specifying a nonexistent tablespace is an error, as is specifying a tablespace for - which the user does not have CREATE privilege. However, + which the user does not have CREATE privilege. However, when using a previously set value, nonexistent tablespaces are ignored, as are tablespaces for which the user lacks - CREATE privilege. In particular, this rule applies when - using a value set in postgresql.conf. + CREATE privilege. In particular, this rule applies when + using a value set in postgresql.conf. @@ -6336,18 +6336,18 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; check_function_bodies (boolean) - check_function_bodies configuration parameter + check_function_bodies configuration parameter - This parameter is normally on. When set to off, it + This parameter is normally on. When set to off, it disables validation of the function body string during . Disabling validation avoids side effects of the validation process and avoids false positives due to problems such as forward references. Set this parameter - to off before loading functions on behalf of other - users; pg_dump does so automatically. + to off before loading functions on behalf of other + users; pg_dump does so automatically. @@ -6359,7 +6359,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; setting default - default_transaction_isolation configuration parameter + default_transaction_isolation configuration parameter @@ -6386,14 +6386,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; setting default - default_transaction_read_only configuration parameter + default_transaction_read_only configuration parameter A read-only SQL transaction cannot alter non-temporary tables. This parameter controls the default read-only status of each new - transaction. The default is off (read/write). + transaction. The default is off (read/write). @@ -6409,12 +6409,12 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; setting default - default_transaction_deferrable configuration parameter + default_transaction_deferrable configuration parameter - When running at the serializable isolation level, + When running at the serializable isolation level, a deferrable read-only SQL transaction may be delayed before it is allowed to proceed. However, once it begins executing it does not incur any of the overhead required to ensure @@ -6427,7 +6427,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; This parameter controls the default deferrable status of each new transaction. It currently has no effect on read-write transactions or those operating at isolation levels lower - than serializable. The default is off. + than serializable. The default is off. @@ -6440,7 +6440,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; session_replication_role (enum) - session_replication_role configuration parameter + session_replication_role configuration parameter @@ -6448,8 +6448,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Controls firing of replication-related triggers and rules for the current session. Setting this variable requires superuser privilege and results in discarding any previously cached - query plans. Possible values are origin (the default), - replica and local. + query plans. Possible values are origin (the default), + replica and local. See for more information. 
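For example, a superuser performing a bulk reload could keep ordinary triggers and rules from firing for the duration of one transaction; a minimal sketch:

BEGIN;
-- only triggers marked ENABLE REPLICA or ENABLE ALWAYS fire while this is set
SET LOCAL session_replication_role = 'replica';
-- ... bulk COPY / INSERT statements here ...
COMMIT;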
@@ -6459,21 +6459,21 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; statement_timeout (integer) - statement_timeout configuration parameter + statement_timeout configuration parameter Abort any statement that takes more than the specified number of milliseconds, starting from the time the command arrives at the server - from the client. If log_min_error_statement is set to - ERROR or lower, the statement that timed out will also be + from the client. If log_min_error_statement is set to + ERROR or lower, the statement that timed out will also be logged. A value of zero (the default) turns this off. - Setting statement_timeout in - postgresql.conf is not recommended because it would + Setting statement_timeout in + postgresql.conf is not recommended because it would affect all sessions. @@ -6482,7 +6482,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; lock_timeout (integer) - lock_timeout configuration parameter + lock_timeout configuration parameter @@ -6491,24 +6491,24 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; milliseconds while attempting to acquire a lock on a table, index, row, or other database object. The time limit applies separately to each lock acquisition attempt. The limit applies both to explicit - locking requests (such as LOCK TABLE, or SELECT - FOR UPDATE without NOWAIT) and to implicitly-acquired - locks. If log_min_error_statement is set to - ERROR or lower, the statement that timed out will be + locking requests (such as LOCK TABLE, or SELECT + FOR UPDATE without NOWAIT) and to implicitly-acquired + locks. If log_min_error_statement is set to + ERROR or lower, the statement that timed out will be logged. A value of zero (the default) turns this off. - Unlike statement_timeout, this timeout can only occur - while waiting for locks. Note that if statement_timeout - is nonzero, it is rather pointless to set lock_timeout to + Unlike statement_timeout, this timeout can only occur + while waiting for locks. Note that if statement_timeout + is nonzero, it is rather pointless to set lock_timeout to the same or larger value, since the statement timeout would always trigger first. - Setting lock_timeout in - postgresql.conf is not recommended because it would + Setting lock_timeout in + postgresql.conf is not recommended because it would affect all sessions. @@ -6517,7 +6517,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; idle_in_transaction_session_timeout (integer) - idle_in_transaction_session_timeout configuration parameter + idle_in_transaction_session_timeout configuration parameter @@ -6537,21 +6537,21 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; vacuum_freeze_table_age (integer) - vacuum_freeze_table_age configuration parameter + vacuum_freeze_table_age configuration parameter - VACUUM performs an aggressive scan if the table's - pg_class.relfrozenxid field has reached + VACUUM performs an aggressive scan if the table's + pg_class.relfrozenxid field has reached the age specified by this setting. An aggressive scan differs from - a regular VACUUM in that it visits every page that might + a regular VACUUM in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million transactions. 
Although users can - set this value anywhere from zero to two billions, VACUUM + set this value anywhere from zero to two billions, VACUUM will silently limit the effective value to 95% of , so that a - periodical manual VACUUM has a chance to run before an + periodical manual VACUUM has a chance to run before an anti-wraparound autovacuum is launched for the table. For more information see . @@ -6562,17 +6562,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; vacuum_freeze_min_age (integer) - vacuum_freeze_min_age configuration parameter + vacuum_freeze_min_age configuration parameter - Specifies the cutoff age (in transactions) that VACUUM + Specifies the cutoff age (in transactions) that VACUUM should use to decide whether to freeze row versions while scanning a table. The default is 50 million transactions. Although users can set this value anywhere from zero to one billion, - VACUUM will silently limit the effective value to half + VACUUM will silently limit the effective value to half the value of , so that there is not an unreasonably short time between forced autovacuums. For more information see vacuum_multixact_freeze_table_age (integer) - vacuum_multixact_freeze_table_age configuration parameter + vacuum_multixact_freeze_table_age configuration parameter - VACUUM performs an aggressive scan if the table's - pg_class.relminmxid field has reached + VACUUM performs an aggressive scan if the table's + pg_class.relminmxid field has reached the age specified by this setting. An aggressive scan differs from - a regular VACUUM in that it visits every page that might + a regular VACUUM in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million multixacts. Although users can set this value anywhere from zero to two billions, - VACUUM will silently limit the effective value to 95% of + VACUUM will silently limit the effective value to 95% of , so that a - periodical manual VACUUM has a chance to run before an + periodical manual VACUUM has a chance to run before an anti-wraparound is launched for the table. For more information see . @@ -6608,17 +6608,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; vacuum_multixact_freeze_min_age (integer) - vacuum_multixact_freeze_min_age configuration parameter + vacuum_multixact_freeze_min_age configuration parameter - Specifies the cutoff age (in multixacts) that VACUUM + Specifies the cutoff age (in multixacts) that VACUUM should use to decide whether to replace multixact IDs with a newer transaction ID or multixact ID while scanning a table. The default is 5 million multixacts. Although users can set this value anywhere from zero to one billion, - VACUUM will silently limit the effective value to half + VACUUM will silently limit the effective value to half the value of , so that there is not an unreasonably short time between forced autovacuums. 
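To see how close individual tables are to these limits, the ages tracked in pg_class can be queried directly; for example (mxid_age() is assumed to be available, as it is in recent releases):

-- tables with the oldest unfrozen XIDs and multixact IDs; values approaching
-- autovacuum_freeze_max_age or autovacuum_multixact_freeze_max_age will soon
-- force an aggressive, anti-wraparound VACUUM
SELECT relname,
       age(relfrozenxid)    AS xid_age,
       mxid_age(relminmxid) AS mxid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;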
@@ -6630,7 +6630,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; bytea_output (enum) - bytea_output configuration parameter + bytea_output configuration parameter @@ -6648,7 +6648,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; xmlbinary (enum) - xmlbinary configuration parameter + xmlbinary configuration parameter @@ -6676,10 +6676,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; xmloption (enum) - xmloption configuration parameter + xmloption configuration parameter - SET XML OPTION + SET XML OPTION XML option @@ -6709,16 +6709,16 @@ SET XML OPTION { DOCUMENT | CONTENT }; gin_pending_list_limit (integer) - gin_pending_list_limit configuration parameter + gin_pending_list_limit configuration parameter Sets the maximum size of the GIN pending list which is used - when fastupdate is enabled. If the list grows + when fastupdate is enabled. If the list grows larger than this maximum size, it is cleaned up by moving the entries in it to the main GIN data structure in bulk. - The default is four megabytes (4MB). This setting + The default is four megabytes (4MB). This setting can be overridden for individual GIN indexes by changing index storage parameters. See and @@ -6737,7 +6737,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; DateStyle (string) - DateStyle configuration parameter + DateStyle configuration parameter @@ -6745,16 +6745,16 @@ SET XML OPTION { DOCUMENT | CONTENT }; Sets the display format for date and time values, as well as the rules for interpreting ambiguous date input values. For historical reasons, this variable contains two independent - components: the output format specification (ISO, - Postgres, SQL, or German) + components: the output format specification (ISO, + Postgres, SQL, or German) and the input/output specification for year/month/day ordering - (DMY, MDY, or YMD). These - can be set separately or together. The keywords Euro - and European are synonyms for DMY; the - keywords US, NonEuro, and - NonEuropean are synonyms for MDY. See + (DMY, MDY, or YMD). These + can be set separately or together. The keywords Euro + and European are synonyms for DMY; the + keywords US, NonEuro, and + NonEuropean are synonyms for MDY. See for more information. The - built-in default is ISO, MDY, but + built-in default is ISO, MDY, but initdb will initialize the configuration file with a setting that corresponds to the behavior of the chosen lc_time locale. @@ -6765,28 +6765,28 @@ SET XML OPTION { DOCUMENT | CONTENT }; IntervalStyle (enum) - IntervalStyle configuration parameter + IntervalStyle configuration parameter Sets the display format for interval values. - The value sql_standard will produce + The value sql_standard will produce output matching SQL standard interval literals. - The value postgres (which is the default) will produce - output matching PostgreSQL releases prior to 8.4 + The value postgres (which is the default) will produce + output matching PostgreSQL releases prior to 8.4 when the - parameter was set to ISO. - The value postgres_verbose will produce output - matching PostgreSQL releases prior to 8.4 - when the DateStyle - parameter was set to non-ISO output. - The value iso_8601 will produce output matching the time - interval format with designators defined in section + parameter was set to ISO. + The value postgres_verbose will produce output + matching PostgreSQL releases prior to 8.4 + when the DateStyle + parameter was set to non-ISO output. 
+ The value iso_8601 will produce output matching the time + interval format with designators defined in section 4.4.3.2 of ISO 8601. - The IntervalStyle parameter also affects the + The IntervalStyle parameter also affects the interpretation of ambiguous interval input. See for more information. @@ -6796,15 +6796,15 @@ SET XML OPTION { DOCUMENT | CONTENT }; TimeZone (string) - TimeZone configuration parameter + TimeZone configuration parameter - time zone + time zone Sets the time zone for displaying and interpreting time stamps. - The built-in default is GMT, but that is typically - overridden in postgresql.conf; initdb + The built-in default is GMT, but that is typically + overridden in postgresql.conf; initdb will install a setting there corresponding to its system environment. See for more information. @@ -6814,14 +6814,14 @@ SET XML OPTION { DOCUMENT | CONTENT }; timezone_abbreviations (string) - timezone_abbreviations configuration parameter + timezone_abbreviations configuration parameter - time zone names + time zone names Sets the collection of time zone abbreviations that will be accepted - by the server for datetime input. The default is 'Default', + by the server for datetime input. The default is 'Default', which is a collection that works in most of the world; there are also 'Australia' and 'India', and other collections can be defined for a particular installation. @@ -6840,15 +6840,15 @@ SET XML OPTION { DOCUMENT | CONTENT }; display - extra_float_digits configuration parameter + extra_float_digits configuration parameter This parameter adjusts the number of digits displayed for - floating-point values, including float4, float8, + floating-point values, including float4, float8, and geometric data types. The parameter value is added to the - standard number of digits (FLT_DIG or DBL_DIG + standard number of digits (FLT_DIG or DBL_DIG as appropriate). The value can be set as high as 3, to include partially-significant digits; this is especially useful for dumping float data that needs to be restored exactly. Or it can be set @@ -6861,9 +6861,9 @@ SET XML OPTION { DOCUMENT | CONTENT }; client_encoding (string) - client_encoding configuration parameter + client_encoding configuration parameter - character set + character set @@ -6878,7 +6878,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; lc_messages (string) - lc_messages configuration parameter + lc_messages configuration parameter @@ -6910,7 +6910,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; lc_monetary (string) - lc_monetary configuration parameter + lc_monetary configuration parameter @@ -6929,7 +6929,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; lc_numeric (string) - lc_numeric configuration parameter + lc_numeric configuration parameter @@ -6948,7 +6948,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; lc_time (string) - lc_time configuration parameter + lc_time configuration parameter @@ -6967,7 +6967,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; default_text_search_config (string) - default_text_search_config configuration parameter + default_text_search_config configuration parameter @@ -6976,7 +6976,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; of the text search functions that do not have an explicit argument specifying the configuration. See for further information. 
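For example:

-- single-argument text search functions fall back to this configuration
SET default_text_search_config = 'pg_catalog.english';
SELECT to_tsvector('The quick brown foxes');   -- parsed with the english configuration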
- The built-in default is pg_catalog.simple, but + The built-in default is pg_catalog.simple, but initdb will initialize the configuration file with a setting that corresponds to the chosen lc_ctype locale, if a configuration @@ -6997,8 +6997,8 @@ SET XML OPTION { DOCUMENT | CONTENT }; server, in order to load additional functionality or achieve performance benefits. For example, a setting of '$libdir/mylib' would cause - mylib.so (or on some platforms, - mylib.sl) to be preloaded from the installation's standard + mylib.so (or on some platforms, + mylib.sl) to be preloaded from the installation's standard library directory. The differences between the settings are when they take effect and what privileges are required to change them. @@ -7007,14 +7007,14 @@ SET XML OPTION { DOCUMENT | CONTENT }; PostgreSQL procedural language libraries can be preloaded in this way, typically by using the syntax '$libdir/plXXX' where - XXX is pgsql, perl, - tcl, or python. + XXX is pgsql, perl, + tcl, or python. Only shared libraries specifically intended to be used with PostgreSQL can be loaded this way. Every PostgreSQL-supported library has - a magic block that is checked to guarantee compatibility. For + a magic block that is checked to guarantee compatibility. For this reason, non-PostgreSQL libraries cannot be loaded in this way. You might be able to use operating-system facilities such as LD_PRELOAD for that. @@ -7029,10 +7029,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; local_preload_libraries (string) - local_preload_libraries configuration parameter + local_preload_libraries configuration parameter - $libdir/plugins + $libdir/plugins @@ -7051,10 +7051,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; This option can be set by any user. Because of that, the libraries that can be loaded are restricted to those appearing in the - plugins subdirectory of the installation's + plugins subdirectory of the installation's standard library directory. (It is the database administrator's - responsibility to ensure that only safe libraries - are installed there.) Entries in local_preload_libraries + responsibility to ensure that only safe libraries + are installed there.) Entries in local_preload_libraries can specify this directory explicitly, for example $libdir/plugins/mylib, or just specify the library name — mylib would have @@ -7064,11 +7064,11 @@ SET XML OPTION { DOCUMENT | CONTENT }; The intent of this feature is to allow unprivileged users to load debugging or performance-measurement libraries into specific sessions - without requiring an explicit LOAD command. To that end, + without requiring an explicit LOAD command. To that end, it would be typical to set this parameter using the PGOPTIONS environment variable on the client or by using - ALTER ROLE SET. + ALTER ROLE SET. @@ -7083,7 +7083,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; session_preload_libraries (string) - session_preload_libraries configuration parameter + session_preload_libraries configuration parameter @@ -7104,10 +7104,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; The intent of this feature is to allow debugging or performance-measurement libraries to be loaded into specific sessions without an explicit - LOAD command being given. For + LOAD command being given. For example, could be enabled for all sessions under a given user name by setting this parameter - with ALTER ROLE SET. Also, this parameter can be changed + with ALTER ROLE SET. 
Also, this parameter can be changed without restarting the server (but changes only take effect when a new session is started), so it is easier to add new modules this way, even if they should apply to all sessions. @@ -7125,7 +7125,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; shared_preload_libraries (string) - shared_preload_libraries configuration parameter + shared_preload_libraries configuration parameter @@ -7182,9 +7182,9 @@ SET XML OPTION { DOCUMENT | CONTENT }; dynamic_library_path (string) - dynamic_library_path configuration parameter + dynamic_library_path configuration parameter - dynamic loading + dynamic loading @@ -7236,7 +7236,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' gin_fuzzy_search_limit (integer) - gin_fuzzy_search_limit configuration parameter + gin_fuzzy_search_limit configuration parameter @@ -7267,7 +7267,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' deadlock - deadlock_timeout configuration parameter + deadlock_timeout configuration parameter @@ -7280,7 +7280,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' just wait on the lock for a while before checking for a deadlock. Increasing this value reduces the amount of time wasted in needless deadlock checks, but slows down reporting of - real deadlock errors. The default is one second (1s), + real deadlock errors. The default is one second (1s), which is probably about the smallest value you would want in practice. On a heavily loaded server you might want to raise it. Ideally the setting should exceed your typical transaction time, @@ -7302,7 +7302,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_locks_per_transaction (integer) - max_locks_per_transaction configuration parameter + max_locks_per_transaction configuration parameter @@ -7315,7 +7315,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions - fit in the lock table. This is not the number of + fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. The default, 64, has historically proven sufficient, but you might need to raise this value if you have queries that touch many different @@ -7334,7 +7334,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_pred_locks_per_transaction (integer) - max_pred_locks_per_transaction configuration parameter + max_pred_locks_per_transaction configuration parameter @@ -7347,7 +7347,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions - fit in the lock table. This is not the number of + fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. 
The default, 64, has generally been sufficient in testing, but you might need to raise this value if you have clients that touch many different @@ -7360,7 +7360,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_pred_locks_per_relation (integer) - max_pred_locks_per_relation configuration parameter + max_pred_locks_per_relation configuration parameter @@ -7371,8 +7371,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' limit, while negative values mean divided by the absolute value of this setting. The default is -2, which keeps - the behavior from previous versions of PostgreSQL. - This parameter can only be set in the postgresql.conf + the behavior from previous versions of PostgreSQL. + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -7381,7 +7381,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_pred_locks_per_page (integer) - max_pred_locks_per_page configuration parameter + max_pred_locks_per_page configuration parameter @@ -7389,7 +7389,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' This controls how many rows on a single page can be predicate-locked before the lock is promoted to covering the whole page. The default is 2. This parameter can only be set in - the postgresql.conf file or on the server command line. + the postgresql.conf file or on the server command line. @@ -7408,62 +7408,62 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' array_nulls (boolean) - array_nulls configuration parameter + array_nulls configuration parameter This controls whether the array input parser recognizes - unquoted NULL as specifying a null array element. - By default, this is on, allowing array values containing - null values to be entered. However, PostgreSQL versions + unquoted NULL as specifying a null array element. + By default, this is on, allowing array values containing + null values to be entered. However, PostgreSQL versions before 8.2 did not support null values in arrays, and therefore would - treat NULL as specifying a normal array element with - the string value NULL. For backward compatibility with + treat NULL as specifying a normal array element with + the string value NULL. For backward compatibility with applications that require the old behavior, this variable can be - turned off. + turned off. Note that it is possible to create array values containing null values - even when this variable is off. + even when this variable is off. backslash_quote (enum) - stringsbackslash quotes + stringsbackslash quotes - backslash_quote configuration parameter + backslash_quote configuration parameter This controls whether a quote mark can be represented by - \' in a string literal. The preferred, SQL-standard way - to represent a quote mark is by doubling it ('') but - PostgreSQL has historically also accepted - \'. However, use of \' creates security risks + \' in a string literal. The preferred, SQL-standard way + to represent a quote mark is by doubling it ('') but + PostgreSQL has historically also accepted + \'. However, use of \' creates security risks because in some client character set encodings, there are multibyte characters in which the last byte is numerically equivalent to ASCII - \. If client-side code does escaping incorrectly then a + \. If client-side code does escaping incorrectly then a SQL-injection attack is possible. 
This risk can be prevented by making the server reject queries in which a quote mark appears to be escaped by a backslash. - The allowed values of backslash_quote are - on (allow \' always), - off (reject always), and - safe_encoding (allow only if client encoding does not - allow ASCII \ within a multibyte character). - safe_encoding is the default setting. + The allowed values of backslash_quote are + on (allow \' always), + off (reject always), and + safe_encoding (allow only if client encoding does not + allow ASCII \ within a multibyte character). + safe_encoding is the default setting. - Note that in a standard-conforming string literal, \ just - means \ anyway. This parameter only affects the handling of + Note that in a standard-conforming string literal, \ just + means \ anyway. This parameter only affects the handling of non-standard-conforming literals, including - escape string syntax (E'...'). + escape string syntax (E'...'). @@ -7471,7 +7471,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' default_with_oids (boolean) - default_with_oids configuration parameter + default_with_oids configuration parameter @@ -7481,9 +7481,9 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' newly-created tables, if neither WITH OIDS nor WITHOUT OIDS is specified. It also determines whether OIDs will be included in tables created by - SELECT INTO. The parameter is off - by default; in PostgreSQL 8.0 and earlier, it - was on by default. + SELECT INTO. The parameter is off + by default; in PostgreSQL 8.0 and earlier, it + was on by default. @@ -7499,21 +7499,21 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' escape_string_warning (boolean) - stringsescape warning + stringsescape warning - escape_string_warning configuration parameter + escape_string_warning configuration parameter - When on, a warning is issued if a backslash (\) - appears in an ordinary string literal ('...' + When on, a warning is issued if a backslash (\) + appears in an ordinary string literal ('...' syntax) and standard_conforming_strings is off. - The default is on. + The default is on. Applications that wish to use backslash as escape should be - modified to use escape string syntax (E'...'), + modified to use escape string syntax (E'...'), because the default behavior of ordinary strings is now to treat backslash as an ordinary character, per SQL standard. This variable can be enabled to help locate code that needs to be changed. @@ -7524,22 +7524,22 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' lo_compat_privileges (boolean) - lo_compat_privileges configuration parameter + lo_compat_privileges configuration parameter - In PostgreSQL releases prior to 9.0, large objects + In PostgreSQL releases prior to 9.0, large objects did not have access privileges and were, therefore, always readable - and writable by all users. Setting this variable to on + and writable by all users. Setting this variable to on disables the new privilege checks, for compatibility with prior - releases. The default is off. + releases. The default is off. Only superusers can change this setting. Setting this variable does not disable all security checks related to large objects — only those for which the default behavior has - changed in PostgreSQL 9.0. + changed in PostgreSQL 9.0. For example, lo_import() and lo_export() need superuser privileges regardless of this setting. 
@@ -7550,18 +7550,18 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' operator_precedence_warning (boolean) - operator_precedence_warning configuration parameter + operator_precedence_warning configuration parameter When on, the parser will emit a warning for any construct that might - have changed meanings since PostgreSQL 9.4 as a result + have changed meanings since PostgreSQL 9.4 as a result of changes in operator precedence. This is useful for auditing applications to see if precedence changes have broken anything; but it is not meant to be kept turned on in production, since it will warn about some perfectly valid, standard-compliant SQL code. - The default is off. + The default is off. @@ -7573,15 +7573,15 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' quote_all_identifiers (boolean) - quote_all_identifiers configuration parameter + quote_all_identifiers configuration parameter When the database generates SQL, force all identifiers to be quoted, even if they are not (currently) keywords. This will affect the - output of EXPLAIN as well as the results of functions - like pg_get_viewdef. See also the + output of EXPLAIN as well as the results of functions + like pg_get_viewdef. See also the option of and . @@ -7590,22 +7590,22 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' standard_conforming_strings (boolean) - stringsstandard conforming + stringsstandard conforming - standard_conforming_strings configuration parameter + standard_conforming_strings configuration parameter This controls whether ordinary string literals - ('...') treat backslashes literally, as specified in + ('...') treat backslashes literally, as specified in the SQL standard. Beginning in PostgreSQL 9.1, the default is - on (prior releases defaulted to off). + on (prior releases defaulted to off). Applications can check this parameter to determine how string literals will be processed. The presence of this parameter can also be taken as an indication - that the escape string syntax (E'...') is supported. + that the escape string syntax (E'...') is supported. Escape string syntax () should be used if an application desires backslashes to be treated as escape characters. @@ -7616,7 +7616,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' synchronize_seqscans (boolean) - synchronize_seqscans configuration parameter + synchronize_seqscans configuration parameter @@ -7625,13 +7625,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' other, so that concurrent scans read the same block at about the same time and hence share the I/O workload. When this is enabled, a scan might start in the middle of the table and then wrap - around the end to cover all rows, so as to synchronize with the + around the end to cover all rows, so as to synchronize with the activity of scans already in progress. This can result in unpredictable changes in the row ordering returned by queries that - have no ORDER BY clause. Setting this parameter to - off ensures the pre-8.3 behavior in which a sequential + have no ORDER BY clause. Setting this parameter to + off ensures the pre-8.3 behavior in which a sequential scan always starts from the beginning of the table. The default - is on. + is on. 
@@ -7645,31 +7645,31 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' transform_null_equals (boolean) - IS NULL + IS NULL - transform_null_equals configuration parameter + transform_null_equals configuration parameter - When on, expressions of the form expr = + When on, expressions of the form expr = NULL (or NULL = - expr) are treated as - expr IS NULL, that is, they - return true if expr evaluates to the null value, + expr) are treated as + expr IS NULL, that is, they + return true if expr evaluates to the null value, and false otherwise. The correct SQL-spec-compliant behavior of - expr = NULL is to always + expr = NULL is to always return null (unknown). Therefore this parameter defaults to - off. + off. However, filtered forms in Microsoft Access generate queries that appear to use - expr = NULL to test for + expr = NULL to test for null values, so if you use that interface to access the database you might want to turn this option on. Since expressions of the - form expr = NULL always + form expr = NULL always return the null value (using the SQL standard interpretation), they are not very useful and do not appear often in normal applications so this option does little harm in practice. But new users are @@ -7678,7 +7678,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' - Note that this option only affects the exact form = NULL, + Note that this option only affects the exact form = NULL, not other comparison operators or other expressions that are computationally equivalent to some expression involving the equals operator (such as IN). @@ -7703,7 +7703,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' exit_on_error (boolean) - exit_on_error configuration parameter + exit_on_error configuration parameter @@ -7718,16 +7718,16 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' restart_after_crash (boolean) - restart_after_crash configuration parameter + restart_after_crash configuration parameter - When set to true, which is the default, PostgreSQL + When set to true, which is the default, PostgreSQL will automatically reinitialize after a backend crash. Leaving this value set to true is normally the best way to maximize the availability of the database. However, in some circumstances, such as when - PostgreSQL is being invoked by clusterware, it may be + PostgreSQL is being invoked by clusterware, it may be useful to disable the restart so that the clusterware can gain control and take any actions it deems appropriate. @@ -7742,10 +7742,10 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Preset Options - The following parameters are read-only, and are determined + The following parameters are read-only, and are determined when PostgreSQL is compiled or when it is installed. As such, they have been excluded from the sample - postgresql.conf file. These options report + postgresql.conf file. These options report various aspects of PostgreSQL behavior that might be of interest to certain applications, particularly administrative front-ends. @@ -7756,13 +7756,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' block_size (integer) - block_size configuration parameter + block_size configuration parameter Reports the size of a disk block. It is determined by the value - of BLCKSZ when building the server. The default + of BLCKSZ when building the server. The default value is 8192 bytes. 
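As a sketch of the transform_null_equals rewrite described above (accounts and balance are illustrative names):

    SET transform_null_equals = on;
    SELECT * FROM accounts WHERE balance = NULL;    -- treated as "balance IS NULL"

    SET transform_null_equals = off;                -- the default
    SELECT * FROM accounts WHERE balance = NULL;    -- comparison yields null, so no rows
    SELECT * FROM accounts WHERE balance IS NULL;   -- the portable, standard spelling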
The meaning of some configuration variables (such as ) is influenced by block_size. See data_checksums (boolean) - data_checksums configuration parameter + data_checksums configuration parameter @@ -7788,7 +7788,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' debug_assertions (boolean) - debug_assertions configuration parameter + debug_assertions configuration parameter @@ -7808,13 +7808,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' integer_datetimes (boolean) - integer_datetimes configuration parameter + integer_datetimes configuration parameter - Reports whether PostgreSQL was built with support for - 64-bit-integer dates and times. As of PostgreSQL 10, + Reports whether PostgreSQL was built with support for + 64-bit-integer dates and times. As of PostgreSQL 10, this is always on. @@ -7823,7 +7823,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' lc_collate (string) - lc_collate configuration parameter + lc_collate configuration parameter @@ -7838,7 +7838,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' lc_ctype (string) - lc_ctype configuration parameter + lc_ctype configuration parameter @@ -7855,13 +7855,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_function_args (integer) - max_function_args configuration parameter + max_function_args configuration parameter Reports the maximum number of function arguments. It is determined by - the value of FUNC_MAX_ARGS when building the server. The + the value of FUNC_MAX_ARGS when building the server. The default value is 100 arguments. @@ -7870,14 +7870,14 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_identifier_length (integer) - max_identifier_length configuration parameter + max_identifier_length configuration parameter Reports the maximum identifier length. It is determined as one - less than the value of NAMEDATALEN when building - the server. The default value of NAMEDATALEN is + less than the value of NAMEDATALEN when building + the server. The default value of NAMEDATALEN is 64; therefore the default max_identifier_length is 63 bytes, which can be less than 63 characters when using multibyte encodings. @@ -7888,13 +7888,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_index_keys (integer) - max_index_keys configuration parameter + max_index_keys configuration parameter Reports the maximum number of index keys. It is determined by - the value of INDEX_MAX_KEYS when building the server. The + the value of INDEX_MAX_KEYS when building the server. The default value is 32 keys. @@ -7903,16 +7903,16 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' segment_size (integer) - segment_size configuration parameter + segment_size configuration parameter Reports the number of blocks (pages) that can be stored within a file - segment. It is determined by the value of RELSEG_SIZE + segment. It is determined by the value of RELSEG_SIZE when building the server. The maximum size of a segment file in bytes - is equal to segment_size multiplied by - block_size; by default this is 1GB. + is equal to segment_size multiplied by + block_size; by default this is 1GB. 
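Since these preset parameters are read-only, they are only ever inspected; two common ways to do so:

    SHOW block_size;
    SHOW max_identifier_length;

    -- All compile-time/preset parameters at once:
    SELECT name, setting, unit
    FROM pg_settings
    WHERE context = 'internal'
    ORDER BY name;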
@@ -7920,9 +7920,9 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' server_encoding (string) - server_encoding configuration parameter + server_encoding configuration parameter - character set + character set @@ -7937,13 +7937,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' server_version (string) - server_version configuration parameter + server_version configuration parameter Reports the version number of the server. It is determined by the - value of PG_VERSION when building the server. + value of PG_VERSION when building the server. @@ -7951,13 +7951,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' server_version_num (integer) - server_version_num configuration parameter + server_version_num configuration parameter Reports the version number of the server as an integer. It is determined - by the value of PG_VERSION_NUM when building the server. + by the value of PG_VERSION_NUM when building the server. @@ -7965,13 +7965,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' wal_block_size (integer) - wal_block_size configuration parameter + wal_block_size configuration parameter Reports the size of a WAL disk block. It is determined by the value - of XLOG_BLCKSZ when building the server. The default value + of XLOG_BLCKSZ when building the server. The default value is 8192 bytes. @@ -7980,14 +7980,14 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' wal_segment_size (integer) - wal_segment_size configuration parameter + wal_segment_size configuration parameter Reports the number of blocks (pages) in a WAL segment file. The total size of a WAL segment file in bytes is equal to - wal_segment_size multiplied by wal_block_size; + wal_segment_size multiplied by wal_block_size; by default this is 16MB. See for more information. @@ -8010,12 +8010,12 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Custom options have two-part names: an extension name, then a dot, then the parameter name proper, much like qualified names in SQL. An example - is plpgsql.variable_conflict. + is plpgsql.variable_conflict. Because custom options may need to be set in processes that have not - loaded the relevant extension module, PostgreSQL + loaded the relevant extension module, PostgreSQL will accept a setting for any two-part parameter name. Such variables are treated as placeholders and have no function until the module that defines them is loaded. When an extension module is loaded, it will add @@ -8034,7 +8034,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' to assist with recovery of severely damaged databases. There should be no reason to use them on a production database. As such, they have been excluded from the sample - postgresql.conf file. Note that many of these + postgresql.conf file. Note that many of these parameters require special source compilation flags to work at all. @@ -8073,7 +8073,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' post_auth_delay (integer) - post_auth_delay configuration parameter + post_auth_delay configuration parameter @@ -8090,7 +8090,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' pre_auth_delay (integer) - pre_auth_delay configuration parameter + pre_auth_delay configuration parameter @@ -8100,7 +8100,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' authentication procedure. 
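A brief sketch of the two-part custom-option mechanism described above; plpgsql.variable_conflict is the documented example (changing it may require elevated privileges), while myext.enable_cache is a made-up placeholder name:

    -- A parameter defined by a loaded extension:
    SET plpgsql.variable_conflict = 'error';

    -- A placeholder for a module that has not been loaded yet; it is accepted now
    -- and validated only once the defining module is loaded:
    SET myext.enable_cache = 'on';
    SHOW myext.enable_cache;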
This is intended to give developers an opportunity to attach to the server process with a debugger to trace down misbehavior in authentication. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -8109,7 +8109,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' trace_notify (boolean) - trace_notify configuration parameter + trace_notify configuration parameter @@ -8127,7 +8127,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' trace_recovery_messages (enum) - trace_recovery_messages configuration parameter + trace_recovery_messages configuration parameter @@ -8136,15 +8136,15 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' would not be logged. This parameter allows the user to override the normal setting of , but only for specific messages. This is intended for use in debugging Hot Standby. - Valid values are DEBUG5, DEBUG4, - DEBUG3, DEBUG2, DEBUG1, and - LOG. The default, LOG, does not affect + Valid values are DEBUG5, DEBUG4, + DEBUG3, DEBUG2, DEBUG1, and + LOG. The default, LOG, does not affect logging decisions at all. The other values cause recovery-related debug messages of that priority or higher to be logged as though they - had LOG priority; for common settings of - log_min_messages this results in unconditionally sending + had LOG priority; for common settings of + log_min_messages this results in unconditionally sending them to the server log. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -8153,7 +8153,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' trace_sort (boolean) - trace_sort configuration parameter + trace_sort configuration parameter @@ -8169,7 +8169,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' trace_locks (boolean) - trace_locks configuration parameter + trace_locks configuration parameter @@ -8210,7 +8210,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) trace_lwlocks (boolean) - trace_lwlocks configuration parameter + trace_lwlocks configuration parameter @@ -8230,7 +8230,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) trace_userlocks (boolean) - trace_userlocks configuration parameter + trace_userlocks configuration parameter @@ -8249,7 +8249,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) trace_lock_oidmin (integer) - trace_lock_oidmin configuration parameter + trace_lock_oidmin configuration parameter @@ -8268,7 +8268,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) trace_lock_table (integer) - trace_lock_table configuration parameter + trace_lock_table configuration parameter @@ -8286,7 +8286,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) debug_deadlocks (boolean) - debug_deadlocks configuration parameter + debug_deadlocks configuration parameter @@ -8305,7 +8305,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) log_btree_build_stats (boolean) - log_btree_build_stats configuration parameter + log_btree_build_stats configuration parameter @@ -8324,7 +8324,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) wal_consistency_checking (string) - wal_consistency_checking configuration parameter + wal_consistency_checking configuration parameter @@ -8344,10 +8344,10 
@@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) the feature. It can be set to all to check all records, or to a comma-separated list of resource managers to check only records originating from those resource managers. Currently, - the supported resource managers are heap, - heap2, btree, hash, - gin, gist, sequence, - spgist, brin, and generic. Only + the supported resource managers are heap, + heap2, btree, hash, + gin, gist, sequence, + spgist, brin, and generic. Only superusers can change this setting. @@ -8356,7 +8356,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) wal_debug (boolean) - wal_debug configuration parameter + wal_debug configuration parameter @@ -8372,7 +8372,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) ignore_checksum_failure (boolean) - ignore_checksum_failure configuration parameter + ignore_checksum_failure configuration parameter @@ -8381,15 +8381,15 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) Detection of a checksum failure during a read normally causes - PostgreSQL to report an error, aborting the current - transaction. Setting ignore_checksum_failure to on causes + PostgreSQL to report an error, aborting the current + transaction. Setting ignore_checksum_failure to on causes the system to ignore the failure (but still report a warning), and continue processing. This behavior may cause crashes, propagate - or hide corruption, or other serious problems. However, it may allow + or hide corruption, or other serious problems. However, it may allow you to get past the error and retrieve undamaged tuples that might still be present in the table if the block header is still sane. If the header is corrupt an error will be reported even if this option is enabled. The - default setting is off, and it can only be changed by a superuser. + default setting is off, and it can only be changed by a superuser. @@ -8397,16 +8397,16 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) zero_damaged_pages (boolean) - zero_damaged_pages configuration parameter + zero_damaged_pages configuration parameter Detection of a damaged page header normally causes - PostgreSQL to report an error, aborting the current - transaction. Setting zero_damaged_pages to on causes + PostgreSQL to report an error, aborting the current + transaction. Setting zero_damaged_pages to on causes the system to instead report a warning, zero out the damaged - page in memory, and continue processing. This behavior will destroy data, + page in memory, and continue processing. This behavior will destroy data, namely all the rows on the damaged page. However, it does allow you to get past the error and retrieve rows from any undamaged pages that might be present in the table. It is useful for recovering data if @@ -8415,7 +8415,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) data from the damaged pages of a table. Zeroed-out pages are not forced to disk so it is recommended to recreate the table or the index before turning this parameter off again. The - default setting is off, and it can only be changed + default setting is off, and it can only be changed by a superuser. 
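As a hedged sketch of the last-resort salvage workflow these options enable (superuser only; damaged_table is an illustrative name, and zeroing pages irreversibly discards the rows on them):

    SET zero_damaged_pages = on;          -- bad pages are zeroed in memory, losing their rows
    CREATE TABLE salvage AS SELECT * FROM damaged_table;
    SET zero_damaged_pages = off;
    -- ignore_checksum_failure = on is the analogous switch for checksum errors only.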
@@ -8447,15 +8447,15 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) - shared_buffers = x + shared_buffers = x - log_min_messages = DEBUGx + log_min_messages = DEBUGx - datestyle = euro + datestyle = euro @@ -8464,69 +8464,69 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) , - enable_bitmapscan = off, - enable_hashjoin = off, - enable_indexscan = off, - enable_mergejoin = off, - enable_nestloop = off, - enable_indexonlyscan = off, - enable_seqscan = off, - enable_tidscan = off + enable_bitmapscan = off, + enable_hashjoin = off, + enable_indexscan = off, + enable_mergejoin = off, + enable_nestloop = off, + enable_indexonlyscan = off, + enable_seqscan = off, + enable_tidscan = off - fsync = off + fsync = off - listen_addresses = x + listen_addresses = x - listen_addresses = '*' + listen_addresses = '*' - unix_socket_directories = x + unix_socket_directories = x - ssl = on + ssl = on - max_connections = x + max_connections = x - allow_system_table_mods = on + allow_system_table_mods = on - port = x + port = x - ignore_system_indexes = on + ignore_system_indexes = on - log_statement_stats = on + log_statement_stats = on - work_mem = x + work_mem = x , , - log_parser_stats = on, - log_planner_stats = on, - log_executor_stats = on + log_parser_stats = on, + log_planner_stats = on, + log_executor_stats = on - post_auth_delay = x + post_auth_delay = x diff --git a/doc/src/sgml/contrib-spi.sgml b/doc/src/sgml/contrib-spi.sgml index 3287c18d27..32c7105cf6 100644 --- a/doc/src/sgml/contrib-spi.sgml +++ b/doc/src/sgml/contrib-spi.sgml @@ -9,7 +9,7 @@ - The spi module provides several workable examples + The spi module provides several workable examples of using SPI and triggers. While these functions are of some value in their own right, they are even more useful as examples to modify for your own purposes. The functions are general enough to be used @@ -26,15 +26,15 @@ refint — Functions for Implementing Referential Integrity - check_primary_key() and - check_foreign_key() are used to check foreign key constraints. + check_primary_key() and + check_foreign_key() are used to check foreign key constraints. (This functionality is long since superseded by the built-in foreign key mechanism, of course, but the module is still useful as an example.) - check_primary_key() checks the referencing table. - To use, create a BEFORE INSERT OR UPDATE trigger using this + check_primary_key() checks the referencing table. + To use, create a BEFORE INSERT OR UPDATE trigger using this function on a table referencing another table. Specify as the trigger arguments: the referencing table's column name(s) which form the foreign key, the referenced table name, and the column names in the referenced table @@ -43,14 +43,14 @@ - check_foreign_key() checks the referenced table. - To use, create a BEFORE DELETE OR UPDATE trigger using this + check_foreign_key() checks the referenced table. + To use, create a BEFORE DELETE OR UPDATE trigger using this function on a table referenced by other table(s). 
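Following the pattern of refint.example, a minimal sketch of wiring check_primary_key() onto a referencing table; table and column names are illustrative:

    CREATE TABLE b (bid integer PRIMARY KEY);
    CREATE TABLE a (aid integer, bid integer);

    -- Reject rows in "a" whose bid has no match in b.bid:
    CREATE TRIGGER a_bid_exists
        BEFORE INSERT OR UPDATE ON a
        FOR EACH ROW
        EXECUTE PROCEDURE check_primary_key('bid', 'b', 'bid');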
Specify as the trigger arguments: the number of referencing tables for which the function has to perform checking, the action if a referencing key is found - (cascade — to delete the referencing row, - restrict — to abort transaction if referencing keys - exist, setnull — to set referencing key fields to null), + (cascade — to delete the referencing row, + restrict — to abort transaction if referencing keys + exist, setnull — to set referencing key fields to null), the triggered table's column names which form the primary/unique key, then the referencing table name and column names (repeated for as many referencing tables as were specified by first argument). Note that the @@ -59,7 +59,7 @@ - There are examples in refint.example. + There are examples in refint.example. @@ -67,10 +67,10 @@ timetravel — Functions for Implementing Time Travel - Long ago, PostgreSQL had a built-in time travel feature + Long ago, PostgreSQL had a built-in time travel feature that kept the insert and delete times for each tuple. This can be emulated using these functions. To use these functions, - you must add to a table two columns of abstime type to store + you must add to a table two columns of abstime type to store the date when a tuple was inserted (start_date) and changed/deleted (stop_date): @@ -89,7 +89,7 @@ CREATE TABLE mytab ( When a new row is inserted, start_date should normally be set to - current time, and stop_date to infinity. The trigger + current time, and stop_date to infinity. The trigger will automatically substitute these values if the inserted data contains nulls in these columns. Generally, inserting explicit non-null data in these columns should only be done when re-loading @@ -97,7 +97,7 @@ CREATE TABLE mytab ( - Tuples with stop_date equal to infinity are valid + Tuples with stop_date equal to infinity are valid now, and can be modified. Tuples with a finite stop_date cannot be modified anymore — the trigger will prevent it. (If you need to do that, you can turn off time travel as shown below.) @@ -107,7 +107,7 @@ CREATE TABLE mytab ( For a modifiable row, on update only the stop_date in the tuple being updated will be changed (to current time) and a new tuple with the modified data will be inserted. Start_date in this new tuple will be set to current - time and stop_date to infinity. + time and stop_date to infinity. @@ -117,29 +117,29 @@ CREATE TABLE mytab ( To query for tuples valid now, include - stop_date = 'infinity' in the query's WHERE condition. + stop_date = 'infinity' in the query's WHERE condition. (You might wish to incorporate that in a view.) Similarly, you can query for tuples valid at any past time with suitable conditions on start_date and stop_date. - timetravel() is the general trigger function that supports - this behavior. Create a BEFORE INSERT OR UPDATE OR DELETE + timetravel() is the general trigger function that supports + this behavior. Create a BEFORE INSERT OR UPDATE OR DELETE trigger using this function on each time-traveled table. Specify two trigger arguments: the actual names of the start_date and stop_date columns. Optionally, you can specify one to three more arguments, which must refer - to columns of type text. The trigger will store the name of + to columns of type text. The trigger will store the name of the current user into the first of these columns during INSERT, the second column during UPDATE, and the third during DELETE. 
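Continuing the mytab example above, a sketch of attaching the timetravel trigger and of a convenience view over currently valid rows:

    CREATE TRIGGER mytab_timetravel
        BEFORE INSERT OR UPDATE OR DELETE ON mytab
        FOR EACH ROW
        EXECUTE PROCEDURE timetravel('start_date', 'stop_date');

    -- Only tuples valid now:
    CREATE VIEW mytab_now AS
        SELECT * FROM mytab WHERE stop_date = 'infinity';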
- set_timetravel() allows you to turn time-travel on or off for + set_timetravel() allows you to turn time-travel on or off for a table. - set_timetravel('mytab', 1) will turn TT ON for table mytab. - set_timetravel('mytab', 0) will turn TT OFF for table mytab. + set_timetravel('mytab', 1) will turn TT ON for table mytab. + set_timetravel('mytab', 0) will turn TT OFF for table mytab. In both cases the old status is reported. While TT is off, you can modify the start_date and stop_date columns freely. Note that the on/off status is local to the current database session — fresh sessions will @@ -147,12 +147,12 @@ CREATE TABLE mytab ( - get_timetravel() returns the TT state for a table without + get_timetravel() returns the TT state for a table without changing it. - There is an example in timetravel.example. + There is an example in timetravel.example. @@ -160,17 +160,17 @@ CREATE TABLE mytab ( autoinc — Functions for Autoincrementing Fields - autoinc() is a trigger that stores the next value of + autoinc() is a trigger that stores the next value of a sequence into an integer field. This has some overlap with the - built-in serial column feature, but it is not the same: - autoinc() will override attempts to substitute a + built-in serial column feature, but it is not the same: + autoinc() will override attempts to substitute a different field value during inserts, and optionally it can be used to increment the field during updates, too. - To use, create a BEFORE INSERT (or optionally BEFORE - INSERT OR UPDATE) trigger using this function. Specify two + To use, create a BEFORE INSERT (or optionally BEFORE + INSERT OR UPDATE) trigger using this function. Specify two trigger arguments: the name of the integer column to be modified, and the name of the sequence object that will supply values. (Actually, you can specify any number of pairs of such names, if @@ -178,7 +178,7 @@ CREATE TABLE mytab ( - There is an example in autoinc.example. + There is an example in autoinc.example. @@ -187,19 +187,19 @@ CREATE TABLE mytab ( insert_username — Functions for Tracking Who Changed a Table - insert_username() is a trigger that stores the current + insert_username() is a trigger that stores the current user's name into a text field. This can be useful for tracking who last modified a particular row within a table. - To use, create a BEFORE INSERT and/or UPDATE + To use, create a BEFORE INSERT and/or UPDATE trigger using this function. Specify a single trigger argument: the name of the text column to be modified. - There is an example in insert_username.example. + There is an example in insert_username.example. @@ -208,21 +208,21 @@ CREATE TABLE mytab ( moddatetime — Functions for Tracking Last Modification Time - moddatetime() is a trigger that stores the current - time into a timestamp field. This can be useful for tracking + moddatetime() is a trigger that stores the current + time into a timestamp field. This can be useful for tracking the last modification time of a particular row within a table. - To use, create a BEFORE UPDATE + To use, create a BEFORE UPDATE trigger using this function. Specify a single trigger argument: the name of the column to be modified. - The column must be of type timestamp or timestamp with - time zone. + The column must be of type timestamp or timestamp with + time zone. - There is an example in moddatetime.example. + There is an example in moddatetime.example. 
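A combined sketch of the three trigger functions just described, against an illustrative table:

    CREATE TABLE widgets (
        id           integer,
        name         text,
        last_user    text,
        last_changed timestamp
    );
    CREATE SEQUENCE widgets_id_seq;

    CREATE TRIGGER widgets_autoinc
        BEFORE INSERT ON widgets
        FOR EACH ROW EXECUTE PROCEDURE autoinc('id', 'widgets_id_seq');

    CREATE TRIGGER widgets_username
        BEFORE INSERT OR UPDATE ON widgets
        FOR EACH ROW EXECUTE PROCEDURE insert_username('last_user');

    CREATE TRIGGER widgets_moddatetime
        BEFORE UPDATE ON widgets
        FOR EACH ROW EXECUTE PROCEDURE moddatetime('last_changed');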
diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml index f32b8a81a2..7dd203e9cd 100644 --- a/doc/src/sgml/contrib.sgml +++ b/doc/src/sgml/contrib.sgml @@ -6,7 +6,7 @@ This appendix and the next one contain information regarding the modules that can be found in the contrib directory of the - PostgreSQL distribution. + PostgreSQL distribution. These include porting tools, analysis utilities, and plug-in features that are not part of the core PostgreSQL system, mainly because they address a limited audience or are too experimental @@ -41,54 +41,54 @@ make installcheck - once you have a PostgreSQL server running. + once you have a PostgreSQL server running. - If you are using a pre-packaged version of PostgreSQL, + If you are using a pre-packaged version of PostgreSQL, these modules are typically made available as a separate subpackage, - such as postgresql-contrib. + such as postgresql-contrib. Many modules supply new user-defined functions, operators, or types. To make use of one of these modules, after you have installed the code you need to register the new SQL objects in the database system. - In PostgreSQL 9.1 and later, this is done by executing + In PostgreSQL 9.1 and later, this is done by executing a command. In a fresh database, you can simply do -CREATE EXTENSION module_name; +CREATE EXTENSION module_name; This command must be run by a database superuser. This registers the new SQL objects in the current database only, so you need to run this command in each database that you want the module's facilities to be available in. Alternatively, run it in - database template1 so that the extension will be copied into + database template1 so that the extension will be copied into subsequently-created databases by default. Many modules allow you to install their objects in a schema of your choice. To do that, add SCHEMA - schema_name to the CREATE EXTENSION + schema_name to the CREATE EXTENSION command. By default, the objects will be placed in your current creation - target schema, typically public. + target schema, typically public. If your database was brought forward by dump and reload from a pre-9.1 - version of PostgreSQL, and you had been using the pre-9.1 + version of PostgreSQL, and you had been using the pre-9.1 version of the module in it, you should instead do -CREATE EXTENSION module_name FROM unpackaged; +CREATE EXTENSION module_name FROM unpackaged; This will update the pre-9.1 objects of the module into a proper - extension object. Future updates to the module will be + extension object. Future updates to the module will be managed by . For more information about extension updates, see . @@ -163,7 +163,7 @@ pages. This appendix and the previous one contain information regarding the modules that can be found in the contrib directory of the - PostgreSQL distribution. See for + PostgreSQL distribution. See for more information about the contrib section in general and server extensions and plug-ins found in contrib specifically. diff --git a/doc/src/sgml/cube.sgml b/doc/src/sgml/cube.sgml index 1ffc40f1a5..46d8e4eb8f 100644 --- a/doc/src/sgml/cube.sgml +++ b/doc/src/sgml/cube.sgml @@ -8,7 +8,7 @@ - This module implements a data type cube for + This module implements a data type cube for representing multidimensional cubes. @@ -17,8 +17,8 @@ shows the valid external - representations for the cube - type. x, y, etc. denote + representations for the cube + type. x, y, etc. denote floating-point numbers. 
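A short sketch of the installation commands described above, plus a first cube literal; the addons schema is a made-up name:

    CREATE EXTENSION cube;                    -- register the module's SQL objects

    CREATE SCHEMA addons;
    CREATE EXTENSION hstore SCHEMA addons;    -- place another module's objects in a chosen schema

    -- A two-dimensional cube given by two opposite corners:
    SELECT '(1,2),(3,4)'::cube;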
@@ -34,43 +34,43 @@ - x + x A one-dimensional point (or, zero-length one-dimensional interval) - (x) + (x) Same as above - x1,x2,...,xn + x1,x2,...,xn A point in n-dimensional space, represented internally as a zero-volume cube - (x1,x2,...,xn) + (x1,x2,...,xn) Same as above - (x),(y) - A one-dimensional interval starting at x and ending at y or vice versa; the + (x),(y) + A one-dimensional interval starting at x and ending at y or vice versa; the order does not matter - [(x),(y)] + [(x),(y)] Same as above - (x1,...,xn),(y1,...,yn) + (x1,...,xn),(y1,...,yn) An n-dimensional cube represented by a pair of its diagonally opposite corners - [(x1,...,xn),(y1,...,yn)] + [(x1,...,xn),(y1,...,yn)] Same as above @@ -79,17 +79,17 @@ It does not matter which order the opposite corners of a cube are - entered in. The cube functions + entered in. The cube functions automatically swap values if needed to create a uniform - lower left — upper right internal representation. - When the corners coincide, cube stores only one corner - along with an is point flag to avoid wasting space. + lower left — upper right internal representation. + When the corners coincide, cube stores only one corner + along with an is point flag to avoid wasting space. White space is ignored on input, so - [(x),(y)] is the same as - [ ( x ), ( y ) ]. + [(x),(y)] is the same as + [ ( x ), ( y ) ]. @@ -107,7 +107,7 @@ shows the operators provided for - type cube. + type cube. @@ -123,91 +123,91 @@ - a = b - boolean + a = b + boolean The cubes a and b are identical. - a && b - boolean + a && b + boolean The cubes a and b overlap. - a @> b - boolean + a @> b + boolean The cube a contains the cube b. - a <@ b - boolean + a <@ b + boolean The cube a is contained in the cube b. - a < b - boolean + a < b + boolean The cube a is less than the cube b. - a <= b - boolean + a <= b + boolean The cube a is less than or equal to the cube b. - a > b - boolean + a > b + boolean The cube a is greater than the cube b. - a >= b - boolean + a >= b + boolean The cube a is greater than or equal to the cube b. - a <> b - boolean + a <> b + boolean The cube a is not equal to the cube b. - a -> n - float8 - Get n-th coordinate of cube (counting from 1). + a -> n + float8 + Get n-th coordinate of cube (counting from 1). - a ~> n - float8 + a ~> n + float8 - Get n-th coordinate in normalized cube + Get n-th coordinate in normalized cube representation, in which the coordinates have been rearranged into - the form lower left — upper right; that is, the + the form lower left — upper right; that is, the smaller endpoint along each dimension appears first. - a <-> b - float8 + a <-> b + float8 Euclidean distance between a and b. - a <#> b - float8 + a <#> b + float8 Taxicab (L-1 metric) distance between a and b. - a <=> b - float8 + a <=> b + float8 Chebyshev (L-inf metric) distance between a and b. @@ -216,35 +216,35 @@
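A few of the operators from the table above in action, using small literal cubes:

    SELECT '(0,0),(2,2)'::cube && '(1,1),(3,3)'::cube AS overlaps,        -- true
           '(0,0),(3,3)'::cube @> '(1,1),(2,2)'::cube AS contains,        -- true
           '(1,2,3)'::cube -> 2              AS second_coordinate,        -- 2
           '(0,0)'::cube <-> '(3,4)'::cube   AS euclidean_distance;       -- 5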
- (Before PostgreSQL 8.2, the containment operators @> and <@ were - respectively called @ and ~. These names are still available, but are + (Before PostgreSQL 8.2, the containment operators @> and <@ were + respectively called @ and ~. These names are still available, but are deprecated and will eventually be retired. Notice that the old names are reversed from the convention formerly followed by the core geometric data types!) - The scalar ordering operators (<, >=, etc) + The scalar ordering operators (<, >=, etc) do not make a lot of sense for any practical purpose but sorting. These operators first compare the first coordinates, and if those are equal, compare the second coordinates, etc. They exist mainly to support the - b-tree index operator class for cube, which can be useful for - example if you would like a UNIQUE constraint on a cube column. + b-tree index operator class for cube, which can be useful for + example if you would like a UNIQUE constraint on a cube column. - The cube module also provides a GiST index operator class for - cube values. - A cube GiST index can be used to search for values using the - =, &&, @>, and - <@ operators in WHERE clauses. + The cube module also provides a GiST index operator class for + cube values. + A cube GiST index can be used to search for values using the + =, &&, @>, and + <@ operators in WHERE clauses. - In addition, a cube GiST index can be used to find nearest + In addition, a cube GiST index can be used to find nearest neighbors using the metric operators - <->, <#>, and - <=> in ORDER BY clauses. + <->, <#>, and + <=> in ORDER BY clauses. For example, the nearest neighbor of the 3-D point (0.5, 0.5, 0.5) could be found efficiently with: @@ -253,7 +253,7 @@ SELECT c FROM test ORDER BY c <-> cube(array[0.5,0.5,0.5]) LIMIT 1; - The ~> operator can also be used in this way to + The ~> operator can also be used in this way to efficiently retrieve the first few values sorted by a selected coordinate. For example, to get the first few cubes ordered by the first coordinate (lower left corner) ascending one could use the following query: @@ -365,7 +365,7 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; cube_ll_coord(cube, integer) float8 - Returns the n-th coordinate value for the lower + Returns the n-th coordinate value for the lower left corner of the cube. @@ -376,7 +376,7 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; cube_ur_coord(cube, integer) float8 - Returns the n-th coordinate value for the + Returns the n-th coordinate value for the upper right corner of the cube. @@ -412,9 +412,9 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; desired. - cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[2]) == '(3),(7)' + cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[2]) == '(3),(7)' cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]) == - '(5,3,1,1),(8,7,6,6)' + '(5,3,1,1),(8,7,6,6)' @@ -440,24 +440,24 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; cube_enlarge(c cube, r double, n integer) cube Increases the size of the cube by the specified - radius r in at least n dimensions. + radius r in at least n dimensions. If the radius is negative the cube is shrunk instead. - All defined dimensions are changed by the radius r. - Lower-left coordinates are decreased by r and - upper-right coordinates are increased by r. If a + All defined dimensions are changed by the radius r. + Lower-left coordinates are decreased by r and + upper-right coordinates are increased by r. 
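A sketch of the index and of a couple of the functions described here, reusing the documentation's test table with its cube column c:

    -- GiST index backing the =, &&, @>, <@ operators and the KNN queries shown above:
    CREATE INDEX test_cube_gist ON test USING gist (c);

    SELECT cube_ll_coord('(1,2),(3,4)'::cube, 2);                  -- 2
    SELECT cube_ur_coord('(1,2),(3,4)'::cube, 2);                  -- 4
    SELECT cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]);   -- (5,3,1,1),(8,7,6,6)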
If a lower-left coordinate is increased to more than the corresponding - upper-right coordinate (this can only happen when r + upper-right coordinate (this can only happen when r < 0) than both coordinates are set to their average. - If n is greater than the number of defined dimensions - and the cube is being enlarged (r > 0), then extra - dimensions are added to make n altogether; + If n is greater than the number of defined dimensions + and the cube is being enlarged (r > 0), then extra + dimensions are added to make n altogether; 0 is used as the initial value for the extra coordinates. This function is useful for creating bounding boxes around a point for searching for nearby points. cube_enlarge('(1,2),(3,4)', 0.5, 3) == - '(0.5,1.5,-0.5),(3.5,4.5,0.5)' + '(0.5,1.5,-0.5),(3.5,4.5,0.5)' @@ -523,13 +523,13 @@ t Notes - For examples of usage, see the regression test sql/cube.sql. + For examples of usage, see the regression test sql/cube.sql. To make it harder for people to break things, there is a limit of 100 on the number of dimensions of cubes. This is set - in cubedata.h if you need something bigger. + in cubedata.h if you need something bigger. diff --git a/doc/src/sgml/custom-scan.sgml b/doc/src/sgml/custom-scan.sgml index 9d1ca7bfe1..a46641674f 100644 --- a/doc/src/sgml/custom-scan.sgml +++ b/doc/src/sgml/custom-scan.sgml @@ -9,9 +9,9 @@ - PostgreSQL supports a set of experimental facilities which + PostgreSQL supports a set of experimental facilities which are intended to allow extension modules to add new scan types to the system. - Unlike a foreign data wrapper, which is only + Unlike a foreign data wrapper, which is only responsible for knowing how to scan its own foreign tables, a custom scan provider can provide an alternative method of scanning any relation in the system. Typically, the motivation for writing a custom scan provider will @@ -51,9 +51,9 @@ extern PGDLLIMPORT set_rel_pathlist_hook_type set_rel_pathlist_hook; Although this hook function can be used to examine, modify, or remove paths generated by the core system, a custom scan provider will typically - confine itself to generating CustomPath objects and adding - them to rel using add_path. The custom scan - provider is responsible for initializing the CustomPath + confine itself to generating CustomPath objects and adding + them to rel using add_path. The custom scan + provider is responsible for initializing the CustomPath object, which is declared like this: typedef struct CustomPath @@ -68,22 +68,22 @@ typedef struct CustomPath - path must be initialized as for any other path, including + path must be initialized as for any other path, including the row-count estimate, start and total cost, and sort ordering provided - by this path. flags is a bit mask, which should include - CUSTOMPATH_SUPPORT_BACKWARD_SCAN if the custom path can support - a backward scan and CUSTOMPATH_SUPPORT_MARK_RESTORE if it + by this path. flags is a bit mask, which should include + CUSTOMPATH_SUPPORT_BACKWARD_SCAN if the custom path can support + a backward scan and CUSTOMPATH_SUPPORT_MARK_RESTORE if it can support mark and restore. Both capabilities are optional. - An optional custom_paths is a list of Path + An optional custom_paths is a list of Path nodes used by this custom-path node; these will be transformed into - Plan nodes by planner. - custom_private can be used to store the custom path's + Plan nodes by planner. + custom_private can be used to store the custom path's private data. 
Private data should be stored in a form that can be handled - by nodeToString, so that debugging routines that attempt to - print the custom path will work as designed. methods must + by nodeToString, so that debugging routines that attempt to + print the custom path will work as designed. methods must point to a (usually statically allocated) object implementing the required custom path methods, of which there is currently only one. The - LibraryName and SymbolName fields must also + LibraryName and SymbolName fields must also be initialized so that the dynamic loader can resolve them to locate the method table. @@ -93,7 +93,7 @@ typedef struct CustomPath relations, such a path must produce the same output as would normally be produced by the join it replaces. To do this, the join provider should set the following hook, and then within the hook function, - create CustomPath path(s) for the join relation. + create CustomPath path(s) for the join relation. typedef void (*set_join_pathlist_hook_type) (PlannerInfo *root, RelOptInfo *joinrel, @@ -122,7 +122,7 @@ Plan *(*PlanCustomPath) (PlannerInfo *root, List *custom_plans); Convert a custom path to a finished plan. The return value will generally - be a CustomScan object, which the callback must allocate and + be a CustomScan object, which the callback must allocate and initialize. See for more details. @@ -150,45 +150,45 @@ typedef struct CustomScan - scan must be initialized as for any other scan, including + scan must be initialized as for any other scan, including estimated costs, target lists, qualifications, and so on. - flags is a bit mask with the same meaning as in - CustomPath. - custom_plans can be used to store child - Plan nodes. - custom_exprs should be used to + flags is a bit mask with the same meaning as in + CustomPath. + custom_plans can be used to store child + Plan nodes. + custom_exprs should be used to store expression trees that will need to be fixed up by - setrefs.c and subselect.c, while - custom_private should be used to store other private data + setrefs.c and subselect.c, while + custom_private should be used to store other private data that is only used by the custom scan provider itself. - custom_scan_tlist can be NIL when scanning a base + custom_scan_tlist can be NIL when scanning a base relation, indicating that the custom scan returns scan tuples that match the base relation's row type. Otherwise it is a target list describing - the actual scan tuples. custom_scan_tlist must be + the actual scan tuples. custom_scan_tlist must be provided for joins, and could be provided for scans if the custom scan provider can compute some non-Var expressions. - custom_relids is set by the core code to the set of + custom_relids is set by the core code to the set of relations (range table indexes) that this scan node handles; except when this scan is replacing a join, it will have only one member. - methods must point to a (usually statically allocated) + methods must point to a (usually statically allocated) object implementing the required custom scan methods, which are further detailed below. - When a CustomScan scans a single relation, - scan.scanrelid must be the range table index of the table - to be scanned. When it replaces a join, scan.scanrelid + When a CustomScan scans a single relation, + scan.scanrelid must be the range table index of the table + to be scanned. When it replaces a join, scan.scanrelid should be zero. 
- Plan trees must be able to be duplicated using copyObject, - so all the data stored within the custom fields must consist of + Plan trees must be able to be duplicated using copyObject, + so all the data stored within the custom fields must consist of nodes that that function can handle. Furthermore, custom scan providers cannot substitute a larger structure that embeds - a CustomScan for the structure itself, as would be possible - for a CustomPath or CustomScanState. + a CustomScan for the structure itself, as would be possible + for a CustomPath or CustomScanState. @@ -197,14 +197,14 @@ typedef struct CustomScan Node *(*CreateCustomScanState) (CustomScan *cscan); - Allocate a CustomScanState for this - CustomScan. The actual allocation will often be larger than - required for an ordinary CustomScanState, because many + Allocate a CustomScanState for this + CustomScan. The actual allocation will often be larger than + required for an ordinary CustomScanState, because many providers will wish to embed that as the first field of a larger structure. - The value returned must have the node tag and methods + The value returned must have the node tag and methods set appropriately, but other fields should be left as zeroes at this - stage; after ExecInitCustomScan performs basic initialization, - the BeginCustomScan callback will be invoked to give the + stage; after ExecInitCustomScan performs basic initialization, + the BeginCustomScan callback will be invoked to give the custom scan provider a chance to do whatever else is needed.
@@ -214,8 +214,8 @@ Node *(*CreateCustomScanState) (CustomScan *cscan); Executing Custom Scans - When a CustomScan is executed, its execution state is - represented by a CustomScanState, which is declared as + When a CustomScan is executed, its execution state is + represented by a CustomScanState, which is declared as follows: typedef struct CustomScanState @@ -228,15 +228,15 @@ typedef struct CustomScanState - ss is initialized as for any other scan state, + ss is initialized as for any other scan state, except that if the scan is for a join rather than a base relation, - ss.ss_currentRelation is left NULL. - flags is a bit mask with the same meaning as in - CustomPath and CustomScan. - methods must point to a (usually statically allocated) + ss.ss_currentRelation is left NULL. + flags is a bit mask with the same meaning as in + CustomPath and CustomScan. + methods must point to a (usually statically allocated) object implementing the required custom scan state methods, which are - further detailed below. Typically, a CustomScanState, which - need not support copyObject, will actually be a larger + further detailed below. Typically, a CustomScanState, which + need not support copyObject, will actually be a larger structure embedding the above as its first member. @@ -249,8 +249,8 @@ void (*BeginCustomScan) (CustomScanState *node, EState *estate, int eflags); - Complete initialization of the supplied CustomScanState. - Standard fields have been initialized by ExecInitCustomScan, + Complete initialization of the supplied CustomScanState. + Standard fields have been initialized by ExecInitCustomScan, but any private fields should be initialized here.
@@ -259,16 +259,16 @@ void (*BeginCustomScan) (CustomScanState *node, TupleTableSlot *(*ExecCustomScan) (CustomScanState *node); Fetch the next scan tuple. If any tuples remain, it should fill - ps_ResultTupleSlot with the next tuple in the current scan + ps_ResultTupleSlot with the next tuple in the current scan direction, and then return the tuple slot. If not, - NULL or an empty slot should be returned. + NULL or an empty slot should be returned.
void (*EndCustomScan) (CustomScanState *node); - Clean up any private data associated with the CustomScanState. + Clean up any private data associated with the CustomScanState. This method is required, but it does not need to do anything if there is no associated data or it will be cleaned up automatically. @@ -286,9 +286,9 @@ void (*ReScanCustomScan) (CustomScanState *node); void (*MarkPosCustomScan) (CustomScanState *node); Save the current scan position so that it can subsequently be restored - by the RestrPosCustomScan callback. This callback is + by the RestrPosCustomScan callback. This callback is optional, and need only be supplied if the - CUSTOMPATH_SUPPORT_MARK_RESTORE flag is set. + CUSTOMPATH_SUPPORT_MARK_RESTORE flag is set.
@@ -296,9 +296,9 @@ void (*MarkPosCustomScan) (CustomScanState *node); void (*RestrPosCustomScan) (CustomScanState *node); Restore the previous scan position as saved by the - MarkPosCustomScan callback. This callback is optional, + MarkPosCustomScan callback. This callback is optional, and need only be supplied if the - CUSTOMPATH_SUPPORT_MARK_RESTORE flag is set. + CUSTOMPATH_SUPPORT_MARK_RESTORE flag is set. @@ -320,8 +320,8 @@ void (*InitializeDSMCustomScan) (CustomScanState *node, void *coordinate); Initialize the dynamic shared memory that will be required for parallel - operation. coordinate points to a shared memory area of - size equal to the return value of EstimateDSMCustomScan. + operation. coordinate points to a shared memory area of + size equal to the return value of EstimateDSMCustomScan. This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. @@ -337,9 +337,9 @@ void (*ReInitializeDSMCustomScan) (CustomScanState *node, This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. Recommended practice is that this callback reset only shared state, - while the ReScanCustomScan callback resets only local + while the ReScanCustomScan callback resets only local state. Currently, this callback will be called - before ReScanCustomScan, but it's best not to rely on + before ReScanCustomScan, but it's best not to rely on that ordering.
@@ -350,7 +350,7 @@ void (*InitializeWorkerCustomScan) (CustomScanState *node, void *coordinate); Initialize a parallel worker's local state based on the shared state - set up by the leader during InitializeDSMCustomScan. + set up by the leader during InitializeDSMCustomScan. This callback is optional, and need only be supplied if this custom scan provider supports parallel execution.
@@ -361,7 +361,7 @@ void (*ShutdownCustomScan) (CustomScanState *node); Release resources when it is anticipated the node will not be executed to completion. This is not called in all cases; sometimes, - EndCustomScan may be called without this function having + EndCustomScan may be called without this function having been called first. Since the DSM segment used by parallel query is destroyed just after this callback is invoked, custom scan providers that wish to take some action before the DSM segment goes away should implement @@ -374,9 +374,9 @@ void (*ExplainCustomScan) (CustomScanState *node, List *ancestors, ExplainState *es); - Output additional information for EXPLAIN of a custom-scan + Output additional information for EXPLAIN of a custom-scan plan node. This callback is optional. Common data stored in the - ScanState, such as the target list and scan relation, will + ScanState, such as the target list and scan relation, will be shown even without this callback, but the callback allows the display of additional, private state.
diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index 512756df4a..6a15f9030c 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -79,7 +79,7 @@ bytea - binary data (byte array) + binary data (byte array) @@ -354,45 +354,45 @@ - smallint + smallint 2 bytes small-range integer -32768 to +32767 - integer + integer 4 bytes typical choice for integer -2147483648 to +2147483647 - bigint + bigint 8 bytes large-range integer -9223372036854775808 to +9223372036854775807 - decimal + decimal variable user-specified precision, exact up to 131072 digits before the decimal point; up to 16383 digits after the decimal point - numeric + numeric variable user-specified precision, exact up to 131072 digits before the decimal point; up to 16383 digits after the decimal point - real + real 4 bytes variable-precision, inexact 6 decimal digits precision - double precision + double precision 8 bytes variable-precision, inexact 15 decimal digits precision @@ -406,7 +406,7 @@ - serial + serial 4 bytes autoincrementing integer 1 to 2147483647 @@ -574,9 +574,9 @@ NUMERIC Numeric values are physically stored without any extra leading or trailing zeroes. Thus, the declared precision and scale of a column - are maximums, not fixed allocations. (In this sense the numeric - type is more akin to varchar(n) - than to char(n).) The actual storage + are maximums, not fixed allocations. (In this sense the numeric + type is more akin to varchar(n) + than to char(n).) The actual storage requirement is two bytes for each group of four decimal digits, plus three to eight bytes overhead. @@ -593,22 +593,22 @@ NUMERIC In addition to ordinary numeric values, the numeric - type allows the special value NaN, meaning - not-a-number. Any operation on NaN - yields another NaN. When writing this value + type allows the special value NaN, meaning + not-a-number. Any operation on NaN + yields another NaN. When writing this value as a constant in an SQL command, you must put quotes around it, - for example UPDATE table SET x = 'NaN'. On input, - the string NaN is recognized in a case-insensitive manner. + for example UPDATE table SET x = 'NaN'. On input, + the string NaN is recognized in a case-insensitive manner. - In most implementations of the not-a-number concept, - NaN is not considered equal to any other numeric - value (including NaN). In order to allow - numeric values to be sorted and used in tree-based - indexes, PostgreSQL treats NaN - values as equal, and greater than all non-NaN + In most implementations of the not-a-number concept, + NaN is not considered equal to any other numeric + value (including NaN). In order to allow + numeric values to be sorted and used in tree-based + indexes, PostgreSQL treats NaN + values as equal, and greater than all non-NaN values. @@ -756,18 +756,18 @@ FROM generate_series(-3.5, 3.5, 1) as x; floating-point arithmetic does not follow IEEE 754, these values will probably not work as expected.) When writing these values as constants in an SQL command, you must put quotes around them, - for example UPDATE table SET x = '-Infinity'. On input, + for example UPDATE table SET x = '-Infinity'. On input, these strings are recognized in a case-insensitive manner.
- IEEE754 specifies that NaN should not compare equal - to any other floating-point value (including NaN). + IEEE754 specifies that NaN should not compare equal + to any other floating-point value (including NaN). In order to allow floating-point values to be sorted and used - in tree-based indexes, PostgreSQL treats - NaN values as equal, and greater than all - non-NaN values. + in tree-based indexes, PostgreSQL treats + NaN values as equal, and greater than all + non-NaN values. @@ -776,7 +776,7 @@ FROM generate_series(-3.5, 3.5, 1) as x; notations float and float(p) for specifying inexact numeric types. Here, p specifies - the minimum acceptable precision in binary digits. + the minimum acceptable precision in binary digits. PostgreSQL accepts float(1) to float(24) as selecting the real type, while @@ -870,12 +870,12 @@ ALTER SEQUENCE tablename_ Thus, we have created an integer column and arranged for its default - values to be assigned from a sequence generator. A NOT NULL + values to be assigned from a sequence generator. A NOT NULL constraint is applied to ensure that a null value cannot be inserted. (In most cases you would also want to attach a - UNIQUE or PRIMARY KEY constraint to prevent + UNIQUE or PRIMARY KEY constraint to prevent duplicate values from being inserted by accident, but this is - not automatic.) Lastly, the sequence is marked as owned by + not automatic.) Lastly, the sequence is marked as owned by the column, so that it will be dropped if the column or table is dropped.
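A short sketch of declaring and using a serial column as just described; the items table is illustrative:

    CREATE TABLE items (
        id   serial PRIMARY KEY,   -- integer column whose default comes from an owned sequence
        name text NOT NULL
    );

    INSERT INTO items (name) VALUES ('first') RETURNING id;   -- id assigned from the sequence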
@@ -908,7 +908,7 @@ ALTER SEQUENCE tablename_bigserial and serial8 work the same way, except that they create a bigint column. bigserial should be used if you anticipate - the use of more than 231 identifiers over the + the use of more than 231 identifiers over the lifetime of the table. The type names smallserial and serial2 also work the same way, except that they create a smallint column. @@ -962,9 +962,9 @@ ALTER SEQUENCE tablename_ Since the output of this data type is locale-sensitive, it might not - work to load money data into a database that has a different - setting of lc_monetary. To avoid problems, before - restoring a dump into a new database make sure lc_monetary has + work to load money data into a database that has a different + setting of lc_monetary. To avoid problems, before + restoring a dump into a new database make sure lc_monetary has the same or equivalent value as in the database that was dumped.
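Because money output is locale-sensitive, it is worth checking lc_monetary before moving such data around; a tiny sketch:

    SHOW lc_monetary;
    SELECT '52093.89'::money;   -- currency symbol and separators follow lc_monetary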
@@ -994,7 +994,7 @@ SELECT '52093.89'::money::numeric::float8; Division of a money value by an integer value is performed with truncation of the fractional part towards zero. To get a rounded result, divide by a floating-point value, or cast the money - value to numeric before dividing and back to money + value to numeric before dividing and back to money afterwards. (The latter is preferable to avoid risking precision loss.) When a money value is divided by another money value, the result is double precision (i.e., a pure number, @@ -1047,11 +1047,11 @@ SELECT '52093.89'::money::numeric::float8; - character varying(n), varchar(n) + character varying(n), varchar(n) variable-length with limit - character(n), char(n) + character(n), char(n) fixed-length, blank padded @@ -1070,10 +1070,10 @@ SELECT '52093.89'::money::numeric::float8; SQL defines two primary character types: - character varying(n) and - character(n), where n + character varying(n) and + character(n), where n is a positive integer. Both of these types can store strings up to - n characters (not bytes) in length. An attempt to store a + n characters (not bytes) in length. An attempt to store a longer string into a column of these types will result in an error, unless the excess characters are all spaces, in which case the string will be truncated to the maximum length. (This somewhat @@ -1087,22 +1087,22 @@ SELECT '52093.89'::money::numeric::float8; If one explicitly casts a value to character - varying(n) or - character(n), then an over-length - value will be truncated to n characters without + varying(n) or + character(n), then an over-length + value will be truncated to n characters without raising an error. (This too is required by the SQL standard.)
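A small sketch of the difference between assignment and explicit casts described above (the table is hypothetical):

CREATE TABLE t (v varchar(5));
INSERT INTO t VALUES ('too long here');   -- error: value too long for type character varying(5)
SELECT 'too long here'::varchar(5);       -- 'too l' (silently truncated)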
- The notations varchar(n) and - char(n) are aliases for character - varying(n) and - character(n), respectively. + The notations varchar(n) and + char(n) are aliases for character + varying(n) and + character(n), respectively. character without length specifier is equivalent to character(1). If character varying is used without length specifier, the type accepts strings of any size. The - latter is a PostgreSQL extension. + latter is a PostgreSQL extension. @@ -1115,19 +1115,19 @@ SELECT '52093.89'::money::numeric::float8; Values of type character are physically padded - with spaces to the specified width n, and are + with spaces to the specified width n, and are stored and displayed that way. However, trailing spaces are treated as semantically insignificant and disregarded when comparing two values of type character. In collations where whitespace is significant, this behavior can produce unexpected results; for example SELECT 'a '::CHAR(2) collate "C" < - E'a\n'::CHAR(2) returns true, even though C + E'a\n'::CHAR(2)
returns true, even though C locale would consider a space to be greater than a newline. Trailing spaces are removed when converting a character value to one of the other string types. Note that trailing spaces - are semantically significant in + are semantically significant in character varying and text values, and - when using pattern matching, that is LIKE and + when using pattern matching, that is LIKE and regular expressions.
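The practical effect of these rules can be seen in simple comparisons, for example:

SELECT 'a '::char(3) = 'a'::char(3);         -- true: trailing blanks are insignificant for character
SELECT 'a '::varchar(3) = 'a'::varchar(3);   -- false: trailing blanks are significant here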
@@ -1140,7 +1140,7 @@ SELECT '52093.89'::money::numeric::float8; stored in background tables so that they do not interfere with rapid access to shorter column values. In any case, the longest possible character string that can be stored is about 1 GB. (The - maximum value that will be allowed for n in the data + maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different. If you desire to @@ -1155,10 +1155,10 @@ SELECT '52093.89'::money::numeric::float8; apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While - character(n) has performance + character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact - character(n) is usually the slowest of + character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead. @@ -1220,7 +1220,7 @@ SELECT b, char_length(b) FROM test2; in the internal system catalogs and is not intended for use by the general user. Its length is currently defined as 64 bytes (63 usable characters plus terminator) but should be referenced using the constant - NAMEDATALEN in C source code. + NAMEDATALEN in C source code. The length is set at compile time (and is therefore adjustable for special uses); the default maximum length might change in a future release. The type "char" @@ -1304,7 +1304,7 @@ SELECT b, char_length(b) FROM test2; Second, operations on binary strings process the actual bytes, whereas the processing of character strings depends on locale settings. In short, binary strings are appropriate for storing data that the - programmer thinks of as raw bytes, whereas character + programmer thinks of as raw bytes, whereas character strings are appropriate for storing text.
@@ -1328,10 +1328,10 @@ SELECT b, char_length(b) FROM test2;
- <type>bytea</> Hex Format + <type>bytea</type> Hex Format - The hex format encodes binary data as 2 hexadecimal digits + The hex format encodes binary data as 2 hexadecimal digits per byte, most significant nibble first. The entire string is preceded by the sequence \x (to distinguish it from the escape format). In some contexts, the initial backslash may @@ -1355,7 +1355,7 @@ SELECT E'\\xDEADBEEF'; - <type>bytea</> Escape Format + <type>bytea</type> Escape Format The escape format is the traditional @@ -1390,7 +1390,7 @@ SELECT E'\\xDEADBEEF'; - <type>bytea</> Literal Escaped Octets + <type>bytea</type> Literal Escaped Octets @@ -1430,7 +1430,7 @@ SELECT E'\\xDEADBEEF'; 0 to 31 and 127 to 255 non-printable octets - E'\\xxx' (octal value) + E'\\xxx' (octal value) SELECT E'\\001'::bytea; \001 @@ -1481,7 +1481,7 @@ SELECT E'\\xDEADBEEF';
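For example, the same four bytes can be entered in hex format directly (a sketch; with standard_conforming_strings on, the backslash needs no extra escaping):

SELECT '\xDEADBEEF'::bytea;     -- hex format input
SELECT E'\\xDEADBEEF'::bytea;   -- same value, written as an escape string literal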
- <type>bytea</> Output Escaped Octets + <type>bytea</type> Output Escaped Octets @@ -1506,7 +1506,7 @@ SELECT E'\\xDEADBEEF'; 0 to 31 and 127 to 255 non-printable octets - \xxx (octal value) + \xxx (octal value) SELECT E'\\001'::bytea; \001 @@ -1524,7 +1524,7 @@ SELECT E'\\xDEADBEEF';
- Depending on the front end to PostgreSQL you use, + Depending on the front end to PostgreSQL you use, you might have additional work to do in terms of escaping and unescaping bytea strings. For example, you might also have to escape line feeds and carriage returns if your interface @@ -1685,7 +1685,7 @@ MINUTE TO SECOND Note that if both fields and p are specified, the - fields must include SECOND, + fields must include SECOND, since the precision applies only to the seconds. @@ -1717,9 +1717,9 @@ MINUTE TO SECOND For some formats, ordering of day, month, and year in date input is ambiguous and there is support for specifying the expected ordering of these fields. Set the parameter - to MDY to select month-day-year interpretation, - DMY to select day-month-year interpretation, or - YMD to select year-month-day interpretation. + to MDY to select month-day-year interpretation, + DMY to select day-month-year interpretation, or + YMD to select year-month-day interpretation.
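For instance (a sketch; compare the examples table that follows):

SET datestyle TO 'ISO, DMY';
SELECT '1/8/1999'::date;   -- 1999-08-01 (day-month-year)
SET datestyle TO 'ISO, MDY';
SELECT '1/8/1999'::date;   -- 1999-01-08 (month-day-year)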
@@ -1784,19 +1784,19 @@ MINUTE TO SECOND 1/8/1999 - January 8 in MDY mode; - August 1 in DMY mode + January 8 in MDY mode; + August 1 in DMY mode 1/18/1999 - January 18 in MDY mode; + January 18 in MDY mode; rejected in other modes 01/02/03 - January 2, 2003 in MDY mode; - February 1, 2003 in DMY mode; - February 3, 2001 in YMD mode + January 2, 2003 in MDY mode; + February 1, 2003 in DMY mode; + February 3, 2001 in YMD mode @@ -1813,15 +1813,15 @@ MINUTE TO SECOND 99-Jan-08 - January 8 in YMD mode, else error + January 8 in YMD mode, else error 08-Jan-99 - January 8, except error in YMD mode + January 8, except error in YMD mode Jan-08-99 - January 8, except error in YMD mode + January 8, except error in YMD mode 19990108 @@ -2070,20 +2070,20 @@ January 8 04:05:06 1999 PST For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, - GMT). An input value that has an explicit + GMT). An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's parameter, and is converted to UTC using the - offset for the timezone zone. + offset for the timezone zone. When a timestamp with time zone value is output, it is always converted from UTC to the - current timezone zone, and displayed as local time in that + current timezone zone, and displayed as local time in that zone. To see the time in another time zone, either change - timezone or use the AT TIME ZONE construct + timezone or use the AT TIME ZONE construct (see ). @@ -2091,8 +2091,8 @@ January 8 04:05:06 1999 PST Conversions between timestamp without time zone and timestamp with time zone normally assume that the timestamp without time zone value should be taken or given - as timezone local time. A different time zone can - be specified for the conversion using AT TIME ZONE. + as timezone local time. A different time zone can + be specified for the conversion using AT TIME ZONE.
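A brief sketch of these conversions (the 2014-06-04 example reappears further below):

SET timezone = 'America/New_York';
SELECT TIMESTAMPTZ '2014-06-04 12:00 America/New_York';                     -- stored in UTC, shown in the session zone
SELECT TIMESTAMPTZ '2014-06-04 12:00 America/New_York' AT TIME ZONE 'UTC';  -- 2014-06-04 16:00:00 (noon EDT is 16:00 UTC)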
@@ -2117,7 +2117,7 @@ January 8 04:05:06 1999 PST are specially represented inside the system and will be displayed unchanged; but the others are simply notational shorthands that will be converted to ordinary date/time values when read. - (In particular, now and related strings are converted + (In particular, now and related strings are converted to a specific time value as soon as they are read.) All of these values need to be enclosed in single quotes when used as constants in SQL commands. @@ -2187,7 +2187,7 @@ January 8 04:05:06 1999 PST LOCALTIMESTAMP. The latter four accept an optional subsecond precision specification. (See .) Note that these are - SQL functions and are not recognized in data input strings. + SQL functions and are not recognized in data input strings.
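To see the difference in practice (a sketch; the table name is hypothetical):

SELECT 'now'::timestamptz, CURRENT_TIMESTAMP;   -- both evaluate when read
-- Beware: in a DEFAULT clause, the string 'now' is converted once, when the default is defined,
-- whereas now() or CURRENT_TIMESTAMP is evaluated at each insertion.
CREATE TABLE log (created timestamptz DEFAULT now());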
@@ -2211,8 +2211,8 @@ January 8 04:05:06 1999 PST The output format of the date/time types can be set to one of the four styles ISO 8601, - SQL (Ingres), traditional POSTGRES - (Unix date format), or + SQL (Ingres), traditional POSTGRES + (Unix date format), or German. The default is the ISO format. (The SQL standard requires the use of the ISO 8601 @@ -2222,7 +2222,7 @@ January 8 04:05:06 1999 PST output style. The output of the date and time types is generally only the date or time part in accordance with the given examples. However, the - POSTGRES style outputs date-only values in + POSTGRES style outputs date-only values in ISO format. @@ -2263,9 +2263,9 @@ January 8 04:05:06 1999 PST - ISO 8601 specifies the use of uppercase letter T to separate - the date and time. PostgreSQL accepts that format on - input, but on output it uses a space rather than T, as shown + ISO 8601 specifies the use of uppercase letter T to separate + the date and time. PostgreSQL accepts that format on + input, but on output it uses a space rather than T, as shown above. This is for readability and for consistency with RFC 3339 as well as some other database systems. @@ -2292,17 +2292,17 @@ January 8 04:05:06 1999 PST - SQL, DMY + SQL, DMY day/month/year 17/12/1997 15:37:16.00 CET - SQL, MDY + SQL, MDY month/day/year 12/17/1997 07:37:16.00 PST - Postgres, DMY + Postgres, DMY day/month/year Wed 17 Dec 07:37:16 1997 PST @@ -2368,7 +2368,7 @@ January 8 04:05:06 1999 PST The default time zone is specified as a constant numeric offset - from UTC. It is therefore impossible to adapt to + from UTC. It is therefore impossible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries. @@ -2380,7 +2380,7 @@ January 8 04:05:06 1999 PST To address these difficulties, we recommend using date/time types that contain both date and time when using time zones. We - do not recommend using the type time with + do not recommend using the type time with time zone (though it is supported by PostgreSQL for legacy applications and for compliance with the SQL standard). @@ -2401,7 +2401,7 @@ January 8 04:05:06 1999 PST - A full time zone name, for example America/New_York. + A full time zone name, for example America/New_York. The recognized time zone names are listed in the pg_timezone_names view (see ). @@ -2412,16 +2412,16 @@ January 8 04:05:06 1999 PST - A time zone abbreviation, for example PST. Such a + A time zone abbreviation, for example PST. Such a specification merely defines a particular offset from UTC, in contrast to full time zone names which can imply a set of daylight savings transition-date rules as well. The recognized abbreviations - are listed in the pg_timezone_abbrevs view (see pg_timezone_abbrevs view (see ). You cannot set the configuration parameters or to a time zone abbreviation, but you can use abbreviations in - date/time input values and with the AT TIME ZONE + date/time input values and with the AT TIME ZONE operator. 
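The two views mentioned above can be inspected directly, for example:

SELECT name, abbrev, utc_offset, is_dst
  FROM pg_timezone_names WHERE name = 'America/New_York';
SELECT * FROM pg_timezone_abbrevs WHERE abbrev IN ('PST', 'EDT');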
@@ -2429,25 +2429,25 @@ January 8 04:05:06 1999 PST In addition to the timezone names and abbreviations, PostgreSQL will accept POSIX-style time zone - specifications of the form STDoffset or - STDoffsetDST, where - STD is a zone abbreviation, offset is a - numeric offset in hours west from UTC, and DST is an + specifications of the form STDoffset or + STDoffsetDST, where + STD is a zone abbreviation, offset is a + numeric offset in hours west from UTC, and DST is an optional daylight-savings zone abbreviation, assumed to stand for one - hour ahead of the given offset. For example, if EST5EDT + hour ahead of the given offset. For example, if EST5EDT were not already a recognized zone name, it would be accepted and would be functionally equivalent to United States East Coast time. In this syntax, a zone abbreviation can be a string of letters, or an - arbitrary string surrounded by angle brackets (<>). + arbitrary string surrounded by angle brackets (<>). When a daylight-savings zone abbreviation is present, it is assumed to be used according to the same daylight-savings transition rules used in the - IANA time zone database's posixrules entry. + IANA time zone database's posixrules entry. In a standard PostgreSQL installation, - posixrules is the same as US/Eastern, so + posixrules is the same as US/Eastern, so that POSIX-style time zone specifications follow USA daylight-savings rules. If needed, you can adjust this behavior by replacing the - posixrules file. + posixrules file.
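For example, a POSIX-style specification can be used directly as the session's zone setting (a sketch):

SET TIME ZONE 'EST5EDT';   -- abbreviation EST, five hours west of UTC, with daylight-savings abbreviation EDT
SHOW timezone;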
@@ -2456,10 +2456,10 @@ January 8 04:05:06 1999 PST and full names: abbreviations represent a specific offset from UTC, whereas many of the full names imply a local daylight-savings time rule, and so have two possible UTC offsets. As an example, - 2014-06-04 12:00 America/New_York represents noon local + 2014-06-04 12:00 America/New_York represents noon local time in New York, which for this particular date was Eastern Daylight - Time (UTC-4). So 2014-06-04 12:00 EDT specifies that - same time instant. But 2014-06-04 12:00 EST specifies + Time (UTC-4). So 2014-06-04 12:00 EDT specifies that + same time instant. But 2014-06-04 12:00 EST specifies noon Eastern Standard Time (UTC-5), regardless of whether daylight savings was nominally in effect on that date. @@ -2467,10 +2467,10 @@ January 8 04:05:06 1999 PST To complicate matters, some jurisdictions have used the same timezone abbreviation to mean different UTC offsets at different times; for - example, in Moscow MSK has meant UTC+3 in some years and - UTC+4 in others. PostgreSQL interprets such + example, in Moscow MSK has meant UTC+3 in some years and + UTC+4 in others. PostgreSQL interprets such abbreviations according to whatever they meant (or had most recently - meant) on the specified date; but, as with the EST example + meant) on the specified date; but, as with the EST example above, this is not necessarily the same as local civil time on that date. @@ -2478,18 +2478,18 @@ January 8 04:05:06 1999 PST One should be wary that the POSIX-style time zone feature can lead to silently accepting bogus input, since there is no check on the reasonableness of the zone abbreviations. For example, SET - TIMEZONE TO FOOBAR0 will work, leaving the system effectively using + TIMEZONE TO FOOBAR0 will work, leaving the system effectively using a rather peculiar abbreviation for UTC. Another issue to keep in mind is that in POSIX time zone names, - positive offsets are used for locations west of Greenwich. + positive offsets are used for locations west of Greenwich. Everywhere else, PostgreSQL follows the - ISO-8601 convention that positive timezone offsets are east + ISO-8601 convention that positive timezone offsets are east of Greenwich. In all cases, timezone names and abbreviations are recognized - case-insensitively. (This is a change from PostgreSQL + case-insensitively. (This is a change from PostgreSQL versions prior to 8.2, which were case-sensitive in some contexts but not others.) @@ -2497,14 +2497,14 @@ January 8 04:05:06 1999 PST Neither timezone names nor abbreviations are hard-wired into the server; they are obtained from configuration files stored under - .../share/timezone/ and .../share/timezonesets/ + .../share/timezone/ and .../share/timezonesets/ of the installation directory (see ). The configuration parameter can - be set in the file postgresql.conf, or in any of the + be set in the file postgresql.conf, or in any of the other standard ways described in . There are also some special ways to set it: @@ -2513,7 +2513,7 @@ January 8 04:05:06 1999 PST The SQL command SET TIME ZONE sets the time zone for the session. This is an alternative spelling - of SET TIMEZONE TO with a more SQL-spec-compatible syntax. + of SET TIMEZONE TO with a more SQL-spec-compatible syntax. @@ -2541,52 +2541,52 @@ January 8 04:05:06 1999 PST verbose syntax: -@ quantity unit quantity unit... direction +@ quantity unit quantity unit... 
direction - where quantity is a number (possibly signed); - unit is microsecond, + where quantity is a number (possibly signed); + unit is microsecond, millisecond, second, minute, hour, day, week, month, year, decade, century, millennium, or abbreviations or plurals of these units; - direction can be ago or - empty. The at sign (@) is optional noise. The amounts + direction can be ago or + empty. The at sign (@) is optional noise. The amounts of the different units are implicitly added with appropriate sign accounting. ago negates all the fields. This syntax is also used for interval output, if is set to - postgres_verbose. + postgres_verbose. Quantities of days, hours, minutes, and seconds can be specified without - explicit unit markings. For example, '1 12:59:10' is read - the same as '1 day 12 hours 59 min 10 sec'. Also, + explicit unit markings. For example, '1 12:59:10' is read + the same as '1 day 12 hours 59 min 10 sec'. Also, a combination of years and months can be specified with a dash; - for example '200-10' is read the same as '200 years - 10 months'. (These shorter forms are in fact the only ones allowed + for example '200-10' is read the same as '200 years + 10 months'. (These shorter forms are in fact the only ones allowed by the SQL standard, and are used for output when - IntervalStyle is set to sql_standard.) + IntervalStyle is set to sql_standard.) Interval values can also be written as ISO 8601 time intervals, using - either the format with designators of the standard's section - 4.4.3.2 or the alternative format of section 4.4.3.3. The + either the format with designators of the standard's section + 4.4.3.2 or the alternative format of section 4.4.3.3. The format with designators looks like this: -P quantity unit quantity unit ... T quantity unit ... +P quantity unit quantity unit ... T quantity unit ... - The string must start with a P, and may include a - T that introduces the time-of-day units. The + The string must start with a P, and may include a + T that introduces the time-of-day units. The available unit abbreviations are given in . Units may be omitted, and may be specified in any order, but units smaller than - a day must appear after T. In particular, the meaning of - M depends on whether it is before or after - T. + a day must appear after T. In particular, the meaning of + M depends on whether it is before or after + T. @@ -2634,51 +2634,51 @@ P quantity unit quantity In the alternative format: -P years-months-days T hours:minutes:seconds +P years-months-days T hours:minutes:seconds the string must begin with P, and a - T separates the date and time parts of the interval. + T separates the date and time parts of the interval. The values are given as numbers similar to ISO 8601 dates. - When writing an interval constant with a fields + When writing an interval constant with a fields specification, or when assigning a string to an interval column that was - defined with a fields specification, the interpretation of - unmarked quantities depends on the fields. For - example INTERVAL '1' YEAR is read as 1 year, whereas - INTERVAL '1' means 1 second. Also, field values - to the right of the least significant field allowed by the - fields specification are silently discarded. For - example, writing INTERVAL '1 day 2:03:04' HOUR TO MINUTE + defined with a fields specification, the interpretation of + unmarked quantities depends on the fields. For + example INTERVAL '1' YEAR is read as 1 year, whereas + INTERVAL '1' means 1 second. 
Also, field values + to the right of the least significant field allowed by the + fields specification are silently discarded. For + example, writing INTERVAL '1 day 2:03:04' HOUR TO MINUTE results in dropping the seconds field, but not the day field. - According to the SQL standard all fields of an interval + According to the SQL standard all fields of an interval value must have the same sign, so a leading negative sign applies to all fields; for example the negative sign in the interval literal - '-1 2:03:04' applies to both the days and hour/minute/second - parts. PostgreSQL allows the fields to have different + '-1 2:03:04' applies to both the days and hour/minute/second + parts. PostgreSQL allows the fields to have different signs, and traditionally treats each field in the textual representation as independently signed, so that the hour/minute/second part is - considered positive in this example. If IntervalStyle is + considered positive in this example. If IntervalStyle is set to sql_standard then a leading sign is considered to apply to all fields (but only if no additional signs appear). - Otherwise the traditional PostgreSQL interpretation is + Otherwise the traditional PostgreSQL interpretation is used. To avoid ambiguity, it's recommended to attach an explicit sign to each field if any field is negative. - Internally interval values are stored as months, days, + Internally interval values are stored as months, days, and seconds. This is done because the number of days in a month varies, and a day can have 23 or 25 hours if a daylight savings time adjustment is involved. The months and days fields are integers while the seconds field can store fractions. Because intervals are - usually created from constant strings or timestamp subtraction, + usually created from constant strings or timestamp subtraction, this storage method works well in most cases. Functions - justify_days and justify_hours are + justify_days and justify_hours are available for adjusting days and hours that overflow their normal ranges. @@ -2686,18 +2686,18 @@ P years-months-days < In the verbose input format, and in some fields of the more compact input formats, field values can have fractional parts; for example - '1.5 week' or '01:02:03.45'. Such input is + '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. - For example, '1.5 month' becomes 1 month and 15 days. + For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. shows some examples - of valid interval input. + of valid interval input.
@@ -2724,11 +2724,11 @@ P years-months-days < P1Y2M3DT4H5M6S - ISO 8601 format with designators: same meaning as above + ISO 8601 format with designators: same meaning as above P0001-02-03T04:05:06 - ISO 8601 alternative format: same meaning as above + ISO 8601 alternative format: same meaning as above @@ -2747,16 +2747,16 @@ P years-months-days < The output format of the interval type can be set to one of the - four styles sql_standard, postgres, - postgres_verbose, or iso_8601, + four styles sql_standard, postgres, + postgres_verbose, or iso_8601, using the command SET intervalstyle. - The default is the postgres format. + The default is the postgres format. shows examples of each output style. - The sql_standard style produces output that conforms to + The sql_standard style produces output that conforms to the SQL standard's specification for interval literal strings, if the interval value meets the standard's restrictions (either year-month only or day-time only, with no mixing of positive @@ -2766,20 +2766,20 @@ P years-months-days < - The output of the postgres style matches the output of - PostgreSQL releases prior to 8.4 when the - parameter was set to ISO. + The output of the postgres style matches the output of + PostgreSQL releases prior to 8.4 when the + parameter was set to ISO. - The output of the postgres_verbose style matches the output of - PostgreSQL releases prior to 8.4 when the - DateStyle parameter was set to non-ISO output. + The output of the postgres_verbose style matches the output of + PostgreSQL releases prior to 8.4 when the + DateStyle parameter was set to non-ISO output. - The output of the iso_8601 style matches the format - with designators described in section 4.4.3.2 of the + The output of the iso_8601 style matches the format + with designators described in section 4.4.3.2 of the ISO 8601 standard. @@ -2796,25 +2796,25 @@ P years-months-days < - sql_standard + sql_standard 1-2 3 4:05:06 -1-2 +3 -4:05:06 - postgres + postgres 1 year 2 mons 3 days 04:05:06 -1 year -2 mons +3 days -04:05:06 - postgres_verbose + postgres_verbose @ 1 year 2 mons @ 3 days 4 hours 5 mins 6 secs @ 1 year 2 mons -3 days 4 hours 5 mins 6 secs ago - iso_8601 + iso_8601 P1Y2M P3DT4H5M6S P-1Y-2M3DT-4H-5M-6S @@ -3178,7 +3178,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays x , y - where x and y are the respective + where x and y are the respective coordinates, as floating-point numbers. @@ -3196,8 +3196,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays Lines are represented by the linear - equation Ax + By + C = 0, - where A and B are not both zero. Values + equation Ax + By + C = 0, + where A and B are not both zero. Values of type line are input and output in the following form: { A, B, C } @@ -3324,8 +3324,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays where the points are the end points of the line segments - comprising the path. Square brackets ([]) indicate - an open path, while parentheses (()) indicate a + comprising the path. Square brackets ([]) indicate + an open path, while parentheses (()) indicate a closed path. When the outermost parentheses are omitted, as in the third through fifth syntaxes, a closed path is assumed. @@ -3388,7 +3388,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays where - (x,y) + (x,y) is the center point and r is the radius of the circle. 
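A few input examples for these geometric types (a sketch; the circle syntax wraps the center and radius in angle brackets):

SELECT '(1,2)'::point;                    -- point at x=1, y=2
SELECT '[(0,0),(1,1),(2,0)]'::path;       -- open path (square brackets)
SELECT '((0,0),(1,1),(2,0))'::path;       -- closed path (parentheses)
SELECT '<(0,0),5>'::circle;               -- circle centered at the origin with radius 5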
@@ -3409,7 +3409,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays - PostgreSQL offers data types to store IPv4, IPv6, and MAC + PostgreSQL offers data types to store IPv4, IPv6, and MAC addresses, as shown in . It is better to use these types instead of plain text types to store network addresses, because @@ -3503,7 +3503,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays - <type>cidr</> + <type>cidr</type> cidr @@ -3514,11 +3514,11 @@ SELECT person.name, holidays.num_weeks FROM person, holidays Input and output formats follow Classless Internet Domain Routing conventions. The format for specifying networks is address/y where address is the network represented as an + class="parameter">address/y where address is the network represented as an IPv4 or IPv6 address, and y is the number of bits in the netmask. If - y is omitted, it is calculated + class="parameter">y is the number of bits in the netmask. If + y is omitted, it is calculated using assumptions from the older classful network numbering system, except it will be at least large enough to include all of the octets written in the input. It is an error to specify a network address @@ -3530,7 +3530,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays
- <type>cidr</> Type Input Examples + <type>cidr</type> Type Input Examples @@ -3639,8 +3639,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays If you do not like the output format for inet or - cidr values, try the functions host, - text, and abbrev. + cidr values, try the functions host, + text, and abbrev. @@ -3658,24 +3658,24 @@ SELECT person.name, holidays.num_weeks FROM person, holidays - The macaddr type stores MAC addresses, known for example + The macaddr type stores MAC addresses, known for example from Ethernet card hardware addresses (although MAC addresses are used for other purposes as well). Input is accepted in the following formats: - '08:00:2b:01:02:03' - '08-00-2b-01-02-03' - '08002b:010203' - '08002b-010203' - '0800.2b01.0203' - '0800-2b01-0203' - '08002b010203' + '08:00:2b:01:02:03' + '08-00-2b-01-02-03' + '08002b:010203' + '08002b-010203' + '0800.2b01.0203' + '0800-2b01-0203' + '08002b010203' These examples would all specify the same address. Upper and lower case is accepted for the digits - a through f. Output is always in the + a through f. Output is always in the first of the forms shown. @@ -3708,7 +3708,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays - The macaddr8 type stores MAC addresses in EUI-64 + The macaddr8 type stores MAC addresses in EUI-64 format, known for example from Ethernet card hardware addresses (although MAC addresses are used for other purposes as well). This type can accept both 6 and 8 byte length MAC addresses @@ -3718,31 +3718,31 @@ SELECT person.name, holidays.num_weeks FROM person, holidays Note that IPv6 uses a modified EUI-64 format where the 7th bit should be set to one after the conversion from EUI-48. The - function macaddr8_set7bit is provided to make this + function macaddr8_set7bit is provided to make this change. Generally speaking, any input which is comprised of pairs of hex digits (on byte boundaries), optionally separated consistently by - one of ':', '-' or '.', is + one of ':', '-' or '.', is accepted. The number of hex digits must be either 16 (8 bytes) or 12 (6 bytes). Leading and trailing whitespace is ignored. The following are examples of input formats that are accepted: - '08:00:2b:01:02:03:04:05' - '08-00-2b-01-02-03-04-05' - '08002b:0102030405' - '08002b-0102030405' - '0800.2b01.0203.0405' - '0800-2b01-0203-0405' - '08002b01:02030405' - '08002b0102030405' + '08:00:2b:01:02:03:04:05' + '08-00-2b-01-02-03-04-05' + '08002b:0102030405' + '08002b-0102030405' + '0800.2b01.0203.0405' + '0800-2b01-0203-0405' + '08002b01:02030405' + '08002b0102030405' These examples would all specify the same address. Upper and lower case is accepted for the digits - a through f. Output is always in the + a through f. Output is always in the first of the forms shown. The last six input formats that are mentioned above are not part @@ -3750,7 +3750,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays To convert a traditional 48 bit MAC address in EUI-48 format to modified EUI-64 format to be included as the host portion of an - IPv6 address, use macaddr8_set7bit as shown: + IPv6 address, use macaddr8_set7bit as shown: SELECT macaddr8_set7bit('08:00:2b:01:02:03'); @@ -3798,12 +3798,12 @@ SELECT macaddr8_set7bit('08:00:2b:01:02:03'); If one explicitly casts a bit-string value to - bit(n), it will be truncated or - zero-padded on the right to be exactly n bits, + bit(n), it will be truncated or + zero-padded on the right to be exactly n bits, without raising an error. 
Similarly, if one explicitly casts a bit-string value to - bit varying(n), it will be truncated - on the right if it is more than n bits. + bit varying(n), it will be truncated + on the right if it is more than n bits. @@ -3860,8 +3860,8 @@ SELECT * FROM test; PostgreSQL provides two data types that are designed to support full text search, which is the activity of - searching through a collection of natural-language documents - to locate those that best match a query. + searching through a collection of natural-language documents + to locate those that best match a query. The tsvector type represents a document in a form optimized for text search; the tsquery type similarly represents a text query. @@ -3879,8 +3879,8 @@ SELECT * FROM test; A tsvector value is a sorted list of distinct - lexemes, which are words that have been - normalized to merge different variants of the same word + lexemes, which are words that have been + normalized to merge different variants of the same word (see for details). Sorting and duplicate-elimination are done automatically during input, as shown in this example: @@ -3913,7 +3913,7 @@ SELECT $$the lexeme 'Joe''s' contains a quote$$::tsvector; 'Joe''s' 'a' 'contains' 'lexeme' 'quote' 'the' - Optionally, integer positions + Optionally, integer positions can be attached to lexemes: @@ -3932,7 +3932,7 @@ SELECT 'a:1 fat:2 cat:3 sat:4 on:5 a:6 mat:7 and:8 ate:9 a:10 fat:11 rat:12'::ts Lexemes that have positions can further be labeled with a - weight, which can be A, + weight, which can be A, B, C, or D. D is the default and hence is not shown on output: @@ -3965,7 +3965,7 @@ SELECT 'The Fat Rats'::tsvector; For most English-text-searching applications the above words would be considered non-normalized, but tsvector doesn't care. Raw document text should usually be passed through - to_tsvector to normalize the words appropriately + to_tsvector to normalize the words appropriately for searching: @@ -3991,17 +3991,17 @@ SELECT to_tsvector('english', 'The Fat Rats'); A tsquery value stores lexemes that are to be searched for, and can combine them using the Boolean operators & (AND), | (OR), and - ! (NOT), as well as the phrase search operator - <-> (FOLLOWED BY). There is also a variant - <N> of the FOLLOWED BY - operator, where N is an integer constant that + ! (NOT), as well as the phrase search operator + <-> (FOLLOWED BY). There is also a variant + <N> of the FOLLOWED BY + operator, where N is an integer constant that specifies the distance between the two lexemes being searched - for. <-> is equivalent to <1>. + for. <-> is equivalent to <1>. Parentheses can be used to enforce grouping of these operators. - In the absence of parentheses, ! (NOT) binds most tightly, + In the absence of parentheses, ! (NOT) binds most tightly, <-> (FOLLOWED BY) next most tightly, then & (AND), with | (OR) binding the least tightly. @@ -4031,7 +4031,7 @@ SELECT 'fat & rat & ! cat'::tsquery; Optionally, lexemes in a tsquery can be labeled with one or more weight letters, which restricts them to match only - tsvector lexemes with one of those weights: + tsvector lexemes with one of those weights: SELECT 'fat:ab & cat'::tsquery; @@ -4042,7 +4042,7 @@ SELECT 'fat:ab & cat'::tsquery; - Also, lexemes in a tsquery can be labeled with * + Also, lexemes in a tsquery can be labeled with * to specify prefix matching: SELECT 'super:*'::tsquery; @@ -4050,15 +4050,15 @@ SELECT 'super:*'::tsquery; ----------- 'super':* - This query will match any word in a tsvector that begins - with super. 
+ This query will match any word in a tsvector that begins + with super. Quoting rules for lexemes are the same as described previously for - lexemes in tsvector; and, as with tsvector, + lexemes in tsvector; and, as with tsvector, any required normalization of words must be done before converting - to the tsquery type. The to_tsquery + to the tsquery type. The to_tsquery function is convenient for performing such normalization: @@ -4068,7 +4068,7 @@ SELECT to_tsquery('Fat:ab & Cats'); 'fat':AB & 'cat' - Note that to_tsquery will process prefixes in the same way + Note that to_tsquery will process prefixes in the same way as other words, which means this comparison returns true: @@ -4077,14 +4077,14 @@ SELECT to_tsvector( 'postgraduate' ) @@ to_tsquery( 'postgres:*' ); ---------- t - because postgres gets stemmed to postgr: + because postgres gets stemmed to postgr: SELECT to_tsvector( 'postgraduate' ), to_tsquery( 'postgres:*' ); to_tsvector | to_tsquery ---------------+------------ 'postgradu':1 | 'postgr':* - which will match the stemmed form of postgraduate. + which will match the stemmed form of postgraduate. @@ -4150,7 +4150,7 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11 - <acronym>XML</> Type + <acronym>XML</acronym> Type XML @@ -4163,7 +4163,7 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11 functions to perform type-safe operations on it; see . Use of this data type requires the installation to have been built with configure - --with-libxml. + --with-libxml. @@ -4311,7 +4311,7 @@ SET xmloption TO { DOCUMENT | CONTENT }; Some XML-related functions may not work at all on non-ASCII data when the server encoding is not UTF-8. This is known to be an - issue for xmltable() and xpath() in particular. + issue for xmltable() and xpath() in particular. @@ -4421,17 +4421,17 @@ SET xmloption TO { DOCUMENT | CONTENT }; system tables. OIDs are not added to user-created tables, unless WITH OIDS is specified when the table is created, or the - configuration variable is enabled. Type oid represents + configuration variable is enabled. Type oid represents an object identifier. There are also several alias types for - oid: regproc, regprocedure, - regoper, regoperator, regclass, - regtype, regrole, regnamespace, - regconfig, and regdictionary. + oid: regproc, regprocedure, + regoper, regoperator, regclass, + regtype, regrole, regnamespace, + regconfig, and regdictionary. shows an overview. - The oid type is currently implemented as an unsigned + The oid type is currently implemented as an unsigned four-byte integer. Therefore, it is not large enough to provide database-wide uniqueness in large databases, or even in large individual tables. So, using a user-created table's OID column as @@ -4440,7 +4440,7 @@ SET xmloption TO { DOCUMENT | CONTENT }; - The oid type itself has few operations beyond comparison. + The oid type itself has few operations beyond comparison. It can be cast to integer, however, and then manipulated using the standard integer operators. (Beware of possible signed-versus-unsigned confusion if you do this.) @@ -4450,10 +4450,10 @@ SET xmloption TO { DOCUMENT | CONTENT }; The OID alias types have no operations of their own except for specialized input and output routines. These routines are able to accept and display symbolic names for system objects, rather than - the raw numeric value that type oid would use. The alias + the raw numeric value that type oid would use. The alias types allow simplified lookup of OID values for objects. 
For example, - to examine the pg_attribute rows related to a table - mytable, one could write: + to examine the pg_attribute rows related to a table + mytable, one could write: SELECT * FROM pg_attribute WHERE attrelid = 'mytable'::regclass; @@ -4465,11 +4465,11 @@ SELECT * FROM pg_attribute While that doesn't look all that bad by itself, it's still oversimplified. A far more complicated sub-select would be needed to select the right OID if there are multiple tables named - mytable in different schemas. - The regclass input converter handles the table lookup according - to the schema search path setting, and so it does the right thing + mytable in different schemas. + The regclass input converter handles the table lookup according + to the schema search path setting, and so it does the right thing automatically. Similarly, casting a table's OID to - regclass is handy for symbolic display of a numeric OID. + regclass is handy for symbolic display of a numeric OID.
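For example (reusing the hypothetical table name mytable from above):

SELECT 'mytable'::regclass::oid;          -- look up the table's OID by name
SELECT attrelid::regclass, attname
  FROM pg_attribute
 WHERE attrelid = 'mytable'::regclass;    -- display the numeric OID symbolically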
@@ -4487,80 +4487,80 @@ SELECT * FROM pg_attribute - oid + oid any numeric object identifier - 564182 + 564182 - regproc - pg_proc + regproc + pg_proc function name - sum + sum - regprocedure - pg_proc + regprocedure + pg_proc function with argument types - sum(int4) + sum(int4) - regoper - pg_operator + regoper + pg_operator operator name - + + + - regoperator - pg_operator + regoperator + pg_operator operator with argument types - *(integer,integer) or -(NONE,integer) + *(integer,integer) or -(NONE,integer) - regclass - pg_class + regclass + pg_class relation name - pg_type + pg_type - regtype - pg_type + regtype + pg_type data type name - integer + integer - regrole - pg_authid + regrole + pg_authid role name - smithee + smithee - regnamespace - pg_namespace + regnamespace + pg_namespace namespace name - pg_catalog + pg_catalog - regconfig - pg_ts_config + regconfig + pg_ts_config text search configuration - english + english - regdictionary - pg_ts_dict + regdictionary + pg_ts_dict text search dictionary - simple + simple @@ -4571,11 +4571,11 @@ SELECT * FROM pg_attribute schema-qualified names, and will display schema-qualified names on output if the object would not be found in the current search path without being qualified. - The regproc and regoper alias types will only + The regproc and regoper alias types will only accept input names that are unique (not overloaded), so they are - of limited use; for most uses regprocedure or - regoperator are more appropriate. For regoperator, - unary operators are identified by writing NONE for the unused + of limited use; for most uses regprocedure or + regoperator are more appropriate. For regoperator, + unary operators are identified by writing NONE for the unused operand. @@ -4585,12 +4585,12 @@ SELECT * FROM pg_attribute constant of one of these types appears in a stored expression (such as a column default expression or view), it creates a dependency on the referenced object. For example, if a column has a default - expression nextval('my_seq'::regclass), + expression nextval('my_seq'::regclass), PostgreSQL understands that the default expression depends on the sequence - my_seq; the system will not let the sequence be dropped + my_seq; the system will not let the sequence be dropped without first removing the default expression. - regrole is the only exception for the property. Constants of this + regrole is the only exception for the property. Constants of this type are not allowed in such expressions. @@ -4603,21 +4603,21 @@ SELECT * FROM pg_attribute - Another identifier type used by the system is xid, or transaction - (abbreviated xact) identifier. This is the data type of the system columns - xmin and xmax. Transaction identifiers are 32-bit quantities. + Another identifier type used by the system is xid, or transaction + (abbreviated xact) identifier. This is the data type of the system columns + xmin and xmax. Transaction identifiers are 32-bit quantities. - A third identifier type used by the system is cid, or + A third identifier type used by the system is cid, or command identifier. This is the data type of the system columns - cmin and cmax. Command identifiers are also 32-bit quantities. + cmin and cmax. Command identifiers are also 32-bit quantities. - A final identifier type used by the system is tid, or tuple + A final identifier type used by the system is tid, or tuple identifier (row identifier). This is the data type of the system column - ctid. A tuple ID is a pair + ctid. 
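A few of the alias types from the table above in action (a sketch):

SELECT 'sum(int4)'::regprocedure;        -- sum(integer)
SELECT '-(NONE,integer)'::regoperator;   -- unary minus; NONE marks the unused operand
SELECT 'pg_catalog'::regnamespace, 'english'::regconfig;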
A tuple ID is a pair (block number, tuple index within block) that identifies the physical location of the row within its table. @@ -4646,7 +4646,7 @@ SELECT * FROM pg_attribute Internally, an LSN is a 64-bit integer, representing a byte position in the write-ahead log stream. It is printed as two hexadecimal numbers of up to 8 digits each, separated by a slash; for example, - 16/B374D848. The pg_lsn type supports the + 16/B374D848. The pg_lsn type supports the standard comparison operators, like = and >. Two LSNs can be subtracted using the - operator; the result is the number of bytes separating @@ -4736,7 +4736,7 @@ SELECT * FROM pg_attribute The PostgreSQL type system contains a number of special-purpose entries that are collectively called - pseudo-types. A pseudo-type cannot be used as a + pseudo-types. A pseudo-type cannot be used as a column data type, but it can be used to declare a function's argument or result type. Each of the available pseudo-types is useful in situations where a function's behavior does not @@ -4758,106 +4758,106 @@ SELECT * FROM pg_attribute - any + any Indicates that a function accepts any input data type. - anyelement + anyelement Indicates that a function accepts any data type (see ). - anyarray + anyarray Indicates that a function accepts any array data type (see ). - anynonarray + anynonarray Indicates that a function accepts any non-array data type (see ). - anyenum + anyenum Indicates that a function accepts any enum data type (see and ). - anyrange + anyrange Indicates that a function accepts any range data type (see and ). - cstring + cstring Indicates that a function accepts or returns a null-terminated C string. - internal + internal Indicates that a function accepts or returns a server-internal data type. - language_handler - A procedural language call handler is declared to return language_handler. + language_handler + A procedural language call handler is declared to return language_handler. - fdw_handler - A foreign-data wrapper handler is declared to return fdw_handler. + fdw_handler + A foreign-data wrapper handler is declared to return fdw_handler. - index_am_handler - An index access method handler is declared to return index_am_handler. + index_am_handler + An index access method handler is declared to return index_am_handler. - tsm_handler - A tablesample method handler is declared to return tsm_handler. + tsm_handler + A tablesample method handler is declared to return tsm_handler. - record + record Identifies a function taking or returning an unspecified row type. - trigger - A trigger function is declared to return trigger. + trigger + A trigger function is declared to return trigger. - event_trigger - An event trigger function is declared to return event_trigger. + event_trigger + An event trigger function is declared to return event_trigger. - pg_ddl_command + pg_ddl_command Identifies a representation of DDL commands that is available to event triggers. - void + void Indicates that a function returns no value. - unknown + unknown Identifies a not-yet-resolved type, e.g. of an undecorated string literal. - opaque + opaque An obsolete type name that formerly served many of the above purposes. @@ -4876,24 +4876,24 @@ SELECT * FROM pg_attribute Functions coded in procedural languages can use pseudo-types only as allowed by their implementation languages. 
At present most procedural languages forbid use of a pseudo-type as an argument type, and allow - only void and record as a result type (plus - trigger or event_trigger when the function is used + only void and record as a result type (plus + trigger or event_trigger when the function is used as a trigger or event trigger). Some also - support polymorphic functions using the types anyelement, - anyarray, anynonarray, anyenum, and - anyrange. + support polymorphic functions using the types anyelement, + anyarray, anynonarray, anyenum, and + anyrange. - The internal pseudo-type is used to declare functions + The internal pseudo-type is used to declare functions that are meant only to be called internally by the database system, and not by direct invocation in an SQL - query. If a function has at least one internal-type + query. If a function has at least one internal-type argument then it cannot be called from SQL. To preserve the type safety of this restriction it is important to follow this coding rule: do not create any function that is - declared to return internal unless it has at least one - internal argument. + declared to return internal unless it has at least one + internal argument. diff --git a/doc/src/sgml/datetime.sgml b/doc/src/sgml/datetime.sgml index ef9139f9e3..a533bbf8d2 100644 --- a/doc/src/sgml/datetime.sgml +++ b/doc/src/sgml/datetime.sgml @@ -37,18 +37,18 @@ - If the numeric token contains a colon (:), this is + If the numeric token contains a colon (:), this is a time string. Include all subsequent digits and colons. - If the numeric token contains a dash (-), slash - (/), or two or more dots (.), this is + If the numeric token contains a dash (-), slash + (/), or two or more dots (.), this is a date string which might have a text month. If a date token has already been seen, it is instead interpreted as a time zone - name (e.g., America/New_York). + name (e.g., America/New_York). @@ -63,8 +63,8 @@ - If the token starts with a plus (+) or minus - (-), then it is either a numeric time zone or a special + If the token starts with a plus (+) or minus + (-), then it is either a numeric time zone or a special field. @@ -114,7 +114,7 @@ and if no other date fields have been previously read, then interpret as a concatenated date (e.g., 19990118 or 990118). - The interpretation is YYYYMMDD or YYMMDD. + The interpretation is YYYYMMDD or YYMMDD. @@ -128,7 +128,7 @@ If four or six digits and a year has already been read, then - interpret as a time (HHMM or HHMMSS). + interpret as a time (HHMM or HHMMSS). @@ -143,7 +143,7 @@ Otherwise the date field ordering is assumed to follow the - DateStyle setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. + DateStyle setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. Throw an error if a month or day field is found to be out of range. @@ -167,7 +167,7 @@ Gregorian years AD 1-99 can be entered by using 4 digits with leading - zeros (e.g., 0099 is AD 99). + zeros (e.g., 0099 is AD 99). @@ -317,7 +317,7 @@ Ignored - JULIAN, JD, J + JULIAN, JD, J Next field is Julian Date @@ -354,23 +354,23 @@ can be altered by any database user, the possible values for it are under the control of the database administrator — they are in fact names of configuration files stored in - .../share/timezonesets/ of the installation directory. + .../share/timezonesets/ of the installation directory. By adding or altering files in that directory, the administrator can set local policy for timezone abbreviations. 
- timezone_abbreviations can be set to any file name - found in .../share/timezonesets/, if the file's name + timezone_abbreviations can be set to any file name + found in .../share/timezonesets/, if the file's name is entirely alphabetic. (The prohibition against non-alphabetic - characters in timezone_abbreviations prevents reading + characters in timezone_abbreviations prevents reading files outside the intended directory, as well as reading editor backup files and other extraneous files.) A timezone abbreviation file can contain blank lines and comments - beginning with #. Non-comment lines must have one of + beginning with #. Non-comment lines must have one of these formats: @@ -388,12 +388,12 @@ the equivalent offset in seconds from UTC, positive being east from Greenwich and negative being west. For example, -18000 would be five hours west of Greenwich, or North American east coast standard time. - D indicates that the zone name represents local + D indicates that the zone name represents local daylight-savings time rather than standard time. - Alternatively, a time_zone_name can be given, referencing + Alternatively, a time_zone_name can be given, referencing a zone name defined in the IANA timezone database. The zone's definition is consulted to see whether the abbreviation is or has been in use in that zone, and if so, the appropriate meaning is used — that is, @@ -417,34 +417,34 @@ - The @INCLUDE syntax allows inclusion of another file in the - .../share/timezonesets/ directory. Inclusion can be nested, + The @INCLUDE syntax allows inclusion of another file in the + .../share/timezonesets/ directory. Inclusion can be nested, to a limited depth. - The @OVERRIDE syntax indicates that subsequent entries in the + The @OVERRIDE syntax indicates that subsequent entries in the file can override previous entries (typically, entries obtained from included files). Without this, conflicting definitions of the same timezone abbreviation are considered an error. - In an unmodified installation, the file Default contains + In an unmodified installation, the file Default contains all the non-conflicting time zone abbreviations for most of the world. - Additional files Australia and India are + Additional files Australia and India are provided for those regions: these files first include the - Default file and then add or modify abbreviations as needed. + Default file and then add or modify abbreviations as needed. For reference purposes, a standard installation also contains files - Africa.txt, America.txt, etc, containing + Africa.txt, America.txt, etc, containing information about every time zone abbreviation known to be in use according to the IANA timezone database. The zone name definitions found in these files can be copied and pasted into a custom configuration file as needed. Note that these files cannot be directly - referenced as timezone_abbreviations settings, because of + referenced as timezone_abbreviations settings, because of the dot embedded in their names. @@ -460,16 +460,16 @@ Time zone abbreviations defined in the configuration file override non-timezone meanings built into PostgreSQL. - For example, the Australia configuration file defines - SAT (for South Australian Standard Time). When this - file is active, SAT will not be recognized as an abbreviation + For example, the Australia configuration file defines + SAT (for South Australian Standard Time). When this + file is active, SAT will not be recognized as an abbreviation for Saturday. 
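For example, switching abbreviation sets at the session level (a sketch; output is omitted since it depends on the session time zone):

SET timezone_abbreviations = 'Australia';
SELECT '2000-01-08 12:00 SAT'::timestamptz;   -- SAT now means South Australian Standard Time, not Saturday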
- If you modify files in .../share/timezonesets/, + If you modify files in .../share/timezonesets/, it is up to you to make backups — a normal database dump will not include this directory. @@ -492,10 +492,10 @@ datetime literal, the datetime values are constrained by the natural rules for dates and times according to the Gregorian calendar. - PostgreSQL follows the SQL + PostgreSQL follows the SQL standard's lead by counting dates exclusively in the Gregorian calendar, even for years before that calendar was in use. - This rule is known as the proleptic Gregorian calendar. + This rule is known as the proleptic Gregorian calendar. @@ -569,7 +569,7 @@ $ cal 9 1752 dominions, not other places. Since it would be difficult and confusing to try to track the actual calendars that were in use in various places at various times, - PostgreSQL does not try, but rather follows the Gregorian + PostgreSQL does not try, but rather follows the Gregorian calendar rules for all dates, even though this method is not historically accurate. @@ -597,7 +597,7 @@ $ cal 9 1752 and probably takes its name from Scaliger's father, the Italian scholar Julius Caesar Scaliger (1484-1558). In the Julian Date system, each day has a sequential number, starting - from JD 0 (which is sometimes called the Julian Date). + from JD 0 (which is sometimes called the Julian Date). JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or 24 November 4714 BC in the Gregorian calendar. Julian Date counting is most often used by astronomers for labeling their nightly observations, @@ -607,10 +607,10 @@ $ cal 9 1752 - Although PostgreSQL supports Julian Date notation for + Although PostgreSQL supports Julian Date notation for input and output of dates (and also uses Julian dates for some internal datetime calculations), it does not observe the nicety of having dates - run from noon to noon. PostgreSQL treats a Julian Date + run from noon to noon. PostgreSQL treats a Julian Date as running from midnight to midnight. diff --git a/doc/src/sgml/dblink.sgml b/doc/src/sgml/dblink.sgml index f19c6b19f5..1f17d3ad2d 100644 --- a/doc/src/sgml/dblink.sgml +++ b/doc/src/sgml/dblink.sgml @@ -8,8 +8,8 @@ - dblink is a module that supports connections to - other PostgreSQL databases from within a database + dblink is a module that supports connections to + other PostgreSQL databases from within a database session. @@ -44,9 +44,9 @@ dblink_connect(text connname, text connstr) returns text Description - dblink_connect() establishes a connection to a remote - PostgreSQL database. The server and database to - be contacted are identified through a standard libpq + dblink_connect() establishes a connection to a remote + PostgreSQL database. The server and database to + be contacted are identified through a standard libpq connection string. Optionally, a name can be assigned to the connection. Multiple named connections can be open at once, but only one unnamed connection is permitted at a time. The connection @@ -81,9 +81,9 @@ dblink_connect(text connname, text connstr) returns text connstr - libpq-style connection info string, for example + libpq-style connection info string, for example hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres - password=mypasswd. + password=mypasswd. For details see . Alternatively, the name of a foreign server. 
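For example, using a connection string of the form shown above (host, database, user, and password are placeholders):

SELECT dblink_connect('myconn',
    'hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres password=mypasswd');
-- returns 'OK' on success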
@@ -96,7 +96,7 @@ dblink_connect(text connname, text connstr) returns text Return Value - Returns status, which is always OK (since any error + Returns status, which is always OK (since any error causes the function to throw an error instead of returning). @@ -105,15 +105,15 @@ dblink_connect(text connname, text connstr) returns text Notes - Only superusers may use dblink_connect to create + Only superusers may use dblink_connect to create non-password-authenticated connections. If non-superusers need this - capability, use dblink_connect_u instead. + capability, use dblink_connect_u instead. It is unwise to choose connection names that contain equal signs, as this opens a risk of confusion with connection info strings - in other dblink functions. + in other dblink functions. @@ -208,8 +208,8 @@ dblink_connect_u(text connname, text connstr) returns text Description - dblink_connect_u() is identical to - dblink_connect(), except that it will allow non-superusers + dblink_connect_u() is identical to + dblink_connect(), except that it will allow non-superusers to connect using any authentication method. @@ -217,24 +217,24 @@ dblink_connect_u(text connname, text connstr) returns text If the remote server selects an authentication method that does not involve a password, then impersonation and subsequent escalation of privileges can occur, because the session will appear to have - originated from the user as which the local PostgreSQL + originated from the user as which the local PostgreSQL server runs. Also, even if the remote server does demand a password, it is possible for the password to be supplied from the server - environment, such as a ~/.pgpass file belonging to the + environment, such as a ~/.pgpass file belonging to the server's user. This opens not only a risk of impersonation, but the possibility of exposing a password to an untrustworthy remote server. - Therefore, dblink_connect_u() is initially - installed with all privileges revoked from PUBLIC, + Therefore, dblink_connect_u() is initially + installed with all privileges revoked from PUBLIC, making it un-callable except by superusers. In some situations - it may be appropriate to grant EXECUTE permission for - dblink_connect_u() to specific users who are considered + it may be appropriate to grant EXECUTE permission for + dblink_connect_u() to specific users who are considered trustworthy, but this should be done with care. It is also recommended - that any ~/.pgpass file belonging to the server's user - not contain any records specifying a wildcard host name. + that any ~/.pgpass file belonging to the server's user + not contain any records specifying a wildcard host name. - For further details see dblink_connect(). + For further details see dblink_connect(). @@ -265,8 +265,8 @@ dblink_disconnect(text connname) returns text Description - dblink_disconnect() closes a connection previously opened - by dblink_connect(). The form with no arguments closes + dblink_disconnect() closes a connection previously opened + by dblink_connect(). The form with no arguments closes an unnamed connection. @@ -290,7 +290,7 @@ dblink_disconnect(text connname) returns text Return Value - Returns status, which is always OK (since any error + Returns status, which is always OK (since any error causes the function to throw an error instead of returning). 
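 A minimal sketch of the connect/disconnect lifecycle described above; the connection name myconn and the libpq connection-string values are placeholders, not part of the module's API:

    -- Open a named connection via a libpq connection string.
    SELECT dblink_connect('myconn', 'hostaddr=127.0.0.1 port=5432 dbname=mydb');

    -- ... issue queries or commands over 'myconn' here ...

    -- Close the named connection again; returns 'OK'.
    SELECT dblink_disconnect('myconn');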
@@ -341,15 +341,15 @@ dblink(text sql [, bool fail_on_error]) returns setof record Description - dblink executes a query (usually a SELECT, + dblink executes a query (usually a SELECT, but it can be any SQL statement that returns rows) in a remote database. - When two text arguments are given, the first one is first + When two text arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument - is treated as a connection info string as for dblink_connect, + is treated as a connection info string as for dblink_connect, and the indicated connection is made just for the duration of this command. @@ -373,7 +373,7 @@ dblink(text sql [, bool fail_on_error]) returns setof record A connection info string, as previously described for - dblink_connect. + dblink_connect. @@ -383,7 +383,7 @@ dblink(text sql [, bool fail_on_error]) returns setof record The SQL query that you wish to execute in the remote database, - for example select * from foo. + for example select * from foo. @@ -407,11 +407,11 @@ dblink(text sql [, bool fail_on_error]) returns setof record The function returns the row(s) produced by the query. Since - dblink can be used with any query, it is declared - to return record, rather than specifying any particular + dblink can be used with any query, it is declared + to return record, rather than specifying any particular set of columns. This means that you must specify the expected set of columns in the calling query — otherwise - PostgreSQL would not know what to expect. + PostgreSQL would not know what to expect. Here is an example: @@ -421,20 +421,20 @@ SELECT * WHERE proname LIKE 'bytea%'; - The alias part of the FROM clause must + The alias part of the FROM clause must specify the column names and types that the function will return. (Specifying column names in an alias is actually standard SQL - syntax, but specifying column types is a PostgreSQL + syntax, but specifying column types is a PostgreSQL extension.) This allows the system to understand what - * should expand to, and what proname - in the WHERE clause refers to, in advance of trying + * should expand to, and what proname + in the WHERE clause refers to, in advance of trying to execute the function. At run time, an error will be thrown if the actual query result from the remote database does not - have the same number of columns shown in the FROM clause. - The column names need not match, however, and dblink + have the same number of columns shown in the FROM clause. + The column names need not match, however, and dblink does not insist on exact type matches either. It will succeed so long as the returned data strings are valid input for the - column type declared in the FROM clause. + column type declared in the FROM clause. @@ -442,7 +442,7 @@ SELECT * Notes - A convenient way to use dblink with predetermined + A convenient way to use dblink with predetermined queries is to create a view. This allows the column type information to be buried in the view, instead of having to spell it out in every query. For example, @@ -559,15 +559,15 @@ dblink_exec(text sql [, bool fail_on_error]) returns text Description - dblink_exec executes a command (that is, any SQL statement + dblink_exec executes a command (that is, any SQL statement that doesn't return rows) in a remote database. 
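 For instance, a command that returns no rows can be run like this (the connection name, connection string, and table foo are illustrative only):

    -- dblink_exec returns the remote command's status string.
    SELECT dblink_connect('myconn', 'dbname=mydb');
    SELECT dblink_exec('myconn', 'CREATE TABLE foo (f1 int, f2 text)');
    SELECT dblink_exec('myconn', 'INSERT INTO foo VALUES (0, ''a'')');
    SELECT dblink_disconnect('myconn');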
- When two text arguments are given, the first one is first + When two text arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument - is treated as a connection info string as for dblink_connect, + is treated as a connection info string as for dblink_connect, and the indicated connection is made just for the duration of this command. @@ -591,7 +591,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text A connection info string, as previously described for - dblink_connect. + dblink_connect. @@ -602,7 +602,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text The SQL command that you wish to execute in the remote database, for example - insert into foo values(0,'a','{"a0","b0","c0"}'). + insert into foo values(0,'a','{"a0","b0","c0"}'). @@ -614,7 +614,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to ERROR. + and the function's return value is set to ERROR. @@ -625,7 +625,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text Return Value - Returns status, either the command's status string or ERROR. + Returns status, either the command's status string or ERROR. @@ -695,9 +695,9 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret Description - dblink_open() opens a cursor in a remote database. + dblink_open() opens a cursor in a remote database. The cursor can subsequently be manipulated with - dblink_fetch() and dblink_close(). + dblink_fetch() and dblink_close(). @@ -728,8 +728,8 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret sql - The SELECT statement that you wish to execute in the remote - database, for example select * from pg_class. + The SELECT statement that you wish to execute in the remote + database, for example select * from pg_class. @@ -741,7 +741,7 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to ERROR. + and the function's return value is set to ERROR. @@ -752,7 +752,7 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret Return Value - Returns status, either OK or ERROR. + Returns status, either OK or ERROR. @@ -761,16 +761,16 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret Since a cursor can only persist within a transaction, - dblink_open starts an explicit transaction block - (BEGIN) on the remote side, if the remote side was + dblink_open starts an explicit transaction block + (BEGIN) on the remote side, if the remote side was not already within a transaction. This transaction will be - closed again when the matching dblink_close is + closed again when the matching dblink_close is executed. Note that if - you use dblink_exec to change data between - dblink_open and dblink_close, - and then an error occurs or you use dblink_disconnect before - dblink_close, your change will be - lost because the transaction will be aborted. 
+ you use dblink_exec to change data between + dblink_open and dblink_close, + and then an error occurs or you use dblink_disconnect before + dblink_close, your change will be + lost because the transaction will be aborted. @@ -819,8 +819,8 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) Description - dblink_fetch fetches rows from a cursor previously - established by dblink_open. + dblink_fetch fetches rows from a cursor previously + established by dblink_open. @@ -851,7 +851,7 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) howmany - The maximum number of rows to retrieve. The next howmany + The maximum number of rows to retrieve. The next howmany rows are fetched, starting at the current cursor position, moving forward. Once the cursor has reached its end, no more rows are produced. @@ -878,7 +878,7 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) The function returns the row(s) fetched from the cursor. To use this function, you will need to specify the expected set of columns, - as previously discussed for dblink. + as previously discussed for dblink. @@ -887,11 +887,11 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) On a mismatch between the number of return columns specified in the - FROM clause, and the actual number of columns returned by the + FROM clause, and the actual number of columns returned by the remote cursor, an error will be thrown. In this event, the remote cursor is still advanced by as many rows as it would have been if the error had not occurred. The same is true for any other error occurring in the local - query after the remote FETCH has been done. + query after the remote FETCH has been done. @@ -972,8 +972,8 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text Description - dblink_close closes a cursor previously opened with - dblink_open. + dblink_close closes a cursor previously opened with + dblink_open. @@ -1007,7 +1007,7 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to ERROR. + and the function's return value is set to ERROR. @@ -1018,7 +1018,7 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text Return Value - Returns status, either OK or ERROR. + Returns status, either OK or ERROR. @@ -1026,9 +1026,9 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text Notes - If dblink_open started an explicit transaction block, + If dblink_open started an explicit transaction block, and this is the last remaining open cursor in this connection, - dblink_close will issue the matching COMMIT. + dblink_close will issue the matching COMMIT. @@ -1082,8 +1082,8 @@ dblink_get_connections() returns text[] Description - dblink_get_connections returns an array of the names - of all open named dblink connections. + dblink_get_connections returns an array of the names + of all open named dblink connections. @@ -1127,7 +1127,7 @@ dblink_error_message(text connname) returns text Description - dblink_error_message fetches the most recent remote + dblink_error_message fetches the most recent remote error message for a given connection. 
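 Putting the cursor functions together, a hedged sketch of the open/fetch/close cycle (connection and cursor names are placeholders; the remote query and the required column alias follow the rules discussed above for dblink):

    SELECT dblink_connect('myconn', 'dbname=mydb');
    SELECT dblink_open('myconn', 'mycur', 'SELECT proname, prosrc FROM pg_proc');

    -- Fetch a manageable batch at a time; the alias declares the result shape.
    SELECT * FROM dblink_fetch('myconn', 'mycur', 5) AS t(proname name, prosrc text);
    SELECT * FROM dblink_fetch('myconn', 'mycur', 5) AS t(proname name, prosrc text);

    -- Closing the cursor also issues the matching COMMIT if dblink_open
    -- started the remote transaction block.
    SELECT dblink_close('myconn', 'mycur');
    SELECT dblink_disconnect('myconn');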
@@ -1190,7 +1190,7 @@ dblink_send_query(text connname, text sql) returns int Description - dblink_send_query sends a query to be executed + dblink_send_query sends a query to be executed asynchronously, that is, without immediately waiting for the result. There must not be an async query already in progress on the connection. @@ -1198,10 +1198,10 @@ dblink_send_query(text connname, text sql) returns int After successfully dispatching an async query, completion status - can be checked with dblink_is_busy, and the results - are ultimately collected with dblink_get_result. + can be checked with dblink_is_busy, and the results + are ultimately collected with dblink_get_result. It is also possible to attempt to cancel an active async query - using dblink_cancel_query. + using dblink_cancel_query. @@ -1223,7 +1223,7 @@ dblink_send_query(text connname, text sql) returns int The SQL statement that you wish to execute in the remote database, - for example select * from pg_class. + for example select * from pg_class. @@ -1272,7 +1272,7 @@ dblink_is_busy(text connname) returns int Description - dblink_is_busy tests whether an async query is in progress. + dblink_is_busy tests whether an async query is in progress. @@ -1297,7 +1297,7 @@ dblink_is_busy(text connname) returns int Returns 1 if connection is busy, 0 if it is not busy. If this function returns 0, it is guaranteed that - dblink_get_result will not block. + dblink_get_result will not block. @@ -1336,10 +1336,10 @@ dblink_get_notify(text connname) returns setof (notify_name text, be_pid int, ex Description - dblink_get_notify retrieves notifications on either + dblink_get_notify retrieves notifications on either the unnamed connection, or on a named connection if specified. - To receive notifications via dblink, LISTEN must - first be issued, using dblink_exec. + To receive notifications via dblink, LISTEN must + first be issued, using dblink_exec. For details see and . @@ -1417,9 +1417,9 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record Description - dblink_get_result collects the results of an - asynchronous query previously sent with dblink_send_query. - If the query is not already completed, dblink_get_result + dblink_get_result collects the results of an + asynchronous query previously sent with dblink_send_query. + If the query is not already completed, dblink_get_result will wait until it is. @@ -1458,14 +1458,14 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record For an async query (that is, a SQL statement returning rows), the function returns the row(s) produced by the query. To use this function, you will need to specify the expected set of columns, - as previously discussed for dblink. + as previously discussed for dblink. For an async command (that is, a SQL statement not returning rows), the function returns a single row with a single text column containing the command's status string. It is still necessary to specify that - the result will have a single text column in the calling FROM + the result will have a single text column in the calling FROM clause. @@ -1474,22 +1474,22 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record Notes - This function must be called if - dblink_send_query returned 1. + This function must be called if + dblink_send_query returned 1. It must be called once for each query sent, and one additional time to obtain an empty set result, before the connection can be used again. 
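 A short sketch of the asynchronous pattern (names are placeholders; note the extra dblink_get_result call that drains the final empty result, as required above):

    SELECT dblink_connect('myconn', 'dbname=mydb');
    SELECT dblink_send_query('myconn', 'SELECT proname FROM pg_proc LIMIT 3');  -- returns 1 on dispatch

    SELECT dblink_is_busy('myconn');   -- 1 while running, 0 once the result is ready

    -- Collect the rows, then call once more to obtain the empty result set.
    SELECT * FROM dblink_get_result('myconn') AS t(proname name);
    SELECT * FROM dblink_get_result('myconn') AS t(proname name);
    SELECT dblink_disconnect('myconn');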
- When using dblink_send_query and - dblink_get_result, dblink fetches the entire + When using dblink_send_query and + dblink_get_result, dblink fetches the entire remote query result before returning any of it to the local query processor. If the query returns a large number of rows, this can result in transient memory bloat in the local session. It may be better to open - such a query as a cursor with dblink_open and then fetch a + such a query as a cursor with dblink_open and then fetch a manageable number of rows at a time. Alternatively, use plain - dblink(), which avoids memory bloat by spooling large result + dblink(), which avoids memory bloat by spooling large result sets to disk. @@ -1581,13 +1581,13 @@ dblink_cancel_query(text connname) returns text Description - dblink_cancel_query attempts to cancel any query that + dblink_cancel_query attempts to cancel any query that is in progress on the named connection. Note that this is not certain to succeed (since, for example, the remote query might already have finished). A cancel request simply improves the odds that the query will fail soon. You must still complete the normal query protocol, for example by calling - dblink_get_result. + dblink_get_result. @@ -1610,7 +1610,7 @@ dblink_cancel_query(text connname) returns text Return Value - Returns OK if the cancel request has been sent, or + Returns OK if the cancel request has been sent, or the text of an error message on failure. @@ -1651,7 +1651,7 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results Description - dblink_get_pkey provides information about the primary + dblink_get_pkey provides information about the primary key of a relation in the local database. This is sometimes useful in generating queries to be sent to remote databases. @@ -1665,10 +1665,10 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results relname - Name of a local relation, for example foo or - myschema.mytab. Include double quotes if the + Name of a local relation, for example foo or + myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for - example "FooBar"; without quotes, the string + example "FooBar"; without quotes, the string will be folded to lower case. @@ -1687,7 +1687,7 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results CREATE TYPE dblink_pkey_results AS (position int, colname text); - The position column simply runs from 1 to N; + The position column simply runs from 1 to N; it is the number of the field within the primary key, not the number within the table's columns. @@ -1748,10 +1748,10 @@ dblink_build_sql_insert(text relname, Description - dblink_build_sql_insert can be useful in doing selective + dblink_build_sql_insert can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL - INSERT command that will duplicate that row, but with + INSERT command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for the last two arguments.) @@ -1766,10 +1766,10 @@ dblink_build_sql_insert(text relname, relname - Name of a local relation, for example foo or - myschema.mytab. Include double quotes if the + Name of a local relation, for example foo or + myschema.mytab. 
Include double quotes if the name is mixed-case or contains special characters, for - example "FooBar"; without quotes, the string + example "FooBar"; without quotes, the string will be folded to lower case. @@ -1780,7 +1780,7 @@ dblink_build_sql_insert(text relname, Attribute numbers (1-based) of the primary key fields, - for example 1 2. + for example 1 2. @@ -1811,7 +1811,7 @@ dblink_build_sql_insert(text relname, Values of the primary key fields to be placed in the resulting - INSERT command. Each field is represented in text form. + INSERT command. Each field is represented in text form. @@ -1828,10 +1828,10 @@ dblink_build_sql_insert(text relname, Notes - As of PostgreSQL 9.0, the attribute numbers in + As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in - SELECT * FROM relname. Previous versions interpreted the + SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. @@ -1881,9 +1881,9 @@ dblink_build_sql_delete(text relname, Description - dblink_build_sql_delete can be useful in doing selective + dblink_build_sql_delete can be useful in doing selective replication of a local table to a remote database. It builds a SQL - DELETE command that will delete the row with the given + DELETE command that will delete the row with the given primary key values. @@ -1896,10 +1896,10 @@ dblink_build_sql_delete(text relname, relname - Name of a local relation, for example foo or - myschema.mytab. Include double quotes if the + Name of a local relation, for example foo or + myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for - example "FooBar"; without quotes, the string + example "FooBar"; without quotes, the string will be folded to lower case. @@ -1910,7 +1910,7 @@ dblink_build_sql_delete(text relname, Attribute numbers (1-based) of the primary key fields, - for example 1 2. + for example 1 2. @@ -1929,7 +1929,7 @@ dblink_build_sql_delete(text relname, Values of the primary key fields to be used in the resulting - DELETE command. Each field is represented in text form. + DELETE command. Each field is represented in text form. @@ -1946,10 +1946,10 @@ dblink_build_sql_delete(text relname, Notes - As of PostgreSQL 9.0, the attribute numbers in + As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in - SELECT * FROM relname. Previous versions interpreted the + SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. @@ -2000,15 +2000,15 @@ dblink_build_sql_update(text relname, Description - dblink_build_sql_update can be useful in doing selective + dblink_build_sql_update can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL - UPDATE command that will duplicate that row, but with + UPDATE command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for - the last two arguments.) 
The UPDATE command always assigns + the last two arguments.) The UPDATE command always assigns all fields of the row — the main difference between this and - dblink_build_sql_insert is that it's assumed that + dblink_build_sql_insert is that it's assumed that the target row already exists in the remote table. @@ -2021,10 +2021,10 @@ dblink_build_sql_update(text relname, relname - Name of a local relation, for example foo or - myschema.mytab. Include double quotes if the + Name of a local relation, for example foo or + myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for - example "FooBar"; without quotes, the string + example "FooBar"; without quotes, the string will be folded to lower case. @@ -2035,7 +2035,7 @@ dblink_build_sql_update(text relname, Attribute numbers (1-based) of the primary key fields, - for example 1 2. + for example 1 2. @@ -2066,7 +2066,7 @@ dblink_build_sql_update(text relname, Values of the primary key fields to be placed in the resulting - UPDATE command. Each field is represented in text form. + UPDATE command. Each field is represented in text form. @@ -2083,10 +2083,10 @@ dblink_build_sql_update(text relname, Notes - As of PostgreSQL 9.0, the attribute numbers in + As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in - SELECT * FROM relname. Previous versions interpreted the + SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index b05a9c2150..817db92af2 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -149,7 +149,7 @@ DROP TABLE products; Nevertheless, it is common in SQL script files to unconditionally try to drop each table before creating it, ignoring any error messages, so that the script works whether or not the table exists. - (If you like, you can use the DROP TABLE IF EXISTS variant + (If you like, you can use the DROP TABLE IF EXISTS variant to avoid the error messages, but this is not standard SQL.) @@ -207,9 +207,9 @@ CREATE TABLE products ( The default value can be an expression, which will be evaluated whenever the default value is inserted (not when the table is created). A common example - is for a timestamp column to have a default of CURRENT_TIMESTAMP, + is for a timestamp column to have a default of CURRENT_TIMESTAMP, so that it gets set to the time of row insertion. Another common - example is generating a serial number for each row. + example is generating a serial number for each row. In PostgreSQL this is typically done by something like: @@ -218,8 +218,8 @@ CREATE TABLE products ( ... ); - where the nextval() function supplies successive values - from a sequence object (see nextval() function supplies successive values + from a sequence object (see ). This arrangement is sufficiently common that there's a special shorthand for it: @@ -228,7 +228,7 @@ CREATE TABLE products ( ... ); - The SERIAL shorthand is discussed further in SERIAL shorthand is discussed further in . 
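 The default-value mechanisms above can be combined in one table definition; the sequence and table names here are illustrative:

    CREATE SEQUENCE products_product_no_seq;

    CREATE TABLE products (
        -- sequence-backed default; the SERIAL shorthand expands to roughly this
        product_no integer DEFAULT nextval('products_product_no_seq'),
        name       text,
        -- evaluated at insertion time, not at table creation time
        created_at timestamp DEFAULT CURRENT_TIMESTAMP
    );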
@@ -385,7 +385,7 @@ CREATE TABLE products ( CHECK (price > 0), discounted_price numeric, CHECK (discounted_price > 0), - CONSTRAINT valid_discount CHECK (price > discounted_price) + CONSTRAINT valid_discount CHECK (price > discounted_price) ); @@ -623,7 +623,7 @@ CREATE TABLE example ( Adding a primary key will automatically create a unique B-tree index on the column or group of columns listed in the primary key, and will - force the column(s) to be marked NOT NULL. + force the column(s) to be marked NOT NULL. @@ -828,7 +828,7 @@ CREATE TABLE order_items ( (The essential difference between these two choices is that NO ACTION allows the check to be deferred until later in the transaction, whereas RESTRICT does not.) - CASCADE specifies that when a referenced row is deleted, + CASCADE specifies that when a referenced row is deleted, row(s) referencing it should be automatically deleted as well. There are two other options: SET NULL and SET DEFAULT. @@ -845,19 +845,19 @@ CREATE TABLE order_items ( Analogous to ON DELETE there is also ON UPDATE which is invoked when a referenced column is changed (updated). The possible actions are the same. - In this case, CASCADE means that the updated values of the + In this case, CASCADE means that the updated values of the referenced column(s) should be copied into the referencing row(s). Normally, a referencing row need not satisfy the foreign key constraint - if any of its referencing columns are null. If MATCH FULL + if any of its referencing columns are null. If MATCH FULL is added to the foreign key declaration, a referencing row escapes satisfying the constraint only if all its referencing columns are null (so a mix of null and non-null values is guaranteed to fail a - MATCH FULL constraint). If you don't want referencing rows + MATCH FULL constraint). If you don't want referencing rows to be able to avoid satisfying the foreign key constraint, declare the - referencing column(s) as NOT NULL. + referencing column(s) as NOT NULL. @@ -909,7 +909,7 @@ CREATE TABLE circles ( See also CREATE - TABLE ... CONSTRAINT ... EXCLUDE for details. + TABLE ... CONSTRAINT ... EXCLUDE for details. @@ -923,7 +923,7 @@ CREATE TABLE circles ( System Columns - Every table has several system columns that are + Every table has several system columns that are implicitly defined by the system. Therefore, these names cannot be used as names of user-defined columns. (Note that these restrictions are separate from whether the name is a key word or @@ -939,7 +939,7 @@ CREATE TABLE circles ( - oid + oid @@ -957,7 +957,7 @@ CREATE TABLE circles ( - tableoid + tableoid tableoid @@ -976,7 +976,7 @@ CREATE TABLE circles ( - xmin + xmin xmin @@ -992,7 +992,7 @@ CREATE TABLE circles ( - cmin + cmin cmin @@ -1006,7 +1006,7 @@ CREATE TABLE circles ( - xmax + xmax xmax @@ -1023,7 +1023,7 @@ CREATE TABLE circles ( - cmax + cmax cmax @@ -1036,7 +1036,7 @@ CREATE TABLE circles ( - ctid + ctid ctid @@ -1047,7 +1047,7 @@ CREATE TABLE circles ( although the ctid can be used to locate the row version very quickly, a row's ctid will change if it is - updated or moved by VACUUM FULL. Therefore + updated or moved by VACUUM FULL. Therefore ctid is useless as a long-term row identifier. The OID, or even better a user-defined serial number, should be used to identify logical rows. @@ -1074,7 +1074,7 @@ CREATE TABLE circles ( a unique constraint (or unique index) exists, the system takes care not to generate an OID matching an already-existing row. 
(Of course, this is only possible if the table contains fewer - than 232 (4 billion) rows, and in practice the + than 232 (4 billion) rows, and in practice the table size had better be much less than that, or performance might suffer.) @@ -1082,7 +1082,7 @@ CREATE TABLE circles ( OIDs should never be assumed to be unique across tables; use - the combination of tableoid and row OID if you + the combination of tableoid and row OID if you need a database-wide identifier. @@ -1090,7 +1090,7 @@ CREATE TABLE circles ( Of course, the tables in question must be created WITH OIDS. As of PostgreSQL 8.1, - WITHOUT OIDS is the default. + WITHOUT OIDS is the default. @@ -1107,7 +1107,7 @@ CREATE TABLE circles ( Command identifiers are also 32-bit quantities. This creates a hard limit - of 232 (4 billion) SQL commands + of 232 (4 billion) SQL commands within a single transaction. In practice this limit is not a problem — note that the limit is on the number of SQL commands, not the number of rows processed. @@ -1186,7 +1186,7 @@ CREATE TABLE circles ( ALTER TABLE products ADD COLUMN description text; The new column is initially filled with whatever default - value is given (null if you don't specify a DEFAULT clause). + value is given (null if you don't specify a DEFAULT clause). @@ -1196,9 +1196,9 @@ ALTER TABLE products ADD COLUMN description text; ALTER TABLE products ADD COLUMN description text CHECK (description <> ''); In fact all the options that can be applied to a column description - in CREATE TABLE can be used here. Keep in mind however + in CREATE TABLE can be used here. Keep in mind however that the default value must satisfy the given constraints, or the - ADD will fail. Alternatively, you can add + ADD will fail. Alternatively, you can add constraints later (see below) after you've filled in the new column correctly. @@ -1210,7 +1210,7 @@ ALTER TABLE products ADD COLUMN description text CHECK (description <> '') specified, PostgreSQL is able to avoid the physical update. So if you intend to fill the column with mostly nondefault values, it's best to add the column with no default, - insert the correct values using UPDATE, and then add any + insert the correct values using UPDATE, and then add any desired default as described below. @@ -1234,7 +1234,7 @@ ALTER TABLE products DROP COLUMN description; foreign key constraint of another table, PostgreSQL will not silently drop that constraint. You can authorize dropping everything that depends on - the column by adding CASCADE: + the column by adding CASCADE: ALTER TABLE products DROP COLUMN description CASCADE; @@ -1290,13 +1290,13 @@ ALTER TABLE products ALTER COLUMN product_no SET NOT NULL; ALTER TABLE products DROP CONSTRAINT some_name; - (If you are dealing with a generated constraint name like $2, + (If you are dealing with a generated constraint name like $2, don't forget that you'll need to double-quote it to make it a valid identifier.) - As with dropping a column, you need to add CASCADE if you + As with dropping a column, you need to add CASCADE if you want to drop a constraint that something else depends on. An example is that a foreign key constraint depends on a unique or primary key constraint on the referenced column(s). @@ -1326,7 +1326,7 @@ ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL; ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77; Note that this doesn't affect any existing rows in the table, it - just changes the default for future INSERT commands. 
+ just changes the default for future INSERT commands. @@ -1356,12 +1356,12 @@ ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2); This will succeed only if each existing entry in the column can be converted to the new type by an implicit cast. If a more complex - conversion is needed, you can add a USING clause that + conversion is needed, you can add a USING clause that specifies how to compute the new values from the old. - PostgreSQL will attempt to convert the column's + PostgreSQL will attempt to convert the column's default value (if any) to the new type, as well as any constraints that involve the column. But these conversions might fail, or might produce surprising results. It's often best to drop any constraints @@ -1437,11 +1437,11 @@ ALTER TABLE products RENAME TO items; - There are different kinds of privileges: SELECT, - INSERT, UPDATE, DELETE, - TRUNCATE, REFERENCES, TRIGGER, - CREATE, CONNECT, TEMPORARY, - EXECUTE, and USAGE. + There are different kinds of privileges: SELECT, + INSERT, UPDATE, DELETE, + TRUNCATE, REFERENCES, TRIGGER, + CREATE, CONNECT, TEMPORARY, + EXECUTE, and USAGE. The privileges applicable to a particular object vary depending on the object's type (table, function, etc). For complete information on the different types of privileges @@ -1480,7 +1480,7 @@ GRANT UPDATE ON accounts TO joe; The special role name PUBLIC can be used to grant a privilege to every role on the system. Also, - group roles can be set up to help manage privileges when + group roles can be set up to help manage privileges when there are many users of a database — for details see . @@ -1492,7 +1492,7 @@ GRANT UPDATE ON accounts TO joe; REVOKE ALL ON accounts FROM PUBLIC; The special privileges of the object owner (i.e., the right to do - DROP, GRANT, REVOKE, etc.) + DROP, GRANT, REVOKE, etc.) are always implicit in being the owner, and cannot be granted or revoked. But the object owner can choose to revoke their own ordinary privileges, for example to make a @@ -1502,7 +1502,7 @@ REVOKE ALL ON accounts FROM PUBLIC; Ordinarily, only the object's owner (or a superuser) can grant or revoke privileges on an object. However, it is possible to grant a - privilege with grant option, which gives the recipient + privilege with grant option, which gives the recipient the right to grant it in turn to others. If the grant option is subsequently revoked then all who received the privilege from that recipient (directly or through a chain of grants) will lose the @@ -1525,10 +1525,10 @@ REVOKE ALL ON accounts FROM PUBLIC; In addition to the SQL-standard privilege system available through , - tables can have row security policies that restrict, + tables can have row security policies that restrict, on a per-user basis, which rows can be returned by normal queries or inserted, updated, or deleted by data modification commands. - This feature is also known as Row-Level Security. + This feature is also known as Row-Level Security. By default, tables do not have any policies, so that if a user has access privileges to a table according to the SQL privilege system, all rows within it are equally available for querying or updating. @@ -1537,20 +1537,20 @@ REVOKE ALL ON accounts FROM PUBLIC; When row security is enabled on a table (with ALTER TABLE ... ENABLE ROW LEVEL - SECURITY), all normal access to the table for selecting rows or + SECURITY), all normal access to the table for selecting rows or modifying rows must be allowed by a row security policy. 
(However, the table's owner is typically not subject to row security policies.) If no policy exists for the table, a default-deny policy is used, meaning that no rows are visible or can be modified. Operations that apply to the - whole table, such as TRUNCATE and REFERENCES, + whole table, such as TRUNCATE and REFERENCES, are not subject to row security. Row security policies can be specific to commands, or to roles, or to both. A policy can be specified to apply to ALL - commands, or to SELECT, INSERT, UPDATE, - or DELETE. Multiple roles can be assigned to a given + commands, or to SELECT, INSERT, UPDATE, + or DELETE. Multiple roles can be assigned to a given policy, and normal role membership and inheritance rules apply. @@ -1562,7 +1562,7 @@ REVOKE ALL ON accounts FROM PUBLIC; rule are leakproof functions, which are guaranteed to not leak information; the optimizer may choose to apply such functions ahead of the row-security check.) Rows for which the expression does - not return true will not be processed. Separate expressions + not return true will not be processed. Separate expressions may be specified to provide independent control over the rows which are visible and the rows which are allowed to be modified. Policy expressions are run as part of the query and with the privileges of the @@ -1571,11 +1571,11 @@ REVOKE ALL ON accounts FROM PUBLIC; - Superusers and roles with the BYPASSRLS attribute always + Superusers and roles with the BYPASSRLS attribute always bypass the row security system when accessing a table. Table owners normally bypass row security as well, though a table owner can choose to be subject to row security with ALTER - TABLE ... FORCE ROW LEVEL SECURITY. + TABLE ... FORCE ROW LEVEL SECURITY. @@ -1609,8 +1609,8 @@ REVOKE ALL ON accounts FROM PUBLIC; As a simple example, here is how to create a policy on - the account relation to allow only members of - the managers role to access rows, and only rows of their + the account relation to allow only members of + the managers role to access rows, and only rows of their accounts: @@ -1627,7 +1627,7 @@ CREATE POLICY account_managers ON accounts TO managers If no role is specified, or the special user name PUBLIC is used, then the policy applies to all users on the system. To allow all users to access their own row in - a users table, a simple policy can be used: + a users table, a simple policy can be used: @@ -1637,9 +1637,9 @@ CREATE POLICY user_policy ON users To use a different policy for rows that are being added to the table - compared to those rows that are visible, the WITH CHECK + compared to those rows that are visible, the WITH CHECK clause can be used. This policy would allow all users to view all rows - in the users table, but only modify their own: + in the users table, but only modify their own: @@ -1649,7 +1649,7 @@ CREATE POLICY user_policy ON users - Row security can also be disabled with the ALTER TABLE + Row security can also be disabled with the ALTER TABLE command. Disabling row security does not remove any policies that are defined on the table; they are simply ignored. Then all rows in the table are visible and modifiable, subject to the standard SQL privileges @@ -1658,7 +1658,7 @@ CREATE POLICY user_policy ON users Below is a larger example of how this feature can be used in production - environments. The table passwd emulates a Unix password + environments. 
The table passwd emulates a Unix password file: @@ -1820,7 +1820,7 @@ UPDATE 0 Referential integrity checks, such as unique or primary key constraints and foreign key references, always bypass row security to ensure that data integrity is maintained. Care must be taken when developing - schemas and row level policies to avoid covert channel leaks of + schemas and row level policies to avoid covert channel leaks of information through such referential integrity checks. @@ -1830,7 +1830,7 @@ UPDATE 0 disastrous if row security silently caused some rows to be omitted from the backup. In such a situation, you can set the configuration parameter - to off. This does not in itself bypass row security; + to off. This does not in itself bypass row security; what it does is throw an error if any query's results would get filtered by a policy. The reason for the error can then be investigated and fixed. @@ -1842,7 +1842,7 @@ UPDATE 0 best-performing case; when possible, it's best to design row security applications to work this way. If it is necessary to consult other rows or other tables to make a policy decision, that can be accomplished using - sub-SELECTs, or functions that contain SELECTs, + sub-SELECTs, or functions that contain SELECTs, in the policy expressions. Be aware however that such accesses can create race conditions that could allow information leakage if care is not taken. As an example, consider the following table design: @@ -1896,8 +1896,8 @@ GRANT ALL ON information TO public; - Now suppose that alice wishes to change the slightly - secret information, but decides that mallory should not + Now suppose that alice wishes to change the slightly + secret information, but decides that mallory should not be trusted with the new content of that row, so she does: @@ -1909,36 +1909,36 @@ COMMIT; - That looks safe; there is no window wherein mallory should be - able to see the secret from mallory string. However, there is - a race condition here. If mallory is concurrently doing, + That looks safe; there is no window wherein mallory should be + able to see the secret from mallory string. However, there is + a race condition here. If mallory is concurrently doing, say, SELECT * FROM information WHERE group_id = 2 FOR UPDATE; - and her transaction is in READ COMMITTED mode, it is possible - for her to see secret from mallory. That happens if her - transaction reaches the information row just - after alice's does. It blocks waiting - for alice's transaction to commit, then fetches the updated - row contents thanks to the FOR UPDATE clause. However, it - does not fetch an updated row for the - implicit SELECT from users, because that - sub-SELECT did not have FOR UPDATE; instead - the users row is read with the snapshot taken at the start + and her transaction is in READ COMMITTED mode, it is possible + for her to see secret from mallory. That happens if her + transaction reaches the information row just + after alice's does. It blocks waiting + for alice's transaction to commit, then fetches the updated + row contents thanks to the FOR UPDATE clause. However, it + does not fetch an updated row for the + implicit SELECT from users, because that + sub-SELECT did not have FOR UPDATE; instead + the users row is read with the snapshot taken at the start of the query. Therefore, the policy expression tests the old value - of mallory's privilege level and allows her to see the + of mallory's privilege level and allows her to see the updated row. There are several ways around this problem. 
One simple answer is to use - SELECT ... FOR SHARE in sub-SELECTs in row - security policies. However, that requires granting UPDATE - privilege on the referenced table (here users) to the + SELECT ... FOR SHARE in sub-SELECTs in row + security policies. However, that requires granting UPDATE + privilege on the referenced table (here users) to the affected users, which might be undesirable. (But another row security policy could be applied to prevent them from actually exercising that - privilege; or the sub-SELECT could be embedded into a security + privilege; or the sub-SELECT could be embedded into a security definer function.) Also, heavy concurrent use of row share locks on the referenced table could pose a performance problem, especially if updates of it are frequent. Another solution, practical if updates of the @@ -1977,19 +1977,19 @@ SELECT * FROM information WHERE group_id = 2 FOR UPDATE; Users of a cluster do not necessarily have the privilege to access every database in the cluster. Sharing of user names means that there - cannot be different users named, say, joe in two databases + cannot be different users named, say, joe in two databases in the same cluster; but the system can be configured to allow - joe access to only some of the databases. + joe access to only some of the databases. - A database contains one or more named schemas, which + A database contains one or more named schemas, which in turn contain tables. Schemas also contain other kinds of named objects, including data types, functions, and operators. The same object name can be used in different schemas without conflict; for - example, both schema1 and myschema can - contain tables named mytable. Unlike databases, + example, both schema1 and myschema can + contain tables named mytable. Unlike databases, schemas are not rigidly separated: a user can access objects in any of the schemas in the database they are connected to, if they have privileges to do so. @@ -2053,10 +2053,10 @@ CREATE SCHEMA myschema; To create or access objects in a schema, write a - qualified name consisting of the schema name and + qualified name consisting of the schema name and table name separated by a dot: -schema.table +schema.table This works anywhere a table name is expected, including the table modification commands and the data access commands discussed in @@ -2068,10 +2068,10 @@ CREATE SCHEMA myschema; Actually, the even more general syntax -database.schema.table +database.schema.table can be used too, but at present this is just for pro - forma compliance with the SQL standard. If you write a database name, + forma compliance with the SQL standard. If you write a database name, it must be the same as the database you are connected to. @@ -2116,7 +2116,7 @@ CREATE SCHEMA schema_name AUTHORIZATION - Schema names beginning with pg_ are reserved for + Schema names beginning with pg_ are reserved for system purposes and cannot be created by users. @@ -2163,9 +2163,9 @@ CREATE TABLE public.products ( ... ); Qualified names are tedious to write, and it's often best not to wire a particular schema name into applications anyway. Therefore - tables are often referred to by unqualified names, + tables are often referred to by unqualified names, which consist of just the table name. The system determines which table - is meant by following a search path, which is a list + is meant by following a search path, which is a list of schemas to look in. The first matching table in the search path is taken to be the one wanted. 
If there is no match in the search path, an error is reported, even if matching table names exist @@ -2180,7 +2180,7 @@ CREATE TABLE public.products ( ... ); The first schema named in the search path is called the current schema. Aside from being the first schema searched, it is also the schema in - which new tables will be created if the CREATE TABLE + which new tables will be created if the CREATE TABLE command does not specify a schema name. @@ -2253,7 +2253,7 @@ SET search_path TO myschema; need to write a qualified operator name in an expression, there is a special provision: you must write -OPERATOR(schema.operator) +OPERATOR(schema.operator) This is needed to avoid syntactic ambiguity. An example is: @@ -2310,28 +2310,28 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; - In addition to public and user-created schemas, each - database contains a pg_catalog schema, which contains + In addition to public and user-created schemas, each + database contains a pg_catalog schema, which contains the system tables and all the built-in data types, functions, and - operators. pg_catalog is always effectively part of + operators. pg_catalog is always effectively part of the search path. If it is not named explicitly in the path then - it is implicitly searched before searching the path's + it is implicitly searched before searching the path's schemas. This ensures that built-in names will always be findable. However, you can explicitly place - pg_catalog at the end of your search path if you + pg_catalog at the end of your search path if you prefer to have user-defined names override built-in names. - Since system table names begin with pg_, it is best to + Since system table names begin with pg_, it is best to avoid such names to ensure that you won't suffer a conflict if some future version defines a system table named the same as your table. (With the default search path, an unqualified reference to your table name would then be resolved as the system table instead.) System tables will continue to follow the convention of having - names beginning with pg_, so that they will not + names beginning with pg_, so that they will not conflict with unqualified user-table names so long as users avoid - the pg_ prefix. + the pg_ prefix. @@ -2397,15 +2397,15 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; implements only the basic schema support specified in the standard. Therefore, many users consider qualified names to really consist of - user_name.table_name. + user_name.table_name. This is how PostgreSQL will effectively behave if you create a per-user schema for every user. - Also, there is no concept of a public schema in the + Also, there is no concept of a public schema in the SQL standard. For maximum conformance to the standard, you should - not use (perhaps even remove) the public schema. + not use (perhaps even remove) the public schema. @@ -2461,9 +2461,9 @@ CREATE TABLE capitals ( ) INHERITS (cities); - In this case, the capitals table inherits - all the columns of its parent table, cities. State - capitals also have an extra column, state, that shows + In this case, the capitals table inherits + all the columns of its parent table, cities. State + capitals also have an extra column, state, that shows their state. 
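 For quick reference, the two query shapes discussed below look like this against the cities/capitals pair just defined; by default a query on the parent also scans its children, while ONLY restricts it to the parent alone:

    -- Includes matching rows from capitals as well as cities.
    SELECT name, altitude FROM cities WHERE altitude > 500;

    -- Scans the parent table alone.
    SELECT name, altitude FROM ONLY cities WHERE altitude > 500;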
@@ -2521,7 +2521,7 @@ SELECT name, altitude - You can also write the table name with a trailing * + You can also write the table name with a trailing * to explicitly specify that descendant tables are included: @@ -2530,7 +2530,7 @@ SELECT name, altitude WHERE altitude > 500; - Writing * is not necessary, since this behavior is always + Writing * is not necessary, since this behavior is always the default. However, this syntax is still supported for compatibility with older releases where the default could be changed. @@ -2559,7 +2559,7 @@ WHERE c.altitude > 500; (If you try to reproduce this example, you will probably get different numeric OIDs.) By doing a join with - pg_class you can see the actual table names: + pg_class you can see the actual table names: SELECT p.relname, c.name, c.altitude @@ -2579,7 +2579,7 @@ WHERE c.altitude > 500 AND c.tableoid = p.oid; - Another way to get the same effect is to use the regclass + Another way to get the same effect is to use the regclass alias type, which will print the table OID symbolically: @@ -2603,15 +2603,15 @@ VALUES ('Albany', NULL, NULL, 'NY'); INSERT always inserts into exactly the table specified. In some cases it is possible to redirect the insertion using a rule (see ). However that does not - help for the above case because the cities table - does not contain the column state, and so the + help for the above case because the cities table + does not contain the column state, and so the command will be rejected before the rule can be applied. All check constraints and not-null constraints on a parent table are automatically inherited by its children, unless explicitly specified - otherwise with NO INHERIT clauses. Other types of constraints + otherwise with NO INHERIT clauses. Other types of constraints (unique, primary key, and foreign key constraints) are not inherited. @@ -2620,7 +2620,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); the union of the columns defined by the parent tables. Any columns declared in the child table's definition are added to these. If the same column name appears in multiple parent tables, or in both a parent - table and the child's definition, then these columns are merged + table and the child's definition, then these columns are merged so that there is only one such column in the child table. To be merged, columns must have the same data types, else an error is raised. Inheritable check constraints and not-null constraints are merged in a @@ -2632,7 +2632,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); Table inheritance is typically established when the child table is - created, using the INHERITS clause of the + created, using the INHERITS clause of the statement. Alternatively, a table which is already defined in a compatible way can @@ -2642,7 +2642,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); the same names and types as the columns of the parent. It must also include check constraints with the same names and check expressions as those of the parent. Similarly an inheritance link can be removed from a child using the - NO INHERIT variant of ALTER TABLE. + NO INHERIT variant of ALTER TABLE. Dynamically adding and removing inheritance links like this can be useful when the inheritance relationship is being used for table partitioning (see ). @@ -2680,10 +2680,10 @@ VALUES ('Albany', NULL, NULL, 'NY'); Inherited queries perform access permission checks on the parent table - only. Thus, for example, granting UPDATE permission on - the cities table implies permission to update rows in + only. 
Thus, for example, granting UPDATE permission on + the cities table implies permission to update rows in the capitals table as well, when they are - accessed through cities. This preserves the appearance + accessed through cities. This preserves the appearance that the data is (also) in the parent table. But the capitals table could not be updated directly without an additional grant. In a similar way, the parent table's row @@ -2732,33 +2732,33 @@ VALUES ('Albany', NULL, NULL, 'NY'); - If we declared cities.name to be - UNIQUE or a PRIMARY KEY, this would not stop the - capitals table from having rows with names duplicating - rows in cities. And those duplicate rows would by - default show up in queries from cities. In fact, by - default capitals would have no unique constraint at all, + If we declared cities.name to be + UNIQUE or a PRIMARY KEY, this would not stop the + capitals table from having rows with names duplicating + rows in cities. And those duplicate rows would by + default show up in queries from cities. In fact, by + default capitals would have no unique constraint at all, and so could contain multiple rows with the same name. - You could add a unique constraint to capitals, but this - would not prevent duplication compared to cities. + You could add a unique constraint to capitals, but this + would not prevent duplication compared to cities. Similarly, if we were to specify that - cities.name REFERENCES some + cities.name REFERENCES some other table, this constraint would not automatically propagate to - capitals. In this case you could work around it by - manually adding the same REFERENCES constraint to - capitals. + capitals. In this case you could work around it by + manually adding the same REFERENCES constraint to + capitals. Specifying that another table's column REFERENCES - cities(name) would allow the other table to contain city names, but + cities(name) would allow the other table to contain city names, but not capital names. There is no good workaround for this case. @@ -2825,10 +2825,10 @@ VALUES ('Albany', NULL, NULL, 'NY'); Bulk loads and deletes can be accomplished by adding or removing partitions, if that requirement is planned into the partitioning design. - Doing ALTER TABLE DETACH PARTITION or dropping an individual - partition using DROP TABLE is far faster than a bulk + Doing ALTER TABLE DETACH PARTITION or dropping an individual + partition using DROP TABLE is far faster than a bulk operation. These commands also entirely avoid the - VACUUM overhead caused by a bulk DELETE. + VACUUM overhead caused by a bulk DELETE. @@ -2921,7 +2921,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); containing data as a partition of a partitioned table, or remove a partition from a partitioned table turning it into a standalone table; see to learn more about the - ATTACH PARTITION and DETACH PARTITION + ATTACH PARTITION and DETACH PARTITION sub-commands. @@ -2968,9 +2968,9 @@ VALUES ('Albany', NULL, NULL, 'NY'); Partitions cannot have columns that are not present in the parent. It is neither possible to specify columns when creating partitions with - CREATE TABLE nor is it possible to add columns to - partitions after-the-fact using ALTER TABLE. Tables may be - added as a partition with ALTER TABLE ... ATTACH PARTITION + CREATE TABLE nor is it possible to add columns to + partitions after-the-fact using ALTER TABLE. Tables may be + added as a partition with ALTER TABLE ... ATTACH PARTITION only if their columns exactly match the parent, including any oid column. 
@@ -3049,7 +3049,7 @@ CREATE TABLE measurement ( accessing the partitioned table will have to scan fewer partitions if the conditions involve some or all of these columns. For example, consider a table range partitioned using columns - lastname and firstname (in that order) + lastname and firstname (in that order) as the partition key. @@ -3067,7 +3067,7 @@ CREATE TABLE measurement ( Partitions thus created are in every way normal - PostgreSQL + PostgreSQL tables (or, possibly, foreign tables). It is possible to specify a tablespace and storage parameters for each partition separately. @@ -3111,12 +3111,12 @@ CREATE TABLE measurement_y2006m02 PARTITION OF measurement PARTITION BY RANGE (peaktemp); - After creating partitions of measurement_y2006m02, - any data inserted into measurement that is mapped to - measurement_y2006m02 (or data that is directly inserted - into measurement_y2006m02, provided it satisfies its + After creating partitions of measurement_y2006m02, + any data inserted into measurement that is mapped to + measurement_y2006m02 (or data that is directly inserted + into measurement_y2006m02, provided it satisfies its partition constraint) will be further redirected to one of its - partitions based on the peaktemp column. The partition + partitions based on the peaktemp column. The partition key specified may overlap with the parent's partition key, although care should be taken when specifying the bounds of a sub-partition such that the set of data it accepts constitutes a subset of what @@ -3147,7 +3147,7 @@ CREATE INDEX ON measurement_y2008m01 (logdate); Ensure that the - configuration parameter is not disabled in postgresql.conf. + configuration parameter is not disabled in postgresql.conf. If it is, queries will not be optimized as desired. @@ -3197,7 +3197,7 @@ ALTER TABLE measurement DETACH PARTITION measurement_y2006m02; This allows further operations to be performed on the data before it is dropped. For example, this is often a useful time to back up - the data using COPY, pg_dump, or + the data using COPY, pg_dump, or similar tools. It might also be a useful time to aggregate data into smaller formats, perform other data manipulations, or run reports. @@ -3236,14 +3236,14 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - Before running the ATTACH PARTITION command, it is - recommended to create a CHECK constraint on the table to + Before running the ATTACH PARTITION command, it is + recommended to create a CHECK constraint on the table to be attached describing the desired partition constraint. That way, the system will be able to skip the scan to validate the implicit partition constraint. Without such a constraint, the table will be scanned to validate the partition constraint while holding an ACCESS EXCLUSIVE lock on the parent table. - One may then drop the constraint after ATTACH PARTITION + One may then drop the constraint after ATTACH PARTITION is finished, because it is no longer necessary. @@ -3285,7 +3285,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - An UPDATE that causes a row to move from one partition to + An UPDATE that causes a row to move from one partition to another fails, because the new value of the row fails to satisfy the implicit partition constraint of the original partition. @@ -3376,7 +3376,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 the master table. Normally, these tables will not add any columns to the set inherited from the master. 
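Referring back to the ATTACH PARTITION recommendation above, a sketch of the sequence it describes; the constraint name and the bounds are illustrative only:

ALTER TABLE measurement_y2008m02
    ADD CONSTRAINT y2008m02_bounds
    CHECK (logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01');

ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02
    FOR VALUES FROM ('2008-02-01') TO ('2008-03-01');

-- the helper constraint is no longer needed once the partition is attached
ALTER TABLE measurement_y2008m02 DROP CONSTRAINT y2008m02_bounds;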
Just as with declarative partitioning, these partitions are in every way normal - PostgreSQL tables (or foreign tables). + PostgreSQL tables (or foreign tables). @@ -3460,7 +3460,7 @@ CREATE INDEX measurement_y2008m01_logdate ON measurement_y2008m01 (logdate); We want our application to be able to say INSERT INTO - measurement ... and have the data be redirected into the + measurement ... and have the data be redirected into the appropriate partition table. We can arrange that by attaching a suitable trigger function to the master table. If data will be added only to the latest partition, we can @@ -3567,9 +3567,9 @@ DO INSTEAD - Be aware that COPY ignores rules. If you want to - use COPY to insert data, you'll need to copy into the - correct partition table rather than into the master. COPY + Be aware that COPY ignores rules. If you want to + use COPY to insert data, you'll need to copy into the + correct partition table rather than into the master. COPY does fire triggers, so you can use it normally if you use the trigger approach. @@ -3585,7 +3585,7 @@ DO INSTEAD Ensure that the configuration parameter is not disabled in - postgresql.conf. + postgresql.conf. If it is, queries will not be optimized as desired. @@ -3666,8 +3666,8 @@ ALTER TABLE measurement_y2008m02 INHERIT measurement; The schemes shown here assume that the partition key column(s) of a row never change, or at least do not change enough to require - it to move to another partition. An UPDATE that attempts - to do that will fail because of the CHECK constraints. + it to move to another partition. An UPDATE that attempts + to do that will fail because of the CHECK constraints. If you need to handle such cases, you can put suitable update triggers on the partition tables, but it makes management of the structure much more complicated. @@ -3688,8 +3688,8 @@ ANALYZE measurement; - INSERT statements with ON CONFLICT - clauses are unlikely to work as expected, as the ON CONFLICT + INSERT statements with ON CONFLICT + clauses are unlikely to work as expected, as the ON CONFLICT action is only taken in case of unique violations on the specified target relation, not its child relations. @@ -3717,7 +3717,7 @@ ANALYZE measurement; - Constraint exclusion is a query optimization technique + Constraint exclusion is a query optimization technique that improves performance for partitioned tables defined in the fashion described above (both declaratively partitioned tables and those implemented using inheritance). As an example: @@ -3728,17 +3728,17 @@ SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; Without constraint exclusion, the above query would scan each of - the partitions of the measurement table. With constraint + the partitions of the measurement table. With constraint exclusion enabled, the planner will examine the constraints of each partition and try to prove that the partition need not be scanned because it could not contain any rows meeting the query's - WHERE clause. When the planner can prove this, it + WHERE clause. When the planner can prove this, it excludes the partition from the query plan. - You can use the EXPLAIN command to show the difference - between a plan with constraint_exclusion on and a plan + You can use the EXPLAIN command to show the difference + between a plan with constraint_exclusion on and a plan with it off. 
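One way to make that comparison, sketched here with the example query used throughout this section (the EXPLAIN output itself is not reproduced):

SET constraint_exclusion = off;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';

SET constraint_exclusion = on;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';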
A typical unoptimized plan for this type of table setup is: @@ -3783,7 +3783,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; - Note that constraint exclusion is driven only by CHECK + Note that constraint exclusion is driven only by CHECK constraints, not by the presence of indexes. Therefore it isn't necessary to define indexes on the key columns. Whether an index needs to be created for a given partition depends on whether you @@ -3795,11 +3795,11 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; The default (and recommended) setting of is actually neither - on nor off, but an intermediate setting - called partition, which causes the technique to be + on nor off, but an intermediate setting + called partition, which causes the technique to be applied only to queries that are likely to be working on partitioned - tables. The on setting causes the planner to examine - CHECK constraints in all queries, even simple ones that + tables. The on setting causes the planner to examine + CHECK constraints in all queries, even simple ones that are unlikely to benefit. @@ -3810,7 +3810,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; - Constraint exclusion only works when the query's WHERE + Constraint exclusion only works when the query's WHERE clause contains constants (or externally supplied parameters). For example, a comparison against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized, since the @@ -3867,7 +3867,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; PostgreSQL implements portions of the SQL/MED specification, allowing you to access data that resides outside PostgreSQL using regular SQL queries. Such data is referred to as - foreign data. (Note that this usage is not to be confused + foreign data. (Note that this usage is not to be confused with foreign keys, which are a type of constraint within the database.) @@ -3876,7 +3876,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; foreign data wrapper. A foreign data wrapper is a library that can communicate with an external data source, hiding the details of connecting to the data source and obtaining data from it. - There are some foreign data wrappers available as contrib + There are some foreign data wrappers available as contrib modules; see . Other kinds of foreign data wrappers might be found as third party products. If none of the existing foreign data wrappers suit your needs, you can write your own; see - To access foreign data, you need to create a foreign server + To access foreign data, you need to create a foreign server object, which defines how to connect to a particular external data source according to the set of options used by its supporting foreign data wrapper. Then you need to create one or more foreign @@ -3899,7 +3899,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; Accessing remote data may require authenticating to the external data source. This information can be provided by a - user mapping, which can provide additional data + user mapping, which can provide additional data such as user names and passwords based on the current PostgreSQL role. @@ -4002,13 +4002,13 @@ DROP TABLE products CASCADE; that depend on them, recursively. In this case, it doesn't remove the orders table, it only removes the foreign key constraint. It stops there because nothing depends on the foreign key constraint. 
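A compact way to see this behavior for yourself, assuming a minimal products/orders schema along the lines of the examples in this chapter:

CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name       text
);

CREATE TABLE orders (
    order_id   integer PRIMARY KEY,
    product_no integer REFERENCES products (product_no)
);

DROP TABLE products;           -- fails: the foreign key on orders depends on products
DROP TABLE products CASCADE;   -- drops products and the dependent foreign key constraint,
                               -- but the orders table itself remains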
- (If you want to check what DROP ... CASCADE will do, - run DROP without CASCADE and read the - DETAIL output.) + (If you want to check what DROP ... CASCADE will do, + run DROP without CASCADE and read the + DETAIL output.) - Almost all DROP commands in PostgreSQL support + Almost all DROP commands in PostgreSQL support specifying CASCADE. Of course, the nature of the possible dependencies varies with the type of the object. You can also write RESTRICT instead of @@ -4020,7 +4020,7 @@ DROP TABLE products CASCADE; According to the SQL standard, specifying either RESTRICT or CASCADE is - required in a DROP command. No database system actually + required in a DROP command. No database system actually enforces that rule, but whether the default behavior is RESTRICT or CASCADE varies across systems. @@ -4028,18 +4028,18 @@ DROP TABLE products CASCADE; - If a DROP command lists multiple + If a DROP command lists multiple objects, CASCADE is only required when there are dependencies outside the specified group. For example, when saying DROP TABLE tab1, tab2 the existence of a foreign - key referencing tab1 from tab2 would not mean + key referencing tab1 from tab2 would not mean that CASCADE is needed to succeed. For user-defined functions, PostgreSQL tracks dependencies associated with a function's externally-visible properties, - such as its argument and result types, but not dependencies + such as its argument and result types, but not dependencies that could only be known by examining the function body. As an example, consider this situation: @@ -4056,11 +4056,11 @@ CREATE FUNCTION get_color_note (rainbow) RETURNS text AS (See for an explanation of SQL-language functions.) PostgreSQL will be aware that - the get_color_note function depends on the rainbow + the get_color_note function depends on the rainbow type: dropping the type would force dropping the function, because its - argument type would no longer be defined. But PostgreSQL - will not consider get_color_note to depend on - the my_colors table, and so will not drop the function if + argument type would no longer be defined. But PostgreSQL + will not consider get_color_note to depend on + the my_colors table, and so will not drop the function if the table is dropped. While there are disadvantages to this approach, there are also benefits. The function is still valid in some sense if the table is missing, though executing it would cause an error; creating a new diff --git a/doc/src/sgml/dfunc.sgml b/doc/src/sgml/dfunc.sgml index 23af270e32..7ef996b51f 100644 --- a/doc/src/sgml/dfunc.sgml +++ b/doc/src/sgml/dfunc.sgml @@ -9,7 +9,7 @@ C, they must be compiled and linked in a special way to produce a file that can be dynamically loaded by the server. To be precise, a shared library needs to be - created.shared library + created.shared library @@ -30,7 +30,7 @@ executables: first the source files are compiled into object files, then the object files are linked together. The object files need to be created as position-independent code - (PIC),PIC which + (PIC),PIC which conceptually means that they can be placed at an arbitrary location in memory when they are loaded by the executable. (Object files intended for executables are usually not compiled that way.) The @@ -57,8 +57,8 @@ - FreeBSD - FreeBSDshared library + FreeBSD + FreeBSDshared library @@ -70,15 +70,15 @@ gcc -fPIC -c foo.c gcc -shared -o foo.so foo.o This is applicable as of version 3.0 of - FreeBSD. + FreeBSD. 
- HP-UX - HP-UXshared library + HP-UX + HP-UXshared library @@ -97,7 +97,7 @@ gcc -fPIC -c foo.c ld -b -o foo.sl foo.o - HP-UX uses the extension + HP-UX uses the extension .sl for shared libraries, unlike most other systems. @@ -106,8 +106,8 @@ ld -b -o foo.sl foo.o - Linux - Linuxshared library + Linux + Linuxshared library @@ -125,8 +125,8 @@ cc -shared -o foo.so foo.o - macOS - macOSshared library + macOS + macOSshared library @@ -141,8 +141,8 @@ cc -bundle -flat_namespace -undefined suppress -o foo.so foo.o - NetBSD - NetBSDshared library + NetBSD + NetBSDshared library @@ -161,8 +161,8 @@ gcc -shared -o foo.so foo.o - OpenBSD - OpenBSDshared library + OpenBSD + OpenBSDshared library @@ -179,17 +179,17 @@ ld -Bshareable -o foo.so foo.o - Solaris - Solarisshared library + Solaris + Solarisshared library The compiler flag to create PIC is with the Sun compiler and - with GCC. To + with GCC. To link shared libraries, the compiler option is with either compiler or alternatively - with GCC. + with GCC. cc -KPIC -c foo.c cc -G -o foo.so foo.o diff --git a/doc/src/sgml/dict-int.sgml b/doc/src/sgml/dict-int.sgml index d49f3e2a3a..04cf14a73d 100644 --- a/doc/src/sgml/dict-int.sgml +++ b/doc/src/sgml/dict-int.sgml @@ -8,7 +8,7 @@ - dict_int is an example of an add-on dictionary template + dict_int is an example of an add-on dictionary template for full-text search. The motivation for this example dictionary is to control the indexing of integers (signed and unsigned), allowing such numbers to be indexed while preventing excessive growth in the number of @@ -25,17 +25,17 @@ - The maxlen parameter specifies the maximum number of + The maxlen parameter specifies the maximum number of digits allowed in an integer word. The default value is 6. - The rejectlong parameter specifies whether an overlength - integer should be truncated or ignored. If rejectlong is - false (the default), the dictionary returns the first - maxlen digits of the integer. If rejectlong is - true, the dictionary treats an overlength integer as a stop + The rejectlong parameter specifies whether an overlength + integer should be truncated or ignored. If rejectlong is + false (the default), the dictionary returns the first + maxlen digits of the integer. If rejectlong is + true, the dictionary treats an overlength integer as a stop word, so that it will not be indexed. Note that this also means that such an integer cannot be searched for. @@ -47,8 +47,8 @@ Usage - Installing the dict_int extension creates a text search - template intdict_template and a dictionary intdict + Installing the dict_int extension creates a text search + template intdict_template and a dictionary intdict based on it, with the default parameters. You can alter the parameters, for example diff --git a/doc/src/sgml/dict-xsyn.sgml b/doc/src/sgml/dict-xsyn.sgml index 42362ffbc8..bf4965c36f 100644 --- a/doc/src/sgml/dict-xsyn.sgml +++ b/doc/src/sgml/dict-xsyn.sgml @@ -8,7 +8,7 @@ - dict_xsyn (Extended Synonym Dictionary) is an example of an + dict_xsyn (Extended Synonym Dictionary) is an example of an add-on dictionary template for full-text search. This dictionary type replaces words with groups of their synonyms, and so makes it possible to search for a word using any of its synonyms. @@ -18,41 +18,41 @@ Configuration - A dict_xsyn dictionary accepts the following options: + A dict_xsyn dictionary accepts the following options: - matchorig controls whether the original word is accepted by - the dictionary. Default is true. 
+ matchorig controls whether the original word is accepted by + the dictionary. Default is true. - matchsynonyms controls whether the synonyms are - accepted by the dictionary. Default is false. + matchsynonyms controls whether the synonyms are + accepted by the dictionary. Default is false. - keeporig controls whether the original word is included in - the dictionary's output. Default is true. + keeporig controls whether the original word is included in + the dictionary's output. Default is true. - keepsynonyms controls whether the synonyms are included in - the dictionary's output. Default is true. + keepsynonyms controls whether the synonyms are included in + the dictionary's output. Default is true. - rules is the base name of the file containing the list of + rules is the base name of the file containing the list of synonyms. This file must be stored in - $SHAREDIR/tsearch_data/ (where $SHAREDIR means - the PostgreSQL installation's shared-data directory). - Its name must end in .rules (which is not to be included in - the rules parameter). + $SHAREDIR/tsearch_data/ (where $SHAREDIR means + the PostgreSQL installation's shared-data directory). + Its name must end in .rules (which is not to be included in + the rules parameter). @@ -71,15 +71,15 @@ word syn1 syn2 syn3 - The sharp (#) sign is a comment delimiter. It may appear at + The sharp (#) sign is a comment delimiter. It may appear at any position in a line. The rest of the line will be skipped. - Look at xsyn_sample.rules, which is installed in - $SHAREDIR/tsearch_data/, for an example. + Look at xsyn_sample.rules, which is installed in + $SHAREDIR/tsearch_data/, for an example. @@ -87,8 +87,8 @@ word syn1 syn2 syn3 Usage - Installing the dict_xsyn extension creates a text search - template xsyn_template and a dictionary xsyn + Installing the dict_xsyn extension creates a text search + template xsyn_template and a dictionary xsyn based on it, with default parameters. You can alter the parameters, for example diff --git a/doc/src/sgml/diskusage.sgml b/doc/src/sgml/diskusage.sgml index 461deb9dba..ba23084354 100644 --- a/doc/src/sgml/diskusage.sgml +++ b/doc/src/sgml/diskusage.sgml @@ -5,7 +5,7 @@ This chapter discusses how to monitor the disk usage of a - PostgreSQL database system. + PostgreSQL database system. @@ -18,10 +18,10 @@ Each table has a primary heap disk file where most of the data is stored. If the table has any columns with potentially-wide values, - there also might be a TOAST file associated with the table, + there also might be a TOAST file associated with the table, which is used to store values too wide to fit comfortably in the main table (see ). There will be one valid index - on the TOAST table, if present. There also might be indexes + on the TOAST table, if present. There also might be indexes associated with the base table. Each table and index is stored in a separate disk file — possibly more than one file, if the file would exceed one gigabyte. Naming conventions for these files are described @@ -39,7 +39,7 @@ - Using psql on a recently vacuumed or analyzed database, + Using psql on a recently vacuumed or analyzed database, you can issue queries to see the disk usage of any table: SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'customer'; @@ -49,14 +49,14 @@ SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'custom base/16384/16806 | 60 (1 row) - Each page is typically 8 kilobytes. 
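As an aside to the relpages-based query above, the database size functions give a similar answer without depending on a recent VACUUM or ANALYZE; a sketch using the same customer table (these functions are not shown in this hunk):

SELECT pg_size_pretty(pg_relation_size('customer'))       AS main_fork,
       pg_size_pretty(pg_total_relation_size('customer')) AS total_incl_toast_and_indexes;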
(Remember, relpages - is only updated by VACUUM, ANALYZE, and - a few DDL commands such as CREATE INDEX.) The file path name + Each page is typically 8 kilobytes. (Remember, relpages + is only updated by VACUUM, ANALYZE, and + a few DDL commands such as CREATE INDEX.) The file path name is of interest if you want to examine the table's disk file directly. - To show the space used by TOAST tables, use a query + To show the space used by TOAST tables, use a query like the following: SELECT relname, relpages diff --git a/doc/src/sgml/dml.sgml b/doc/src/sgml/dml.sgml index 071cdb610f..bc016d3cae 100644 --- a/doc/src/sgml/dml.sgml +++ b/doc/src/sgml/dml.sgml @@ -285,42 +285,42 @@ DELETE FROM products; Sometimes it is useful to obtain data from modified rows while they are - being manipulated. The INSERT, UPDATE, - and DELETE commands all have an - optional RETURNING clause that supports this. Use - of RETURNING avoids performing an extra database query to + being manipulated. The INSERT, UPDATE, + and DELETE commands all have an + optional RETURNING clause that supports this. Use + of RETURNING avoids performing an extra database query to collect the data, and is especially valuable when it would otherwise be difficult to identify the modified rows reliably. - The allowed contents of a RETURNING clause are the same as - a SELECT command's output list + The allowed contents of a RETURNING clause are the same as + a SELECT command's output list (see ). It can contain column names of the command's target table, or value expressions using those - columns. A common shorthand is RETURNING *, which selects + columns. A common shorthand is RETURNING *, which selects all columns of the target table in order. - In an INSERT, the data available to RETURNING is + In an INSERT, the data available to RETURNING is the row as it was inserted. This is not so useful in trivial inserts, since it would just repeat the data provided by the client. But it can be very handy when relying on computed default values. For example, - when using a serial - column to provide unique identifiers, RETURNING can return + when using a serial + column to provide unique identifiers, RETURNING can return the ID assigned to a new row: CREATE TABLE users (firstname text, lastname text, id serial primary key); INSERT INTO users (firstname, lastname) VALUES ('Joe', 'Cool') RETURNING id; - The RETURNING clause is also very useful - with INSERT ... SELECT. + The RETURNING clause is also very useful + with INSERT ... SELECT. - In an UPDATE, the data available to RETURNING is + In an UPDATE, the data available to RETURNING is the new content of the modified row. For example: UPDATE products SET price = price * 1.10 @@ -330,7 +330,7 @@ UPDATE products SET price = price * 1.10 - In a DELETE, the data available to RETURNING is + In a DELETE, the data available to RETURNING is the content of the deleted row. For example: DELETE FROM products @@ -341,9 +341,9 @@ DELETE FROM products If there are triggers () on the target table, - the data available to RETURNING is the row as modified by + the data available to RETURNING is the row as modified by the triggers. Thus, inspecting columns computed by triggers is another - common use-case for RETURNING. + common use-case for RETURNING. diff --git a/doc/src/sgml/docguide.sgml b/doc/src/sgml/docguide.sgml index ff58a17335..3a5b88ca1c 100644 --- a/doc/src/sgml/docguide.sgml +++ b/doc/src/sgml/docguide.sgml @@ -449,7 +449,7 @@ checking for fop... 
fop To produce HTML documentation with the stylesheet used on postgresql.org instead of the + url="https://www.postgresql.org/docs/current">postgresql.org instead of the default simple style use: doc/src/sgml$ make STYLE=website html diff --git a/doc/src/sgml/earthdistance.sgml b/doc/src/sgml/earthdistance.sgml index 6dedc4a5f4..1bdcf64629 100644 --- a/doc/src/sgml/earthdistance.sgml +++ b/doc/src/sgml/earthdistance.sgml @@ -8,18 +8,18 @@ - The earthdistance module provides two different approaches to + The earthdistance module provides two different approaches to calculating great circle distances on the surface of the Earth. The one - described first depends on the cube module (which - must be installed before earthdistance can be - installed). The second one is based on the built-in point data type, + described first depends on the cube module (which + must be installed before earthdistance can be + installed). The second one is based on the built-in point data type, using longitude and latitude for the coordinates. In this module, the Earth is assumed to be perfectly spherical. (If that's too inaccurate for you, you might want to look at the - PostGIS + PostGIS project.) @@ -29,13 +29,13 @@ Data is stored in cubes that are points (both corners are the same) using 3 coordinates representing the x, y, and z distance from the center of the - Earth. A domain earth over cube is provided, which + Earth. A domain earth over cube is provided, which includes constraint checks that the value meets these restrictions and is reasonably close to the actual surface of the Earth. - The radius of the Earth is obtained from the earth() + The radius of the Earth is obtained from the earth() function. It is given in meters. But by changing this one function you can change the module to use some other units, or to use a different value of the radius that you feel is more appropriate. @@ -43,8 +43,8 @@ This package has applications to astronomical databases as well. - Astronomers will probably want to change earth() to return a - radius of 180/pi() so that distances are in degrees. + Astronomers will probably want to change earth() to return a + radius of 180/pi() so that distances are in degrees. @@ -123,11 +123,11 @@ earth_box(earth, float8)earth_box cube Returns a box suitable for an indexed search using the cube - @> + @> operator for points within a given great circle distance of a location. Some points in this box are further than the specified great circle distance from the location, so a second check using - earth_distance should be included in the query. + earth_distance should be included in the query. @@ -141,7 +141,7 @@ The second part of the module relies on representing Earth locations as - values of type point, in which the first component is taken to + values of type point, in which the first component is taken to represent longitude in degrees, and the second component is taken to represent latitude in degrees. Points are taken as (longitude, latitude) and not vice versa because longitude is closer to the intuitive idea of @@ -165,7 +165,7 @@ - point <@> point + point <@> point float8 Gives the distance in statute miles between two points on the Earth's surface. @@ -176,15 +176,15 @@
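To contrast the two representations described above, a short sketch; ll_to_earth (latitude, longitude in degrees) is provided by the cube-based part of the module but is not shown in this hunk, and the coordinates are arbitrary:

-- cube-based: result in metres, as returned by earth_distance
SELECT earth_distance(ll_to_earth(52.5200, 13.4050),
                      ll_to_earth(48.8566, 2.3522));

-- point-based: result in statute miles; note point(longitude, latitude)
SELECT point(13.4050, 52.5200) <@> point(2.3522, 48.8566);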
- Note that unlike the cube-based part of the module, units - are hardwired here: changing the earth() function will + Note that unlike the cube-based part of the module, units + are hardwired here: changing the earth() function will not affect the results of this operator. One disadvantage of the longitude/latitude representation is that you need to be careful about the edge conditions near the poles - and near +/- 180 degrees of longitude. The cube-based + and near +/- 180 degrees of longitude. The cube-based representation avoids these discontinuities. diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index 716a101838..0f9ff3a8eb 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -46,7 +46,7 @@ correctness. Third, embedded SQL in C is specified in the SQL standard and supported by many other SQL database systems. The - PostgreSQL implementation is designed to match this + PostgreSQL implementation is designed to match this standard as much as possible, and it is usually possible to port embedded SQL programs written for other SQL databases to PostgreSQL with relative @@ -97,19 +97,19 @@ EXEC SQL CONNECT TO target AS - dbname@hostname:port + dbname@hostname:port - tcp:postgresql://hostname:port/dbname?options + tcp:postgresql://hostname:port/dbname?options - unix:postgresql://hostname:port/dbname?options + unix:postgresql://hostname:port/dbname?options @@ -475,7 +475,7 @@ EXEC SQL COMMIT; In the default mode, statements are committed only when EXEC SQL COMMIT is issued. The embedded SQL interface also supports autocommit of transactions (similar to - psql's default behavior) via the + psql's default behavior) via the command-line option to ecpg (see ) or via the EXEC SQL SET AUTOCOMMIT TO ON statement. In autocommit mode, each command is @@ -507,7 +507,7 @@ EXEC SQL COMMIT; - EXEC SQL PREPARE TRANSACTION transaction_id + EXEC SQL PREPARE TRANSACTION transaction_id Prepare the current transaction for two-phase commit. @@ -516,7 +516,7 @@ EXEC SQL COMMIT; - EXEC SQL COMMIT PREPARED transaction_id + EXEC SQL COMMIT PREPARED transaction_id Commit a transaction that is in prepared state. @@ -525,7 +525,7 @@ EXEC SQL COMMIT; - EXEC SQL ROLLBACK PREPARED transaction_id + EXEC SQL ROLLBACK PREPARED transaction_id Roll back a transaction that is in prepared state. @@ -720,7 +720,7 @@ EXEC SQL int i = 4; The definition of a structure or union also must be listed inside - a DECLARE section. Otherwise the preprocessor cannot + a DECLARE section. Otherwise the preprocessor cannot handle these types since it does not know the definition. @@ -890,8 +890,8 @@ do - character(n), varchar(n), text - char[n+1], VARCHAR[n+1]declared in ecpglib.h + character(n), varchar(n), text + char[n+1], VARCHAR[n+1]declared in ecpglib.h @@ -955,7 +955,7 @@ EXEC SQL END DECLARE SECTION; The other way is using the VARCHAR type, which is a special type provided by ECPG. The definition on an array of type VARCHAR is converted into a - named struct for every variable. A declaration like: + named struct for every variable. A declaration like: VARCHAR var[180]; @@ -994,10 +994,10 @@ struct varchar_var { int len; char arr[180]; } var; ECPG contains some special types that help you to interact easily with some special data types from the PostgreSQL server. In particular, it has implemented support for the - numeric, decimal, date, timestamp, - and interval types. These data types cannot usefully be + numeric, decimal, date, timestamp, + and interval types. 
These data types cannot usefully be mapped to primitive host variable types (such - as int, long long int, + as int, long long int, or char[]), because they have a complex internal structure. Applications deal with these types by declaring host variables in special types and accessing them using functions in @@ -1942,10 +1942,10 @@ free(out); The numeric type offers to do calculations with arbitrary precision. See for the equivalent type in the - PostgreSQL server. Because of the arbitrary precision this + PostgreSQL server. Because of the arbitrary precision this variable needs to be able to expand and shrink dynamically. That's why you can only create numeric variables on the heap, by means of the - PGTYPESnumeric_new and PGTYPESnumeric_free + PGTYPESnumeric_new and PGTYPESnumeric_free functions. The decimal type, which is similar but limited in precision, can be created on the stack as well as on the heap. @@ -2092,17 +2092,17 @@ int PGTYPESnumeric_cmp(numeric *var1, numeric *var2) - 1, if var1 is bigger than var2 + 1, if var1 is bigger than var2 - -1, if var1 is smaller than var2 + -1, if var1 is smaller than var2 - 0, if var1 and var2 are equal + 0, if var1 and var2 are equal @@ -2119,7 +2119,7 @@ int PGTYPESnumeric_cmp(numeric *var1, numeric *var2) int PGTYPESnumeric_from_int(signed int int_val, numeric *var); This function accepts a variable of type signed int and stores it - in the numeric variable var. Upon success, 0 is returned and + in the numeric variable var. Upon success, 0 is returned and -1 in case of a failure. @@ -2134,7 +2134,7 @@ int PGTYPESnumeric_from_int(signed int int_val, numeric *var); int PGTYPESnumeric_from_long(signed long int long_val, numeric *var); This function accepts a variable of type signed long int and stores it - in the numeric variable var. Upon success, 0 is returned and + in the numeric variable var. Upon success, 0 is returned and -1 in case of a failure. @@ -2149,7 +2149,7 @@ int PGTYPESnumeric_from_long(signed long int long_val, numeric *var); int PGTYPESnumeric_copy(numeric *src, numeric *dst); This function copies over the value of the variable that - src points to into the variable that dst + src points to into the variable that dst points to. It returns 0 on success and -1 if an error occurs. @@ -2164,7 +2164,7 @@ int PGTYPESnumeric_copy(numeric *src, numeric *dst); int PGTYPESnumeric_from_double(double d, numeric *dst); This function accepts a variable of type double and stores the result - in the variable that dst points to. It returns 0 on success + in the variable that dst points to. It returns 0 on success and -1 if an error occurs. @@ -2179,10 +2179,10 @@ int PGTYPESnumeric_from_double(double d, numeric *dst); int PGTYPESnumeric_to_double(numeric *nv, double *dp) The function converts the numeric value from the variable that - nv points to into the double variable that dp points + nv points to into the double variable that dp points to. It returns 0 on success and -1 if an error occurs, including - overflow. On overflow, the global variable errno will be set - to PGTYPES_NUM_OVERFLOW additionally. + overflow. On overflow, the global variable errno will be set + to PGTYPES_NUM_OVERFLOW additionally. @@ -2196,10 +2196,10 @@ int PGTYPESnumeric_to_double(numeric *nv, double *dp) int PGTYPESnumeric_to_int(numeric *nv, int *ip); The function converts the numeric value from the variable that - nv points to into the integer variable that ip + nv points to into the integer variable that ip points to. 
It returns 0 on success and -1 if an error occurs, including - overflow. On overflow, the global variable errno will be set - to PGTYPES_NUM_OVERFLOW additionally. + overflow. On overflow, the global variable errno will be set + to PGTYPES_NUM_OVERFLOW additionally. @@ -2213,10 +2213,10 @@ int PGTYPESnumeric_to_int(numeric *nv, int *ip); int PGTYPESnumeric_to_long(numeric *nv, long *lp); The function converts the numeric value from the variable that - nv points to into the long integer variable that - lp points to. It returns 0 on success and -1 if an error + nv points to into the long integer variable that + lp points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable - errno will be set to PGTYPES_NUM_OVERFLOW + errno will be set to PGTYPES_NUM_OVERFLOW additionally. @@ -2231,10 +2231,10 @@ int PGTYPESnumeric_to_long(numeric *nv, long *lp); int PGTYPESnumeric_to_decimal(numeric *src, decimal *dst); The function converts the numeric value from the variable that - src points to into the decimal variable that - dst points to. It returns 0 on success and -1 if an error + src points to into the decimal variable that + dst points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable - errno will be set to PGTYPES_NUM_OVERFLOW + errno will be set to PGTYPES_NUM_OVERFLOW additionally. @@ -2249,8 +2249,8 @@ int PGTYPESnumeric_to_decimal(numeric *src, decimal *dst); int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst); The function converts the decimal value from the variable that - src points to into the numeric variable that - dst points to. It returns 0 on success and -1 if an error + src points to into the numeric variable that + dst points to. It returns 0 on success and -1 if an error occurs. Since the decimal type is implemented as a limited version of the numeric type, overflow cannot occur with this conversion. @@ -2265,7 +2265,7 @@ int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst); The date type in C enables your programs to deal with data of the SQL type date. See for the equivalent type in the - PostgreSQL server. + PostgreSQL server. The following functions can be used to work with the date type: @@ -2292,8 +2292,8 @@ date PGTYPESdate_from_timestamp(timestamp dt); date PGTYPESdate_from_asc(char *str, char **endptr); - The function receives a C char* string str and a pointer to - a C char* string endptr. At the moment ECPG always parses + The function receives a C char* string str and a pointer to + a C char* string endptr. At the moment ECPG always parses the complete string and so it currently does not support to store the address of the first invalid character in *endptr. You can safely set endptr to NULL. @@ -2397,9 +2397,9 @@ date PGTYPESdate_from_asc(char *str, char **endptr); char *PGTYPESdate_to_asc(date dDate); - The function receives the date dDate as its only parameter. - It will output the date in the form 1999-01-18, i.e., in the - YYYY-MM-DD format. + The function receives the date dDate as its only parameter. + It will output the date in the form 1999-01-18, i.e., in the + YYYY-MM-DD format. @@ -2414,11 +2414,11 @@ char *PGTYPESdate_to_asc(date dDate); void PGTYPESdate_julmdy(date d, int *mdy); - The function receives the date d and a pointer to an array - of 3 integer values mdy. 
The variable name indicates - the sequential order: mdy[0] will be set to contain the - number of the month, mdy[1] will be set to the value of the - day and mdy[2] will contain the year. + The function receives the date d and a pointer to an array + of 3 integer values mdy. The variable name indicates + the sequential order: mdy[0] will be set to contain the + number of the month, mdy[1] will be set to the value of the + day and mdy[2] will contain the year. @@ -2432,7 +2432,7 @@ void PGTYPESdate_julmdy(date d, int *mdy); void PGTYPESdate_mdyjul(int *mdy, date *jdate); - The function receives the array of the 3 integers (mdy) as + The function receives the array of the 3 integers (mdy) as its first argument and as its second argument a pointer to a variable of type date that should hold the result of the operation. @@ -2447,7 +2447,7 @@ void PGTYPESdate_mdyjul(int *mdy, date *jdate); int PGTYPESdate_dayofweek(date d); - The function receives the date variable d as its only + The function receives the date variable d as its only argument and returns an integer that indicates the day of the week for this date. @@ -2499,7 +2499,7 @@ int PGTYPESdate_dayofweek(date d); void PGTYPESdate_today(date *d); - The function receives a pointer to a date variable (d) + The function receives a pointer to a date variable (d) that it sets to the current date. @@ -2514,9 +2514,9 @@ void PGTYPESdate_today(date *d); int PGTYPESdate_fmt_asc(date dDate, char *fmtstring, char *outbuf); - The function receives the date to convert (dDate), the - format mask (fmtstring) and the string that will hold the - textual representation of the date (outbuf). + The function receives the date to convert (dDate), the + format mask (fmtstring) and the string that will hold the + textual representation of the date (outbuf). On success, 0 is returned and a negative value if an error occurred. @@ -2637,9 +2637,9 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); The function receives a pointer to the date value that should hold the - result of the operation (d), the format mask to use for - parsing the date (fmt) and the C char* string containing - the textual representation of the date (str). The textual + result of the operation (d), the format mask to use for + parsing the date (fmt) and the C char* string containing + the textual representation of the date (str). The textual representation is expected to match the format mask. However you do not need to have a 1:1 mapping of the string to the format mask. The function only analyzes the sequential order and looks for the literals @@ -2742,7 +2742,7 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); The timestamp type in C enables your programs to deal with data of the SQL type timestamp. See for the equivalent - type in the PostgreSQL server. + type in the PostgreSQL server. The following functions can be used to work with the timestamp type: @@ -2756,8 +2756,8 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); - The function receives the string to parse (str) and a - pointer to a C char* (endptr). + The function receives the string to parse (str) and a + pointer to a C char* (endptr). At the moment ECPG always parses the complete string and so it currently does not support to store the address of the first invalid character in *endptr. @@ -2765,15 +2765,15 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); The function returns the parsed timestamp on success. 
On error, - PGTYPESInvalidTimestamp is returned and errno is - set to PGTYPES_TS_BAD_TIMESTAMP. See for important notes on this value. + PGTYPESInvalidTimestamp is returned and errno is + set to PGTYPES_TS_BAD_TIMESTAMP. See for important notes on this value. In general, the input string can contain any combination of an allowed date specification, a whitespace character and an allowed time specification. Note that time zones are not supported by ECPG. It can parse them but does not apply any calculation as the - PostgreSQL server does for example. Timezone + PostgreSQL server does for example. Timezone specifiers are silently discarded. @@ -2819,7 +2819,7 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); char *PGTYPEStimestamp_to_asc(timestamp tstamp); - The function receives the timestamp tstamp as + The function receives the timestamp tstamp as its only argument and returns an allocated string that contains the textual representation of the timestamp. @@ -2835,7 +2835,7 @@ char *PGTYPEStimestamp_to_asc(timestamp tstamp); void PGTYPEStimestamp_current(timestamp *ts); The function retrieves the current timestamp and saves it into the - timestamp variable that ts points to. + timestamp variable that ts points to. @@ -2849,8 +2849,8 @@ void PGTYPEStimestamp_current(timestamp *ts); int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmtstr); The function receives a pointer to the timestamp to convert as its - first argument (ts), a pointer to the output buffer - (output), the maximal length that has been allocated for + first argument (ts), a pointer to the output buffer + (output), the maximal length that has been allocated for the output buffer (str_len) and the format mask to use for the conversion (fmtstr). @@ -2861,7 +2861,7 @@ int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmt You can use the following format specifiers for the format mask. The format specifiers are the same ones that are used in the - strftime function in libc. Any + strftime function in libc. Any non-format specifier will be copied into the output buffer. + ("). The text matching the portion of the pattern between these markers is returned. - Some examples, with #" delimiting the return string: + Some examples, with #" delimiting the return string: substring('foobar' from '%#"o_b#"%' for '#') oob substring('foobar' from '#"o_b#"%' for '#') NULL @@ -4191,7 +4191,7 @@ substring('foobar' from '#"o_b#"%' for '#') NULL POSIX regular expressions provide a more powerful means for pattern matching than the LIKE and - SIMILAR TO operators. + SIMILAR TO operators. Many Unix tools such as egrep, sed, or awk use a pattern matching language that is similar to the one described here. @@ -4228,7 +4228,7 @@ substring('foobar' from '#"o_b#"%' for '#') NULL - The substring function with two parameters, + The substring function with two parameters, substring(string from pattern), provides extraction of a substring @@ -4253,30 +4253,30 @@ substring('foobar' from 'o(.)b') o - The regexp_replace function provides substitution of + The regexp_replace function provides substitution of new text for substrings that match POSIX regular expression patterns. It has the syntax - regexp_replace(source, - pattern, replacement - , flags ). - The source string is returned unchanged if - there is no match to the pattern. If there is a - match, the source string is returned with the - replacement string substituted for the matching - substring. 
The replacement string can contain - \n, where n is 1 + regexp_replace(source, + pattern, replacement + , flags ). + The source string is returned unchanged if + there is no match to the pattern. If there is a + match, the source string is returned with the + replacement string substituted for the matching + substring. The replacement string can contain + \n, where n is 1 through 9, to indicate that the source substring matching the - n'th parenthesized subexpression of the pattern should be - inserted, and it can contain \& to indicate that the + n'th parenthesized subexpression of the pattern should be + inserted, and it can contain \& to indicate that the substring matching the entire pattern should be inserted. Write - \\ if you need to put a literal backslash in the replacement + \\ if you need to put a literal backslash in the replacement text. - The flags parameter is an optional text + The flags parameter is an optional text string containing zero or more single-letter flags that change the - function's behavior. Flag i specifies case-insensitive - matching, while flag g specifies replacement of each matching + function's behavior. Flag i specifies case-insensitive + matching, while flag g specifies replacement of each matching substring rather than only the first one. Supported flags (though - not g) are + not g) are described in . @@ -4293,22 +4293,22 @@ regexp_replace('foobarbaz', 'b(..)', E'X\\1Y', 'g') - The regexp_match function returns a text array of + The regexp_match function returns a text array of captured substring(s) resulting from the first match of a POSIX regular expression pattern to a string. It has the syntax - regexp_match(string, - pattern , flags ). - If there is no match, the result is NULL. - If a match is found, and the pattern contains no + regexp_match(string, + pattern , flags ). + If there is no match, the result is NULL. + If a match is found, and the pattern contains no parenthesized subexpressions, then the result is a single-element text array containing the substring matching the whole pattern. - If a match is found, and the pattern contains + If a match is found, and the pattern contains parenthesized subexpressions, then the result is a text array - whose n'th element is the substring matching - the n'th parenthesized subexpression of - the pattern (not counting non-capturing + whose n'th element is the substring matching + the n'th parenthesized subexpression of + the pattern (not counting non-capturing parentheses; see below for details). - The flags parameter is an optional text string + The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Supported flags are described in . @@ -4330,7 +4330,7 @@ SELECT regexp_match('foobarbequebaz', '(bar)(beque)'); (1 row) In the common case where you just want the whole matching substring - or NULL for no match, write something like + or NULL for no match, write something like SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1]; regexp_match @@ -4341,20 +4341,20 @@ SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1]; - The regexp_matches function returns a set of text arrays + The regexp_matches function returns a set of text arrays of captured substring(s) resulting from matching a POSIX regular expression pattern to a string. It has the same syntax as regexp_match. 
This function returns no rows if there is no match, one row if there is - a match and the g flag is not given, or N - rows if there are N matches and the g flag + a match and the g flag is not given, or N + rows if there are N matches and the g flag is given. Each returned row is a text array containing the whole matched substring or the substrings matching parenthesized - subexpressions of the pattern, just as described above + subexpressions of the pattern, just as described above for regexp_match. - regexp_matches accepts all the flags shown + regexp_matches accepts all the flags shown in , plus - the g flag which commands it to return all matches, not + the g flag which commands it to return all matches, not just the first one. @@ -4377,46 +4377,46 @@ SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g'); - In most cases regexp_matches() should be used with - the g flag, since if you only want the first match, it's - easier and more efficient to use regexp_match(). - However, regexp_match() only exists - in PostgreSQL version 10 and up. When working in older - versions, a common trick is to place a regexp_matches() + In most cases regexp_matches() should be used with + the g flag, since if you only want the first match, it's + easier and more efficient to use regexp_match(). + However, regexp_match() only exists + in PostgreSQL version 10 and up. When working in older + versions, a common trick is to place a regexp_matches() call in a sub-select, for example: SELECT col1, (SELECT regexp_matches(col2, '(bar)(beque)')) FROM tab; - This produces a text array if there's a match, or NULL if - not, the same as regexp_match() would do. Without the + This produces a text array if there's a match, or NULL if + not, the same as regexp_match() would do. Without the sub-select, this query would produce no output at all for table rows without a match, which is typically not the desired behavior. - The regexp_split_to_table function splits a string using a POSIX + The regexp_split_to_table function splits a string using a POSIX regular expression pattern as a delimiter. It has the syntax - regexp_split_to_table(string, pattern - , flags ). - If there is no match to the pattern, the function returns the - string. If there is at least one match, for each match it returns + regexp_split_to_table(string, pattern + , flags ). + If there is no match to the pattern, the function returns the + string. If there is at least one match, for each match it returns the text from the end of the last match (or the beginning of the string) to the beginning of the match. When there are no more matches, it returns the text from the end of the last match to the end of the string. - The flags parameter is an optional text string containing + The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. regexp_split_to_table supports the flags described in . - The regexp_split_to_array function behaves the same as - regexp_split_to_table, except that regexp_split_to_array - returns its result as an array of text. It has the syntax - regexp_split_to_array(string, pattern - , flags ). - The parameters are the same as for regexp_split_to_table. + The regexp_split_to_array function behaves the same as + regexp_split_to_table, except that regexp_split_to_array + returns its result as an array of text. It has the syntax + regexp_split_to_array(string, pattern + , flags ). + The parameters are the same as for regexp_split_to_table. 
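A quick illustration of the two split functions just described, assuming the default standard_conforming_strings setting so that the backslash reaches the regular expression engine:

SELECT regexp_split_to_table('the quick brown fox', '\s+');
-- returns four rows: the, quick, brown, fox

SELECT regexp_split_to_array('the quick brown fox', '\s+');
-- returns {the,quick,brown,fox}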
@@ -4471,8 +4471,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; zero-length matches that occur at the start or end of the string or immediately after a previous match. This is contrary to the strict definition of regexp matching that is implemented by - regexp_match and - regexp_matches, but is usually the most convenient behavior + regexp_match and + regexp_matches, but is usually the most convenient behavior in practice. Other software systems such as Perl use similar definitions. @@ -4491,16 +4491,16 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Regular expressions (REs), as defined in POSIX 1003.2, come in two forms: - extended REs or EREs + extended REs or EREs (roughly those of egrep), and - basic REs or BREs + basic REs or BREs (roughly those of ed). PostgreSQL supports both forms, and also implements some extensions that are not in the POSIX standard, but have become widely used due to their availability in programming languages such as Perl and Tcl. REs using these non-POSIX extensions are called - advanced REs or AREs + advanced REs or AREs in this documentation. AREs are almost an exact superset of EREs, but BREs have several notational incompatibilities (as well as being much more limited). @@ -4510,9 +4510,9 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - PostgreSQL always initially presumes that a regular + PostgreSQL always initially presumes that a regular expression follows the ARE rules. However, the more limited ERE or - BRE rules can be chosen by prepending an embedded option + BRE rules can be chosen by prepending an embedded option to the RE pattern, as described in . This can be useful for compatibility with applications that expect exactly the POSIX 1003.2 rules. @@ -4527,15 +4527,15 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - A branch is zero or more quantified atoms or - constraints, concatenated. + A branch is zero or more quantified atoms or + constraints, concatenated. It matches a match for the first, followed by a match for the second, etc; an empty branch matches the empty string. - A quantified atom is an atom possibly followed - by a single quantifier. + A quantified atom is an atom possibly followed + by a single quantifier. Without a quantifier, it matches a match for the atom. With a quantifier, it can match some number of matches of the atom. An atom can be any of the possibilities @@ -4545,7 +4545,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - A constraint matches an empty string, but matches only when + A constraint matches an empty string, but matches only when specific conditions are met. A constraint can be used where an atom could be used, except it cannot be followed by a quantifier. The simple constraints are shown in @@ -4567,57 +4567,57 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - (re) - (where re is any regular expression) + (re) + (where re is any regular expression) matches a match for - re, with the match noted for possible reporting + re, with the match noted for possible reporting - (?:re) + (?:re) as above, but the match is not noted for reporting - (a non-capturing set of parentheses) + (a non-capturing set of parentheses) (AREs only) - . + . 
matches any single character - [chars] - a bracket expression, - matching any one of the chars (see + [chars] + a bracket expression, + matching any one of the chars (see for more detail) - \k - (where k is a non-alphanumeric character) + \k + (where k is a non-alphanumeric character) matches that character taken as an ordinary character, - e.g., \\ matches a backslash character + e.g., \\ matches a backslash character - \c - where c is alphanumeric + \c + where c is alphanumeric (possibly followed by other characters) - is an escape, see - (AREs only; in EREs and BREs, this matches c) + is an escape, see + (AREs only; in EREs and BREs, this matches c) - { + { when followed by a character other than a digit, - matches the left-brace character {; + matches the left-brace character {; when followed by a digit, it is the beginning of a - bound (see below) + bound (see below) - x - where x is a single character with no other + x + where x is a single character with no other significance, matches that character @@ -4625,7 +4625,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - An RE cannot end with a backslash (\). + An RE cannot end with a backslash (\). @@ -4649,82 +4649,82 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - * + * a sequence of 0 or more matches of the atom - + + + a sequence of 1 or more matches of the atom - ? + ? a sequence of 0 or 1 matches of the atom - {m} - a sequence of exactly m matches of the atom + {m} + a sequence of exactly m matches of the atom - {m,} - a sequence of m or more matches of the atom + {m,} + a sequence of m or more matches of the atom - {m,n} - a sequence of m through n - (inclusive) matches of the atom; m cannot exceed - n + {m,n} + a sequence of m through n + (inclusive) matches of the atom; m cannot exceed + n - *? - non-greedy version of * + *? + non-greedy version of * - +? - non-greedy version of + + +? + non-greedy version of + - ?? - non-greedy version of ? + ?? + non-greedy version of ? - {m}? - non-greedy version of {m} + {m}? + non-greedy version of {m} - {m,}? - non-greedy version of {m,} + {m,}? + non-greedy version of {m,} - {m,n}? - non-greedy version of {m,n} + {m,n}? + non-greedy version of {m,n} - The forms using {...} - are known as bounds. - The numbers m and n within a bound are + The forms using {...} + are known as bounds. + The numbers m and n within a bound are unsigned decimal integers with permissible values from 0 to 255 inclusive. - Non-greedy quantifiers (available in AREs only) match the - same possibilities as their corresponding normal (greedy) + Non-greedy quantifiers (available in AREs only) match the + same possibilities as their corresponding normal (greedy) counterparts, but prefer the smallest number rather than the largest number of matches. See for more detail. @@ -4733,7 +4733,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; A quantifier cannot immediately follow another quantifier, e.g., - ** is invalid. + ** is invalid. A quantifier cannot begin an expression or subexpression or follow ^ or |. 
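A small sketch of the greedy/non-greedy distinction just described, using substring with a POSIX pattern:

SELECT substring('abcabc' from 'a.*c');    -- greedy: returns abcabc
SELECT substring('abcabc' from 'a.*?c');   -- non-greedy: returns abc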
@@ -4753,40 +4753,40 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - ^ + ^ matches at the beginning of the string - $ + $ matches at the end of the string - (?=re) - positive lookahead matches at any point - where a substring matching re begins + (?=re) + positive lookahead matches at any point + where a substring matching re begins (AREs only) - (?!re) - negative lookahead matches at any point - where no substring matching re begins + (?!re) + negative lookahead matches at any point + where no substring matching re begins (AREs only) - (?<=re) - positive lookbehind matches at any point - where a substring matching re ends + (?<=re) + positive lookbehind matches at any point + where a substring matching re ends (AREs only) - (?<!re) - negative lookbehind matches at any point - where no substring matching re ends + (?<!re) + negative lookbehind matches at any point + where no substring matching re ends (AREs only) @@ -4795,7 +4795,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Lookahead and lookbehind constraints cannot contain back - references (see ), + references (see ), and all parentheses within them are considered non-capturing. @@ -4808,7 +4808,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; characters enclosed in []. It normally matches any single character from the list (but see below). If the list begins with ^, it matches any single character - not from the rest of the list. + not from the rest of the list. If two characters in the list are separated by -, this is shorthand for the full range of characters between those two @@ -4853,7 +4853,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - PostgreSQL currently does not support multi-character collating + PostgreSQL currently does not support multi-character collating elements. This information describes possible future behavior. @@ -4861,7 +4861,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Within a bracket expression, a collating element enclosed in [= and =] is an equivalence - class, standing for the sequences of characters of all collating + class, standing for the sequences of characters of all collating elements equivalent to that one, including itself. (If there are no other equivalent collating elements, the treatment is as if the enclosing delimiters were [. and @@ -4896,7 +4896,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; matching empty strings at the beginning and end of a word respectively. A word is defined as a sequence of word characters that is neither preceded nor followed by word - characters. A word character is an alnum character (as + characters. A word character is an alnum character (as defined by ctype3) or an underscore. This is an extension, compatible with but not @@ -4911,44 +4911,44 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Regular Expression Escapes - Escapes are special sequences beginning with \ + Escapes are special sequences beginning with \ followed by an alphanumeric character. Escapes come in several varieties: character entry, class shorthands, constraint escapes, and back references. - A \ followed by an alphanumeric character but not constituting + A \ followed by an alphanumeric character but not constituting a valid escape is illegal in AREs. 
In EREs, there are no escapes: outside a bracket expression, - a \ followed by an alphanumeric character merely stands for + a \ followed by an alphanumeric character merely stands for that character as an ordinary character, and inside a bracket expression, - \ is an ordinary character. + \ is an ordinary character. (The latter is the one actual incompatibility between EREs and AREs.) - Character-entry escapes exist to make it easier to specify + Character-entry escapes exist to make it easier to specify non-printing and other inconvenient characters in REs. They are shown in . - Class-shorthand escapes provide shorthands for certain + Class-shorthand escapes provide shorthands for certain commonly-used character classes. They are shown in . - A constraint escape is a constraint, + A constraint escape is a constraint, matching the empty string if specific conditions are met, written as an escape. They are shown in . - A back reference (\n) matches the + A back reference (\n) matches the same string matched by the previous parenthesized subexpression specified - by the number n + by the number n (see ). For example, - ([bc])\1 matches bb or cc - but not bc or cb. + ([bc])\1 matches bb or cc + but not bc or cb. The subexpression must entirely precede the back reference in the RE. Subexpressions are numbered in the order of their leading parentheses. Non-capturing parentheses do not define subexpressions. @@ -4967,122 +4967,122 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - \a + \a alert (bell) character, as in C - \b + \b backspace, as in C - \B - synonym for backslash (\) to help reduce the need for backslash + \B + synonym for backslash (\) to help reduce the need for backslash doubling - \cX - (where X is any character) the character whose + \cX + (where X is any character) the character whose low-order 5 bits are the same as those of - X, and whose other bits are all zero + X, and whose other bits are all zero - \e + \e the character whose collating-sequence name - is ESC, - or failing that, the character with octal value 033 + is ESC, + or failing that, the character with octal value 033 - \f + \f form feed, as in C - \n + \n newline, as in C - \r + \r carriage return, as in C - \t + \t horizontal tab, as in C - \uwxyz - (where wxyz is exactly four hexadecimal digits) + \uwxyz + (where wxyz is exactly four hexadecimal digits) the character whose hexadecimal value is - 0xwxyz + 0xwxyz - \Ustuvwxyz - (where stuvwxyz is exactly eight hexadecimal + \Ustuvwxyz + (where stuvwxyz is exactly eight hexadecimal digits) the character whose hexadecimal value is - 0xstuvwxyz + 0xstuvwxyz - \v + \v vertical tab, as in C - \xhhh - (where hhh is any sequence of hexadecimal + \xhhh + (where hhh is any sequence of hexadecimal digits) the character whose hexadecimal value is - 0xhhh + 0xhhh (a single character no matter how many hexadecimal digits are used) - \0 - the character whose value is 0 (the null byte) + \0 + the character whose value is 0 (the null byte) - \xy - (where xy is exactly two octal digits, - and is not a back reference) + \xy + (where xy is exactly two octal digits, + and is not a back reference) the character whose octal value is - 0xy + 0xy - \xyz - (where xyz is exactly three octal digits, - and is not a back reference) + \xyz + (where xyz is exactly three octal digits, + and is not a back reference) the character whose octal value is - 0xyz + 0xyz - Hexadecimal digits are 0-9, - a-f, and A-F. - Octal digits are 0-7. 
+ Hexadecimal digits are 0-9, + a-f, and A-F. + Octal digits are 0-7. Numeric character-entry escapes specifying values outside the ASCII range (0-127) have meanings dependent on the database encoding. When the encoding is UTF-8, escape values are equivalent to Unicode code points, - for example \u1234 means the character U+1234. + for example \u1234 means the character U+1234. For other multibyte encodings, character-entry escapes usually just specify the concatenation of the byte values for the character. If the escape value does not correspond to any legal character in the database @@ -5091,8 +5091,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; The character-entry escapes are always taken as ordinary characters. - For example, \135 is ] in ASCII, but - \135 does not terminate a bracket expression. + For example, \135 is ] in ASCII, but + \135 does not terminate a bracket expression. @@ -5108,34 +5108,34 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - \d - [[:digit:]] + \d + [[:digit:]] - \s - [[:space:]] + \s + [[:space:]] - \w - [[:alnum:]_] + \w + [[:alnum:]_] (note underscore is included) - \D - [^[:digit:]] + \D + [^[:digit:]] - \S - [^[:space:]] + \S + [^[:space:]] - \W - [^[:alnum:]_] + \W + [^[:alnum:]_] (note underscore is included) @@ -5143,13 +5143,13 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo;
- Within bracket expressions, \d, \s, - and \w lose their outer brackets, - and \D, \S, and \W are illegal. - (So, for example, [a-c\d] is equivalent to - [a-c[:digit:]]. - Also, [a-c\D], which is equivalent to - [a-c^[:digit:]], is illegal.) + Within bracket expressions, \d, \s, + and \w lose their outer brackets, + and \D, \S, and \W are illegal. + (So, for example, [a-c\d] is equivalent to + [a-c[:digit:]]. + Also, [a-c\D], which is equivalent to + [a-c^[:digit:]], is illegal.) @@ -5165,38 +5165,38 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - \A + \A matches only at the beginning of the string (see for how this differs from - ^) + ^) - \m + \m matches only at the beginning of a word - \M + \M matches only at the end of a word - \y + \y matches only at the beginning or end of a word - \Y + \Y matches only at a point that is not the beginning or end of a word - \Z + \Z matches only at the end of the string (see for how this differs from - $) + $) @@ -5204,7 +5204,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; A word is defined as in the specification of - [[:<:]] and [[:>:]] above. + [[:<:]] and [[:>:]] above. Constraint escapes are illegal within bracket expressions. @@ -5221,18 +5221,18 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - \m - (where m is a nonzero digit) - a back reference to the m'th subexpression + \m + (where m is a nonzero digit) + a back reference to the m'th subexpression - \mnn - (where m is a nonzero digit, and - nn is some more digits, and the decimal value - mnn is not greater than the number of closing capturing + \mnn + (where m is a nonzero digit, and + nn is some more digits, and the decimal value + mnn is not greater than the number of closing capturing parentheses seen so far) - a back reference to the mnn'th subexpression + a back reference to the mnn'th subexpression @@ -5263,29 +5263,29 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - An RE can begin with one of two special director prefixes. - If an RE begins with ***:, + An RE can begin with one of two special director prefixes. + If an RE begins with ***:, the rest of the RE is taken as an ARE. (This normally has no effect in - PostgreSQL, since REs are assumed to be AREs; + PostgreSQL, since REs are assumed to be AREs; but it does have an effect if ERE or BRE mode had been specified by - the flags parameter to a regex function.) - If an RE begins with ***=, + the flags parameter to a regex function.) + If an RE begins with ***=, the rest of the RE is taken to be a literal string, with all characters considered ordinary characters. - An ARE can begin with embedded options: - a sequence (?xyz) - (where xyz is one or more alphabetic characters) + An ARE can begin with embedded options: + a sequence (?xyz) + (where xyz is one or more alphabetic characters) specifies options affecting the rest of the RE. These options override any previously determined options — in particular, they can override the case-sensitivity behavior implied by - a regex operator, or the flags parameter to a regex + a regex operator, or the flags parameter to a regex function. The available option letters are shown in . - Note that these same option letters are used in the flags + Note that these same option letters are used in the flags parameters of regex functions. 
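For instance, case-insensitive matching can be requested either by embedding the i option in the pattern or by passing it through the flags parameter of a regex function; a brief sketch using regexp_match (both forms behave the same way):

SELECT regexp_match('QUICK BROWN FOX', '(?i)quick');
Result: {QUICK}

SELECT regexp_match('QUICK BROWN FOX', 'quick', 'i');
Result: {QUICK}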
@@ -5302,67 +5302,67 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - b + b rest of RE is a BRE - c + c case-sensitive matching (overrides operator type) - e + e rest of RE is an ERE - i + i case-insensitive matching (see ) (overrides operator type) - m - historical synonym for n + m + historical synonym for n - n + n newline-sensitive matching (see ) - p + p partial newline-sensitive matching (see ) - q - rest of RE is a literal (quoted) string, all ordinary + q + rest of RE is a literal (quoted) string, all ordinary characters - s + s non-newline-sensitive matching (default) - t + t tight syntax (default; see below) - w - inverse partial newline-sensitive (weird) matching + w + inverse partial newline-sensitive (weird) matching (see ) - x + x expanded syntax (see below) @@ -5370,18 +5370,18 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo;
- Embedded options take effect at the ) terminating the sequence. + Embedded options take effect at the ) terminating the sequence. They can appear only at the start of an ARE (after the - ***: director if any). + ***: director if any). - In addition to the usual (tight) RE syntax, in which all - characters are significant, there is an expanded syntax, - available by specifying the embedded x option. + In addition to the usual (tight) RE syntax, in which all + characters are significant, there is an expanded syntax, + available by specifying the embedded x option. In the expanded syntax, white-space characters in the RE are ignored, as are - all characters between a # + all characters between a # and the following newline (or the end of the RE). This permits paragraphing and commenting a complex RE. There are three exceptions to that basic rule: @@ -5389,41 +5389,41 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - a white-space character or # preceded by \ is + a white-space character or # preceded by \ is retained - white space or # within a bracket expression is retained + white space or # within a bracket expression is retained white space and comments cannot appear within multi-character symbols, - such as (?: + such as (?: For this purpose, white-space characters are blank, tab, newline, and - any character that belongs to the space character class. + any character that belongs to the space character class. Finally, in an ARE, outside bracket expressions, the sequence - (?#ttt) - (where ttt is any text not containing a )) + (?#ttt) + (where ttt is any text not containing a )) is a comment, completely ignored. Again, this is not allowed between the characters of - multi-character symbols, like (?:. + multi-character symbols, like (?:. Such comments are more a historical artifact than a useful facility, and their use is deprecated; use the expanded syntax instead. - None of these metasyntax extensions is available if - an initial ***= director + None of these metasyntax extensions is available if + an initial ***= director has specified that the user's input be treated as a literal string rather than as an RE. @@ -5437,8 +5437,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; string, the RE matches the one starting earliest in the string. If the RE could match more than one substring starting at that point, either the longest possible match or the shortest possible match will - be taken, depending on whether the RE is greedy or - non-greedy. + be taken, depending on whether the RE is greedy or + non-greedy.
@@ -5458,39 +5458,39 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; A quantified atom with a fixed-repetition quantifier - ({m} + ({m} or - {m}?) + {m}?) has the same greediness (possibly none) as the atom itself. A quantified atom with other normal quantifiers (including - {m,n} - with m equal to n) + {m,n} + with m equal to n) is greedy (prefers longest match). A quantified atom with a non-greedy quantifier (including - {m,n}? - with m equal to n) + {m,n}? + with m equal to n) is non-greedy (prefers shortest match). A branch — that is, an RE that has no top-level - | operator — has the same greediness as the first + | operator — has the same greediness as the first quantified atom in it that has a greediness attribute. An RE consisting of two or more branches connected by the - | operator is always greedy. + | operator is always greedy.
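To illustrate the last rule above, a short sketch (the result assumes the longest-match behavior of a greedy RE as just described): because the RE contains a top-level | operator it is greedy, so the longer branch wins even though the shorter branch is listed first:

SELECT substring('weeknights' from 'wee|week');
Result: week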
@@ -5501,7 +5501,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; quantified atoms, but with branches and entire REs that contain quantified atoms. What that means is that the matching is done in such a way that the branch, or whole RE, matches the longest or shortest possible - substring as a whole. Once the length of the entire match + substring as a whole. Once the length of the entire match is determined, the part of it that matches any particular subexpression is determined on the basis of the greediness attribute of that subexpression, with subexpressions starting earlier in the RE taking @@ -5516,16 +5516,16 @@ SELECT SUBSTRING('XY1234Z', 'Y*([0-9]{1,3})'); SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); Result: 1 - In the first case, the RE as a whole is greedy because Y* - is greedy. It can match beginning at the Y, and it matches - the longest possible string starting there, i.e., Y123. - The output is the parenthesized part of that, or 123. - In the second case, the RE as a whole is non-greedy because Y*? - is non-greedy. It can match beginning at the Y, and it matches - the shortest possible string starting there, i.e., Y1. - The subexpression [0-9]{1,3} is greedy but it cannot change + In the first case, the RE as a whole is greedy because Y* + is greedy. It can match beginning at the Y, and it matches + the longest possible string starting there, i.e., Y123. + The output is the parenthesized part of that, or 123. + In the second case, the RE as a whole is non-greedy because Y*? + is non-greedy. It can match beginning at the Y, and it matches + the shortest possible string starting there, i.e., Y1. + The subexpression [0-9]{1,3} is greedy but it cannot change the decision as to the overall match length; so it is forced to match - just 1. + just 1. @@ -5533,11 +5533,11 @@ SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); the total match length is either as long as possible or as short as possible, according to the attribute assigned to the whole RE. The attributes assigned to the subexpressions only affect how much of that - match they are allowed to eat relative to each other. + match they are allowed to eat relative to each other. - The quantifiers {1,1} and {1,1}? + The quantifiers {1,1} and {1,1}? can be used to force greediness or non-greediness, respectively, on a subexpression or a whole RE. This is useful when you need the whole RE to have a greediness attribute @@ -5549,8 +5549,8 @@ SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); SELECT regexp_match('abc01234xyz', '(.*)(\d+)(.*)'); Result: {abc0123,4,xyz} - That didn't work: the first .* is greedy so - it eats as much as it can, leaving the \d+ to + That didn't work: the first .* is greedy so + it eats as much as it can, leaving the \d+ to match at the last possible place, the last digit. We might try to fix that by making it non-greedy: @@ -5573,14 +5573,14 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); match lengths are measured in characters, not collating elements. An empty string is considered longer than no match at all. 
For example: - bb* - matches the three middle characters of abbbc; - (week|wee)(night|knights) - matches all ten characters of weeknights; - when (.*).* - is matched against abc the parenthesized subexpression + bb* + matches the three middle characters of abbbc; + (week|wee)(night|knights) + matches all ten characters of weeknights; + when (.*).* + is matched against abc the parenthesized subexpression matches all three characters; and when - (a*)* is matched against bc + (a*)* is matched against bc both the whole RE and the parenthesized subexpression match an empty string. @@ -5592,38 +5592,38 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); When an alphabetic that exists in multiple cases appears as an ordinary character outside a bracket expression, it is effectively transformed into a bracket expression containing both cases, - e.g., x becomes [xX]. + e.g., x becomes [xX]. When it appears inside a bracket expression, all case counterparts of it are added to the bracket expression, e.g., - [x] becomes [xX] - and [^x] becomes [^xX]. + [x] becomes [xX] + and [^x] becomes [^xX]. - If newline-sensitive matching is specified, . - and bracket expressions using ^ + If newline-sensitive matching is specified, . + and bracket expressions using ^ will never match the newline character (so that matches will never cross newlines unless the RE explicitly arranges it) - and ^ and $ + and ^ and $ will match the empty string after and before a newline respectively, in addition to matching at beginning and end of string respectively. - But the ARE escapes \A and \Z - continue to match beginning or end of string only. + But the ARE escapes \A and \Z + continue to match beginning or end of string only. If partial newline-sensitive matching is specified, - this affects . and bracket expressions - as with newline-sensitive matching, but not ^ - and $. + this affects . and bracket expressions + as with newline-sensitive matching, but not ^ + and $. If inverse partial newline-sensitive matching is specified, - this affects ^ and $ - as with newline-sensitive matching, but not . + this affects ^ and $ + as with newline-sensitive matching, but not . and bracket expressions. This isn't very useful but is provided for symmetry. @@ -5642,18 +5642,18 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); The only feature of AREs that is actually incompatible with - POSIX EREs is that \ does not lose its special + POSIX EREs is that \ does not lose its special significance inside bracket expressions. All other ARE features use syntax which is illegal or has undefined or unspecified effects in POSIX EREs; - the *** syntax of directors likewise is outside the POSIX + the *** syntax of directors likewise is outside the POSIX syntax for both BREs and EREs. Many of the ARE extensions are borrowed from Perl, but some have been changed to clean them up, and a few Perl extensions are not present. 
- Incompatibilities of note include \b, \B, + Incompatibilities of note include \b, \B, the lack of special treatment for a trailing newline, the addition of complemented bracket expressions to the things affected by newline-sensitive matching, @@ -5664,12 +5664,12 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Two significant incompatibilities exist between AREs and the ERE syntax - recognized by pre-7.4 releases of PostgreSQL: + recognized by pre-7.4 releases of PostgreSQL: - In AREs, \ followed by an alphanumeric character is either + In AREs, \ followed by an alphanumeric character is either an escape or an error, while in previous releases, it was just another way of writing the alphanumeric. This should not be much of a problem because there was no reason to @@ -5678,9 +5678,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); - In AREs, \ remains a special character within - [], so a literal \ within a bracket - expression must be written \\. + In AREs, \ remains a special character within + [], so a literal \ within a bracket + expression must be written \\. @@ -5692,27 +5692,27 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); BREs differ from EREs in several respects. - In BREs, |, +, and ? + In BREs, |, +, and ? are ordinary characters and there is no equivalent for their functionality. The delimiters for bounds are - \{ and \}, - with { and } + \{ and \}, + with { and } by themselves ordinary characters. The parentheses for nested subexpressions are - \( and \), - with ( and ) by themselves ordinary characters. - ^ is an ordinary character except at the beginning of the + \( and \), + with ( and ) by themselves ordinary characters. + ^ is an ordinary character except at the beginning of the RE or the beginning of a parenthesized subexpression, - $ is an ordinary character except at the end of the + $ is an ordinary character except at the end of the RE or the end of a parenthesized subexpression, - and * is an ordinary character if it appears at the beginning + and * is an ordinary character if it appears at the beginning of the RE or the beginning of a parenthesized subexpression - (after a possible leading ^). + (after a possible leading ^). Finally, single-digit back references are available, and - \< and \> + \< and \> are synonyms for - [[:<:]] and [[:>:]] + [[:<:]] and [[:>:]] respectively; no other escapes are available in BREs. @@ -5839,13 +5839,13 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); exist to handle input formats that cannot be converted by simple casting. For most standard date/time formats, simply casting the source string to the required data type works, and is much easier. - Similarly, to_number is unnecessary for standard numeric + Similarly, to_number is unnecessary for standard numeric representations. - In a to_char output template string, there are certain + In a to_char output template string, there are certain patterns that are recognized and replaced with appropriately-formatted data based on the given value. Any text that is not a template pattern is simply copied verbatim. 
Similarly, in an input template string (for the @@ -6022,11 +6022,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); D - day of the week, Sunday (1) to Saturday (7) + day of the week, Sunday (1) to Saturday (7) ID - ISO 8601 day of the week, Monday (1) to Sunday (7) + ISO 8601 day of the week, Monday (1) to Sunday (7) W @@ -6063,17 +6063,17 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); TZ upper case time-zone abbreviation - (only supported in to_char) + (only supported in to_char) tz lower case time-zone abbreviation - (only supported in to_char) + (only supported in to_char) OF time-zone offset from UTC - (only supported in to_char) + (only supported in to_char) @@ -6107,12 +6107,12 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); TH suffix upper case ordinal number suffix - DDTH, e.g., 12TH + DDTH, e.g., 12TH th suffix lower case ordinal number suffix - DDth, e.g., 12th + DDth, e.g., 12th FX prefix @@ -6153,7 +6153,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); TM does not include trailing blanks. - to_timestamp and to_date ignore + to_timestamp and to_date ignore the TM modifier. @@ -6179,9 +6179,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); even if it contains pattern key words. For example, in '"Hello Year "YYYY', the YYYY will be replaced by the year data, but the single Y in Year - will not be. In to_date, to_number, - and to_timestamp, double-quoted strings skip the number of - input characters contained in the string, e.g. "XX" + will not be. In to_date, to_number, + and to_timestamp, double-quoted strings skip the number of + input characters contained in the string, e.g. "XX" skips two input characters. @@ -6198,9 +6198,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); In to_timestamp and to_date, if the year format specification is less than four digits, e.g. - YYY, and the supplied year is less than four digits, + YYY, and the supplied year is less than four digits, the year will be adjusted to be nearest to the year 2020, e.g. - 95 becomes 1995. + 95 becomes 1995. @@ -6269,7 +6269,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Attempting to enter a date using a mixture of ISO 8601 week-numbering fields and Gregorian date fields is nonsensical, and will cause an error. In the context of an ISO 8601 week-numbering year, the - concept of a month or day of month has no + concept of a month or day of month has no meaning. In the context of a Gregorian year, the ISO week has no meaning. @@ -6278,8 +6278,8 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); While to_date will reject a mixture of Gregorian and ISO week-numbering date fields, to_char will not, since output format - specifications like YYYY-MM-DD (IYYY-IDDD) can be - useful. But avoid writing something like IYYY-MM-DD; + specifications like YYYY-MM-DD (IYYY-IDDD) can be + useful. But avoid writing something like IYYY-MM-DD; that would yield surprising results near the start of the year. (See for more information.) @@ -6323,11 +6323,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); - to_char(interval) formats HH and - HH12 as shown on a 12-hour clock, for example zero hours - and 36 hours both output as 12, while HH24 + to_char(interval) formats HH and + HH12 as shown on a 12-hour clock, for example zero hours + and 36 hours both output as 12, while HH24 outputs the full hour value, which can exceed 23 in - an interval value. + an interval value. 
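A compact demonstration of the preceding note (the result strings simply restate the behavior described there):

SELECT to_char(interval '36 hours', 'HH12:MI:SS');
Result: 12:00:00

SELECT to_char(interval '36 hours', 'HH24:MI:SS');
Result: 36:00:00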
@@ -6423,19 +6423,19 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); - 0 specifies a digit position that will always be printed, - even if it contains a leading/trailing zero. 9 also + 0 specifies a digit position that will always be printed, + even if it contains a leading/trailing zero. 9 also specifies a digit position, but if it is a leading zero then it will be replaced by a space, while if it is a trailing zero and fill mode - is specified then it will be deleted. (For to_number(), + is specified then it will be deleted. (For to_number(), these two pattern characters are equivalent.) - The pattern characters S, L, D, - and G represent the sign, currency symbol, decimal point, + The pattern characters S, L, D, + and G represent the sign, currency symbol, decimal point, and thousands separator characters defined by the current locale (see and ). The pattern characters period @@ -6447,9 +6447,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); If no explicit provision is made for a sign - in to_char()'s pattern, one column will be reserved for + in to_char()'s pattern, one column will be reserved for the sign, and it will be anchored to (appear just left of) the - number. If S appears just left of some 9's, + number. If S appears just left of some 9's, it will likewise be anchored to the number. @@ -6742,7 +6742,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); inputs actually come in two variants: one that takes time with time zone or timestamp with time zone, and one that takes time without time zone or timestamp without time zone. For brevity, these variants are not shown separately. Also, the - + and * operators come in commutative pairs (for + + and * operators come in commutative pairs (for example both date + integer and integer + date); we show only one of each such pair. @@ -6899,7 +6899,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); age(timestamp, timestamp) interval - Subtract arguments, producing a symbolic result that + Subtract arguments, producing a symbolic result that uses years and months, rather than just days age(timestamp '2001-04-10', timestamp '1957-06-13') 43 years 9 mons 27 days @@ -7109,7 +7109,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); justify_interval(interval) interval - Adjust interval using justify_days and justify_hours, with additional sign adjustments + Adjust interval using justify_days and justify_hours, with additional sign adjustments justify_interval(interval '1 mon -1 hour') 29 days 23:00:00 @@ -7302,7 +7302,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); text Current date and time - (like clock_timestamp, but as a text string); + (like clock_timestamp, but as a text string); see @@ -7344,7 +7344,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); OVERLAPS - In addition to these functions, the SQL OVERLAPS operator is + In addition to these functions, the SQL OVERLAPS operator is supported: (start1, end1) OVERLAPS (start2, end2) @@ -7355,11 +7355,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); can be specified as pairs of dates, times, or time stamps; or as a date, time, or time stamp followed by an interval. When a pair of values is provided, either the start or the end can be written - first; OVERLAPS automatically takes the earlier value + first; OVERLAPS automatically takes the earlier value of the pair as the start. 
Each time period is considered to - represent the half-open interval start <= - time < end, unless - start and end are equal in which case it + represent the half-open interval start <= + time < end, unless + start and end are equal in which case it represents that single time instant. This means for instance that two time periods with only an endpoint in common do not overlap. @@ -7398,31 +7398,31 @@ SELECT (DATE '2001-10-30', DATE '2001-10-30') OVERLAPS - Note there can be ambiguity in the months field returned by - age because different months have different numbers of - days. PostgreSQL's approach uses the month from the + Note there can be ambiguity in the months field returned by + age because different months have different numbers of + days. PostgreSQL's approach uses the month from the earlier of the two dates when calculating partial months. For example, - age('2004-06-01', '2004-04-30') uses April to yield - 1 mon 1 day, while using May would yield 1 mon 2 - days because May has 31 days, while April has only 30. + age('2004-06-01', '2004-04-30') uses April to yield + 1 mon 1 day, while using May would yield 1 mon 2 + days because May has 31 days, while April has only 30. Subtraction of dates and timestamps can also be complex. One conceptually simple way to perform subtraction is to convert each value to a number - of seconds using EXTRACT(EPOCH FROM ...), then subtract the + of seconds using EXTRACT(EPOCH FROM ...), then subtract the results; this produces the - number of seconds between the two values. This will adjust + number of seconds between the two values. This will adjust for the number of days in each month, timezone changes, and daylight saving time adjustments. Subtraction of date or timestamp - values with the - operator + values with the - operator returns the number of days (24-hours) and hours/minutes/seconds - between the values, making the same adjustments. The age + between the values, making the same adjustments. The age function returns years, months, days, and hours/minutes/seconds, performing field-by-field subtraction and then adjusting for negative field values. The following queries illustrate the differences in these approaches. The sample results were produced with timezone - = 'US/Eastern'; there is a daylight saving time change between the + = 'US/Eastern'; there is a daylight saving time change between the two dates used: @@ -7534,8 +7534,8 @@ SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40'); dow - The day of the week as Sunday (0) to - Saturday (6) + The day of the week as Sunday (0) to + Saturday (6) @@ -7587,7 +7587,7 @@ SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours'); You can convert an epoch value back to a time stamp - with to_timestamp: + with to_timestamp: SELECT to_timestamp(982384720.12); @@ -7614,8 +7614,8 @@ SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40'); isodow - The day of the week as Monday (1) to - Sunday (7) + The day of the week as Monday (1) to + Sunday (7) @@ -7623,8 +7623,8 @@ SELECT EXTRACT(ISODOW FROM TIMESTAMP '2001-02-18 20:38:40'); Result: 7 - This is identical to dow except for Sunday. This - matches the ISO 8601 day of the week numbering. + This is identical to dow except for Sunday. This + matches the ISO 8601 day of the week numbering. 
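Shown side by side for the same date as the isodow example above (2001-02-18 is a Sunday), the two numberings differ only for Sunday:

SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-18 20:38:40');
Result: 0

SELECT EXTRACT(ISODOW FROM TIMESTAMP '2001-02-18 20:38:40');
Result: 7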
@@ -7819,11 +7819,11 @@ SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); In the ISO week-numbering system, it is possible for early-January dates to be part of the 52nd or 53rd week of the previous year, and for late-December dates to be part of the first week of the next year. - For example, 2005-01-01 is part of the 53rd week of year - 2004, and 2006-01-01 is part of the 52nd week of year - 2005, while 2012-12-31 is part of the first week of 2013. - It's recommended to use the isoyear field together with - week to get consistent results. + For example, 2005-01-01 is part of the 53rd week of year + 2004, and 2006-01-01 is part of the 52nd week of year + 2005, while 2012-12-31 is part of the first week of 2013. + It's recommended to use the isoyear field together with + week to get consistent results. @@ -7837,8 +7837,8 @@ SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40'); year - The year field. Keep in mind there is no 0 AD, so subtracting - BC years from AD years should be done with care. + The year field. Keep in mind there is no 0 AD, so subtracting + BC years from AD years should be done with care. @@ -7853,11 +7853,11 @@ SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); - When the input value is +/-Infinity, extract returns - +/-Infinity for monotonically-increasing fields (epoch, - julian, year, isoyear, - decade, century, and millennium). - For other fields, NULL is returned. PostgreSQL + When the input value is +/-Infinity, extract returns + +/-Infinity for monotonically-increasing fields (epoch, + julian, year, isoyear, + decade, century, and millennium). + For other fields, NULL is returned. PostgreSQL versions before 9.6 returned zero for all cases of infinite input. @@ -7908,13 +7908,13 @@ SELECT date_part('hour', INTERVAL '4 hours 3 minutes'); date_trunc('field', source) source is a value expression of type - timestamp or interval. + timestamp or interval. (Values of type date and time are cast automatically to timestamp or - interval, respectively.) + interval, respectively.) field selects to which precision to truncate the input value. The return value is of type - timestamp or interval + timestamp or interval with all fields that are less significant than the selected one set to zero (or one, for day and month). @@ -7983,34 +7983,34 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); - timestamp without time zone AT TIME ZONE zone + timestamp without time zone AT TIME ZONE zone timestamp with time zone - Treat given time stamp without time zone as located in the specified time zone + Treat given time stamp without time zone as located in the specified time zone - timestamp with time zone AT TIME ZONE zone + timestamp with time zone AT TIME ZONE zone timestamp without time zone - Convert given time stamp with time zone to the new time + Convert given time stamp with time zone to the new time zone, with no time zone designation - time with time zone AT TIME ZONE zone + time with time zone AT TIME ZONE zone time with time zone - Convert given time with time zone to the new time zone + Convert given time with time zone to the new time zone - In these expressions, the desired time zone zone can be + In these expressions, the desired time zone zone can be specified either as a text string (e.g., 'PST') or as an interval (e.g., INTERVAL '-08:00'). 
In the text case, a time zone name can be specified in any of the ways @@ -8018,7 +8018,7 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); - Examples (assuming the local time zone is PST8PDT): + Examples (assuming the local time zone is PST8PDT): SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST'; Result: 2001-02-16 19:38:40-08 @@ -8032,10 +8032,10 @@ SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST'; - The function timezone(zone, - timestamp) is equivalent to the SQL-conforming construct - timestamp AT TIME ZONE - zone. + The function timezone(zone, + timestamp) is equivalent to the SQL-conforming construct + timestamp AT TIME ZONE + zone. @@ -8140,23 +8140,23 @@ now() - transaction_timestamp() is equivalent to + transaction_timestamp() is equivalent to CURRENT_TIMESTAMP, but is named to clearly reflect what it returns. - statement_timestamp() returns the start time of the current + statement_timestamp() returns the start time of the current statement (more specifically, the time of receipt of the latest command message from the client). - statement_timestamp() and transaction_timestamp() + statement_timestamp() and transaction_timestamp() return the same value during the first command of a transaction, but might differ during subsequent commands. - clock_timestamp() returns the actual current time, and + clock_timestamp() returns the actual current time, and therefore its value changes even within a single SQL command. - timeofday() is a historical + timeofday() is a historical PostgreSQL function. Like - clock_timestamp(), it returns the actual current time, - but as a formatted text string rather than a timestamp - with time zone value. - now() is a traditional PostgreSQL + clock_timestamp(), it returns the actual current time, + but as a formatted text string rather than a timestamp + with time zone value. + now() is a traditional PostgreSQL equivalent to transaction_timestamp(). @@ -8174,7 +8174,7 @@ SELECT TIMESTAMP 'now'; -- incorrect for use with DEFAULT - You do not want to use the third form when specifying a DEFAULT + You do not want to use the third form when specifying a DEFAULT clause while creating a table. The system will convert now to a timestamp as soon as the constant is parsed, so that when the default value is needed, @@ -8210,16 +8210,16 @@ SELECT TIMESTAMP 'now'; -- incorrect for use with DEFAULT process: pg_sleep(seconds) -pg_sleep_for(interval) -pg_sleep_until(timestamp with time zone) +pg_sleep_for(interval) +pg_sleep_until(timestamp with time zone) pg_sleep makes the current session's process sleep until seconds seconds have elapsed. seconds is a value of type - double precision, so fractional-second delays can be specified. + double precision, so fractional-second delays can be specified. pg_sleep_for is a convenience function for larger - sleep times specified as an interval. + sleep times specified as an interval. pg_sleep_until is a convenience function for when a specific wake-up time is desired. For example: @@ -8341,7 +8341,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - Notice that except for the two-argument form of enum_range, + Notice that except for the two-argument form of enum_range, these functions disregard the specific value passed to them; they care only about its declared data type. Either null or a specific value of the type can be passed, with the same result. 
It is more common to @@ -8365,13 +8365,13 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - Note that the same as operator, ~=, represents + Note that the same as operator, ~=, represents the usual notion of equality for the point, box, polygon, and circle types. - Some of these types also have an = operator, but - = compares - for equal areas only. The other scalar comparison operators - (<= and so on) likewise compare areas for these types. + Some of these types also have an = operator, but + = compares + for equal areas only. The other scalar comparison operators + (<= and so on) likewise compare areas for these types. @@ -8548,8 +8548,8 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple Before PostgreSQL 8.2, the containment - operators @> and <@ were respectively - called ~ and @. These names are still + operators @> and <@ were respectively + called ~ and @. These names are still available, but are deprecated and will eventually be removed. @@ -8604,67 +8604,67 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - area(object) + area(object) double precision area area(box '((0,0),(1,1))') - center(object) + center(object) point center center(box '((0,0),(1,2))') - diameter(circle) + diameter(circle) double precision diameter of circle diameter(circle '((0,0),2.0)') - height(box) + height(box) double precision vertical size of box height(box '((0,0),(1,1))') - isclosed(path) + isclosed(path) boolean a closed path? isclosed(path '((0,0),(1,1),(2,0))') - isopen(path) + isopen(path) boolean an open path? isopen(path '[(0,0),(1,1),(2,0)]') - length(object) + length(object) double precision length length(path '((-1,0),(1,0))') - npoints(path) + npoints(path) int number of points npoints(path '[(0,0),(1,1),(2,0)]') - npoints(polygon) + npoints(polygon) int number of points npoints(polygon '((1,1),(0,0))') - pclose(path) + pclose(path) path convert path to closed pclose(path '[(0,0),(1,1),(2,0)]') - popen(path) + popen(path) path convert path to open popen(path '((0,0),(1,1),(2,0))') @@ -8676,7 +8676,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple radius(circle '((0,0),2.0)') - width(box) + width(box) double precision horizontal size of box width(box '((0,0),(1,1))') @@ -8859,13 +8859,13 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - It is possible to access the two component numbers of a point + It is possible to access the two component numbers of a point as though the point were an array with indexes 0 and 1. For example, if - t.p is a point column then - SELECT p[0] FROM t retrieves the X coordinate and - UPDATE t SET p[1] = ... changes the Y coordinate. - In the same way, a value of type box or lseg can be treated - as an array of two point values. + t.p is a point column then + SELECT p[0] FROM t retrieves the X coordinate and + UPDATE t SET p[1] = ... changes the Y coordinate. + In the same way, a value of type box or lseg can be treated + as an array of two point values. @@ -9188,19 +9188,19 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - Any cidr value can be cast to inet implicitly + Any cidr value can be cast to inet implicitly or explicitly; therefore, the functions shown above as operating on - inet also work on cidr values. (Where there are - separate functions for inet and cidr, it is because + inet also work on cidr values. 
(Where there are + separate functions for inet and cidr, it is because the behavior should be different for the two cases.) - Also, it is permitted to cast an inet value to cidr. + Also, it is permitted to cast an inet value to cidr. When this is done, any bits to the right of the netmask are silently zeroed - to create a valid cidr value. + to create a valid cidr value. In addition, - you can cast a text value to inet or cidr + you can cast a text value to inet or cidr using normal casting syntax: for example, - inet(expression) or - colname::cidr. + inet(expression) or + colname::cidr. @@ -9345,64 +9345,64 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple @@ - boolean - tsvector matches tsquery ? + boolean + tsvector matches tsquery ? to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat') t @@@ - boolean - deprecated synonym for @@ + boolean + deprecated synonym for @@ to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat') t || - tsvector - concatenate tsvectors + tsvector + concatenate tsvectors 'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector 'a':1 'b':2,5 'c':3 'd':4 && - tsquery - AND tsquerys together + tsquery + AND tsquerys together 'fat | rat'::tsquery && 'cat'::tsquery ( 'fat' | 'rat' ) & 'cat' || - tsquery - OR tsquerys together + tsquery + OR tsquerys together 'fat | rat'::tsquery || 'cat'::tsquery ( 'fat' | 'rat' ) | 'cat' !! - tsquery - negate a tsquery + tsquery + negate a tsquery !! 'cat'::tsquery !'cat' <-> - tsquery - tsquery followed by tsquery + tsquery + tsquery followed by tsquery to_tsquery('fat') <-> to_tsquery('rat') 'fat' <-> 'rat' @> - boolean - tsquery contains another ? + boolean + tsquery contains another ? 'cat'::tsquery @> 'cat & rat'::tsquery f <@ - boolean - tsquery is contained in ? + boolean + tsquery is contained in ? 'cat'::tsquery <@ 'cat & rat'::tsquery t @@ -9412,15 +9412,15 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - The tsquery containment operators consider only the lexemes + The tsquery containment operators consider only the lexemes listed in the two queries, ignoring the combining operators. In addition to the operators shown in the table, the ordinary B-tree - comparison operators (=, <, etc) are defined - for types tsvector and tsquery. These are not very + comparison operators (=, <, etc) are defined + for types tsvector and tsquery. These are not very useful for text searching but allow, for example, unique indexes to be built on columns of these types. 
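A short sketch of the containment note above — only the lexemes are compared, so the combining operators in the two queries make no difference (expected results under that rule):

SELECT 'cat & rat'::tsquery @> 'cat | rat'::tsquery;
Result: t

SELECT 'cat & rat'::tsquery @> 'cat & dog'::tsquery;
Result: f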
@@ -9443,7 +9443,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple array_to_tsvector - array_to_tsvector(text[]) + array_to_tsvector(text[]) tsvector convert array of lexemes to tsvector @@ -9467,10 +9467,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple length - length(tsvector) + length(tsvector) integer - number of lexemes in tsvector + number of lexemes in tsvector length('fat:2,4 cat:3 rat:5A'::tsvector) 3 @@ -9479,10 +9479,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple numnode - numnode(tsquery) + numnode(tsquery) integer - number of lexemes plus operators in tsquery + number of lexemes plus operators in tsquery numnode('(fat & rat) | cat'::tsquery) 5 @@ -9491,10 +9491,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple plainto_tsquery - plainto_tsquery( config regconfig , query text) + plainto_tsquery( config regconfig , query text) tsquery - produce tsquery ignoring punctuation + produce tsquery ignoring punctuation plainto_tsquery('english', 'The Fat Rats') 'fat' & 'rat' @@ -9503,10 +9503,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple phraseto_tsquery - phraseto_tsquery( config regconfig , query text) + phraseto_tsquery( config regconfig , query text) tsquery - produce tsquery that searches for a phrase, + produce tsquery that searches for a phrase, ignoring punctuation phraseto_tsquery('english', 'The Fat Rats') 'fat' <-> 'rat' @@ -9516,10 +9516,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple querytree - querytree(query tsquery) + querytree(query tsquery) text - get indexable part of a tsquery + get indexable part of a tsquery querytree('foo & ! 
bar'::tsquery) 'foo' @@ -9528,7 +9528,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple setweight - setweight(vector tsvector, weight "char") + setweight(vector tsvector, weight "char") tsvector assign weight to each element of vector @@ -9541,7 +9541,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple setweight setweight for specific lexeme(s) - setweight(vector tsvector, weight "char", lexemes text[]) + setweight(vector tsvector, weight "char", lexemes text[]) tsvector assign weight to elements of vector that are listed in lexemes @@ -9553,10 +9553,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple strip - strip(tsvector) + strip(tsvector) tsvector - remove positions and weights from tsvector + remove positions and weights from tsvector strip('fat:2,4 cat:3 rat:5A'::tsvector) 'cat' 'fat' 'rat' @@ -9565,10 +9565,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple to_tsquery - to_tsquery( config regconfig , query text) + to_tsquery( config regconfig , query text) tsquery - normalize words and convert to tsquery + normalize words and convert to tsquery to_tsquery('english', 'The & Fat & Rats') 'fat' & 'rat' @@ -9577,21 +9577,21 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple to_tsvector - to_tsvector( config regconfig , document text) + to_tsvector( config regconfig , document text) tsvector - reduce document text to tsvector + reduce document text to tsvector to_tsvector('english', 'The Fat Rats') 'fat':2 'rat':3 - to_tsvector( config regconfig , document json(b)) + to_tsvector( config regconfig , document json(b)) tsvector - reduce each string value in the document to a tsvector, and then - concatenate those in document order to produce a single tsvector + reduce each string value in the document to a tsvector, and then + concatenate those in document order to produce a single tsvector to_tsvector('english', '{"a": "The Fat Rats"}'::json) 'fat':2 'rat':3 @@ -9601,7 +9601,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_delete - ts_delete(vector tsvector, lexeme text) + ts_delete(vector tsvector, lexeme text) tsvector remove given lexeme from vector @@ -9611,7 +9611,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - ts_delete(vector tsvector, lexemes text[]) + ts_delete(vector tsvector, lexemes text[]) tsvector remove any occurrence of lexemes in lexemes from vector @@ -9623,7 +9623,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_filter - ts_filter(vector tsvector, weights "char"[]) + ts_filter(vector tsvector, weights "char"[]) tsvector select only elements with given weights from vector @@ -9635,7 +9635,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_headline - ts_headline( config regconfig, document text, query tsquery , options text ) + ts_headline( config regconfig, document text, query tsquery , options text ) text display a query match @@ -9644,7 +9644,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - ts_headline( config regconfig, document json(b), query tsquery , options text ) + ts_headline( config regconfig, document json(b), query tsquery , options text ) text display a query match @@ -9656,7 +9656,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rank - ts_rank( weights 
float4[], vector tsvector, query tsquery , normalization integer ) + ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) float4 rank document for query @@ -9668,7 +9668,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rank_cd - ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) + ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) float4 rank document for query using cover density @@ -9680,18 +9680,18 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rewrite - ts_rewrite(query tsquery, target tsquery, substitute tsquery) + ts_rewrite(query tsquery, target tsquery, substitute tsquery) tsquery - replace target with substitute + replace target with substitute within query ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo|bar'::tsquery) 'b' & ( 'foo' | 'bar' ) - ts_rewrite(query tsquery, select text) + ts_rewrite(query tsquery, select text) tsquery - replace using targets and substitutes from a SELECT command + replace using targets and substitutes from a SELECT command SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases') 'b' & ( 'foo' | 'bar' ) @@ -9700,22 +9700,22 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsquery_phrase - tsquery_phrase(query1 tsquery, query2 tsquery) + tsquery_phrase(query1 tsquery, query2 tsquery) tsquery - make query that searches for query1 followed - by query2 (same as <-> + make query that searches for query1 followed + by query2 (same as <-> operator) tsquery_phrase(to_tsquery('fat'), to_tsquery('cat')) 'fat' <-> 'cat' - tsquery_phrase(query1 tsquery, query2 tsquery, distance integer) + tsquery_phrase(query1 tsquery, query2 tsquery, distance integer) tsquery - make query that searches for query1 followed by - query2 at distance distance + make query that searches for query1 followed by + query2 at distance distance tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10) 'fat' <10> 'cat' @@ -9724,10 +9724,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsvector_to_array - tsvector_to_array(tsvector) + tsvector_to_array(tsvector) text[] - convert tsvector to array of lexemes + convert tsvector to array of lexemes tsvector_to_array('fat:2,4 cat:3 rat:5A'::tsvector) {cat,fat,rat} @@ -9739,7 +9739,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsvector_update_trigger() trigger - trigger function for automatic tsvector column update + trigger function for automatic tsvector column update CREATE TRIGGER ... tsvector_update_trigger(tsvcol, 'pg_catalog.swedish', title, body) @@ -9751,7 +9751,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsvector_update_trigger_column() trigger - trigger function for automatic tsvector column update + trigger function for automatic tsvector column update CREATE TRIGGER ... 
tsvector_update_trigger_column(tsvcol, configcol, title, body) @@ -9761,7 +9761,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple unnest for tsvector - unnest(tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text) + unnest(tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text) setof record expand a tsvector to a set of rows @@ -9774,7 +9774,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - All the text search functions that accept an optional regconfig + All the text search functions that accept an optional regconfig argument will use the configuration specified by when that argument is omitted. @@ -9807,7 +9807,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_debug - ts_debug( config regconfig, document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[]) + ts_debug( config regconfig, document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[]) setof record test a configuration @@ -9819,7 +9819,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_lexize - ts_lexize(dict regdictionary, token text) + ts_lexize(dict regdictionary, token text) text[] test a dictionary @@ -9831,7 +9831,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_parse - ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) + ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) setof record test a parser @@ -9839,7 +9839,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple (1,foo) ... - ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) + ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) setof record test a parser ts_parse(3722, 'foo - bar') @@ -9850,7 +9850,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_token_type - ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) + ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) setof record get token types defined by parser @@ -9858,7 +9858,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple (1,asciiword,"Word, all ASCII") ... - ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) + ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) setof record get token types defined by parser ts_token_type(3722) @@ -9869,10 +9869,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_stat - ts_stat(sqlquery text, weights text, OUT word text, OUT ndoc integer, OUT nentry integer) + ts_stat(sqlquery text, weights text, OUT word text, OUT ndoc integer, OUT nentry integer) setof record - get statistics of a tsvector column + get statistics of a tsvector column ts_stat('SELECT vector from apod') (foo,10,15) ... @@ -9894,7 +9894,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple and xmlserialize for converting to and from type xml are not repeated here. Use of most of these functions requires the installation to have been built - with configure --with-libxml. + with configure --with-libxml. 
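For instance, assuming a hypothetical table docs(title, body), several of the text search functions listed in the preceding table can be combined to weight, match, and rank documents (this is only an illustrative sketch, not part of the reference entries above):

-- docs(title, body) is a hypothetical example table
SELECT title,
       ts_rank(setweight(to_tsvector('english', title), 'A') ||
               to_tsvector('english', body),
               to_tsquery('english', 'fat & rat')) AS relevance
FROM docs
WHERE to_tsvector('english', title || ' ' || body) @@ to_tsquery('english', 'fat & rat')
ORDER BY relevance DESC;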
@@ -10246,7 +10246,7 @@ SELECT xmlagg(x) FROM test; - To determine the order of the concatenation, an ORDER BY + To determine the order of the concatenation, an ORDER BY clause may be added to the aggregate call as described in . For example: @@ -10365,18 +10365,18 @@ SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF 'Tor - These functions check whether a text string is well-formed XML, + These functions check whether a text string is well-formed XML, returning a Boolean result. xml_is_well_formed_document checks for a well-formed document, while xml_is_well_formed_content checks for well-formed content. xml_is_well_formed does the former if the configuration - parameter is set to DOCUMENT, or the latter if it is set to - CONTENT. This means that + parameter is set to DOCUMENT, or the latter if it is set to + CONTENT. This means that xml_is_well_formed is useful for seeing whether - a simple cast to type xml will succeed, whereas the other two + a simple cast to type xml will succeed, whereas the other two functions are useful for seeing whether the corresponding variants of - XMLPARSE will succeed. + XMLPARSE will succeed. @@ -10446,7 +10446,7 @@ SELECT xml_is_well_formed_document(' The optional third argument of the function is an array of namespace - mappings. This array should be a two-dimensional text array with + mappings. This array should be a two-dimensional text array with the length of the second axis being equal to 2 (i.e., it should be an array of arrays, each of which consists of exactly 2 elements). The first element of each array entry is the namespace name (alias), the second the namespace URI. It is not required that aliases provided in this array be the same as those being used in the XML document itself (in other words, both in the XML document and in the xpath - function context, aliases are local). + function context, aliases are local). @@ -10514,7 +10514,7 @@ SELECT xpath('//mydefns:b/text()', 'testxpath function. Instead of returning the individual XML values that satisfy the XPath, this function returns a Boolean indicating whether the query was satisfied or not. This - function is equivalent to the standard XMLEXISTS predicate, + function is equivalent to the standard XMLEXISTS predicate, except that it also offers support for a namespace mapping argument. @@ -10560,21 +10560,21 @@ SELECT xpath_exists('/my:a/text()', 'test - The optional XMLNAMESPACES clause is a comma-separated + The optional XMLNAMESPACES clause is a comma-separated list of namespaces. It specifies the XML namespaces used in the document and their aliases. A default namespace specification is not currently supported. - The required row_expression argument is an XPath + The required row_expression argument is an XPath expression that is evaluated against the supplied XML document to obtain an ordered sequence of XML nodes. This sequence is what - xmltable transforms into output rows. + xmltable transforms into output rows. - document_expression provides the XML document to + document_expression provides the XML document to operate on. The BY REF clauses have no effect in PostgreSQL, but are allowed for SQL conformance and compatibility with other @@ -10586,9 +10586,9 @@ SELECT xpath_exists('/my:a/text()', 'test The mandatory COLUMNS clause specifies the list of columns in the output table. - If the COLUMNS clause is omitted, the rows in the result - set contain a single column of type xml containing the - data matched by row_expression. 
+ If the COLUMNS clause is omitted, the rows in the result + set contain a single column of type xml containing the + data matched by row_expression. If COLUMNS is specified, each entry describes a single column. See the syntax summary above for the format. @@ -10604,10 +10604,10 @@ SELECT xpath_exists('/my:a/text()', 'test - The column_expression for a column is an XPath expression + The column_expression for a column is an XPath expression that is evaluated for each row, relative to the result of the - row_expression, to find the value of the column. - If no column_expression is given, then the column name + row_expression, to find the value of the column. + If no column_expression is given, then the column name is used as an implicit path. @@ -10615,55 +10615,55 @@ SELECT xpath_exists('/my:a/text()', 'testNULL). - Any xsi:nil attributes are ignored. + empty string (not NULL). + Any xsi:nil attributes are ignored. - The text body of the XML matched by the column_expression + The text body of the XML matched by the column_expression is used as the column value. Multiple text() nodes within an element are concatenated in order. Any child elements, processing instructions, and comments are ignored, but the text contents of child elements are concatenated to the result. - Note that the whitespace-only text() node between two non-text - elements is preserved, and that leading whitespace on a text() + Note that the whitespace-only text() node between two non-text + elements is preserved, and that leading whitespace on a text() node is not flattened. If the path expression does not match for a given row but - default_expression is specified, the value resulting + default_expression is specified, the value resulting from evaluating that expression is used. - If no DEFAULT clause is given for the column, - the field will be set to NULL. - It is possible for a default_expression to reference + If no DEFAULT clause is given for the column, + the field will be set to NULL. + It is possible for a default_expression to reference the value of output columns that appear prior to it in the column list, so the default of one column may be based on the value of another column. - Columns may be marked NOT NULL. If the - column_expression for a NOT NULL column - does not match anything and there is no DEFAULT or the - default_expression also evaluates to null, an error + Columns may be marked NOT NULL. If the + column_expression for a NOT NULL column + does not match anything and there is no DEFAULT or the + default_expression also evaluates to null, an error is reported. - Unlike regular PostgreSQL functions, column_expression - and default_expression are not evaluated to a simple + Unlike regular PostgreSQL functions, column_expression + and default_expression are not evaluated to a simple value before calling the function. - column_expression is normally evaluated - exactly once per input row, and default_expression + column_expression is normally evaluated + exactly once per input row, and default_expression is evaluated each time a default is needed for a field. If the expression qualifies as stable or immutable the repeat evaluation may be skipped. - Effectively xmltable behaves more like a subquery than a + Effectively xmltable behaves more like a subquery than a function call. 
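As a rough illustration of the column rules just described, the following sketch assumes a hypothetical table xmldata with an xml column named data; it produces one output row per row element, applying a default when the name element is missing:

-- xmldata(data xml) is a hypothetical example table
SELECT xt.*
FROM xmldata,
     XMLTABLE('//row' PASSING data
              COLUMNS id   int  PATH '@id',
                      name text PATH 'name' DEFAULT 'unknown') AS xt;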
This means that you can usefully use volatile functions like - nextval in default_expression, and - column_expression may depend on other parts of the + nextval in default_expression, and + column_expression may depend on other parts of the XML document. @@ -11029,7 +11029,7 @@ table2-mapping - <type>json</> and <type>jsonb</> Operators + <type>json</type> and <type>jsonb</type> Operators @@ -11059,14 +11059,14 @@ table2-mapping ->> int - Get JSON array element as text + Get JSON array element as text '[1,2,3]'::json->>2 3 ->> text - Get JSON object field as text + Get JSON object field as text '{"a":1,"b":2}'::json->>'b' 2 @@ -11080,7 +11080,7 @@ table2-mapping #>> text[] - Get JSON object at specified path as text + Get JSON object at specified path as text '{"a":[1,2,3],"b":[4,5,6]}'::json#>>'{a,2}' 3 @@ -11095,7 +11095,7 @@ table2-mapping The field/element/path extraction operators return the same type as their left-hand input (either json or jsonb), except for those specified as - returning text, which coerce the value to text. + returning text, which coerce the value to text. The field/element/path extraction operators return NULL, rather than failing, if the JSON input does not have the right structure to match the request; for example if no such element exists. The @@ -11115,14 +11115,14 @@ table2-mapping Some further operators also exist only for jsonb, as shown in . Many of these operators can be indexed by - jsonb operator classes. For a full description of - jsonb containment and existence semantics, see jsonb operator classes. For a full description of + jsonb containment and existence semantics, see . describes how these operators can be used to effectively index - jsonb data. + jsonb data.
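A few concrete invocations of these operators, using only inline literals so no table is assumed:

SELECT '{"a":[1,2,3],"b":{"c":4}}'::jsonb #>> '{b,c}';   -- 4, returned as text
SELECT '{"a":1,"b":2}'::jsonb @> '{"a":1}'::jsonb;       -- containment: true
SELECT '{"a":1,"b":2}'::jsonb ? 'b';                     -- key existence: true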
- Additional <type>jsonb</> Operators + Additional <type>jsonb</type> Operators @@ -11211,7 +11211,7 @@ table2-mapping - The || operator concatenates the elements at the top level of + The || operator concatenates the elements at the top level of each of its operands. It does not operate recursively. For example, if both operands are objects with a common key field name, the value of the field in the result will just be the value from the right hand operand. @@ -11221,8 +11221,8 @@ table2-mapping shows the functions that are available for creating json and jsonb values. - (There are no equivalent functions for jsonb, of the row_to_json - and array_to_json functions. However, the to_jsonb + (There are no equivalent functions for jsonb, of the row_to_json + and array_to_json functions. However, the to_jsonb function supplies much the same functionality as these functions would.) @@ -11274,14 +11274,14 @@ table2-mapping to_jsonb(anyelement) - Returns the value as json or jsonb. + Returns the value as json or jsonb. Arrays and composites are converted (recursively) to arrays and objects; otherwise, if there is a cast from the type to json, the cast function will be used to perform the conversion; otherwise, a scalar value is produced. For any scalar type other than a number, a Boolean, or a null value, the text representation will be used, in such a fashion that it is a - valid json or jsonb value. + valid json or jsonb value. to_json('Fred said "Hi."'::text) "Fred said \"Hi.\"" @@ -11343,8 +11343,8 @@ table2-mapping such that each inner array has exactly two elements, which are taken as a key/value pair. - json_object('{a, 1, b, "def", c, 3.5}') - json_object('{{a, 1},{b, "def"},{c, 3.5}}') + json_object('{a, 1, b, "def", c, 3.5}') + json_object('{{a, 1},{b, "def"},{c, 3.5}}') {"a": "1", "b": "def", "c": "3.5"} @@ -11352,7 +11352,7 @@ table2-mapping jsonb_object(keys text[], values text[]) - This form of json_object takes keys and values pairwise from two separate + This form of json_object takes keys and values pairwise from two separate arrays. In all other respects it is identical to the one-argument form. json_object('{a, b}', '{1,2}') @@ -11364,9 +11364,9 @@ table2-mapping - array_to_json and row_to_json have the same - behavior as to_json except for offering a pretty-printing - option. The behavior described for to_json likewise applies + array_to_json and row_to_json have the same + behavior as to_json except for offering a pretty-printing + option. The behavior described for to_json likewise applies to each individual value converted by the other JSON creation functions. @@ -11530,7 +11530,7 @@ table2-mapping setof key text, value text Expands the outermost JSON object into a set of key/value pairs. The - returned values will be of type text. + returned values will be of type text. select * from json_each_text('{"a":"foo", "b":"bar"}') @@ -11562,7 +11562,7 @@ table2-mapping text Returns JSON value pointed to by path_elems - as text + as text (equivalent to #>> operator). json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6') @@ -11593,7 +11593,7 @@ table2-mapping anyelement Expands the object in from_json to a row - whose columns match the record type defined by base + whose columns match the record type defined by base (see note below). 
select * from json_populate_record(null::myrowtype, '{"a": 1, "b": ["2", "a b"], "c": {"d": 4, "e": "a b c"}}') @@ -11613,7 +11613,7 @@ table2-mapping Expands the outermost array of objects in from_json to a set of rows whose - columns match the record type defined by base (see + columns match the record type defined by base (see note below). select * from json_populate_recordset(null::myrowtype, '[{"a":1,"b":2},{"a":3,"b":4}]') @@ -11653,7 +11653,7 @@ table2-mapping setof text - Expands a JSON array to a set of text values. + Expands a JSON array to a set of text values. select * from json_array_elements_text('["foo", "bar"]') @@ -11673,8 +11673,8 @@ table2-mapping Returns the type of the outermost JSON value as a text string. Possible types are - object, array, string, number, - boolean, and null. + object, array, string, number, + boolean, and null. json_typeof('-123.4') number @@ -11686,8 +11686,8 @@ table2-mapping record Builds an arbitrary record from a JSON object (see note below). As - with all functions returning record, the caller must - explicitly define the structure of the record with an AS + with all functions returning record, the caller must + explicitly define the structure of the record with an AS clause. select * from json_to_record('{"a":1,"b":[1,2,3],"c":[1,2,3],"e":"bar","r": {"a": 123, "b": "a b c"}}') as x(a int, b text, c int[], d text, r myrowtype) @@ -11706,9 +11706,9 @@ table2-mapping setof record Builds an arbitrary set of records from a JSON array of objects (see - note below). As with all functions returning record, the + note below). As with all functions returning record, the caller must explicitly define the structure of the record with - an AS clause. + an AS clause. select * from json_to_recordset('[{"a":1,"b":"foo"},{"a":"2","c":"bar"}]') as x(a int, b text); @@ -11743,7 +11743,7 @@ table2-mapping replaced by new_value, or with new_value added if create_missing is true ( default is - true) and the item + true) and the item designated by path does not exist. As with the path orientated operators, negative integers that appear in path count from the end @@ -11770,7 +11770,7 @@ table2-mapping path is in a JSONB array, new_value will be inserted before target or after if insert_after is true (default is - false). If target section + false). If target section designated by path is in JSONB object, new_value will be inserted only if target does not exist. As with the path @@ -11820,17 +11820,17 @@ table2-mapping Many of these functions and operators will convert Unicode escapes in JSON strings to the appropriate single character. This is a non-issue - if the input is type jsonb, because the conversion was already - done; but for json input, this may result in throwing an error, + if the input is type jsonb, because the conversion was already + done; but for json input, this may result in throwing an error, as noted in . - In json_populate_record, json_populate_recordset, - json_to_record and json_to_recordset, - type coercion from the JSON is best effort and may not result + In json_populate_record, json_populate_recordset, + json_to_record and json_to_recordset, + type coercion from the JSON is best effort and may not result in desired values for some types. JSON keys are matched to identical column names in the target row type. 
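For example, the path-based modification functions described above behave as follows (all values are literals):

SELECT jsonb_set('{"a":1,"b":{"c":2}}'::jsonb, '{b,c}', '5'::jsonb);
  -- {"a": 1, "b": {"c": 5}}
SELECT jsonb_insert('{"a":[1,2,3]}'::jsonb, '{a,1}', '9'::jsonb);
  -- {"a": [1, 9, 2, 3]}   (inserted before the element at index 1)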
JSON fields that do not appear in the target row type will be omitted from the output, and @@ -11840,18 +11840,18 @@ table2-mapping - All the items of the path parameter of jsonb_set - as well as jsonb_insert except the last item must be present - in the target. If create_missing is false, all - items of the path parameter of jsonb_set must be - present. If these conditions are not met the target is + All the items of the path parameter of jsonb_set + as well as jsonb_insert except the last item must be present + in the target. If create_missing is false, all + items of the path parameter of jsonb_set must be + present. If these conditions are not met the target is returned unchanged. If the last path item is an object key, it will be created if it is absent and given the new value. If the last path item is an array index, if it is positive the item to set is found by counting from - the left, and if negative by counting from the right - -1 + the left, and if negative by counting from the right - -1 designates the rightmost element, and so on. If the item is out of the range -array_length .. array_length -1, and create_missing is true, the new value is added at the beginning @@ -11862,20 +11862,20 @@ table2-mapping - The json_typeof function's null return value + The json_typeof function's null return value should not be confused with a SQL NULL. While - calling json_typeof('null'::json) will - return null, calling json_typeof(NULL::json) + calling json_typeof('null'::json) will + return null, calling json_typeof(NULL::json) will return a SQL NULL. - If the argument to json_strip_nulls contains duplicate + If the argument to json_strip_nulls contains duplicate field names in any object, the result could be semantically somewhat different, depending on the order in which they occur. This is not an - issue for jsonb_strip_nulls since jsonb values never have + issue for jsonb_strip_nulls since jsonb values never have duplicate object field names. @@ -11886,7 +11886,7 @@ table2-mapping values as JSON, and the aggregate function json_object_agg which aggregates pairs of values into a JSON object, and their jsonb equivalents, - jsonb_agg and jsonb_object_agg. + jsonb_agg and jsonb_object_agg. @@ -11963,52 +11963,52 @@ table2-mapping The sequence to be operated on by a sequence function is specified by - a regclass argument, which is simply the OID of the sequence in the - pg_class system catalog. You do not have to look up the - OID by hand, however, since the regclass data type's input + a regclass argument, which is simply the OID of the sequence in the + pg_class system catalog. You do not have to look up the + OID by hand, however, since the regclass data type's input converter will do the work for you. Just write the sequence name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string will be converted to lower case unless it contains double quotes around the sequence name. 
Thus: -nextval('foo') operates on sequence foo -nextval('FOO') operates on sequence foo -nextval('"Foo"') operates on sequence Foo +nextval('foo') operates on sequence foo +nextval('FOO') operates on sequence foo +nextval('"Foo"') operates on sequence Foo The sequence name can be schema-qualified if necessary: -nextval('myschema.foo') operates on myschema.foo +nextval('myschema.foo') operates on myschema.foo nextval('"myschema".foo') same as above -nextval('foo') searches search path for foo +nextval('foo') searches search path for foo See for more information about - regclass. + regclass. Before PostgreSQL 8.1, the arguments of the - sequence functions were of type text, not regclass, and + sequence functions were of type text, not regclass, and the above-described conversion from a text string to an OID value would happen at run time during each call. For backward compatibility, this facility still exists, but internally it is now handled as an implicit - coercion from text to regclass before the function is + coercion from text to regclass before the function is invoked. When you write the argument of a sequence function as an unadorned - literal string, it becomes a constant of type regclass. + literal string, it becomes a constant of type regclass. Since this is really just an OID, it will track the originally identified sequence despite later renaming, schema reassignment, - etc. This early binding behavior is usually desirable for + etc. This early binding behavior is usually desirable for sequence references in column defaults and views. But sometimes you might - want late binding where the sequence reference is resolved + want late binding where the sequence reference is resolved at run time. To get late-binding behavior, force the constant to be - stored as a text constant instead of regclass: + stored as a text constant instead of regclass: -nextval('foo'::text) foo is looked up at runtime +nextval('foo'::text) foo is looked up at runtime Note that late binding was the only behavior supported in PostgreSQL releases before 8.1, so you @@ -12051,14 +12051,14 @@ nextval('foo'::text) foo is looked up at rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends - up not using the value. For example an INSERT with - an ON CONFLICT clause will compute the to-be-inserted + up not using the value. For example an INSERT with + an ON CONFLICT clause will compute the to-be-inserted tuple, including doing any required nextval calls, before detecting any conflict that would cause it to follow - the ON CONFLICT rule instead. Such cases will leave + the ON CONFLICT rule instead. Such cases will leave unused holes in the sequence of assigned values. - Thus, PostgreSQL sequence objects cannot - be used to obtain gapless sequences. + Thus, PostgreSQL sequence objects cannot + be used to obtain gapless sequences. @@ -12094,7 +12094,7 @@ nextval('foo'::text) foo is looked up at Return the value most recently returned by - nextval in the current session. This function is + nextval in the current session. 
This function is identical to currval, except that instead of taking the sequence name as an argument it refers to whichever sequence nextval was most recently applied to @@ -12119,20 +12119,20 @@ nextval('foo'::text) foo is looked up at specified value and sets its is_called field to true, meaning that the next nextval will advance the sequence before - returning a value. The value reported by currval is + returning a value. The value reported by currval is also set to the specified value. In the three-parameter form, is_called can be set to either true - or false. true has the same effect as + or false. true has the same effect as the two-parameter form. If it is set to false, the next nextval will return exactly the specified value, and sequence advancement commences with the following nextval. Furthermore, the value reported by - currval is not changed in this case. For example, + currval is not changed in this case. For example, -SELECT setval('foo', 42); Next nextval will return 43 +SELECT setval('foo', 42); Next nextval will return 43 SELECT setval('foo', 42, true); Same as above -SELECT setval('foo', 42, false); Next nextval will return 42 +SELECT setval('foo', 42, false); Next nextval will return 42 The result returned by setval is just the value of its @@ -12183,7 +12183,7 @@ SELECT setval('foo', 42, false); Next nextval wi - <literal>CASE</> + <literal>CASE</literal> The SQL CASE expression is a @@ -12206,7 +12206,7 @@ END condition's result is not true, any subsequent WHEN clauses are examined in the same manner. If no WHEN condition yields true, the value of the - CASE expression is the result of the + CASE expression is the result of the ELSE clause. If the ELSE clause is omitted and no condition is true, the result is null. @@ -12245,7 +12245,7 @@ SELECT a, - There is a simple form of CASE expression + There is a simple form of CASE expression that is a variant of the general form above: @@ -12299,7 +12299,7 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; situations in which subexpressions of an expression are evaluated at different times, so that the principle that CASE evaluates only necessary subexpressions is not ironclad. For - example a constant 1/0 subexpression will usually result in + example a constant 1/0 subexpression will usually result in a division-by-zero failure at planning time, even if it's within a CASE arm that would never be entered at run time. @@ -12307,7 +12307,7 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; - <literal>COALESCE</> + <literal>COALESCE</literal> COALESCE @@ -12333,8 +12333,8 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; SELECT COALESCE(description, short_description, '(none)') ... - This returns description if it is not null, otherwise - short_description if it is not null, otherwise (none). + This returns description if it is not null, otherwise + short_description if it is not null, otherwise (none). @@ -12342,13 +12342,13 @@ SELECT COALESCE(description, short_description, '(none)') ... evaluates the arguments that are needed to determine the result; that is, arguments to the right of the first non-null argument are not evaluated. This SQL-standard function provides capabilities similar - to NVL and IFNULL, which are used in some other + to NVL and IFNULL, which are used in some other database systems. - <literal>NULLIF</> + <literal>NULLIF</literal> NULLIF @@ -12369,7 +12369,7 @@ SELECT NULLIF(value, '(none)') ... 
- In this example, if value is (none), + In this example, if value is (none), null is returned, otherwise the value of value is returned. @@ -12394,7 +12394,7 @@ SELECT NULLIF(value, '(none)') ... - The GREATEST and LEAST functions select the + The GREATEST and LEAST functions select the largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which will be the type of the result @@ -12404,7 +12404,7 @@ SELECT NULLIF(value, '(none)') ... - Note that GREATEST and LEAST are not in + Note that GREATEST and LEAST are not in the SQL standard, but are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all are NULL. @@ -12534,7 +12534,7 @@ SELECT NULLIF(value, '(none)') ... If the contents of two arrays are equal but the dimensionality is different, the first difference in the dimensionality information determines the sort order. (This is a change from versions of - PostgreSQL prior to 8.2: older versions would claim + PostgreSQL prior to 8.2: older versions would claim that two arrays with the same contents were equal, even if the number of dimensions or subscript ranges were different.) @@ -12833,7 +12833,7 @@ NULL baz(3 rows)
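A couple of quick illustrations of the conditional expressions described above, using only literal values:

SELECT COALESCE(NULL, NULL, 'fallback');          -- 'fallback'
SELECT NULLIF('(none)', '(none)');                -- NULL
SELECT GREATEST(1, 2, 3), LEAST(1, 2, NULL);      -- 3 and 1 (the NULL is ignored)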
- In array_position and array_positions, + In array_position and array_positions, each array element is compared to the searched value using IS NOT DISTINCT FROM semantics. @@ -12868,8 +12868,8 @@ NULL baz(3 rows) - There are two differences in the behavior of string_to_array - from pre-9.1 versions of PostgreSQL. + There are two differences in the behavior of string_to_array + from pre-9.1 versions of PostgreSQL. First, it will return an empty (zero-element) array rather than NULL when the input string is of zero length. Second, if the delimiter string is NULL, the function splits the input into individual characters, rather @@ -13198,7 +13198,7 @@ NULL baz(3 rows) - The lower and upper functions return null + The lower and upper functions return null if the range is empty or the requested bound is infinite. The lower_inc, upper_inc, lower_inf, and upper_inf @@ -13550,7 +13550,7 @@ NULL baz(3 rows) smallint, int, bigint, real, double precision, numeric, - interval, or money + interval, or money bigint for smallint or @@ -13647,7 +13647,7 @@ SELECT count(*) FROM sometable; aggregate functions, produce meaningfully different result values depending on the order of the input values. This ordering is unspecified by default, but can be controlled by writing an - ORDER BY clause within the aggregate call, as shown in + ORDER BY clause within the aggregate call, as shown in . Alternatively, supplying the input values from a sorted subquery will usually work. For example: @@ -14082,9 +14082,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; shows some - aggregate functions that use the ordered-set aggregate + aggregate functions that use the ordered-set aggregate syntax. These functions are sometimes referred to as inverse - distribution functions. + distribution functions. @@ -14249,7 +14249,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; window function of the same name defined in . In each case, the aggregate result is the value that the associated window function would have - returned for the hypothetical row constructed from + returned for the hypothetical row constructed from args, if such a row had been added to the sorted group of rows computed from the sorted_args. @@ -14280,10 +14280,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; rank(args) WITHIN GROUP (ORDER BY sorted_args) - VARIADIC "any" + VARIADIC "any" - VARIADIC "any" + VARIADIC "any" bigint @@ -14303,10 +14303,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; dense_rank(args) WITHIN GROUP (ORDER BY sorted_args) - VARIADIC "any" + VARIADIC "any" - VARIADIC "any" + VARIADIC "any" bigint @@ -14326,10 +14326,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; percent_rank(args) WITHIN GROUP (ORDER BY sorted_args) - VARIADIC "any" + VARIADIC "any" - VARIADIC "any" + VARIADIC "any" double precision @@ -14349,10 +14349,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; cume_dist(args) WITHIN GROUP (ORDER BY sorted_args) - VARIADIC "any" + VARIADIC "any" - VARIADIC "any" + VARIADIC "any" double precision @@ -14360,7 +14360,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; No relative rank of the hypothetical row, ranging from - 1/N to 1 + 1/N to 1 @@ -14374,7 +14374,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; the aggregated arguments given in sorted_args. 
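For example, assuming a hypothetical table exam(score int), an inverse distribution function and a hypothetical-set aggregate can be written with the ordered-set syntax as:

-- exam(score) is a hypothetical example table
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY score) AS median,
       rank(42)            WITHIN GROUP (ORDER BY score) AS rank_of_42
FROM exam;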
Unlike most built-in aggregates, these aggregates are not strict, that is they do not drop input rows containing nulls. Null values sort according - to the rule specified in the ORDER BY clause. + to the rule specified in the ORDER BY clause. @@ -14413,14 +14413,14 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; Grouping operations are used in conjunction with grouping sets (see ) to distinguish result rows. The - arguments to the GROUPING operation are not actually evaluated, - but they must match exactly expressions given in the GROUP BY + arguments to the GROUPING operation are not actually evaluated, + but they must match exactly expressions given in the GROUP BY clause of the associated query level. Bits are assigned with the rightmost argument being the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the result row, and 1 if it is not. For example: -=> SELECT * FROM items_sold; +=> SELECT * FROM items_sold; make | model | sales -------+-------+------- Foo | GT | 10 @@ -14429,7 +14429,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; Bar | Sport | 5 (4 rows) -=> SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model); +=> SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model); make | model | grouping | sum -------+-------+----------+----- Foo | GT | 0 | 10 @@ -14464,8 +14464,8 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; The built-in window functions are listed in . Note that these functions - must be invoked using window function syntax, i.e., an - OVER clause is required. + must be invoked using window function syntax, i.e., an + OVER clause is required. @@ -14474,7 +14474,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; aggregate (i.e., not ordered-set or hypothetical-set aggregates) can be used as a window function; see for a list of the built-in aggregates. - Aggregate functions act as window functions only when an OVER + Aggregate functions act as window functions only when an OVER clause follows the call; otherwise they act as non-window aggregates and return a single row for the entire set. 
@@ -14515,7 +14515,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; bigint - rank of the current row with gaps; same as row_number of its first peer + rank of the current row with gaps; same as row_number of its first peer @@ -14541,7 +14541,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; double precision - relative rank of the current row: (rank - 1) / (total partition rows - 1) + relative rank of the current row: (rank - 1) / (total partition rows - 1) @@ -14562,7 +14562,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; ntile - ntile(num_buckets integer) + ntile(num_buckets integer) integer @@ -14577,9 +14577,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; lag - lag(value anyelement - [, offset integer - [, default anyelement ]]) + lag(value anyelement + [, offset integer + [, default anyelement ]]) @@ -14606,9 +14606,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; lead - lead(value anyelement - [, offset integer - [, default anyelement ]]) + lead(value anyelement + [, offset integer + [, default anyelement ]]) @@ -14634,7 +14634,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; first_value - first_value(value any) + first_value(value any) same type as value @@ -14650,7 +14650,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; last_value - last_value(value any) + last_value(value any) same type as value @@ -14667,7 +14667,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; nth_value - nth_value(value any, nth integer) + nth_value(value any, nth integer) @@ -14686,22 +14686,22 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; All of the functions listed in depend on the sort ordering - specified by the ORDER BY clause of the associated window + specified by the ORDER BY clause of the associated window definition. Rows that are not distinct when considering only the - ORDER BY columns are said to be peers. - The four ranking functions (including cume_dist) are + ORDER BY columns are said to be peers. + The four ranking functions (including cume_dist) are defined so that they give the same answer for all peer rows. - Note that first_value, last_value, and - nth_value consider only the rows within the window - frame, which by default contains the rows from the start of the + Note that first_value, last_value, and + nth_value consider only the rows within the window + frame, which by default contains the rows from the start of the partition through the last peer of the current row. This is - likely to give unhelpful results for last_value and - sometimes also nth_value. You can redefine the frame by - adding a suitable frame specification (RANGE or - ROWS) to the OVER clause. + likely to give unhelpful results for last_value and + sometimes also nth_value. You can redefine the frame by + adding a suitable frame specification (RANGE or + ROWS) to the OVER clause. See for more information about frame specifications. @@ -14709,34 +14709,34 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; When an aggregate function is used as a window function, it aggregates over the rows within the current row's window frame. 
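For instance, with a hypothetical table empsalary(depname, empno, salary), the ranking and offset functions above can share a named window definition:

-- empsalary(depname, empno, salary) is a hypothetical example table
SELECT depname, empno, salary,
       rank()              OVER w,
       lag(salary)         OVER w,
       first_value(salary) OVER w
FROM empsalary
WINDOW w AS (PARTITION BY depname ORDER BY salary DESC);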
- An aggregate used with ORDER BY and the default window frame - definition produces a running sum type of behavior, which may or + An aggregate used with ORDER BY and the default window frame + definition produces a running sum type of behavior, which may or may not be what's wanted. To obtain - aggregation over the whole partition, omit ORDER BY or use - ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. + aggregation over the whole partition, omit ORDER BY or use + ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. Other frame specifications can be used to obtain other effects. - The SQL standard defines a RESPECT NULLS or - IGNORE NULLS option for lead, lag, - first_value, last_value, and - nth_value. This is not implemented in + The SQL standard defines a RESPECT NULLS or + IGNORE NULLS option for lead, lag, + first_value, last_value, and + nth_value. This is not implemented in PostgreSQL: the behavior is always the - same as the standard's default, namely RESPECT NULLS. - Likewise, the standard's FROM FIRST or FROM LAST - option for nth_value is not implemented: only the - default FROM FIRST behavior is supported. (You can achieve - the result of FROM LAST by reversing the ORDER BY + same as the standard's default, namely RESPECT NULLS. + Likewise, the standard's FROM FIRST or FROM LAST + option for nth_value is not implemented: only the + default FROM FIRST behavior is supported. (You can achieve + the result of FROM LAST by reversing the ORDER BY ordering.) - cume_dist computes the fraction of partition rows that + cume_dist computes the fraction of partition rows that are less than or equal to the current row and its peers, while - percent_rank computes the fraction of partition rows that + percent_rank computes the fraction of partition rows that are less than the current row, assuming the current row does not exist in the partition. @@ -14789,12 +14789,12 @@ EXISTS (subquery) - The argument of EXISTS is an arbitrary SELECT statement, + The argument of EXISTS is an arbitrary SELECT statement, or subquery. The subquery is evaluated to determine whether it returns any rows. If it returns at least one row, the result of EXISTS is - true; if the subquery returns no rows, the result of EXISTS - is false. + true; if the subquery returns no rows, the result of EXISTS + is false. @@ -14814,15 +14814,15 @@ EXISTS (subquery) Since the result depends only on whether any rows are returned, and not on the contents of those rows, the output list of the subquery is normally unimportant. A common coding convention is - to write all EXISTS tests in the form + to write all EXISTS tests in the form EXISTS(SELECT 1 WHERE ...). There are exceptions to this rule however, such as subqueries that use INTERSECT. - This simple example is like an inner join on col2, but - it produces at most one output row for each tab1 row, - even if there are several matching tab2 rows: + This simple example is like an inner join on col2, but + it produces at most one output row for each tab1 row, + even if there are several matching tab2 rows: SELECT col1 FROM tab1 @@ -14842,8 +14842,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. - The result of IN is true if any equal subquery row is found. - The result is false if no equal row is found (including the + The result of IN is true if any equal subquery row is found. 
+ The result is false if no equal row is found (including the case where the subquery returns no rows). @@ -14871,8 +14871,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. - The result of IN is true if any equal subquery row is found. - The result is false if no equal row is found (including the + The result of IN is true if any equal subquery row is found. + The result is false if no equal row is found (including the case where the subquery returns no rows). @@ -14898,9 +14898,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. - The result of NOT IN is true if only unequal subquery rows + The result of NOT IN is true if only unequal subquery rows are found (including the case where the subquery returns no rows). - The result is false if any equal row is found. + The result is false if any equal row is found. @@ -14927,9 +14927,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. - The result of NOT IN is true if only unequal subquery rows + The result of NOT IN is true if only unequal subquery rows are found (including the case where the subquery returns no rows). - The result is false if any equal row is found. + The result is false if any equal row is found. @@ -14957,8 +14957,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. - The result of ANY is true if any true result is obtained. - The result is false if no true result is found (including the + The result of ANY is true if any true result is obtained. + The result is false if no true result is found (including the case where the subquery returns no rows). @@ -14981,8 +14981,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); -row_constructor operator ANY (subquery) -row_constructor operator SOME (subquery) +row_constructor operator ANY (subquery) +row_constructor operator SOME (subquery) @@ -14993,9 +14993,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator. - The result of ANY is true if the comparison + The result of ANY is true if the comparison returns true for any subquery row. - The result is false if the comparison returns false for every + The result is false if the comparison returns false for every subquery row (including the case where the subquery returns no rows). The result is NULL if the comparison does not return true for any row, @@ -15021,9 +15021,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. - The result of ALL is true if all rows yield true + The result of ALL is true if all rows yield true (including the case where the subquery returns no rows). 
- The result is false if any false result is found. + The result is false if any false result is found. The result is NULL if the comparison does not return false for any row, and it returns NULL for at least one row. @@ -15049,10 +15049,10 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator. - The result of ALL is true if the comparison + The result of ALL is true if the comparison returns true for all subquery rows (including the case where the subquery returns no rows). - The result is false if the comparison returns false for any + The result is false if the comparison returns false for any subquery row. The result is NULL if the comparison does not return false for any subquery row, and it returns NULL for at least one row. @@ -15165,7 +15165,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The right-hand side is a parenthesized list - of scalar expressions. The result is true if the left-hand expression's + of scalar expressions. The result is true if the left-hand expression's result is equal to any of the right-hand expressions. This is a shorthand notation for @@ -15243,8 +15243,8 @@ AND is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. - The result of ANY is true if any true result is obtained. - The result is false if no true result is found (including the + The result of ANY is true if any true result is obtained. + The result is false if no true result is found (including the case where the array has zero elements). @@ -15279,9 +15279,9 @@ AND is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. - The result of ALL is true if all comparisons yield true + The result of ALL is true if all comparisons yield true (including the case where the array has zero elements). - The result is false if any false result is found. + The result is false if any false result is found. @@ -15310,12 +15310,12 @@ AND The two row values must have the same number of fields. Each side is evaluated and they are compared row-wise. Row constructor comparisons are allowed when the operator is - =, - <>, - <, - <=, - > or - >=. + =, + <>, + <, + <=, + > or + >=. Every row element must be of a type which has a default B-tree operator class or the attempted comparison may generate an error. @@ -15328,7 +15328,7 @@ AND - The = and <> cases work slightly differently + The = and <> cases work slightly differently from the others. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; @@ -15336,13 +15336,13 @@ AND - For the <, <=, > and - >= cases, the row elements are compared left-to-right, + For the <, <=, > and + >= cases, the row elements are compared left-to-right, stopping as soon as an unequal or null pair of elements is found. If either of this pair of elements is null, the result of the row comparison is unknown (null); otherwise comparison of this pair of elements determines the result. For example, - ROW(1,2,NULL) < ROW(1,3,0) + ROW(1,2,NULL) < ROW(1,3,0) yields true, not null, because the third pair of elements are not considered. @@ -15350,13 +15350,13 @@ AND Prior to PostgreSQL 8.2, the - <, <=, > and >= + <, <=, > and >= cases were not handled per SQL specification. 
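A few literal examples of the array and row-wise comparison forms just described:

SELECT 7 = ANY (ARRAY[1, 7, 42]);           -- true
SELECT 7 > ALL (ARRAY[1, 2, 3]);            -- true
SELECT ROW(1, 2, NULL) < ROW(1, 3, 0);      -- true; the third pair is never examined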
A comparison like - ROW(a,b) < ROW(c,d) + ROW(a,b) < ROW(c,d) was implemented as - a < c AND b < d + a < c AND b < d whereas the correct behavior is equivalent to - a < c OR (a = c AND b < d). + a < c OR (a = c AND b < d). @@ -15409,15 +15409,15 @@ AND Each side is evaluated and they are compared row-wise. Composite type comparisons are allowed when the operator is - =, - <>, - <, - <=, - > or - >=, + =, + <>, + <, + <=, + > or + >=, or has semantics similar to one of these. (To be specific, an operator can be a row comparison operator if it is a member of a B-tree operator - class, or is the negator of the = member of a B-tree operator + class, or is the negator of the = member of a B-tree operator class.) The default behavior of the above operators is the same as for IS [ NOT ] DISTINCT FROM for row constructors (see ). @@ -15427,12 +15427,12 @@ AND To support matching of rows which include elements without a default B-tree operator class, the following operators are defined for composite type comparison: - *=, - *<>, - *<, - *<=, - *>, and - *>=. + *=, + *<>, + *<, + *<=, + *>, and + *>=. These operators compare the internal binary representation of the two rows. Two rows might have a different binary representation even though comparisons of the two rows with the equality operator is true. @@ -15501,7 +15501,7 @@ AND - generate_series(start, stop, step interval) + generate_series(start, stop, step interval) timestamp or timestamp with time zone setof timestamp or setof timestamp with time zone (same as argument type) @@ -15616,7 +15616,7 @@ SELECT * FROM generate_series('2008-03-01 00:00'::timestamp, - generate_subscripts is a convenience function that generates + generate_subscripts is a convenience function that generates the set of valid subscripts for the specified dimension of the given array. Zero rows are returned for arrays that do not have the requested dimension, @@ -15681,7 +15681,7 @@ SELECT * FROM unnest2(ARRAY[[1,2],[3,4]]); by WITH ORDINALITY, a bigint column is appended to the output which starts from 1 and increments by 1 for each row of the function's output. This is most useful in the case of set returning - functions such as unnest(). + functions such as unnest(). -- set returning function WITH ORDINALITY @@ -15825,7 +15825,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); - pg_current_logfile(text) + pg_current_logfile(text) text Primary log file name, or log in the requested format, currently in use by the logging collector @@ -15870,7 +15870,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); pg_trigger_depth() int - current nesting level of PostgreSQL triggers + current nesting level of PostgreSQL triggers (0 if not called, directly or indirectly, from inside a trigger) @@ -15889,7 +15889,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); version() text - PostgreSQL version information. See also for a machine-readable version. + PostgreSQL version information. See also for a machine-readable version. @@ -15979,7 +15979,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); current_role and user are synonyms for current_user. (The SQL standard draws a distinction between current_role - and current_user, but PostgreSQL + and current_user, but PostgreSQL does not, since it unifies users and roles into a single kind of entity.) @@ -15990,7 +15990,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); other named objects that are created without specifying a target schema. 
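For example, the session information functions described above can be queried directly, with no table required:

SELECT current_user, current_schema, version();
SELECT current_schemas(true);    -- includes implicitly-searched schemas such as pg_catalog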
current_schemas(boolean) returns an array of the names of all schemas presently in the search path. The Boolean option determines whether or not - implicitly included system schemas such as pg_catalog are included in the + implicitly included system schemas such as pg_catalog are included in the returned search path. @@ -15998,7 +15998,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); The search path can be altered at run time. The command is: -SET search_path TO schema , schema, ... +SET search_path TO schema , schema, ... @@ -16043,7 +16043,7 @@ SET search_path TO schema , schema, .. waiting for a lock that would conflict with the blocked process's lock request and is ahead of it in the wait queue (soft block). When using parallel queries the result always lists client-visible process IDs (that - is, pg_backend_pid results) even if the actual lock is held + is, pg_backend_pid results) even if the actual lock is held or awaited by a child worker process. As a result of that, there may be duplicated PIDs in the result. Also note that when a prepared transaction holds a conflicting lock, it will be represented by a zero process ID in @@ -16095,15 +16095,15 @@ SET search_path TO schema , schema, .. is NULL. When multiple log files exist, each in a different format, pg_current_logfile called without arguments returns the path of the file having the first format - found in the ordered list: stderr, csvlog. + found in the ordered list: stderr, csvlog. NULL is returned when no log file has any of these formats. To request a specific file format supply, as text, - either csvlog or stderr as the value of the + either csvlog or stderr as the value of the optional parameter. The return value is NULL when the log format requested is not a configured . The pg_current_logfiles reflects the contents of the - current_logfiles file. + current_logfiles file. @@ -16460,7 +16460,7 @@ SET search_path TO schema , schema, .. has_table_privilege checks whether a user can access a table in a particular way. The user can be specified by name, by OID (pg_authid.oid), - public to indicate the PUBLIC pseudo-role, or if the argument is + public to indicate the PUBLIC pseudo-role, or if the argument is omitted current_user is assumed. The table can be specified by name or by OID. (Thus, there are actually six variants of @@ -16470,12 +16470,12 @@ SET search_path TO schema , schema, .. The desired access privilege type is specified by a text string, which must evaluate to one of the values SELECT, INSERT, - UPDATE, DELETE, TRUNCATE, + UPDATE, DELETE, TRUNCATE, REFERENCES, or TRIGGER. Optionally, - WITH GRANT OPTION can be added to a privilege type to test + WITH GRANT OPTION can be added to a privilege type to test whether the privilege is held with grant option. Also, multiple privilege types can be listed separated by commas, in which case the result will - be true if any of the listed privileges is held. + be true if any of the listed privileges is held. (Case of the privilege string is not significant, and extra whitespace is allowed between but not within privilege names.) Some examples: @@ -16499,7 +16499,7 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') has_any_column_privilege checks whether a user can access any column of a table in a particular way. 
Its argument possibilities - are analogous to has_table_privilege, + are analogous to has_table_privilege, except that the desired access privilege type must evaluate to some combination of SELECT, @@ -16508,8 +16508,8 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') REFERENCES. Note that having any of these privileges at the table level implicitly grants it for each column of the table, so has_any_column_privilege will always return - true if has_table_privilege does for the same - arguments. But has_any_column_privilege also succeeds if + true if has_table_privilege does for the same + arguments. But has_any_column_privilege also succeeds if there is a column-level grant of the privilege for at least one column. @@ -16547,7 +16547,7 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') Its argument possibilities are analogous to has_table_privilege. When specifying a function by a text string rather than by OID, - the allowed input is the same as for the regprocedure data type + the allowed input is the same as for the regprocedure data type (see ). The desired access privilege type must evaluate to EXECUTE. @@ -16609,7 +16609,7 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); Its argument possibilities are analogous to has_table_privilege. When specifying a type by a text string rather than by OID, - the allowed input is the same as for the regtype data type + the allowed input is the same as for the regtype data type (see ). The desired access privilege type must evaluate to USAGE. @@ -16620,14 +16620,14 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); can access a role in a particular way. Its argument possibilities are analogous to has_table_privilege, - except that public is not allowed as a user name. + except that public is not allowed as a user name. The desired access privilege type must evaluate to some combination of MEMBER or USAGE. MEMBER denotes direct or indirect membership in - the role (that is, the right to do SET ROLE), while + the role (that is, the right to do SET ROLE), while USAGE denotes whether the privileges of the role - are immediately available without doing SET ROLE. + are immediately available without doing SET ROLE. @@ -16639,7 +16639,7 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); shows functions that - determine whether a certain object is visible in the + determine whether a certain object is visible in the current schema search path. For example, a table is said to be visible if its containing schema is in the search path and no table of the same @@ -16793,16 +16793,16 @@ SELECT relname FROM pg_class WHERE pg_table_is_visible(oid); pg_type_is_visible can also be used with domains. For functions and operators, an object in the search path is visible if there is no object of the same name - and argument data type(s) earlier in the path. For operator + and argument data type(s) earlier in the path. For operator classes, both name and associated index access method are considered. All these functions require object OIDs to identify the object to be checked. 
If you want to test an object by name, it is convenient to use - the OID alias types (regclass, regtype, - regprocedure, regoperator, regconfig, - or regdictionary), + the OID alias types (regclass, regtype, + regprocedure, regoperator, regconfig, + or regdictionary), for example: SELECT pg_type_is_visible('myschema.widget'::regtype); @@ -16949,7 +16949,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); - format_type(type_oid, typemod) + format_type(type_oid, typemod) text get SQL name of a data type @@ -16959,18 +16959,18 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); get definition of a constraint - pg_get_constraintdef(constraint_oid, pretty_bool) + pg_get_constraintdef(constraint_oid, pretty_bool) text get definition of a constraint - pg_get_expr(pg_node_tree, relation_oid) + pg_get_expr(pg_node_tree, relation_oid) text decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter - pg_get_expr(pg_node_tree, relation_oid, pretty_bool) + pg_get_expr(pg_node_tree, relation_oid, pretty_bool) text decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter @@ -16993,19 +16993,19 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_function_result(func_oid) text - get RETURNS clause for function + get RETURNS clause for function pg_get_indexdef(index_oid) text - get CREATE INDEX command for index + get CREATE INDEX command for index - pg_get_indexdef(index_oid, column_no, pretty_bool) + pg_get_indexdef(index_oid, column_no, pretty_bool) text - get CREATE INDEX command for index, + get CREATE INDEX command for index, or definition of just one index column when - column_no is not zero + column_no is not zero pg_get_keywords() @@ -17015,12 +17015,12 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_ruledef(rule_oid) text - get CREATE RULE command for rule + get CREATE RULE command for rule - pg_get_ruledef(rule_oid, pretty_bool) + pg_get_ruledef(rule_oid, pretty_bool) text - get CREATE RULE command for rule + get CREATE RULE command for rule pg_get_serial_sequence(table_name, column_name) @@ -17030,17 +17030,17 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_statisticsobjdef(statobj_oid) text - get CREATE STATISTICS command for extended statistics object + get CREATE STATISTICS command for extended statistics object pg_get_triggerdef(trigger_oid) text - get CREATE [ CONSTRAINT ] TRIGGER command for trigger + get CREATE [ CONSTRAINT ] TRIGGER command for trigger - pg_get_triggerdef(trigger_oid, pretty_bool) + pg_get_triggerdef(trigger_oid, pretty_bool) text - get CREATE [ CONSTRAINT ] TRIGGER command for trigger + get CREATE [ CONSTRAINT ] TRIGGER command for trigger pg_get_userbyid(role_oid) @@ -17053,7 +17053,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); get underlying SELECT command for view or materialized view (deprecated) - pg_get_viewdef(view_name, pretty_bool) + pg_get_viewdef(view_name, pretty_bool) text get underlying SELECT command for view or materialized view (deprecated) @@ -17063,29 +17063,29 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); get underlying SELECT command for view or materialized view - pg_get_viewdef(view_oid, pretty_bool) + pg_get_viewdef(view_oid, pretty_bool) text get underlying SELECT command for view or materialized view - pg_get_viewdef(view_oid, wrap_column_int) + pg_get_viewdef(view_oid, wrap_column_int) text get underlying SELECT command 
for view or materialized view; lines with fields are wrapped to specified number of columns, pretty-printing is implied - pg_index_column_has_property(index_oid, column_no, prop_name) + pg_index_column_has_property(index_oid, column_no, prop_name) boolean test whether an index column has a specified property - pg_index_has_property(index_oid, prop_name) + pg_index_has_property(index_oid, prop_name) boolean test whether an index has a specified property - pg_indexam_has_property(am_oid, prop_name) + pg_indexam_has_property(am_oid, prop_name) boolean test whether an index access method has a specified property @@ -17166,11 +17166,11 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_keywords returns a set of records describing - the SQL keywords recognized by the server. The word column - contains the keyword. The catcode column contains a - category code: U for unreserved, C for column name, - T for type or function name, or R for reserved. - The catdesc column contains a possibly-localized string + the SQL keywords recognized by the server. The word column + contains the keyword. The catcode column contains a + category code: U for unreserved, C for column name, + T for type or function name, or R for reserved. + The catdesc column contains a possibly-localized string describing the category. @@ -17187,26 +17187,26 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); catalogs. If the expression might contain Vars, specify the OID of the relation they refer to as the second parameter; if no Vars are expected, zero is sufficient. pg_get_viewdef reconstructs the - SELECT query that defines a view. Most of these functions come - in two variants, one of which can optionally pretty-print the + SELECT query that defines a view. Most of these functions come + in two variants, one of which can optionally pretty-print the result. The pretty-printed format is more readable, but the default format is more likely to be interpreted the same way by future versions of - PostgreSQL; avoid using pretty-printed output for dump - purposes. Passing false for the pretty-print parameter yields + PostgreSQL; avoid using pretty-printed output for dump + purposes. Passing false for the pretty-print parameter yields the same result as the variant that does not have the parameter at all. - pg_get_functiondef returns a complete - CREATE OR REPLACE FUNCTION statement for a function. + pg_get_functiondef returns a complete + CREATE OR REPLACE FUNCTION statement for a function. pg_get_function_arguments returns the argument list of a function, in the form it would need to appear in within - CREATE FUNCTION. + CREATE FUNCTION. pg_get_function_result similarly returns the - appropriate RETURNS clause for the function. + appropriate RETURNS clause for the function. pg_get_function_identity_arguments returns the argument list necessary to identify a function, in the form it - would need to appear in within ALTER FUNCTION, for + would need to appear in within ALTER FUNCTION, for instance. This form omits default values. @@ -17219,10 +17219,10 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); (serial, smallserial, bigserial), it is the sequence created for that serial column definition. In the latter case, this association can be modified or removed with ALTER - SEQUENCE OWNED BY. (The function probably should have been called + SEQUENCE OWNED BY. 
(The function probably should have been called pg_get_owned_sequence; its current name reflects the - fact that it has typically been used with serial - or bigserial columns.) The first input parameter is a table name + fact that it has typically been used with serial + or bigserial columns.) The first input parameter is a table name with optional schema, and the second parameter is a column name. Because the first parameter is potentially a schema and table, it is not treated as a double-quoted identifier, meaning it is lower cased by default, while the @@ -17290,8 +17290,8 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); distance_orderable - Can the column be scanned in order by a distance - operator, for example ORDER BY col <-> constant ? + Can the column be scanned in order by a distance + operator, for example ORDER BY col <-> constant ? @@ -17301,14 +17301,14 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); search_array - Does the column natively support col = ANY(array) + Does the column natively support col = ANY(array) searches? search_nulls - Does the column support IS NULL and - IS NOT NULL searches? + Does the column support IS NULL and + IS NOT NULL searches? @@ -17324,7 +17324,7 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); clusterable - Can the index be used in a CLUSTER command? + Can the index be used in a CLUSTER command? @@ -17355,9 +17355,9 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); can_order - Does the access method support ASC, - DESC and related keywords in - CREATE INDEX? + Does the access method support ASC, + DESC and related keywords in + CREATE INDEX? @@ -17382,9 +17382,9 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); pg_options_to_table returns the set of storage option name/value pairs - (option_name/option_value) when passed - pg_class.reloptions or - pg_attribute.attoptions. + (option_name/option_value) when passed + pg_class.reloptions or + pg_attribute.attoptions. @@ -17394,14 +17394,14 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); empty and cannot be dropped. To display the specific objects populating the tablespace, you will need to connect to the databases identified by pg_tablespace_databases and query their - pg_class catalogs. + pg_class catalogs. pg_typeof returns the OID of the data type of the value that is passed to it. This can be helpful for troubleshooting or dynamically constructing SQL queries. The function is declared as - returning regtype, which is an OID alias type (see + returning regtype, which is an OID alias type (see ); this means that it is the same as an OID for comparison purposes but displays as a type name. For example: @@ -17447,10 +17447,10 @@ SELECT collation for ('foo' COLLATE "de_DE"); to_regoperator, to_regtype, to_regnamespace, and to_regrole functions translate relation, function, operator, type, schema, and role - names (given as text) to objects of - type regclass, regproc, regprocedure, - regoper, regoperator, regtype, - regnamespace, and regrole + names (given as text) to objects of + type regclass, regproc, regprocedure, + regoper, regoperator, regtype, + regnamespace, and regrole respectively. 
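A small sketch of the translation these functions perform; no_such_table is a deliberately nonexistent name used only for illustration:

SELECT to_regclass('pg_class');        -- resolves the name via the search path
SELECT to_regtype('integer');          -- compares as an OID, displays as a type name
SELECT to_regclass('no_such_table');   -- NULL rather than an error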
These functions differ from a cast from text in that they don't accept a numeric OID, and that they return null rather than throwing an error if the name is not found (or, for @@ -17493,18 +17493,18 @@ SELECT collation for ('foo' COLLATE "de_DE"); get description of a database object - pg_identify_object(catalog_id oid, object_id oid, object_sub_id integer) - type text, schema text, name text, identity text + pg_identify_object(catalog_id oid, object_id oid, object_sub_id integer) + type text, schema text, name text, identity text get identity of a database object - pg_identify_object_as_address(catalog_id oid, object_id oid, object_sub_id integer) - type text, name text[], args text[] + pg_identify_object_as_address(catalog_id oid, object_id oid, object_sub_id integer) + type text, name text[], args text[] get external representation of a database object's address - pg_get_object_address(type text, name text[], args text[]) - catalog_id oid, object_id oid, object_sub_id int32 + pg_get_object_address(type text, name text[], args text[]) + catalog_id oid, object_id oid, object_sub_id int32 get address of a database object, from its external representation @@ -17525,13 +17525,13 @@ SELECT collation for ('foo' COLLATE "de_DE"); to uniquely identify the database object specified by catalog OID, object OID and a (possibly zero) sub-object ID. This information is intended to be machine-readable, and is never translated. - type identifies the type of database object; - schema is the schema name that the object belongs in, or - NULL for object types that do not belong to schemas; - name is the name of the object, quoted if necessary, only + type identifies the type of database object; + schema is the schema name that the object belongs in, or + NULL for object types that do not belong to schemas; + name is the name of the object, quoted if necessary, only present if it can be used (alongside schema name, if pertinent) as a unique - identifier of the object, otherwise NULL; - identity is the complete object identity, with the precise format + identifier of the object, otherwise NULL; + identity is the complete object identity, with the precise format depending on object type, and each part within the format being schema-qualified and quoted as necessary. @@ -17542,10 +17542,10 @@ SELECT collation for ('foo' COLLATE "de_DE"); catalog OID, object OID and a (possibly zero) sub-object ID. The returned information is independent of the current server, that is, it could be used to identify an identically named object in another server. - type identifies the type of database object; - name and args are text arrays that together + type identifies the type of database object; + name and args are text arrays that together form a reference to the object. These three columns can be passed to - pg_get_object_address to obtain the internal address + pg_get_object_address to obtain the internal address of the object. This function is the inverse of pg_get_object_address. @@ -17554,13 +17554,13 @@ SELECT collation for ('foo' COLLATE "de_DE"); pg_get_object_address returns a row containing enough information to uniquely identify the database object specified by its type and object name and argument arrays. The returned values are the - ones that would be used in system catalogs such as pg_depend + ones that would be used in system catalogs such as pg_depend and can be passed to other system functions such as - pg_identify_object or pg_describe_object. 
- catalog_id is the OID of the system catalog containing the + pg_identify_object or pg_describe_object. + catalog_id is the OID of the system catalog containing the object; - object_id is the OID of the object itself, and - object_sub_id is the object sub-ID, or zero if none. + object_id is the OID of the object itself, and + object_sub_id is the object sub-ID, or zero if none. This function is the inverse of pg_identify_object_as_address. @@ -17739,9 +17739,9 @@ SELECT collation for ('foo' COLLATE "de_DE");
- The internal transaction ID type (xid) is 32 bits wide and + The internal transaction ID type (xid) is 32 bits wide and wraps around every 4 billion transactions. However, these functions - export a 64-bit format that is extended with an epoch counter + export a 64-bit format that is extended with an epoch counter so it will not wrap around during the life of an installation. The data type used by these functions, txid_snapshot, stores information about transaction ID @@ -17782,9 +17782,9 @@ SELECT collation for ('foo' COLLATE "de_DE"); xip_list Active txids at the time of the snapshot. The list - includes only those active txids between xmin - and xmax; there might be active txids higher - than xmax. A txid that is xmin <= txid < + includes only those active txids between xmin + and xmax; there might be active txids higher + than xmax. A txid that is xmin <= txid < xmax and not in this list was already completed at the time of the snapshot, and thus either visible or dead according to its commit status. The list does not @@ -17797,27 +17797,27 @@ SELECT collation for ('foo' COLLATE "de_DE"); - txid_snapshot's textual representation is - xmin:xmax:xip_list. + txid_snapshot's textual representation is + xmin:xmax:xip_list. For example 10:20:10,14,15 means xmin=10, xmax=20, xip_list=10, 14, 15. - txid_status(bigint) reports the commit status of a recent + txid_status(bigint) reports the commit status of a recent transaction. Applications may use it to determine whether a transaction committed or aborted when the application and database server become disconnected while a COMMIT is in progress. The status of a transaction will be reported as either - in progress, - committed, or aborted, provided that the + in progress, + committed, or aborted, provided that the transaction is recent enough that the system retains the commit status of that transaction. If is old enough that no references to that transaction survive in the system and the commit status information has been discarded, this function will return NULL. Note that prepared - transactions are reported as in progress; applications must + transactions are reported as in progress; applications must check pg_prepared_xacts if they + linkend="view-pg-prepared-xacts">pg_prepared_xacts if they need to determine whether the txid is a prepared transaction. @@ -17852,7 +17852,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); pg_last_committed_xact pg_last_committed_xact() - xid xid, timestamp timestamp with time zone + xid xid, timestamp timestamp with time zone get transaction ID and commit timestamp of latest committed transaction @@ -17861,7 +17861,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); The functions shown in - print information initialized during initdb, such + print information initialized during initdb, such as the catalog version. They also show information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. They provide most of the same @@ -17927,12 +17927,12 @@ SELECT collation for ('foo' COLLATE "de_DE"); - pg_control_checkpoint returns a record, shown in + pg_control_checkpoint returns a record, shown in - <function>pg_control_checkpoint</> Columns + <function>pg_control_checkpoint</function> Columns @@ -18043,12 +18043,12 @@ SELECT collation for ('foo' COLLATE "de_DE");
- pg_control_system returns a record, shown in + pg_control_system returns a record, shown in - <function>pg_control_system</> Columns + <function>pg_control_system</function> Columns @@ -18084,12 +18084,12 @@ SELECT collation for ('foo' COLLATE "de_DE");
- pg_control_init returns a record, shown in + pg_control_init returns a record, shown in - <function>pg_control_init</> Columns + <function>pg_control_init</function> Columns @@ -18165,12 +18165,12 @@ SELECT collation for ('foo' COLLATE "de_DE");
- pg_control_recovery returns a record, shown in + pg_control_recovery returns a record, shown in - <function>pg_control_recovery</> Columns + <function>pg_control_recovery</function> Columns @@ -18217,7 +18217,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); The functions described in this section are used to control and - monitor a PostgreSQL installation. + monitor a PostgreSQL installation. @@ -18357,7 +18357,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_cancel_backend(pid int) + pg_cancel_backend(pid int) boolean Cancel a backend's current query. This is also allowed if the @@ -18382,7 +18382,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_terminate_backend(pid int) + pg_terminate_backend(pid int) boolean Terminate a backend. This is also allowed if the calling role @@ -18401,28 +18401,28 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_cancel_backend and pg_terminate_backend - send signals (SIGINT or SIGTERM + pg_cancel_backend and pg_terminate_backend + send signals (SIGINT or SIGTERM respectively) to backend processes identified by process ID. The process ID of an active backend can be found from the pid column of the pg_stat_activity view, or by listing the postgres processes on the server (using - ps on Unix or the Task - Manager on Windows). + ps on Unix or the Task + Manager on Windows). The role of an active backend can be found from the usename column of the pg_stat_activity view. - pg_reload_conf sends a SIGHUP signal + pg_reload_conf sends a SIGHUP signal to the server, causing configuration files to be reloaded by all server processes. - pg_rotate_logfile signals the log-file manager to switch + pg_rotate_logfile signals the log-file manager to switch to a new output file immediately. This works only when the built-in log collector is running, since otherwise there is no log-file manager subprocess. 
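As a rough illustration of the signaling functions, a suitably privileged role could cancel the active queries of a hypothetical role joe and then ask all server processes to re-read their configuration files:

SELECT pg_cancel_backend(pid)
  FROM pg_stat_activity
 WHERE usename = 'joe' AND state = 'active';

SELECT pg_reload_conf();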
@@ -18492,7 +18492,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_create_restore_point(name text) + pg_create_restore_point(name text) pg_lsn Create a named point for performing restore (restricted to superusers by default, but other users can be granted EXECUTE to run the function) @@ -18520,7 +18520,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_start_backup(label text , fast boolean , exclusive boolean ) + pg_start_backup(label text , fast boolean , exclusive boolean ) pg_lsn Prepare for performing on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the function) @@ -18534,7 +18534,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_stop_backup(exclusive boolean , wait_for_archive boolean ) + pg_stop_backup(exclusive boolean , wait_for_archive boolean ) setof record Finish performing exclusive or non-exclusive on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the function) @@ -18562,23 +18562,23 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_walfile_name(lsn pg_lsn) + pg_walfile_name(lsn pg_lsn) text Convert write-ahead log location to file name - pg_walfile_name_offset(lsn pg_lsn) + pg_walfile_name_offset(lsn pg_lsn) - text, integer + text, integer Convert write-ahead log location to file name and decimal byte offset within file - pg_wal_lsn_diff(lsn pg_lsn, lsn pg_lsn) + pg_wal_lsn_diff(lsn pg_lsn, lsn pg_lsn) - numeric + numeric Calculate the difference between two write-ahead log locations @@ -18586,17 +18586,17 @@ SELECT set_config('log_statement_stats', 'off', false);
- pg_start_backup accepts an arbitrary user-defined label for + pg_start_backup accepts an arbitrary user-defined label for the backup. (Typically this would be the name under which the backup dump file will be stored.) When used in exclusive mode, the function writes a - backup label file (backup_label) and, if there are any links - in the pg_tblspc/ directory, a tablespace map file - (tablespace_map) into the database cluster's data directory, + backup label file (backup_label) and, if there are any links + in the pg_tblspc/ directory, a tablespace map file + (tablespace_map) into the database cluster's data directory, performs a checkpoint, and then returns the backup's starting write-ahead log location as text. The user can ignore this result value, but it is provided in case it is useful. When used in non-exclusive mode, the contents of these files are instead returned by the - pg_stop_backup function, and should be written to the backup + pg_stop_backup function, and should be written to the backup by the caller. @@ -18606,29 +18606,29 @@ postgres=# select pg_start_backup('label_goes_here'); 0/D4445B8 (1 row) - There is an optional second parameter of type boolean. If true, - it specifies executing pg_start_backup as quickly as + There is an optional second parameter of type boolean. If true, + it specifies executing pg_start_backup as quickly as possible. This forces an immediate checkpoint which will cause a spike in I/O operations, slowing any concurrently executing queries. - In an exclusive backup, pg_stop_backup removes the label file - and, if it exists, the tablespace_map file created by - pg_start_backup. In a non-exclusive backup, the contents of - the backup_label and tablespace_map are returned + In an exclusive backup, pg_stop_backup removes the label file + and, if it exists, the tablespace_map file created by + pg_start_backup. In a non-exclusive backup, the contents of + the backup_label and tablespace_map are returned in the result of the function, and should be written to files in the backup (and not in the data directory). There is an optional second - parameter of type boolean. If false, the pg_stop_backup + parameter of type boolean. If false, the pg_stop_backup will return immediately after the backup is completed without waiting for WAL to be archived. This behavior is only useful for backup software which independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing and make the backup - useless. When this parameter is set to true, pg_stop_backup + useless. When this parameter is set to true, pg_stop_backup will wait for WAL to be archived when archiving is enabled; on the standby, - this means that it will wait only when archive_mode = always. + this means that it will wait only when archive_mode = always. If write activity on the primary is low, it may be useful to run - pg_switch_wal on the primary in order to trigger + pg_switch_wal on the primary in order to trigger an immediate segment switch. @@ -18636,7 +18636,7 @@ postgres=# select pg_start_backup('label_goes_here'); When executed on a primary, the function also creates a backup history file in the write-ahead log archive area. The history file includes the label given to - pg_start_backup, the starting and ending write-ahead log locations for + pg_start_backup, the starting and ending write-ahead log locations for the backup, and the starting and ending times of the backup. 
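A minimal non-exclusive backup sequence might be sketched as follows; the label nightly is arbitrary, and the actual file copy is performed by external tooling between the two calls:

SELECT pg_start_backup('nightly', false, false);
-- copy the cluster's data directory with an external tool, then:
SELECT * FROM pg_stop_backup(false, true);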
The return value is the backup's ending write-ahead log location (which again can be ignored). After recording the ending location, the current @@ -18646,16 +18646,16 @@ postgres=# select pg_start_backup('label_goes_here');
- pg_switch_wal moves to the next write-ahead log file, allowing the + pg_switch_wal moves to the next write-ahead log file, allowing the current file to be archived (assuming you are using continuous archiving). The return value is the ending write-ahead log location + 1 within the just-completed write-ahead log file. If there has been no write-ahead log activity since the last write-ahead log switch, - pg_switch_wal does nothing and returns the start location + pg_switch_wal does nothing and returns the start location of the write-ahead log file currently in use. - pg_create_restore_point creates a named write-ahead log + pg_create_restore_point creates a named write-ahead log record that can be used as recovery target, and returns the corresponding write-ahead log location. The given name can then be used with to specify the point up to which @@ -18665,11 +18665,11 @@ postgres=# select pg_start_backup('label_goes_here'); - pg_current_wal_lsn displays the current write-ahead log write + pg_current_wal_lsn displays the current write-ahead log write location in the same format used by the above functions. Similarly, - pg_current_wal_insert_lsn displays the current write-ahead log - insertion location and pg_current_wal_flush_lsn displays the - current write-ahead log flush location. The insertion location is the logical + pg_current_wal_insert_lsn displays the current write-ahead log + insertion location and pg_current_wal_flush_lsn displays the + current write-ahead log flush location. The insertion location is the logical end of the write-ahead log at any instant, while the write location is the end of what has actually been written out from the server's internal buffers and flush location is the location guaranteed to be written to durable storage. The write @@ -18681,7 +18681,7 @@ postgres=# select pg_start_backup('label_goes_here'); - You can use pg_walfile_name_offset to extract the + You can use pg_walfile_name_offset to extract the corresponding write-ahead log file name and byte offset from the results of any of the above functions. For example: @@ -18691,7 +18691,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); 00000001000000000000000D | 4039624 (1 row) - Similarly, pg_walfile_name extracts just the write-ahead log file name. + Similarly, pg_walfile_name extracts just the write-ahead log file name. When the given write-ahead log location is exactly at a write-ahead log file boundary, both these functions return the name of the preceding write-ahead log file. This is usually the desired behavior for managing write-ahead log archiving @@ -18700,7 +18700,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_wal_lsn_diff calculates the difference in bytes + pg_wal_lsn_diff calculates the difference in bytes between two write-ahead log locations. It can be used with pg_stat_replication or some functions shown in to get the replication lag. @@ -18878,21 +18878,21 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - PostgreSQL allows database sessions to synchronize their - snapshots. A snapshot determines which data is visible to the + PostgreSQL allows database sessions to synchronize their + snapshots. A snapshot determines which data is visible to the transaction that is using the snapshot. Synchronized snapshots are necessary when two or more sessions need to see identical content in the database. 
If two sessions just start their transactions independently, there is always a possibility that some third transaction commits - between the executions of the two START TRANSACTION commands, + between the executions of the two START TRANSACTION commands, so that one session sees the effects of that transaction and the other does not. - To solve this problem, PostgreSQL allows a transaction to - export the snapshot it is using. As long as the exporting - transaction remains open, other transactions can import its + To solve this problem, PostgreSQL allows a transaction to + export the snapshot it is using. As long as the exporting + transaction remains open, other transactions can import its snapshot, and thereby be guaranteed that they see exactly the same view of the database that the first transaction sees. But note that any database changes made by any one of these transactions remain invisible @@ -18902,7 +18902,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - Snapshots are exported with the pg_export_snapshot function, + Snapshots are exported with the pg_export_snapshot function, shown in , and imported with the command. @@ -18928,13 +18928,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - The function pg_export_snapshot saves the current snapshot - and returns a text string identifying the snapshot. This string + The function pg_export_snapshot saves the current snapshot + and returns a text string identifying the snapshot. This string must be passed (outside the database) to clients that want to import the snapshot. The snapshot is available for import only until the end of the transaction that exported it. A transaction can export more than one snapshot, if needed. Note that doing so is only useful in READ - COMMITTED transactions, since in REPEATABLE READ and + COMMITTED transactions, since in REPEATABLE READ and higher isolation levels, transactions use the same snapshot throughout their lifetime. Once a transaction has exported any snapshots, it cannot be prepared with . @@ -18989,7 +18989,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_create_physical_replication_slot - pg_create_physical_replication_slot(slot_name name , immediately_reserve boolean, temporary boolean) + pg_create_physical_replication_slot(slot_name name , immediately_reserve boolean, temporary boolean) (slot_name name, lsn pg_lsn) @@ -18997,13 +18997,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Creates a new physical replication slot named slot_name. The optional second parameter, - when true, specifies that the LSN for this + when true, specifies that the LSN for this replication slot be reserved immediately; otherwise - the LSN is reserved on first connection from a streaming + the LSN is reserved on first connection from a streaming replication client. Streaming changes from a physical slot is only possible with the streaming-replication protocol — see . The optional third - parameter, temporary, when set to true, specifies that + parameter, temporary, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by current session. Temporary slots are also released upon any error. This function corresponds @@ -19024,7 +19024,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Drops the physical or logical replication slot named slot_name. Same as replication protocol - command DROP_REPLICATION_SLOT. 
For logical slots, this must + command DROP_REPLICATION_SLOT. For logical slots, this must be called when connected to the same database the slot was created on. @@ -19034,7 +19034,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_create_logical_replication_slot - pg_create_logical_replication_slot(slot_name name, plugin name , temporary boolean) + pg_create_logical_replication_slot(slot_name name, plugin name , temporary boolean) (slot_name name, lsn pg_lsn) @@ -19043,7 +19043,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Creates a new logical (decoding) replication slot named slot_name using the output plugin plugin. The optional third - parameter, temporary, when set to true, specifies that + parameter, temporary, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by current session. Temporary slots are also released upon any error. A call to this function has the same @@ -19065,9 +19065,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Returns changes in the slot slot_name, starting from the point at which since changes have been consumed last. If - upto_lsn and upto_nchanges are NULL, + upto_lsn and upto_nchanges are NULL, logical decoding will continue until end of WAL. If - upto_lsn is non-NULL, decoding will include only + upto_lsn is non-NULL, decoding will include only those transactions which commit prior to the specified LSN. If upto_nchanges is non-NULL, decoding will stop when the number of rows produced by decoding exceeds @@ -19155,7 +19155,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_replication_origin_drop(node_name text) - void + void Delete a previously created replication origin, including any @@ -19187,7 +19187,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_replication_origin_session_setup(node_name text) - void + void Mark the current session as replaying from the given @@ -19205,7 +19205,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_replication_origin_session_reset() - void + void Cancel the effects @@ -19254,7 +19254,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_replication_origin_xact_setup(origin_lsn pg_lsn, origin_timestamp timestamptz) - void + void Mark the current transaction as replaying a transaction that has @@ -19273,7 +19273,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_replication_origin_xact_reset() - void + void Cancel the effects of @@ -19289,7 +19289,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_replication_origin_advance(node_name text, lsn pg_lsn) - void + void Set replication progress for the given node to the given @@ -19446,7 +19446,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); bigint Disk space used by the specified fork ('main', - 'fsm', 'vm', or 'init') + 'fsm', 'vm', or 'init') of the specified table or index @@ -19519,7 +19519,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); bigint Total disk space used by the specified table, - including all indexes and TOAST data + including all indexes and TOAST data @@ -19527,48 +19527,48 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_column_size shows the space used to store any individual + pg_column_size shows the space used to store any individual data value. 
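For example, the fixed-width built-in types report their storage widths directly:

SELECT pg_column_size(1::smallint) AS smallint_bytes,    -- 2
       pg_column_size(1::integer)  AS integer_bytes,     -- 4
       pg_column_size(now())       AS timestamptz_bytes; -- 8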
- pg_total_relation_size accepts the OID or name of a + pg_total_relation_size accepts the OID or name of a table or toast table, and returns the total on-disk space used for that table, including all associated indexes. This function is equivalent to pg_table_size - + pg_indexes_size. + + pg_indexes_size. - pg_table_size accepts the OID or name of a table and + pg_table_size accepts the OID or name of a table and returns the disk space needed for that table, exclusive of indexes. (TOAST space, free space map, and visibility map are included.) - pg_indexes_size accepts the OID or name of a table and + pg_indexes_size accepts the OID or name of a table and returns the total disk space used by all the indexes attached to that table. - pg_database_size and pg_tablespace_size + pg_database_size and pg_tablespace_size accept the OID or name of a database or tablespace, and return the total disk space used therein. To use pg_database_size, - you must have CONNECT permission on the specified database - (which is granted by default), or be a member of the pg_read_all_stats - role. To use pg_tablespace_size, you must have - CREATE permission on the specified tablespace, or be a member - of the pg_read_all_stats role unless it is the default tablespace for + you must have CONNECT permission on the specified database + (which is granted by default), or be a member of the pg_read_all_stats + role. To use pg_tablespace_size, you must have + CREATE permission on the specified tablespace, or be a member + of the pg_read_all_stats role unless it is the default tablespace for the current database. - pg_relation_size accepts the OID or name of a table, index + pg_relation_size accepts the OID or name of a table, index or toast table, and returns the on-disk size in bytes of one fork of that relation. (Note that for most purposes it is more convenient to - use the higher-level functions pg_total_relation_size - or pg_table_size, which sum the sizes of all forks.) + use the higher-level functions pg_total_relation_size + or pg_table_size, which sum the sizes of all forks.) With one argument, it returns the size of the main data fork of the relation. The second argument can be provided to specify which fork to examine: @@ -19601,13 +19601,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_size_pretty can be used to format the result of one of + pg_size_pretty can be used to format the result of one of the other functions in a human-readable way, using bytes, kB, MB, GB or TB as appropriate. - pg_size_bytes can be used to get the size in bytes from a + pg_size_bytes can be used to get the size in bytes from a string in human-readable format. The input may have units of bytes, kB, MB, GB or TB, and is parsed case-insensitively. If no units are specified, bytes are assumed. @@ -19616,17 +19616,17 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The units kB, MB, GB and TB used by the functions - pg_size_pretty and pg_size_bytes are defined + pg_size_pretty and pg_size_bytes are defined using powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is - 10242 = 1048576 bytes, and so on. + 10242 = 1048576 bytes, and so on. The functions above that operate on tables or indexes accept a - regclass argument, which is simply the OID of the table or index - in the pg_class system catalog. 
You do not have to look up - the OID by hand, however, since the regclass data type's input + regclass argument, which is simply the OID of the table or index + in the pg_class system catalog. You do not have to look up + the OID by hand, however, since the regclass data type's input converter will do the work for you. Just write the table name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string @@ -19695,28 +19695,28 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_relation_filenode accepts the OID or name of a table, - index, sequence, or toast table, and returns the filenode number + pg_relation_filenode accepts the OID or name of a table, + index, sequence, or toast table, and returns the filenode number currently assigned to it. The filenode is the base component of the file name(s) used for the relation (see for more information). For most tables the result is the same as - pg_class.relfilenode, but for certain - system catalogs relfilenode is zero and this function must + pg_class.relfilenode, but for certain + system catalogs relfilenode is zero and this function must be used to get the correct value. The function returns NULL if passed a relation that does not have storage, such as a view. - pg_relation_filepath is similar to - pg_relation_filenode, but it returns the entire file path name - (relative to the database cluster's data directory PGDATA) of + pg_relation_filepath is similar to + pg_relation_filenode, but it returns the entire file path name + (relative to the database cluster's data directory PGDATA) of the relation. - pg_filenode_relation is the reverse of - pg_relation_filenode. Given a tablespace OID and - a filenode, it returns the associated relation's OID. For a table + pg_filenode_relation is the reverse of + pg_relation_filenode. Given a tablespace OID and + a filenode, it returns the associated relation's OID. For a table in the database's default tablespace, the tablespace can be specified as 0. @@ -19736,7 +19736,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_collation_actual_version - pg_collation_actual_version(oid) + pg_collation_actual_version(oid) text Return actual version of collation from operating system @@ -19744,7 +19744,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_import_system_collations - pg_import_system_collations(schema regnamespace) + pg_import_system_collations(schema regnamespace) integer Import operating system collations @@ -19763,7 +19763,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_import_system_collations adds collations to the system + pg_import_system_collations adds collations to the system catalog pg_collation based on all the locales it finds in the operating system. 
This is what initdb uses; @@ -19818,28 +19818,28 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - brin_summarize_new_values(index regclass) + brin_summarize_new_values(index regclass) integer summarize page ranges not already summarized - brin_summarize_range(index regclass, blockNumber bigint) + brin_summarize_range(index regclass, blockNumber bigint) integer summarize the page range covering the given block, if not already summarized - brin_desummarize_range(index regclass, blockNumber bigint) + brin_desummarize_range(index regclass, blockNumber bigint) integer de-summarize the page range covering the given block, if summarized - gin_clean_pending_list(index regclass) + gin_clean_pending_list(index regclass) bigint move GIN pending list entries into main index structure @@ -19849,25 +19849,25 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - brin_summarize_new_values accepts the OID or name of a + brin_summarize_new_values accepts the OID or name of a BRIN index and inspects the index to find page ranges in the base table that are not currently summarized by the index; for any such range it creates a new summary index tuple by scanning the table pages. It returns the number of new page range summaries that were inserted - into the index. brin_summarize_range does the same, except + into the index. brin_summarize_range does the same, except it only summarizes the range that covers the given block number. - gin_clean_pending_list accepts the OID or name of + gin_clean_pending_list accepts the OID or name of a GIN index and cleans up the pending list of the specified index by moving entries in it to the main GIN data structure in bulk. It returns the number of pages removed from the pending list. Note that if the argument is a GIN index built with - the fastupdate option disabled, no cleanup happens and the + the fastupdate option disabled, no cleanup happens and the return value is 0, because the index doesn't have a pending list. Please see and - for details of the pending list and fastupdate option. + for details of the pending list and fastupdate option. @@ -19879,9 +19879,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The functions shown in provide native access to files on the machine hosting the server. Only files within the - database cluster directory and the log_directory can be + database cluster directory and the log_directory can be accessed. Use a relative path for files in the cluster directory, - and a path matching the log_directory configuration setting + and a path matching the log_directory configuration setting for log files. Use of these functions is restricted to superusers except where stated otherwise. @@ -19897,7 +19897,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_ls_dir(dirname text [, missing_ok boolean, include_dot_dirs boolean]) + pg_ls_dir(dirname text [, missing_ok boolean, include_dot_dirs boolean]) setof text @@ -19911,7 +19911,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); setof record List the name, size, and last modification time of files in the log - directory. Access is granted to members of the pg_monitor + directory. Access is granted to members of the pg_monitor role and may be granted to other non-superuser roles. @@ -19922,13 +19922,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); setof record List the name, size, and last modification time of files in the WAL - directory. 
Access is granted to members of the pg_monitor + directory. Access is granted to members of the pg_monitor role and may be granted to other non-superuser roles. - pg_read_file(filename text [, offset bigint, length bigint [, missing_ok boolean] ]) + pg_read_file(filename text [, offset bigint, length bigint [, missing_ok boolean] ]) text @@ -19937,7 +19937,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_read_binary_file(filename text [, offset bigint, length bigint [, missing_ok boolean] ]) + pg_read_binary_file(filename text [, offset bigint, length bigint [, missing_ok boolean] ]) bytea @@ -19946,7 +19946,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_stat_file(filename text[, missing_ok boolean]) + pg_stat_file(filename text[, missing_ok boolean]) record @@ -19958,23 +19958,23 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - Some of these functions take an optional missing_ok parameter, + Some of these functions take an optional missing_ok parameter, which specifies the behavior when the file or directory does not exist. If true, the function returns NULL (except - pg_ls_dir, which returns an empty result set). If - false, an error is raised. The default is false. + pg_ls_dir, which returns an empty result set). If + false, an error is raised. The default is false. pg_ls_dir - pg_ls_dir returns the names of all files (and directories + pg_ls_dir returns the names of all files (and directories and other special files) in the specified directory. The - include_dot_dirs indicates whether . and .. are + include_dot_dirs indicates whether . and .. are included in the result set. The default is to exclude them - (false), but including them can be useful when - missing_ok is true, to distinguish an + (false), but including them can be useful when + missing_ok is true, to distinguish an empty directory from an non-existent directory. @@ -19982,9 +19982,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_ls_logdir - pg_ls_logdir returns the name, size, and last modified time + pg_ls_logdir returns the name, size, and last modified time (mtime) of each file in the log directory. By default, only superusers - and members of the pg_monitor role can use this function. + and members of the pg_monitor role can use this function. Access may be granted to others using GRANT. @@ -19992,9 +19992,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_ls_waldir - pg_ls_waldir returns the name, size, and last modified time + pg_ls_waldir returns the name, size, and last modified time (mtime) of each file in the write ahead log (WAL) directory. By - default only superusers and members of the pg_monitor role + default only superusers and members of the pg_monitor role can use this function. Access may be granted to others using GRANT. @@ -20003,11 +20003,11 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_read_file - pg_read_file returns part of a text file, starting - at the given offset, returning at most length - bytes (less if the end of file is reached first). If offset + pg_read_file returns part of a text file, starting + at the given offset, returning at most length + bytes (less if the end of file is reached first). If offset is negative, it is relative to the end of the file. - If offset and length are omitted, the entire + If offset and length are omitted, the entire file is returned. 
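As a small illustrative query (superuser or a suitably granted role assumed), the PG_VERSION file in the data directory can be read with a relative path and no offset or length:

SELECT pg_read_file('PG_VERSION');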
The bytes read from the file are interpreted as a string in the server encoding; an error is thrown if they are not valid in that encoding. @@ -20017,10 +20017,10 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_read_binary_file - pg_read_binary_file is similar to - pg_read_file, except that the result is a bytea value; + pg_read_binary_file is similar to + pg_read_file, except that the result is a bytea value; accordingly, no encoding checks are performed. - In combination with the convert_from function, this function + In combination with the convert_from function, this function can be used to read a file in a specified encoding: SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8'); @@ -20031,7 +20031,7 @@ SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8'); pg_stat_file - pg_stat_file returns a record containing the file + pg_stat_file returns a record containing the file size, last accessed time stamp, last modified time stamp, last file status change time stamp (Unix platforms only), file creation time stamp (Windows only), and a boolean @@ -20064,42 +20064,42 @@ SELECT (pg_stat_file('filename')).modification; - pg_advisory_lock(key bigint) + pg_advisory_lock(key bigint) void Obtain exclusive session level advisory lock - pg_advisory_lock(key1 int, key2 int) + pg_advisory_lock(key1 int, key2 int) void Obtain exclusive session level advisory lock - pg_advisory_lock_shared(key bigint) + pg_advisory_lock_shared(key bigint) void Obtain shared session level advisory lock - pg_advisory_lock_shared(key1 int, key2 int) + pg_advisory_lock_shared(key1 int, key2 int) void Obtain shared session level advisory lock - pg_advisory_unlock(key bigint) + pg_advisory_unlock(key bigint) boolean Release an exclusive session level advisory lock - pg_advisory_unlock(key1 int, key2 int) + pg_advisory_unlock(key1 int, key2 int) boolean Release an exclusive session level advisory lock @@ -20113,98 +20113,98 @@ SELECT (pg_stat_file('filename')).modification; - pg_advisory_unlock_shared(key bigint) + pg_advisory_unlock_shared(key bigint) boolean Release a shared session level advisory lock - pg_advisory_unlock_shared(key1 int, key2 int) + pg_advisory_unlock_shared(key1 int, key2 int) boolean Release a shared session level advisory lock - pg_advisory_xact_lock(key bigint) + pg_advisory_xact_lock(key bigint) void Obtain exclusive transaction level advisory lock - pg_advisory_xact_lock(key1 int, key2 int) + pg_advisory_xact_lock(key1 int, key2 int) void Obtain exclusive transaction level advisory lock - pg_advisory_xact_lock_shared(key bigint) + pg_advisory_xact_lock_shared(key bigint) void Obtain shared transaction level advisory lock - pg_advisory_xact_lock_shared(key1 int, key2 int) + pg_advisory_xact_lock_shared(key1 int, key2 int) void Obtain shared transaction level advisory lock - pg_try_advisory_lock(key bigint) + pg_try_advisory_lock(key bigint) boolean Obtain exclusive session level advisory lock if available - pg_try_advisory_lock(key1 int, key2 int) + pg_try_advisory_lock(key1 int, key2 int) boolean Obtain exclusive session level advisory lock if available - pg_try_advisory_lock_shared(key bigint) + pg_try_advisory_lock_shared(key bigint) boolean Obtain shared session level advisory lock if available - pg_try_advisory_lock_shared(key1 int, key2 int) + pg_try_advisory_lock_shared(key1 int, key2 int) boolean Obtain shared session level advisory lock if available - pg_try_advisory_xact_lock(key bigint) + pg_try_advisory_xact_lock(key bigint) boolean 
Obtain exclusive transaction level advisory lock if available - pg_try_advisory_xact_lock(key1 int, key2 int) + pg_try_advisory_xact_lock(key1 int, key2 int) boolean Obtain exclusive transaction level advisory lock if available - pg_try_advisory_xact_lock_shared(key bigint) + pg_try_advisory_xact_lock_shared(key bigint) boolean Obtain shared transaction level advisory lock if available - pg_try_advisory_xact_lock_shared(key1 int, key2 int) + pg_try_advisory_xact_lock_shared(key1 int, key2 int) boolean Obtain shared transaction level advisory lock if available @@ -20217,7 +20217,7 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_lock - pg_advisory_lock locks an application-defined resource, + pg_advisory_lock locks an application-defined resource, which can be identified either by a single 64-bit key value or two 32-bit key values (note that these two key spaces do not overlap). If another session already holds a lock on the same resource identifier, @@ -20231,8 +20231,8 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_lock_shared - pg_advisory_lock_shared works the same as - pg_advisory_lock, + pg_advisory_lock_shared works the same as + pg_advisory_lock, except the lock can be shared with other sessions requesting shared locks. Only would-be exclusive lockers are locked out. @@ -20241,10 +20241,10 @@ SELECT (pg_stat_file('filename')).modification; pg_try_advisory_lock - pg_try_advisory_lock is similar to - pg_advisory_lock, except the function will not wait for the + pg_try_advisory_lock is similar to + pg_advisory_lock, except the function will not wait for the lock to become available. It will either obtain the lock immediately and - return true, or return false if the lock cannot be + return true, or return false if the lock cannot be acquired immediately. @@ -20252,8 +20252,8 @@ SELECT (pg_stat_file('filename')).modification; pg_try_advisory_lock_shared - pg_try_advisory_lock_shared works the same as - pg_try_advisory_lock, except it attempts to acquire + pg_try_advisory_lock_shared works the same as + pg_try_advisory_lock, except it attempts to acquire a shared rather than an exclusive lock. @@ -20261,10 +20261,10 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_unlock - pg_advisory_unlock will release a previously-acquired + pg_advisory_unlock will release a previously-acquired exclusive session level advisory lock. It - returns true if the lock is successfully released. - If the lock was not held, it will return false, + returns true if the lock is successfully released. + If the lock was not held, it will return false, and in addition, an SQL warning will be reported by the server. @@ -20272,8 +20272,8 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_unlock_shared - pg_advisory_unlock_shared works the same as - pg_advisory_unlock, + pg_advisory_unlock_shared works the same as + pg_advisory_unlock, except it releases a shared session level advisory lock. @@ -20281,7 +20281,7 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_unlock_all - pg_advisory_unlock_all will release all session level advisory + pg_advisory_unlock_all will release all session level advisory locks held by the current session. (This function is implicitly invoked at session end, even if the client disconnects ungracefully.) 
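A sketch of the session-level usage, with 42 standing in for whatever key the application chooses:

SELECT pg_try_advisory_lock(42);    -- true if the lock was obtained, false if another session holds it
-- ... work with the application-defined resource ...
SELECT pg_advisory_unlock(42);      -- true if the lock was released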
@@ -20290,8 +20290,8 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_xact_lock - pg_advisory_xact_lock works the same as - pg_advisory_lock, except the lock is automatically released + pg_advisory_xact_lock works the same as + pg_advisory_lock, except the lock is automatically released at the end of the current transaction and cannot be released explicitly. @@ -20299,8 +20299,8 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_xact_lock_shared - pg_advisory_xact_lock_shared works the same as - pg_advisory_lock_shared, except the lock is automatically released + pg_advisory_xact_lock_shared works the same as + pg_advisory_lock_shared, except the lock is automatically released at the end of the current transaction and cannot be released explicitly. @@ -20308,8 +20308,8 @@ SELECT (pg_stat_file('filename')).modification; pg_try_advisory_xact_lock - pg_try_advisory_xact_lock works the same as - pg_try_advisory_lock, except the lock, if acquired, + pg_try_advisory_xact_lock works the same as + pg_try_advisory_lock, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly. @@ -20318,8 +20318,8 @@ SELECT (pg_stat_file('filename')).modification; pg_try_advisory_xact_lock_shared - pg_try_advisory_xact_lock_shared works the same as - pg_try_advisory_lock_shared, except the lock, if acquired, + pg_try_advisory_xact_lock_shared works the same as + pg_try_advisory_lock_shared, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly. @@ -20336,8 +20336,8 @@ SELECT (pg_stat_file('filename')).modification; - Currently PostgreSQL provides one built in trigger - function, suppress_redundant_updates_trigger, + Currently PostgreSQL provides one built in trigger + function, suppress_redundant_updates_trigger, which will prevent any update that does not actually change the data in the row from taking place, in contrast to the normal behavior which always performs the update @@ -20354,7 +20354,7 @@ SELECT (pg_stat_file('filename')).modification; However, detecting such situations in client code is not always easy, or even possible, and writing expressions to detect them can be error-prone. An alternative is to use - suppress_redundant_updates_trigger, which will skip + suppress_redundant_updates_trigger, which will skip updates that don't change the data. You should use this with care, however. The trigger takes a small but non-trivial time for each record, so if most of the records affected by an update are actually changed, @@ -20362,7 +20362,7 @@ SELECT (pg_stat_file('filename')).modification; - The suppress_redundant_updates_trigger function can be + The suppress_redundant_updates_trigger function can be added to a table like this: CREATE TRIGGER z_min_update @@ -20384,7 +20384,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); Event Trigger Functions - PostgreSQL provides these helper functions + PostgreSQL provides these helper functions to retrieve information from event triggers. @@ -20401,12 +20401,12 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); - pg_event_trigger_ddl_commands returns a list of + pg_event_trigger_ddl_commands returns a list of DDL commands executed by each user action, when invoked in a function attached to a - ddl_command_end event trigger. If called in any other + ddl_command_end event trigger. If called in any other context, an error is raised. 
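By analogy with the sql_drop example shown further below, a ddl_command_end trigger might log each command roughly like this (the function and trigger names are invented for illustration):

CREATE FUNCTION log_ddl_commands()
        RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands()
    LOOP
        RAISE NOTICE '% issued on % %', r.command_tag, r.object_type, r.object_identity;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER log_ddl_commands
   ON ddl_command_end
   EXECUTE PROCEDURE log_ddl_commands();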
- pg_event_trigger_ddl_commands returns one row for each + pg_event_trigger_ddl_commands returns one row for each base command executed; some commands that are a single SQL sentence may return more than one row. This function returns the following columns: @@ -20451,7 +20451,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); schema_name text - Name of the schema the object belongs in, if any; otherwise NULL. + Name of the schema the object belongs in, if any; otherwise NULL. No quoting is applied. @@ -20492,11 +20492,11 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); - pg_event_trigger_dropped_objects returns a list of all objects - dropped by the command in whose sql_drop event it is called. + pg_event_trigger_dropped_objects returns a list of all objects + dropped by the command in whose sql_drop event it is called. If called in any other context, - pg_event_trigger_dropped_objects raises an error. - pg_event_trigger_dropped_objects returns the following columns: + pg_event_trigger_dropped_objects raises an error. + pg_event_trigger_dropped_objects returns the following columns: @@ -20553,7 +20553,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); schema_name text - Name of the schema the object belonged in, if any; otherwise NULL. + Name of the schema the object belonged in, if any; otherwise NULL. No quoting is applied. @@ -20562,7 +20562,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); text Name of the object, if the combination of schema and name can be - used as a unique identifier for the object; otherwise NULL. + used as a unique identifier for the object; otherwise NULL. No quoting is applied, and name is never schema-qualified. @@ -20598,7 +20598,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); - The pg_event_trigger_dropped_objects function can be used + The pg_event_trigger_dropped_objects function can be used in an event trigger like this: CREATE FUNCTION test_event_trigger_for_drops() @@ -20631,7 +20631,7 @@ CREATE EVENT TRIGGER test_event_trigger_for_drops The functions shown in provide information about a table for which a - table_rewrite event has just been called. + table_rewrite event has just been called. If called in any other context, an error is raised. @@ -20668,7 +20668,7 @@ CREATE EVENT TRIGGER test_event_trigger_for_drops - The pg_event_trigger_table_rewrite_oid function can be used + The pg_event_trigger_table_rewrite_oid function can be used in an event trigger like this: CREATE FUNCTION test_event_trigger_table_rewrite_oid() diff --git a/doc/src/sgml/fuzzystrmatch.sgml b/doc/src/sgml/fuzzystrmatch.sgml index ff5bc08fea..373ac4891d 100644 --- a/doc/src/sgml/fuzzystrmatch.sgml +++ b/doc/src/sgml/fuzzystrmatch.sgml @@ -8,14 +8,14 @@ - The fuzzystrmatch module provides several + The fuzzystrmatch module provides several functions to determine similarities and distance between strings. - At present, the soundex, metaphone, - dmetaphone, and dmetaphone_alt functions do + At present, the soundex, metaphone, + dmetaphone, and dmetaphone_alt functions do not work well with multibyte encodings (such as UTF-8). @@ -31,7 +31,7 @@ - The fuzzystrmatch module provides two functions + The fuzzystrmatch module provides two functions for working with Soundex codes: @@ -49,12 +49,12 @@ difference(text, text) returns int - The soundex function converts a string to its Soundex code. 
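A quick illustration of the two Soundex functions just described, assuming the module has been installed with CREATE EXTENSION fuzzystrmatch:

SELECT soundex('Anne'), soundex('Ann'), difference('Anne', 'Ann');
-- both names reduce to the Soundex code A500, so difference() reports 4,
-- the maximum score, i.e. an exact Soundex match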
- The difference function converts two strings to their Soundex + The soundex function converts a string to its Soundex code. + The difference function converts two strings to their Soundex codes and then reports the number of matching code positions. Since Soundex codes have four characters, the result ranges from zero to four, with zero being no match and four being an exact match. (Thus, the - function is misnamed — similarity would have been + function is misnamed — similarity would have been a better name.) @@ -115,10 +115,10 @@ levenshtein_less_equal(text source, text target, int max_d) returns int levenshtein_less_equal is an accelerated version of the Levenshtein function for use when only small distances are of interest. - If the actual distance is less than or equal to max_d, + If the actual distance is less than or equal to max_d, then levenshtein_less_equal returns the correct - distance; otherwise it returns some value greater than max_d. - If max_d is negative then the behavior is the same as + distance; otherwise it returns some value greater than max_d. + If max_d is negative then the behavior is the same as levenshtein. @@ -198,9 +198,9 @@ test=# SELECT metaphone('GUMBO', 4); Double Metaphone - The Double Metaphone system computes two sounds like strings - for a given input string — a primary and an - alternate. In most cases they are the same, but for non-English + The Double Metaphone system computes two sounds like strings + for a given input string — a primary and an + alternate. In most cases they are the same, but for non-English names especially they can be a bit different, depending on pronunciation. These functions compute the primary and alternate codes: diff --git a/doc/src/sgml/generate-errcodes-table.pl b/doc/src/sgml/generate-errcodes-table.pl index 01fc6166bf..e655703b5b 100644 --- a/doc/src/sgml/generate-errcodes-table.pl +++ b/doc/src/sgml/generate-errcodes-table.pl @@ -30,12 +30,12 @@ while (<$errcodes>) s/-/—/; # Wrap PostgreSQL in - s/PostgreSQL/PostgreSQL<\/>/g; + s/PostgreSQL/PostgreSQL<\/productname>/g; print "\n\n"; print "\n"; print ""; - print "$_\n"; + print "$_\n"; print "\n"; next; diff --git a/doc/src/sgml/generic-wal.sgml b/doc/src/sgml/generic-wal.sgml index dfa78c5ca2..7a0284994c 100644 --- a/doc/src/sgml/generic-wal.sgml +++ b/doc/src/sgml/generic-wal.sgml @@ -13,8 +13,8 @@ The API for constructing generic WAL records is defined in - access/generic_xlog.h and implemented - in access/transam/generic_xlog.c. + access/generic_xlog.h and implemented + in access/transam/generic_xlog.c. @@ -24,24 +24,24 @@ - state = GenericXLogStart(relation) — start + state = GenericXLogStart(relation) — start construction of a generic WAL record for the given relation. - page = GenericXLogRegisterBuffer(state, buffer, flags) + page = GenericXLogRegisterBuffer(state, buffer, flags) — register a buffer to be modified within the current generic WAL record. This function returns a pointer to a temporary copy of the buffer's page, where modifications should be made. (Do not modify the buffer's contents directly.) The third argument is a bit mask of flags applicable to the operation. Currently the only such flag is - GENERIC_XLOG_FULL_IMAGE, which indicates that a full-page + GENERIC_XLOG_FULL_IMAGE, which indicates that a full-page image rather than a delta update should be included in the WAL record. Typically this flag would be set if the page is new or has been rewritten completely. 
- GenericXLogRegisterBuffer can be repeated if the + GenericXLogRegisterBuffer can be repeated if the WAL-logged action needs to modify multiple pages. @@ -54,7 +54,7 @@ - GenericXLogFinish(state) — apply the changes to + GenericXLogFinish(state) — apply the changes to the buffers and emit the generic WAL record. @@ -63,7 +63,7 @@ WAL record construction can be canceled between any of the above steps by - calling GenericXLogAbort(state). This will discard all + calling GenericXLogAbort(state). This will discard all changes to the page image copies. @@ -75,13 +75,13 @@ No direct modifications of buffers are allowed! All modifications must - be done in copies acquired from GenericXLogRegisterBuffer(). + be done in copies acquired from GenericXLogRegisterBuffer(). In other words, code that makes generic WAL records should never call - BufferGetPage() for itself. However, it remains the + BufferGetPage() for itself. However, it remains the caller's responsibility to pin/unpin and lock/unlock the buffers at appropriate times. Exclusive lock must be held on each target buffer - from before GenericXLogRegisterBuffer() until after - GenericXLogFinish(). + from before GenericXLogRegisterBuffer() until after + GenericXLogFinish(). @@ -97,7 +97,7 @@ The maximum number of buffers that can be registered for a generic WAL - record is MAX_GENERIC_XLOG_PAGES. An error will be thrown + record is MAX_GENERIC_XLOG_PAGES. An error will be thrown if this limit is exceeded. @@ -106,26 +106,26 @@ Generic WAL assumes that the pages to be modified have standard layout, and in particular that there is no useful data between - pd_lower and pd_upper. + pd_lower and pd_upper.
Since you are modifying copies of buffer - pages, GenericXLogStart() does not start a critical + pages, GenericXLogStart() does not start a critical section. Thus, you can safely do memory allocation, error throwing, - etc. between GenericXLogStart() and - GenericXLogFinish(). The only actual critical section is - present inside GenericXLogFinish(). There is no need to - worry about calling GenericXLogAbort() during an error + etc. between GenericXLogStart() and + GenericXLogFinish(). The only actual critical section is + present inside GenericXLogFinish(). There is no need to + worry about calling GenericXLogAbort() during an error exit, either. - GenericXLogFinish() takes care of marking buffers dirty + GenericXLogFinish() takes care of marking buffers dirty and setting their LSNs. You do not need to do this explicitly. @@ -148,7 +148,7 @@ - If GENERIC_XLOG_FULL_IMAGE is not specified for a + If GENERIC_XLOG_FULL_IMAGE is not specified for a registered buffer, the generic WAL record contains a delta between the old and the new page images. This delta is based on byte-by-byte comparison. This is not very compact for the case of moving data diff --git a/doc/src/sgml/geqo.sgml b/doc/src/sgml/geqo.sgml index e0f8adcd6e..99ee3ebca0 100644 --- a/doc/src/sgml/geqo.sgml +++ b/doc/src/sgml/geqo.sgml @@ -88,7 +88,7 @@ - According to the comp.ai.genetic FAQ it cannot be stressed too + According to the comp.ai.genetic FAQ it cannot be stressed too strongly that a GA is not a pure random search for a solution to a problem. A GA uses stochastic processes, but the result is distinctly non-random (better than random). @@ -222,7 +222,7 @@ are considered; and all the initially-determined relation scan plans are available. The estimated cost is the cheapest of these possibilities.) Join sequences with lower estimated cost are considered - more fit than those with higher cost. The genetic algorithm + more fit than those with higher cost. The genetic algorithm discards the least fit candidates. Then new candidates are generated by combining genes of more-fit candidates — that is, by using randomly-chosen portions of known low-cost join sequences to create @@ -235,20 +235,20 @@ This process is inherently nondeterministic, because of the randomized choices made during both the initial population selection and subsequent - mutation of the best candidates. To avoid surprising changes + mutation of the best candidates. To avoid surprising changes of the selected plan, each run of the GEQO algorithm restarts its random number generator with the current - parameter setting. As long as geqo_seed and the other + parameter setting. As long as geqo_seed and the other GEQO parameters are kept fixed, the same plan will be generated for a given query (and other planner inputs such as statistics). To experiment - with different search paths, try changing geqo_seed. + with different search paths, try changing geqo_seed. Future Implementation Tasks for - <productname>PostgreSQL</> <acronym>GEQO</acronym> + PostgreSQL GEQO Work is still needed to improve the genetic algorithm parameter diff --git a/doc/src/sgml/gin.sgml b/doc/src/sgml/gin.sgml index 7c2321ec3c..873627a210 100644 --- a/doc/src/sgml/gin.sgml +++ b/doc/src/sgml/gin.sgml @@ -21,15 +21,15 @@ - We use the word item to refer to a composite value that - is to be indexed, and the word key to refer to an element + We use the word item to refer to a composite value that + is to be indexed, and the word key to refer to an element value. 
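To make the item/key distinction concrete, here is a minimal sketch using hypothetical tables; the operator classes involved (array_ops, chosen by default for the array column, and jsonb_path_ops, named explicitly) appear in the table of built-in GIN operator classes below:

CREATE TABLE docs (tags text[]);
-- each tags value is an item; each distinct array element becomes a key in the index
CREATE INDEX docs_tags_idx ON docs USING gin (tags);
SELECT * FROM docs WHERE tags @> ARRAY['postgres'];

CREATE TABLE api (payload jsonb);
-- jsonb_path_ops supports fewer operators than the default jsonb_ops,
-- but containment queries like the one below are typically faster with it
CREATE INDEX api_payload_idx ON api USING gin (payload jsonb_path_ops);
SELECT * FROM api WHERE payload @> '{"status": "active"}';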
GIN always stores and searches for keys, not item values per se. A GIN index stores a set of (key, posting list) pairs, - where a posting list is a set of row IDs in which the key + where a posting list is a set of row IDs in which the key occurs. The same row ID can appear in multiple posting lists, since an item can contain more than one key. Each key value is stored only once, so a GIN index is very compact for cases @@ -66,7 +66,7 @@ Built-in Operator Classes - The core PostgreSQL distribution + The core PostgreSQL distribution includes the GIN operator classes shown in . (Some of the optional modules described in @@ -85,38 +85,38 @@ - array_ops - anyarray + array_ops + anyarray - && - <@ - = - @> + && + <@ + = + @> - jsonb_ops - jsonb + jsonb_ops + jsonb - ? - ?& - ?| - @> + ? + ?& + ?| + @> - jsonb_path_ops - jsonb + jsonb_path_ops + jsonb - @> + @> - tsvector_ops - tsvector + tsvector_ops + tsvector - @@ - @@@ + @@ + @@@ @@ -124,8 +124,8 @@ - Of the two operator classes for type jsonb, jsonb_ops - is the default. jsonb_path_ops supports fewer operators but + Of the two operator classes for type jsonb, jsonb_ops + is the default. jsonb_path_ops supports fewer operators but offers better performance for those operators. See for details. @@ -157,15 +157,15 @@ Datum *extractValue(Datum itemValue, int32 *nkeys, - bool **nullFlags) + bool **nullFlags) Returns a palloc'd array of keys given an item to be indexed. The - number of returned keys must be stored into *nkeys. + number of returned keys must be stored into *nkeys. If any of the keys can be null, also palloc an array of - *nkeys bool fields, store its address at - *nullFlags, and set these null flags as needed. - *nullFlags can be left NULL (its initial value) + *nkeys bool fields, store its address at + *nullFlags, and set these null flags as needed. + *nullFlags can be left NULL (its initial value) if all keys are non-null. The return value can be NULL if the item contains no keys. @@ -175,40 +175,40 @@ Datum *extractQuery(Datum query, int32 *nkeys, StrategyNumber n, bool **pmatch, Pointer **extra_data, - bool **nullFlags, int32 *searchMode) + bool **nullFlags, int32 *searchMode) Returns a palloc'd array of keys given a value to be queried; that is, - query is the value on the right-hand side of an + query is the value on the right-hand side of an indexable operator whose left-hand side is the indexed column. - n is the strategy number of the operator within the + n is the strategy number of the operator within the operator class (see ). - Often, extractQuery will need - to consult n to determine the data type of - query and the method it should use to extract key values. - The number of returned keys must be stored into *nkeys. + Often, extractQuery will need + to consult n to determine the data type of + query and the method it should use to extract key values. + The number of returned keys must be stored into *nkeys. If any of the keys can be null, also palloc an array of - *nkeys bool fields, store its address at - *nullFlags, and set these null flags as needed. - *nullFlags can be left NULL (its initial value) + *nkeys bool fields, store its address at + *nullFlags, and set these null flags as needed. + *nullFlags can be left NULL (its initial value) if all keys are non-null. - The return value can be NULL if the query contains no keys. + The return value can be NULL if the query contains no keys. 
- searchMode is an output argument that allows - extractQuery to specify details about how the search + searchMode is an output argument that allows + extractQuery to specify details about how the search will be done. - If *searchMode is set to - GIN_SEARCH_MODE_DEFAULT (which is the value it is + If *searchMode is set to + GIN_SEARCH_MODE_DEFAULT (which is the value it is initialized to before call), only items that match at least one of the returned keys are considered candidate matches. - If *searchMode is set to - GIN_SEARCH_MODE_INCLUDE_EMPTY, then in addition to items + If *searchMode is set to + GIN_SEARCH_MODE_INCLUDE_EMPTY, then in addition to items containing at least one matching key, items that contain no keys at all are considered candidate matches. (This mode is useful for implementing is-subset-of operators, for example.) - If *searchMode is set to GIN_SEARCH_MODE_ALL, + If *searchMode is set to GIN_SEARCH_MODE_ALL, then all non-null items in the index are considered candidate matches, whether they match any of the returned keys or not. (This mode is much slower than the other two choices, since it requires @@ -217,33 +217,33 @@ in most cases is probably not a good candidate for a GIN operator class.) The symbols to use for setting this mode are defined in - access/gin.h. + access/gin.h. - pmatch is an output argument for use when partial match - is supported. To use it, extractQuery must allocate - an array of *nkeys booleans and store its address at - *pmatch. Each element of the array should be set to TRUE + pmatch is an output argument for use when partial match + is supported. To use it, extractQuery must allocate + an array of *nkeys booleans and store its address at + *pmatch. Each element of the array should be set to TRUE if the corresponding key requires partial match, FALSE if not. - If *pmatch is set to NULL then GIN assumes partial match + If *pmatch is set to NULL then GIN assumes partial match is not required. The variable is initialized to NULL before call, so this argument can simply be ignored by operator classes that do not support partial match. - extra_data is an output argument that allows - extractQuery to pass additional data to the - consistent and comparePartial methods. - To use it, extractQuery must allocate - an array of *nkeys pointers and store its address at - *extra_data, then store whatever it wants to into the + extra_data is an output argument that allows + extractQuery to pass additional data to the + consistent and comparePartial methods. + To use it, extractQuery must allocate + an array of *nkeys pointers and store its address at + *extra_data, then store whatever it wants to into the individual pointers. The variable is initialized to NULL before call, so this argument can simply be ignored by operator classes that - do not require extra data. If *extra_data is set, the - whole array is passed to the consistent method, and - the appropriate element to the comparePartial method. + do not require extra data. If *extra_data is set, the + whole array is passed to the consistent method, and + the appropriate element to the comparePartial method. @@ -251,10 +251,10 @@ An operator class must also provide a function to check if an indexed item - matches the query. It comes in two flavors, a boolean consistent - function, and a ternary triConsistent function. - triConsistent covers the functionality of both, so providing - triConsistent alone is sufficient. However, if the boolean + matches the query. 
It comes in two flavors, a boolean consistent + function, and a ternary triConsistent function. + triConsistent covers the functionality of both, so providing + triConsistent alone is sufficient. However, if the boolean variant is significantly cheaper to calculate, it can be advantageous to provide both. If only the boolean variant is provided, some optimizations that depend on refuting index items before fetching all the keys are @@ -264,48 +264,48 @@ bool consistent(bool check[], StrategyNumber n, Datum query, int32 nkeys, Pointer extra_data[], bool *recheck, - Datum queryKeys[], bool nullFlags[]) + Datum queryKeys[], bool nullFlags[]) Returns TRUE if an indexed item satisfies the query operator with - strategy number n (or might satisfy it, if the recheck + strategy number n (or might satisfy it, if the recheck indication is returned). This function does not have direct access to the indexed item's value, since GIN does not store items explicitly. Rather, what is available is knowledge about which key values extracted from the query appear in a given - indexed item. The check array has length - nkeys, which is the same as the number of keys previously - returned by extractQuery for this query datum. + indexed item. The check array has length + nkeys, which is the same as the number of keys previously + returned by extractQuery for this query datum. Each element of the - check array is TRUE if the indexed item contains the + check array is TRUE if the indexed item contains the corresponding query key, i.e., if (check[i] == TRUE) the i-th key of the - extractQuery result array is present in the indexed item. - The original query datum is - passed in case the consistent method needs to consult it, - and so are the queryKeys[] and nullFlags[] - arrays previously returned by extractQuery. - extra_data is the extra-data array returned by - extractQuery, or NULL if none. + extractQuery result array is present in the indexed item. + The original query datum is + passed in case the consistent method needs to consult it, + and so are the queryKeys[] and nullFlags[] + arrays previously returned by extractQuery. + extra_data is the extra-data array returned by + extractQuery, or NULL if none. - When extractQuery returns a null key in - queryKeys[], the corresponding check[] element + When extractQuery returns a null key in + queryKeys[], the corresponding check[] element is TRUE if the indexed item contains a null key; that is, the - semantics of check[] are like IS NOT DISTINCT - FROM. The consistent function can examine the - corresponding nullFlags[] element if it needs to tell + semantics of check[] are like IS NOT DISTINCT + FROM. The consistent function can examine the + corresponding nullFlags[] element if it needs to tell the difference between a regular value match and a null match. - On success, *recheck should be set to TRUE if the heap + On success, *recheck should be set to TRUE if the heap tuple needs to be rechecked against the query operator, or FALSE if the index test is exact. 
That is, a FALSE return value guarantees that the heap tuple does not match the query; a TRUE return value with - *recheck set to FALSE guarantees that the heap tuple does + *recheck set to FALSE guarantees that the heap tuple does match the query; and a TRUE return value with - *recheck set to TRUE means that the heap tuple might match + *recheck set to TRUE means that the heap tuple might match the query, so it needs to be fetched and rechecked by evaluating the query operator directly against the originally indexed item. @@ -315,30 +315,30 @@ GinTernaryValue triConsistent(GinTernaryValue check[], StrategyNumber n, Datum query, int32 nkeys, Pointer extra_data[], - Datum queryKeys[], bool nullFlags[]) + Datum queryKeys[], bool nullFlags[]) - triConsistent is similar to consistent, - but instead of booleans in the check vector, there are + triConsistent is similar to consistent, + but instead of booleans in the check vector, there are three possible values for each - key: GIN_TRUE, GIN_FALSE and - GIN_MAYBE. GIN_FALSE and GIN_TRUE + key: GIN_TRUE, GIN_FALSE and + GIN_MAYBE. GIN_FALSE and GIN_TRUE have the same meaning as regular boolean values, while - GIN_MAYBE means that the presence of that key is not known. - When GIN_MAYBE values are present, the function should only - return GIN_TRUE if the item certainly matches whether or + GIN_MAYBE means that the presence of that key is not known. + When GIN_MAYBE values are present, the function should only + return GIN_TRUE if the item certainly matches whether or not the index item contains the corresponding query keys. Likewise, the - function must return GIN_FALSE only if the item certainly - does not match, whether or not it contains the GIN_MAYBE - keys. If the result depends on the GIN_MAYBE entries, i.e., + function must return GIN_FALSE only if the item certainly + does not match, whether or not it contains the GIN_MAYBE + keys. If the result depends on the GIN_MAYBE entries, i.e., the match cannot be confirmed or refuted based on the known query keys, - the function must return GIN_MAYBE. + the function must return GIN_MAYBE. - When there are no GIN_MAYBE values in the check - vector, a GIN_MAYBE return value is the equivalent of - setting the recheck flag in the - boolean consistent function. + When there are no GIN_MAYBE values in the check + vector, a GIN_MAYBE return value is the equivalent of + setting the recheck flag in the + boolean consistent function. @@ -352,7 +352,7 @@ - int compare(Datum a, Datum b) + int compare(Datum a, Datum b) Compares two keys (not indexed items!) and returns an integer less than @@ -364,13 +364,13 @@ - Alternatively, if the operator class does not provide a compare + Alternatively, if the operator class does not provide a compare method, GIN will look up the default btree operator class for the index key data type, and use its comparison function. It is recommended to specify the comparison function in a GIN operator class that is meant for just one data type, as looking up the btree operator class costs a few cycles. However, polymorphic GIN operator classes (such - as array_ops) typically cannot specify a single comparison + as array_ops) typically cannot specify a single comparison function. @@ -381,7 +381,7 @@ int comparePartial(Datum partial_key, Datum key, StrategyNumber n, - Pointer extra_data) + Pointer extra_data) Compare a partial-match query key to an index key. 
Returns an integer @@ -389,11 +389,11 @@ does not match the query, but the index scan should continue; zero means that the index key does match the query; greater than zero indicates that the index scan should stop because no more matches - are possible. The strategy number n of the operator + are possible. The strategy number n of the operator that generated the partial match query is provided, in case its semantics are needed to determine when to end the scan. Also, - extra_data is the corresponding element of the extra-data - array made by extractQuery, or NULL if none. + extra_data is the corresponding element of the extra-data + array made by extractQuery, or NULL if none. Null keys are never passed to this function. @@ -402,25 +402,25 @@ - To support partial match queries, an operator class must - provide the comparePartial method, and its - extractQuery method must set the pmatch + To support partial match queries, an operator class must + provide the comparePartial method, and its + extractQuery method must set the pmatch parameter when a partial-match query is encountered. See for details. - The actual data types of the various Datum values mentioned + The actual data types of the various Datum values mentioned above vary depending on the operator class. The item values passed to - extractValue are always of the operator class's input type, and - all key values must be of the class's STORAGE type. The type of - the query argument passed to extractQuery, - consistent and triConsistent is whatever is the + extractValue are always of the operator class's input type, and + all key values must be of the class's STORAGE type. The type of + the query argument passed to extractQuery, + consistent and triConsistent is whatever is the right-hand input type of the class member operator identified by the strategy number. This need not be the same as the indexed type, so long as key values of the correct type can be extracted from it. However, it is recommended that the SQL declarations of these three support functions use - the opclass's indexed data type for the query argument, even + the opclass's indexed data type for the query argument, even though the actual type might be something else depending on the operator. @@ -434,8 +434,8 @@ constructed over keys, where each key is an element of one or more indexed items (a member of an array, for example) and where each tuple in a leaf page contains either a pointer to a B-tree of heap pointers (a - posting tree), or a simple list of heap pointers (a posting - list) when the list is small enough to fit into a single index tuple along + posting tree), or a simple list of heap pointers (a posting + list) when the list is small enough to fit into a single index tuple along with the key value. @@ -443,7 +443,7 @@ As of PostgreSQL 9.1, null key values can be included in the index. Also, placeholder nulls are included in the index for indexed items that are null or contain no keys according to - extractValue. This allows searches that should find empty + extractValue. This allows searches that should find empty items to do so. @@ -461,7 +461,7 @@ intrinsic nature of inverted indexes: inserting or updating one heap row can cause many inserts into the index (one for each key extracted from the indexed item). As of PostgreSQL 8.4, - GIN is capable of postponing much of this work by inserting + GIN is capable of postponing much of this work by inserting new tuples into a temporary, unsorted list of pending entries. 
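Tying together the support methods described above, a GIN operator class is declared with CREATE OPERATOR CLASS, attaching each method by its support function number (1 compare, 2 extractValue, 3 extractQuery, 4 consistent, 5 comparePartial, 6 triConsistent). Everything in this sketch is hypothetical; the type, operator, function names and argument lists only echo the signatures shown earlier:

CREATE OPERATOR CLASS my_gin_ops
    DEFAULT FOR TYPE mytype USING gin AS
        OPERATOR  1  && (mytype, mytype),
        FUNCTION  1  my_compare(keytype, keytype),
        FUNCTION  2  my_extract_value(mytype, internal, internal),
        FUNCTION  3  my_extract_query(mytype, internal, int2, internal, internal, internal, internal),
        FUNCTION  4  my_consistent(internal, int2, mytype, int4, internal, internal, internal, internal),
        FUNCTION  5  my_compare_partial(keytype, keytype, int2, internal),  -- only if partial match is supported
        FUNCTION  6  my_tri_consistent(internal, int2, mytype, int4, internal, internal, internal),
        STORAGE   keytype;  -- needed only when the key type differs from the indexed type

A real operator class would of course also need the underlying support functions created with CREATE FUNCTION ... LANGUAGE C, in the same style as the GiST declarations shown later in this chapter.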
When the table is vacuumed or autoanalyzed, or when gin_clean_pending_list function is called, or if the @@ -479,7 +479,7 @@ of pending entries in addition to searching the regular index, and so a large list of pending entries will slow searches significantly. Another disadvantage is that, while most updates are fast, an update - that causes the pending list to become too large will incur an + that causes the pending list to become too large will incur an immediate cleanup cycle and thus be much slower than other updates. Proper use of autovacuum can minimize both of these problems. @@ -497,15 +497,15 @@ Partial Match Algorithm - GIN can support partial match queries, in which the query + GIN can support partial match queries, in which the query does not determine an exact match for one or more keys, but the possible matches fall within a reasonably narrow range of key values (within the - key sorting order determined by the compare support method). - The extractQuery method, instead of returning a key value + key sorting order determined by the compare support method). + The extractQuery method, instead of returning a key value to be matched exactly, returns a key value that is the lower bound of - the range to be searched, and sets the pmatch flag true. - The key range is then scanned using the comparePartial - method. comparePartial must return zero for a matching + the range to be searched, and sets the pmatch flag true. + The key range is then scanned using the comparePartial + method. comparePartial must return zero for a matching index key, less than zero for a non-match that is still within the range to be searched, or greater than zero if the index key is past the range that could match. @@ -542,7 +542,7 @@ Build time for a GIN index is very sensitive to - the maintenance_work_mem setting; it doesn't pay to + the maintenance_work_mem setting; it doesn't pay to skimp on work memory during index creation. @@ -553,18 +553,18 @@ During a series of insertions into an existing GIN - index that has fastupdate enabled, the system will clean up + index that has fastupdate enabled, the system will clean up the pending-entry list whenever the list grows larger than - gin_pending_list_limit. To avoid fluctuations in observed + gin_pending_list_limit. To avoid fluctuations in observed response time, it's desirable to have pending-list cleanup occur in the background (i.e., via autovacuum). Foreground cleanup operations - can be avoided by increasing gin_pending_list_limit + can be avoided by increasing gin_pending_list_limit or making autovacuum more aggressive. However, enlarging the threshold of the cleanup operation means that if a foreground cleanup does occur, it will take even longer. - gin_pending_list_limit can be overridden for individual + gin_pending_list_limit can be overridden for individual GIN indexes by changing storage parameters, and which allows each GIN index to have its own cleanup threshold. For example, it's possible to increase the threshold only for the GIN @@ -616,7 +616,7 @@ GIN assumes that indexable operators are strict. This - means that extractValue will not be called at all on a null + means that extractValue will not be called at all on a null item value (instead, a placeholder index entry is created automatically), and extractQuery will not be called on a null query value either (instead, the query is presumed to be unsatisfiable). 
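Both pending-list knobs mentioned above can be adjusted per index; a minimal sketch on a hypothetical GIN index (the values are only illustrative, and gin_pending_list_limit is measured in kilobytes):

ALTER INDEX docs_tags_idx SET (gin_pending_list_limit = 256);  -- lower this index's cleanup threshold
ALTER INDEX docs_tags_idx SET (fastupdate = off);              -- bypass the pending list entirely
ALTER INDEX docs_tags_idx RESET (fastupdate);                  -- back to the server default (on)

Note that turning fastupdate off this way only affects future insertions; entries already sitting in the pending list are not flushed by the ALTER INDEX itself, so a VACUUM or a call to gin_clean_pending_list is needed to empty it.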
Note @@ -629,36 +629,36 @@ Examples - The core PostgreSQL distribution + The core PostgreSQL distribution includes the GIN operator classes previously shown in . - The following contrib modules also contain + The following contrib modules also contain GIN operator classes: - btree_gin + btree_gin B-tree equivalent functionality for several data types - hstore + hstore Module for storing (key, value) pairs - intarray + intarray Enhanced support for int[] - pg_trgm + pg_trgm Text similarity using trigram matching diff --git a/doc/src/sgml/gist.sgml b/doc/src/sgml/gist.sgml index 1648eb3672..4e4470d439 100644 --- a/doc/src/sgml/gist.sgml +++ b/doc/src/sgml/gist.sgml @@ -44,7 +44,7 @@ Built-in Operator Classes - The core PostgreSQL distribution + The core PostgreSQL distribution includes the GiST operator classes shown in . (Some of the optional modules described in @@ -64,142 +64,142 @@ - box_ops - box + box_ops + box - && - &> - &< - &<| - >> - << - <<| - <@ - @> - @ - |&> - |>> - ~ - ~= + && + &> + &< + &<| + >> + << + <<| + <@ + @> + @ + |&> + |>> + ~ + ~= - circle_ops - circle + circle_ops + circle - && - &> - &< - &<| - >> - << - <<| - <@ - @> - @ - |&> - |>> - ~ - ~= + && + &> + &< + &<| + >> + << + <<| + <@ + @> + @ + |&> + |>> + ~ + ~= - <-> + <-> - inet_ops - inet, cidr + inet_ops + inet, cidr - && - >> - >>= - > - >= - <> - << - <<= - < - <= - = + && + >> + >>= + > + >= + <> + << + <<= + < + <= + = - point_ops - point + point_ops + point - >> - >^ - << - <@ - <@ - <@ - <^ - ~= + >> + >^ + << + <@ + <@ + <@ + <^ + ~= - <-> + <-> - poly_ops - polygon + poly_ops + polygon - && - &> - &< - &<| - >> - << - <<| - <@ - @> - @ - |&> - |>> - ~ - ~= + && + &> + &< + &<| + >> + << + <<| + <@ + @> + @ + |&> + |>> + ~ + ~= - <-> + <-> - range_ops + range_ops any range type - && - &> - &< - >> - << - <@ - -|- - = - @> - @> + && + &> + &< + >> + << + <@ + -|- + = + @> + @> - tsquery_ops - tsquery + tsquery_ops + tsquery - <@ - @> + <@ + @> - tsvector_ops - tsvector + tsvector_ops + tsvector - @@ + @@ @@ -209,9 +209,9 @@ - For historical reasons, the inet_ops operator class is - not the default class for types inet and cidr. - To use it, mention the class name in CREATE INDEX, + For historical reasons, the inet_ops operator class is + not the default class for types inet and cidr. + To use it, mention the class name in CREATE INDEX, for example CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); @@ -270,53 +270,53 @@ CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); There are five methods that an index operator class for GiST must provide, and four that are optional. Correctness of the index is ensured - by proper implementation of the same, consistent - and union methods, while efficiency (size and speed) of the - index will depend on the penalty and picksplit + by proper implementation of the same, consistent + and union methods, while efficiency (size and speed) of the + index will depend on the penalty and picksplit methods. - Two optional methods are compress and - decompress, which allow an index to have internal tree data of + Two optional methods are compress and + decompress, which allow an index to have internal tree data of a different type than the data it indexes. The leaves are to be of the indexed data type, while the other tree nodes can be of any C struct (but - you still have to follow PostgreSQL data type rules here, - see about varlena for variable sized data). 
If the tree's - internal data type exists at the SQL level, the STORAGE option - of the CREATE OPERATOR CLASS command can be used. - The optional eighth method is distance, which is needed + you still have to follow PostgreSQL data type rules here, + see about varlena for variable sized data). If the tree's + internal data type exists at the SQL level, the STORAGE option + of the CREATE OPERATOR CLASS command can be used. + The optional eighth method is distance, which is needed if the operator class wishes to support ordered scans (nearest-neighbor - searches). The optional ninth method fetch is needed if the + searches). The optional ninth method fetch is needed if the operator class wishes to support index-only scans, except when the - compress method is omitted. + compress method is omitted. - consistent + consistent - Given an index entry p and a query value q, + Given an index entry p and a query value q, this function determines whether the index entry is - consistent with the query; that is, could the predicate - indexed_column - indexable_operator q be true for + consistent with the query; that is, could the predicate + indexed_column + indexable_operator q be true for any row represented by the index entry? For a leaf index entry this is equivalent to testing the indexable condition, while for an internal tree node this determines whether it is necessary to scan the subtree of the index represented by the tree node. When the result is - true, a recheck flag must also be returned. + true, a recheck flag must also be returned. This indicates whether the predicate is certainly true or only possibly - true. If recheck = false then the index has - tested the predicate condition exactly, whereas if recheck - = true the row is only a candidate match. In that case the + true. If recheck = false then the index has + tested the predicate condition exactly, whereas if recheck + = true the row is only a candidate match. In that case the system will automatically evaluate the - indexable_operator against the actual row value to see + indexable_operator against the actual row value to see if it is really a match. This convention allows GiST to support both lossless and lossy index structures. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_consistent(internal, data_type, smallint, oid, internal) @@ -356,23 +356,23 @@ my_consistent(PG_FUNCTION_ARGS) } - Here, key is an element in the index and query - the value being looked up in the index. The StrategyNumber + Here, key is an element in the index and query + the value being looked up in the index. The StrategyNumber parameter indicates which operator of your operator class is being applied — it matches one of the operator numbers in the - CREATE OPERATOR CLASS command. + CREATE OPERATOR CLASS command. Depending on which operators you have included in the class, the data - type of query could vary with the operator, since it will + type of query could vary with the operator, since it will be whatever type is on the righthand side of the operator, which might be different from the indexed data type appearing on the lefthand side. (The above code skeleton assumes that only one type is possible; if - not, fetching the query argument value would have to depend + not, fetching the query argument value would have to depend on the operator.) 
It is recommended that the SQL declaration of - the consistent function use the opclass's indexed data - type for the query argument, even though the actual type + the consistent function use the opclass's indexed data + type for the query argument, even though the actual type might be something else depending on the operator. @@ -380,7 +380,7 @@ my_consistent(PG_FUNCTION_ARGS) - union + union This method consolidates information in the tree. Given a set of @@ -389,7 +389,7 @@ my_consistent(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_union(internal, internal) @@ -439,44 +439,44 @@ my_union(PG_FUNCTION_ARGS) As you can see, in this skeleton we're dealing with a data type - where union(X, Y, Z) = union(union(X, Y), Z). It's easy + where union(X, Y, Z) = union(union(X, Y), Z). It's easy enough to support data types where this is not the case, by implementing the proper union algorithm in this - GiST support method. + GiST support method. - The result of the union function must be a value of the + The result of the union function must be a value of the index's storage type, whatever that is (it might or might not be - different from the indexed column's type). The union - function should return a pointer to newly palloc()ed + different from the indexed column's type). The union + function should return a pointer to newly palloc()ed memory. You can't just return the input value as-is, even if there is no type change. - As shown above, the union function's - first internal argument is actually - a GistEntryVector pointer. The second argument is a + As shown above, the union function's + first internal argument is actually + a GistEntryVector pointer. The second argument is a pointer to an integer variable, which can be ignored. (It used to be - required that the union function store the size of its + required that the union function store the size of its result value into that variable, but this is no longer necessary.) - compress + compress Converts a data item into a format suitable for physical storage in an index page. - If the compress method is omitted, data items are stored + If the compress method is omitted, data items are stored in the index without modification. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_compress(internal) @@ -519,7 +519,7 @@ my_compress(PG_FUNCTION_ARGS) - You have to adapt compressed_data_type to the specific + You have to adapt compressed_data_type to the specific type you're converting to in order to compress your leaf nodes, of course. @@ -527,24 +527,24 @@ my_compress(PG_FUNCTION_ARGS) - decompress + decompress Converts the stored representation of a data item into a format that can be manipulated by the other GiST methods in the operator class. - If the decompress method is omitted, it is assumed that + If the decompress method is omitted, it is assumed that the other GiST methods can work directly on the stored data format. - (decompress is not necessarily the reverse of + (decompress is not necessarily the reverse of the compress method; in particular, if compress is lossy then it's impossible - for decompress to exactly reconstruct the original - data. decompress is not necessarily equivalent - to fetch, either, since the other GiST methods might not + for decompress to exactly reconstruct the original + data. 
decompress is not necessarily equivalent + to fetch, either, since the other GiST methods might not require full reconstruction of the data.) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_decompress(internal) @@ -573,7 +573,7 @@ my_decompress(PG_FUNCTION_ARGS) - penalty + penalty Returns a value indicating the cost of inserting the new @@ -584,7 +584,7 @@ my_decompress(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_penalty(internal, internal, internal) @@ -612,15 +612,15 @@ my_penalty(PG_FUNCTION_ARGS) } - For historical reasons, the penalty function doesn't - just return a float result; instead it has to store the value + For historical reasons, the penalty function doesn't + just return a float result; instead it has to store the value at the location indicated by the third argument. The return value per se is ignored, though it's conventional to pass back the address of that argument. - The penalty function is crucial to good performance of + The penalty function is crucial to good performance of the index. It'll get used at insertion time to determine which branch to follow when choosing where to add the new entry in the tree. At query time, the more balanced the index, the quicker the lookup. @@ -629,7 +629,7 @@ my_penalty(PG_FUNCTION_ARGS) - picksplit + picksplit When an index page split is necessary, this function decides which @@ -638,7 +638,7 @@ my_penalty(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_picksplit(internal, internal) @@ -725,33 +725,33 @@ my_picksplit(PG_FUNCTION_ARGS) } - Notice that the picksplit function's result is delivered - by modifying the passed-in v structure. The return + Notice that the picksplit function's result is delivered + by modifying the passed-in v structure. The return value per se is ignored, though it's conventional to pass back the - address of v. + address of v. - Like penalty, the picksplit function + Like penalty, the picksplit function is crucial to good performance of the index. Designing suitable - penalty and picksplit implementations + penalty and picksplit implementations is where the challenge of implementing well-performing - GiST indexes lies. + GiST indexes lies. - same + same Returns true if two index entries are identical, false otherwise. - (An index entry is a value of the index's storage type, + (An index entry is a value of the index's storage type, not necessarily the original indexed column's type.) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_same(storage_type, storage_type, internal) @@ -777,7 +777,7 @@ my_same(PG_FUNCTION_ARGS) } - For historical reasons, the same function doesn't + For historical reasons, the same function doesn't just return a Boolean result; instead it has to store the flag at the location indicated by the third argument. The return value per se is ignored, though it's conventional to pass back the @@ -787,15 +787,15 @@ my_same(PG_FUNCTION_ARGS) - distance + distance - Given an index entry p and a query value q, + Given an index entry p and a query value q, this function determines the index entry's - distance from the query value. 
This function must be + distance from the query value. This function must be supplied if the operator class contains any ordering operators. A query using the ordering operator will be implemented by returning - index entries with the smallest distance values first, + index entries with the smallest distance values first, so the results must be consistent with the operator's semantics. For a leaf index entry the result just represents the distance to the index entry; for an internal tree node, the result must be the @@ -803,7 +803,7 @@ my_same(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_distance(internal, data_type, smallint, oid, internal) @@ -836,8 +836,8 @@ my_distance(PG_FUNCTION_ARGS) } - The arguments to the distance function are identical to - the arguments of the consistent function. + The arguments to the distance function are identical to + the arguments of the consistent function. @@ -847,31 +847,31 @@ my_distance(PG_FUNCTION_ARGS) geometric applications. For an internal tree node, the distance returned must not be greater than the distance to any of the child nodes. If the returned distance is not exact, the function must set - *recheck to true. (This is not necessary for internal tree + *recheck to true. (This is not necessary for internal tree nodes; for them, the calculation is always assumed to be inexact.) In this case the executor will calculate the accurate distance after fetching the tuple from the heap, and reorder the tuples if necessary. - If the distance function returns *recheck = true for any + If the distance function returns *recheck = true for any leaf node, the original ordering operator's return type must - be float8 or float4, and the distance function's + be float8 or float4, and the distance function's result values must be comparable to those of the original ordering operator, since the executor will sort using both distance function results and recalculated ordering-operator results. Otherwise, the - distance function's result values can be any finite float8 + distance function's result values can be any finite float8 values, so long as the relative order of the result values matches the order returned by the ordering operator. (Infinity and minus infinity are used internally to handle cases such as nulls, so it is not - recommended that distance functions return these values.) + recommended that distance functions return these values.) - fetch + fetch Converts the compressed index representation of a data item into the @@ -880,7 +880,7 @@ my_distance(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_fetch(internal) @@ -889,14 +889,14 @@ AS 'MODULE_PATHNAME' LANGUAGE C STRICT; - The argument is a pointer to a GISTENTRY struct. On - entry, its key field contains a non-NULL leaf datum in - compressed form. The return value is another GISTENTRY - struct, whose key field contains the same datum in its + The argument is a pointer to a GISTENTRY struct. On + entry, its key field contains a non-NULL leaf datum in + compressed form. The return value is another GISTENTRY + struct, whose key field contains the same datum in its original, uncompressed form. If the opclass's compress function does - nothing for leaf entries, the fetch method can return the + nothing for leaf entries, the fetch method can return the argument as-is. 
Or, if the opclass does not have a compress function, - the fetch method can be omitted as well, since it would + the fetch method can be omitted as well, since it would necessarily be a no-op. @@ -933,7 +933,7 @@ my_fetch(PG_FUNCTION_ARGS) If the compress method is lossy for leaf entries, the operator class cannot support index-only scans, and must not define - a fetch function. + a fetch function. @@ -942,15 +942,15 @@ my_fetch(PG_FUNCTION_ARGS) All the GiST support methods are normally called in short-lived memory - contexts; that is, CurrentMemoryContext will get reset after + contexts; that is, CurrentMemoryContext will get reset after each tuple is processed. It is therefore not very important to worry about pfree'ing everything you palloc. However, in some cases it's useful for a support method to cache data across repeated calls. To do that, allocate - the longer-lived data in fcinfo->flinfo->fn_mcxt, and - keep a pointer to it in fcinfo->flinfo->fn_extra. Such + the longer-lived data in fcinfo->flinfo->fn_mcxt, and + keep a pointer to it in fcinfo->flinfo->fn_extra. Such data will survive for the life of the index operation (e.g., a single GiST index scan, index build, or index tuple insertion). Be careful to pfree - the previous value when replacing a fn_extra value, or the leak + the previous value when replacing a fn_extra value, or the leak will accumulate for the duration of the operation. @@ -974,7 +974,7 @@ my_fetch(PG_FUNCTION_ARGS) - However, buffering index build needs to call the penalty + However, buffering index build needs to call the penalty function more often, which consumes some extra CPU resources. Also, the buffers used in the buffering build need temporary disk space, up to the size of the resulting index. Buffering can also influence the quality @@ -1002,57 +1002,57 @@ my_fetch(PG_FUNCTION_ARGS) The PostgreSQL source distribution includes several examples of index methods implemented using GiST. The core system currently provides text search - support (indexing for tsvector and tsquery) as well as + support (indexing for tsvector and tsquery) as well as R-Tree equivalent functionality for some of the built-in geometric data types - (see src/backend/access/gist/gistproc.c). The following - contrib modules also contain GiST + (see src/backend/access/gist/gistproc.c). The following + contrib modules also contain GiST operator classes: - btree_gist + btree_gist B-tree equivalent functionality for several data types - cube + cube Indexing for multidimensional cubes - hstore + hstore Module for storing (key, value) pairs - intarray + intarray RD-Tree for one-dimensional array of int4 values - ltree + ltree Indexing for tree-like structures - pg_trgm + pg_trgm Text similarity using trigram matching - seg + seg Indexing for float ranges diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml index 6c54fbd40d..086d6abb30 100644 --- a/doc/src/sgml/high-availability.sgml +++ b/doc/src/sgml/high-availability.sgml @@ -3,12 +3,12 @@ High Availability, Load Balancing, and Replication - high availability - failover - replication - load balancing - clustering - data partitioning + high availability + failover + replication + load balancing + clustering + data partitioning Database servers can work together to allow a second server to @@ -38,12 +38,12 @@ Some solutions deal with synchronization by allowing only one server to modify the data. Servers that can modify data are - called read/write, master or primary servers. 
- Servers that track changes in the master are called standby - or secondary servers. A standby server that cannot be connected + called read/write, master or primary servers. + Servers that track changes in the master are called standby + or secondary servers. A standby server that cannot be connected to until it is promoted to a master server is called a warm - standby server, and one that can accept connections and serves read-only - queries is called a hot standby server. + standby server, and one that can accept connections and serves read-only + queries is called a hot standby server. @@ -99,7 +99,7 @@ Shared hardware functionality is common in network storage devices. Using a network file system is also possible, though care must be - taken that the file system has full POSIX behavior (see POSIX behavior (see ). One significant limitation of this method is that if the shared disk array fails or becomes corrupt, the primary and standby servers are both nonfunctional. Another issue is @@ -121,7 +121,7 @@ the mirroring must be done in a way that ensures the standby server has a consistent copy of the file system — specifically, writes to the standby must be done in the same order as those on the master. - DRBD is a popular file system replication solution + DRBD is a popular file system replication solution for Linux. @@ -143,7 +143,7 @@ protocol to make nodes agree on a serializable transactional order. Warm and hot standby servers can be kept current by reading a - stream of write-ahead log (WAL) + stream of write-ahead log (WAL) records. If the main server fails, the standby contains almost all of the data of the main server, and can be quickly made the new master database server. This can be synchronous or @@ -189,7 +189,7 @@ protocol to make nodes agree on a serializable transactional order. - Slony-I is an example of this type of replication, with per-table + Slony-I is an example of this type of replication, with per-table granularity, and support for multiple standby servers. Because it updates the standby server asynchronously (in batches), there is possible data loss during fail over. @@ -212,7 +212,7 @@ protocol to make nodes agree on a serializable transactional order. If queries are simply broadcast unmodified, functions like - random(), CURRENT_TIMESTAMP, and + random(), CURRENT_TIMESTAMP, and sequences can have different values on different servers. This is because each server operates independently, and because SQL queries are broadcast (and not actual modified rows). If @@ -226,7 +226,7 @@ protocol to make nodes agree on a serializable transactional order. transactions either commit or abort on all servers, perhaps using two-phase commit ( and ). - Pgpool-II and Continuent Tungsten + Pgpool-II and Continuent Tungsten are examples of this type of replication. @@ -266,12 +266,12 @@ protocol to make nodes agree on a serializable transactional order. there is no need to partition workloads between master and standby servers, and because the data changes are sent from one server to another, there is no problem with non-deterministic - functions like random(). + functions like random(). - PostgreSQL does not offer this type of replication, - though PostgreSQL two-phase commit (PostgreSQL does not offer this type of replication, + though PostgreSQL two-phase commit ( and ) can be used to implement this in application code or middleware. @@ -284,8 +284,8 @@ protocol to make nodes agree on a serializable transactional order. 
- Because PostgreSQL is open source and easily - extended, a number of companies have taken PostgreSQL + Because PostgreSQL is open source and easily + extended, a number of companies have taken PostgreSQL and created commercial closed-source solutions with unique failover, replication, and load balancing capabilities. @@ -475,9 +475,9 @@ protocol to make nodes agree on a serializable transactional order. concurrently on a single query. It is usually accomplished by splitting the data among servers and having each server execute its part of the query and return results to a central server where they - are combined and returned to the user. Pgpool-II + are combined and returned to the user. Pgpool-II has this capability. Also, this can be implemented using the - PL/Proxy tool set. + PL/Proxy tool set. @@ -494,10 +494,10 @@ protocol to make nodes agree on a serializable transactional order. Continuous archiving can be used to create a high - availability (HA) cluster configuration with one or more - standby servers ready to take over operations if the + availability (HA) cluster configuration with one or more + standby servers ready to take over operations if the primary server fails. This capability is widely referred to as - warm standby or log shipping. + warm standby or log shipping. @@ -513,7 +513,7 @@ protocol to make nodes agree on a serializable transactional order. Directly moving WAL records from one database server to another - is typically described as log shipping. PostgreSQL + is typically described as log shipping. PostgreSQL implements file-based log shipping by transferring WAL records one file (WAL segment) at a time. WAL files (16MB) can be shipped easily and cheaply over any distance, whether it be to an @@ -597,7 +597,7 @@ protocol to make nodes agree on a serializable transactional order. In general, log shipping between servers running different major - PostgreSQL release + PostgreSQL release levels is not possible. It is the policy of the PostgreSQL Global Development Group not to make changes to disk formats during minor release upgrades, so it is likely that running different minor release levels @@ -621,32 +621,32 @@ protocol to make nodes agree on a serializable transactional order. (see ) or directly from the master over a TCP connection (streaming replication). The standby server will also attempt to restore any WAL found in the standby cluster's - pg_wal directory. That typically happens after a server + pg_wal directory. That typically happens after a server restart, when the standby replays again WAL that was streamed from the master before the restart, but you can also manually copy files to - pg_wal at any time to have them replayed. + pg_wal at any time to have them replayed. At startup, the standby begins by restoring all WAL available in the - archive location, calling restore_command. Once it - reaches the end of WAL available there and restore_command - fails, it tries to restore any WAL available in the pg_wal directory. + archive location, calling restore_command. Once it + reaches the end of WAL available there and restore_command + fails, it tries to restore any WAL available in the pg_wal directory. If that fails, and streaming replication has been configured, the standby tries to connect to the primary server and start streaming WAL - from the last valid record found in archive or pg_wal. If that fails + from the last valid record found in archive or pg_wal. 
If that fails or streaming replication is not configured, or if the connection is later disconnected, the standby goes back to step 1 and tries to restore the file from the archive again. This loop of retries from the - archive, pg_wal, and via streaming replication goes on until the server + archive, pg_wal, and via streaming replication goes on until the server is stopped or failover is triggered by a trigger file. Standby mode is exited and the server switches to normal operation - when pg_ctl promote is run or a trigger file is found - (trigger_file). Before failover, - any WAL immediately available in the archive or in pg_wal will be + when pg_ctl promote is run or a trigger file is found + (trigger_file). Before failover, + any WAL immediately available in the archive or in pg_wal will be restored, but no attempt is made to connect to the master. @@ -667,8 +667,8 @@ protocol to make nodes agree on a serializable transactional order. If you want to use streaming replication, set up authentication on the primary server to allow replication connections from the standby server(s); that is, create a role and provide a suitable entry or - entries in pg_hba.conf with the database field set to - replication. Also ensure max_wal_senders is set + entries in pg_hba.conf with the database field set to + replication. Also ensure max_wal_senders is set to a sufficiently large value in the configuration file of the primary server. If replication slots will be used, ensure that max_replication_slots is set sufficiently @@ -687,19 +687,19 @@ protocol to make nodes agree on a serializable transactional order. To set up the standby server, restore the base backup taken from primary server (see ). Create a recovery - command file recovery.conf in the standby's cluster data - directory, and turn on standby_mode. Set - restore_command to a simple command to copy files from + command file recovery.conf in the standby's cluster data + directory, and turn on standby_mode. Set + restore_command to a simple command to copy files from the WAL archive. If you plan to have multiple standby servers for high - availability purposes, set recovery_target_timeline to - latest, to make the standby server follow the timeline change + availability purposes, set recovery_target_timeline to + latest, to make the standby server follow the timeline change that occurs at failover to another standby. Do not use pg_standby or similar tools with the built-in standby mode - described here. restore_command should return immediately + described here. restore_command should return immediately if the file does not exist; the server will retry the command again if necessary. See for using tools like pg_standby. @@ -708,11 +708,11 @@ protocol to make nodes agree on a serializable transactional order. If you want to use streaming replication, fill in - primary_conninfo with a libpq connection string, including + primary_conninfo with a libpq connection string, including the host name (or IP address) and any additional details needed to connect to the primary server. If the primary needs a password for authentication, the password needs to be specified in - primary_conninfo as well. + primary_conninfo as well. @@ -726,8 +726,8 @@ protocol to make nodes agree on a serializable transactional order. If you're using a WAL archive, its size can be minimized using the parameter to remove files that are no longer required by the standby server. 
- The pg_archivecleanup utility is designed specifically to - be used with archive_cleanup_command in typical single-standby + The pg_archivecleanup utility is designed specifically to + be used with archive_cleanup_command in typical single-standby configurations, see . Note however, that if you're using the archive for backup purposes, you need to retain files needed to recover from at least the latest base @@ -735,7 +735,7 @@ protocol to make nodes agree on a serializable transactional order. - A simple example of a recovery.conf is: + A simple example of a recovery.conf is: standby_mode = 'on' primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' @@ -746,7 +746,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' You can have any number of standby servers, but if you use streaming - replication, make sure you set max_wal_senders high enough in + replication, make sure you set max_wal_senders high enough in the primary to allow them to be connected simultaneously. @@ -773,7 +773,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' changes becoming visible in the standby. This delay is however much smaller than with file-based log shipping, typically under one second assuming the standby is powerful enough to keep up with the load. With - streaming replication, archive_timeout is not required to + streaming replication, archive_timeout is not required to reduce the data loss window. @@ -782,7 +782,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' archiving, the server might recycle old WAL segments before the standby has received them. If this occurs, the standby will need to be reinitialized from a new base backup. You can avoid this by setting - wal_keep_segments to a value large enough to ensure that + wal_keep_segments to a value large enough to ensure that WAL segments are not recycled too early, or by configuring a replication slot for the standby. If you set up a WAL archive that's accessible from the standby, these solutions are not required, since the standby can @@ -793,11 +793,11 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' To use streaming replication, set up a file-based log-shipping standby server as described in . The step that turns a file-based log-shipping standby into streaming replication - standby is setting primary_conninfo setting in the - recovery.conf file to point to the primary server. Set + standby is setting primary_conninfo setting in the + recovery.conf file to point to the primary server. Set and authentication options - (see pg_hba.conf) on the primary so that the standby server - can connect to the replication pseudo-database on the primary + (see pg_hba.conf) on the primary so that the standby server + can connect to the replication pseudo-database on the primary server (see ). @@ -815,7 +815,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' - When the standby is started and primary_conninfo is set + When the standby is started and primary_conninfo is set correctly, the standby will connect to the primary after replaying all WAL files available in the archive. If the connection is established successfully, you will see a walreceiver process in the standby, and @@ -829,20 +829,20 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' so that only trusted users can read the WAL stream, because it is easy to extract privileged information from it. 
Standby servers must authenticate to the primary as a superuser or an account that has the - REPLICATION privilege. It is recommended to create a - dedicated user account with REPLICATION and LOGIN - privileges for replication. While REPLICATION privilege gives + REPLICATION privilege. It is recommended to create a + dedicated user account with REPLICATION and LOGIN + privileges for replication. While REPLICATION privilege gives very high permissions, it does not allow the user to modify any data on - the primary system, which the SUPERUSER privilege does. + the primary system, which the SUPERUSER privilege does. Client authentication for replication is controlled by a - pg_hba.conf record specifying replication in the - database field. For example, if the standby is running on - host IP 192.168.1.100 and the account name for replication - is foo, the administrator can add the following line to the - pg_hba.conf file on the primary: + pg_hba.conf record specifying replication in the + database field. For example, if the standby is running on + host IP 192.168.1.100 and the account name for replication + is foo, the administrator can add the following line to the + pg_hba.conf file on the primary: # Allow the user "foo" from host 192.168.1.100 to connect to the primary @@ -854,14 +854,14 @@ host replication foo 192.168.1.100/32 md5 The host name and port number of the primary, connection user name, - and password are specified in the recovery.conf file. - The password can also be set in the ~/.pgpass file on the - standby (specify replication in the database + and password are specified in the recovery.conf file. + The password can also be set in the ~/.pgpass file on the + standby (specify replication in the database field). - For example, if the primary is running on host IP 192.168.1.50, + For example, if the primary is running on host IP 192.168.1.50, port 5432, the account name for replication is - foo, and the password is foopass, the administrator - can add the following line to the recovery.conf file on the + foo, and the password is foopass, the administrator + can add the following line to the recovery.conf file on the standby: @@ -880,22 +880,22 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' standby. You can calculate this lag by comparing the current WAL write location on the primary with the last WAL location received by the standby. These locations can be retrieved using - pg_current_wal_lsn on the primary and - pg_last_wal_receive_lsn on the standby, + pg_current_wal_lsn on the primary and + pg_last_wal_receive_lsn on the standby, respectively (see and for details). The last WAL receive location in the standby is also displayed in the process status of the WAL receiver process, displayed using the - ps command (see for details). + ps command (see for details). You can retrieve a list of WAL sender processes via the - pg_stat_replication view. Large differences between - pg_current_wal_lsn and the view's sent_lsn field + pg_stat_replication view. Large differences between + pg_current_wal_lsn and the view's sent_lsn field might indicate that the master server is under heavy load, while - differences between sent_lsn and - pg_last_wal_receive_lsn on the standby might indicate + differences between sent_lsn and + pg_last_wal_receive_lsn on the standby might indicate network delay, or that the standby is under heavy load. 
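The preceding paragraphs describe measuring replication lag from pg_current_wal_lsn, pg_last_wal_receive_lsn, and the pg_stat_replication view. A minimal monitoring sketch, assuming the function and column names used in this section:

-- On the primary: per-standby lag, in bytes, between the current WAL position
-- and what has been sent to and replayed on each standby.
SELECT application_name, state, sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn)   AS send_lag_bytes,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- On the standby: the last WAL location received and synced by the walreceiver.
SELECT pg_last_wal_receive_lsn();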
@@ -911,7 +911,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' Replication slots provide an automated way to ensure that the master does not remove WAL segments until they have been received by all standbys, and that the master does not remove rows which could cause a - recovery conflict even when the + recovery conflict even when the standby is disconnected. @@ -922,7 +922,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' However, these methods often result in retaining more WAL segments than required, whereas replication slots retain only the number of segments known to be needed. An advantage of these methods is that they bound - the space requirement for pg_wal; there is currently no way + the space requirement for pg_wal; there is currently no way to do this using replication slots. @@ -966,8 +966,8 @@ postgres=# SELECT * FROM pg_replication_slots; node_a_slot | physical | | | f | | | (1 row) - To configure the standby to use this slot, primary_slot_name - should be configured in the standby's recovery.conf. + To configure the standby to use this slot, primary_slot_name + should be configured in the standby's recovery.conf. Here is a simple example: standby_mode = 'on' @@ -1022,7 +1022,7 @@ primary_slot_name = 'node_a_slot' If an upstream standby server is promoted to become new master, downstream servers will continue to stream from the new master if - recovery_target_timeline is set to 'latest'. + recovery_target_timeline is set to 'latest'. @@ -1031,7 +1031,7 @@ primary_slot_name = 'node_a_slot' and , and configure host-based authentication). - You will also need to set primary_conninfo in the downstream + You will also need to set primary_conninfo in the downstream standby to point to the cascading standby. @@ -1044,7 +1044,7 @@ primary_slot_name = 'node_a_slot' - PostgreSQL streaming replication is asynchronous by + PostgreSQL streaming replication is asynchronous by default. If the primary server crashes then some transactions that were committed may not have been replicated to the standby server, causing data loss. The amount @@ -1058,8 +1058,8 @@ primary_slot_name = 'node_a_slot' standby servers. This extends that standard level of durability offered by a transaction commit. This level of protection is referred to as 2-safe replication in computer science theory, and group-1-safe - (group-safe and 1-safe) when synchronous_commit is set to - remote_write. + (group-safe and 1-safe) when synchronous_commit is set to + remote_write. @@ -1104,14 +1104,14 @@ primary_slot_name = 'node_a_slot' Once streaming replication has been configured, configuring synchronous replication requires only one additional configuration step: must be set to - a non-empty value. synchronous_commit must also be set to - on, but since this is the default value, typically no change is + a non-empty value. synchronous_commit must also be set to + on, but since this is the default value, typically no change is required. (See and .) This configuration will cause each commit to wait for confirmation that the standby has written the commit record to durable storage. - synchronous_commit can be set by individual + synchronous_commit can be set by individual users, so it can be configured in the configuration file, for particular users or databases, or dynamically by applications, in order to control the durability guarantee on a per-transaction basis. 
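Because synchronous_commit can be set per user, per database, or per transaction, the durability guarantee can be relaxed or tightened without editing the server configuration. A short sketch, assuming a hypothetical low-priority role named reporting:

-- Give a hypothetical reporting role asynchronous commits by default:
ALTER ROLE reporting SET synchronous_commit = off;

-- Tighten the guarantee for a single critical transaction:
BEGIN;
SET LOCAL synchronous_commit = remote_apply;
-- ... critical writes here ...
COMMIT;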
@@ -1121,12 +1121,12 @@ primary_slot_name = 'node_a_slot' After a commit record has been written to disk on the primary, the WAL record is then sent to the standby. The standby sends reply messages each time a new batch of WAL data is written to disk, unless - wal_receiver_status_interval is set to zero on the standby. - In the case that synchronous_commit is set to - remote_apply, the standby sends reply messages when the commit + wal_receiver_status_interval is set to zero on the standby. + In the case that synchronous_commit is set to + remote_apply, the standby sends reply messages when the commit record is replayed, making the transaction visible. If the standby is chosen as a synchronous standby, according to the setting - of synchronous_standby_names on the primary, the reply + of synchronous_standby_names on the primary, the reply messages from that standby will be considered along with those from other synchronous standbys to decide when to release transactions waiting for confirmation that the commit record has been received. These parameters @@ -1138,13 +1138,13 @@ primary_slot_name = 'node_a_slot' - Setting synchronous_commit to remote_write will + Setting synchronous_commit to remote_write will cause each commit to wait for confirmation that the standby has received the commit record and written it out to its own operating system, but not for the data to be flushed to disk on the standby. This - setting provides a weaker guarantee of durability than on + setting provides a weaker guarantee of durability than on does: the standby could lose the data in the event of an operating system - crash, though not a PostgreSQL crash. + crash, though not a PostgreSQL crash. However, it's a useful setting in practice because it can decrease the response time for the transaction. Data loss could only occur if both the primary and the standby crash and @@ -1152,7 +1152,7 @@ primary_slot_name = 'node_a_slot' - Setting synchronous_commit to remote_apply will + Setting synchronous_commit to remote_apply will cause each commit to wait until the current synchronous standbys report that they have replayed the transaction, making it visible to user queries. In simple cases, this allows for load balancing with causal @@ -1176,12 +1176,12 @@ primary_slot_name = 'node_a_slot' transactions will wait until all the standby servers which are considered as synchronous confirm receipt of their data. The number of synchronous standbys that transactions must wait for replies from is specified in - synchronous_standby_names. This parameter also specifies - a list of standby names and the method (FIRST and - ANY) to choose synchronous standbys from the listed ones. + synchronous_standby_names. This parameter also specifies + a list of standby names and the method (FIRST and + ANY) to choose synchronous standbys from the listed ones. - The method FIRST specifies a priority-based synchronous + The method FIRST specifies a priority-based synchronous replication and makes transaction commits wait until their WAL records are replicated to the requested number of synchronous standbys chosen based on their priorities. The standbys whose names appear earlier in the list are @@ -1192,36 +1192,36 @@ primary_slot_name = 'node_a_slot' next-highest-priority standby. 
- An example of synchronous_standby_names for + An example of synchronous_standby_names for a priority-based multiple synchronous standbys is: synchronous_standby_names = 'FIRST 2 (s1, s2, s3)' - In this example, if four standby servers s1, s2, - s3 and s4 are running, the two standbys - s1 and s2 will be chosen as synchronous standbys + In this example, if four standby servers s1, s2, + s3 and s4 are running, the two standbys + s1 and s2 will be chosen as synchronous standbys because their names appear early in the list of standby names. - s3 is a potential synchronous standby and will take over - the role of synchronous standby when either of s1 or - s2 fails. s4 is an asynchronous standby since + s3 is a potential synchronous standby and will take over + the role of synchronous standby when either of s1 or + s2 fails. s4 is an asynchronous standby since its name is not in the list. - The method ANY specifies a quorum-based synchronous + The method ANY specifies a quorum-based synchronous replication and makes transaction commits wait until their WAL records - are replicated to at least the requested number of + are replicated to at least the requested number of synchronous standbys in the list. - An example of synchronous_standby_names for + An example of synchronous_standby_names for a quorum-based multiple synchronous standbys is: synchronous_standby_names = 'ANY 2 (s1, s2, s3)' - In this example, if four standby servers s1, s2, - s3 and s4 are running, transaction commits will - wait for replies from at least any two standbys of s1, - s2 and s3. s4 is an asynchronous + In this example, if four standby servers s1, s2, + s3 and s4 are running, transaction commits will + wait for replies from at least any two standbys of s1, + s2 and s3. s4 is an asynchronous standby since its name is not in the list. @@ -1243,7 +1243,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' - PostgreSQL allows the application developer + PostgreSQL allows the application developer to specify the durability level required via replication. This can be specified for the system overall, though it can also be specified for specific users or connections, or even individual transactions. @@ -1275,10 +1275,10 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' Planning for High Availability - synchronous_standby_names specifies the number and + synchronous_standby_names specifies the number and names of synchronous standbys that transaction commits made when - synchronous_commit is set to on, - remote_apply or remote_write will wait for + synchronous_commit is set to on, + remote_apply or remote_write will wait for responses from. Such transaction commits may never be completed if any one of synchronous standbys should crash. @@ -1286,7 +1286,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' The best solution for high availability is to ensure you keep as many synchronous standbys as requested. This can be achieved by naming multiple - potential synchronous standbys using synchronous_standby_names. + potential synchronous standbys using synchronous_standby_names. @@ -1305,14 +1305,14 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' When a standby first attaches to the primary, it will not yet be properly - synchronized. This is described as catchup mode. Once + synchronized. This is described as catchup mode. Once the lag between standby and primary reaches zero for the first time - we move to real-time streaming state. + we move to real-time streaming state. 
The catch-up duration may be long immediately after the standby has been created. If the standby is shut down, then the catch-up period will increase according to the length of time the standby has been down. The standby is only able to become a synchronous standby - once it has reached streaming state. + once it has reached streaming state. This state can be viewed using the pg_stat_replication view. @@ -1334,7 +1334,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If you really cannot keep as many synchronous standbys as requested then you should decrease the number of synchronous standbys that transaction commits must wait for responses from - in synchronous_standby_names (or disable it) and + in synchronous_standby_names (or disable it) and reload the configuration file on the primary server. @@ -1347,7 +1347,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If you need to re-create a standby server while transactions are waiting, make sure that the commands pg_start_backup() and pg_stop_backup() are run in a session with - synchronous_commit = off, otherwise those + synchronous_commit = off, otherwise those requests will wait forever for the standby to appear. @@ -1381,7 +1381,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' - If archive_mode is set to on, the + If archive_mode is set to on, the archiver is not enabled during recovery or standby mode. If the standby server is promoted, it will start archiving after the promotion, but will not archive any WAL it did not generate itself. To get a complete @@ -1415,7 +1415,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If the primary server fails and the standby server becomes the new primary, and then the old primary restarts, you must have a mechanism for informing the old primary that it is no longer the primary. This is - sometimes known as STONITH (Shoot The Other Node In The Head), which is + sometimes known as STONITH (Shoot The Other Node In The Head), which is necessary to avoid situations where both systems think they are the primary, which will lead to confusion and ultimately data loss. @@ -1466,10 +1466,10 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' To trigger failover of a log-shipping standby server, - run pg_ctl promote or create a trigger - file with the file name and path specified by the trigger_file - setting in recovery.conf. If you're planning to use - pg_ctl promote to fail over, trigger_file is + run pg_ctl promote or create a trigger + file with the file name and path specified by the trigger_file + setting in recovery.conf. If you're planning to use + pg_ctl promote to fail over, trigger_file is not required. If you're setting up the reporting servers that are only used to offload read-only queries from the primary, not for high availability purposes, you don't need to promote it. @@ -1481,9 +1481,9 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' An alternative to the built-in standby mode described in the previous - sections is to use a restore_command that polls the archive location. + sections is to use a restore_command that polls the archive location. This was the only option available in versions 8.4 and below. In this - setup, set standby_mode off, because you are implementing + setup, set standby_mode off, because you are implementing the polling required for standby operation yourself. See the module for a reference implementation of this. 
@@ -1494,7 +1494,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' time, so if you use the standby server for queries (see Hot Standby), there is a delay between an action in the master and when the action becomes visible in the standby, corresponding the time it takes - to fill up the WAL file. archive_timeout can be used to make that delay + to fill up the WAL file. archive_timeout can be used to make that delay shorter. Also note that you can't combine streaming replication with this method. @@ -1511,25 +1511,25 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' The magic that makes the two loosely coupled servers work together is - simply a restore_command used on the standby that, + simply a restore_command used on the standby that, when asked for the next WAL file, waits for it to become available from - the primary. The restore_command is specified in the - recovery.conf file on the standby server. Normal recovery + the primary. The restore_command is specified in the + recovery.conf file on the standby server. Normal recovery processing would request a file from the WAL archive, reporting failure if the file was unavailable. For standby processing it is normal for the next WAL file to be unavailable, so the standby must wait for - it to appear. For files ending in .backup or - .history there is no need to wait, and a non-zero return - code must be returned. A waiting restore_command can be + it to appear. For files ending in .backup or + .history there is no need to wait, and a non-zero return + code must be returned. A waiting restore_command can be written as a custom script that loops after polling for the existence of the next WAL file. There must also be some way to trigger failover, which - should interrupt the restore_command, break the loop and + should interrupt the restore_command, break the loop and return a file-not-found error to the standby server. This ends recovery and the standby will then come up as a normal server. - Pseudocode for a suitable restore_command is: + Pseudocode for a suitable restore_command is: triggered = false; while (!NextWALFileReady() && !triggered) @@ -1544,7 +1544,7 @@ if (!triggered) - A working example of a waiting restore_command is provided + A working example of a waiting restore_command is provided in the module. It should be used as a reference on how to correctly implement the logic described above. It can also be extended as needed to support specific @@ -1553,14 +1553,14 @@ if (!triggered) The method for triggering failover is an important part of planning - and design. One potential option is the restore_command + and design. One potential option is the restore_command command. It is executed once for each WAL file, but the process - running the restore_command is created and dies for + running the restore_command is created and dies for each file, so there is no daemon or server process, and signals or a signal handler cannot be used. Therefore, the - restore_command is not suitable to trigger failover. + restore_command is not suitable to trigger failover. It is possible to use a simple timeout facility, especially if - used in conjunction with a known archive_timeout + used in conjunction with a known archive_timeout setting on the primary. However, this is somewhat error prone since a network problem or busy primary server might be sufficient to initiate failover. 
A notification mechanism such as the explicit @@ -1579,7 +1579,7 @@ if (!triggered) Set up primary and standby systems as nearly identical as possible, including two identical copies of - PostgreSQL at the same release level. + PostgreSQL at the same release level. @@ -1602,8 +1602,8 @@ if (!triggered) Begin recovery on the standby server from the local WAL - archive, using a recovery.conf that specifies a - restore_command that waits as described + archive, using a recovery.conf that specifies a + restore_command that waits as described previously (see ). @@ -1637,7 +1637,7 @@ if (!triggered) - An external program can call the pg_walfile_name_offset() + An external program can call the pg_walfile_name_offset() function (see ) to find out the file name and the exact byte offset within it of the current end of WAL. It can then access the WAL file directly @@ -1646,17 +1646,17 @@ if (!triggered) loss is the polling cycle time of the copying program, which can be very small, and there is no wasted bandwidth from forcing partially-used segment files to be archived. Note that the standby servers' - restore_command scripts can only deal with whole WAL files, + restore_command scripts can only deal with whole WAL files, so the incrementally copied data is not ordinarily made available to the standby servers. It is of use only when the primary dies — then the last partial WAL file is fed to the standby before allowing it to come up. The correct implementation of this process requires - cooperation of the restore_command script with the data + cooperation of the restore_command script with the data copying program. - Starting with PostgreSQL version 9.0, you can use + Starting with PostgreSQL version 9.0, you can use streaming replication (see ) to achieve the same benefits with less effort. @@ -1716,17 +1716,17 @@ if (!triggered) - Query access - SELECT, COPY TO + Query access - SELECT, COPY TO - Cursor commands - DECLARE, FETCH, CLOSE + Cursor commands - DECLARE, FETCH, CLOSE - Parameters - SHOW, SET, RESET + Parameters - SHOW, SET, RESET @@ -1735,17 +1735,17 @@ if (!triggered) - BEGIN, END, ABORT, START TRANSACTION + BEGIN, END, ABORT, START TRANSACTION - SAVEPOINT, RELEASE, ROLLBACK TO SAVEPOINT + SAVEPOINT, RELEASE, ROLLBACK TO SAVEPOINT - EXCEPTION blocks and other internal subtransactions + EXCEPTION blocks and other internal subtransactions @@ -1753,19 +1753,19 @@ if (!triggered) - LOCK TABLE, though only when explicitly in one of these modes: - ACCESS SHARE, ROW SHARE or ROW EXCLUSIVE. + LOCK TABLE, though only when explicitly in one of these modes: + ACCESS SHARE, ROW SHARE or ROW EXCLUSIVE. - Plans and resources - PREPARE, EXECUTE, - DEALLOCATE, DISCARD + Plans and resources - PREPARE, EXECUTE, + DEALLOCATE, DISCARD - Plugins and extensions - LOAD + Plugins and extensions - LOAD @@ -1779,9 +1779,9 @@ if (!triggered) - Data Manipulation Language (DML) - INSERT, - UPDATE, DELETE, COPY FROM, - TRUNCATE. + Data Manipulation Language (DML) - INSERT, + UPDATE, DELETE, COPY FROM, + TRUNCATE. Note that there are no allowed actions that result in a trigger being executed during recovery. This restriction applies even to temporary tables, because table rows cannot be read or written without @@ -1791,31 +1791,31 @@ if (!triggered) - Data Definition Language (DDL) - CREATE, - DROP, ALTER, COMMENT. + Data Definition Language (DDL) - CREATE, + DROP, ALTER, COMMENT. This restriction applies even to temporary tables, because carrying out these operations would require updating the system catalog tables. 
- SELECT ... FOR SHARE | UPDATE, because row locks cannot be + SELECT ... FOR SHARE | UPDATE, because row locks cannot be taken without updating the underlying data files. - Rules on SELECT statements that generate DML commands. + Rules on SELECT statements that generate DML commands. - LOCK that explicitly requests a mode higher than ROW EXCLUSIVE MODE. + LOCK that explicitly requests a mode higher than ROW EXCLUSIVE MODE. - LOCK in short default form, since it requests ACCESS EXCLUSIVE MODE. + LOCK in short default form, since it requests ACCESS EXCLUSIVE MODE. @@ -1824,19 +1824,19 @@ if (!triggered) - BEGIN READ WRITE, - START TRANSACTION READ WRITE + BEGIN READ WRITE, + START TRANSACTION READ WRITE - SET TRANSACTION READ WRITE, - SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE + SET TRANSACTION READ WRITE, + SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE - SET transaction_read_only = off + SET transaction_read_only = off @@ -1844,35 +1844,35 @@ if (!triggered) - Two-phase commit commands - PREPARE TRANSACTION, - COMMIT PREPARED, ROLLBACK PREPARED + Two-phase commit commands - PREPARE TRANSACTION, + COMMIT PREPARED, ROLLBACK PREPARED because even read-only transactions need to write WAL in the prepare phase (the first phase of two phase commit). - Sequence updates - nextval(), setval() + Sequence updates - nextval(), setval() - LISTEN, UNLISTEN, NOTIFY + LISTEN, UNLISTEN, NOTIFY - In normal operation, read-only transactions are allowed to - use LISTEN, UNLISTEN, and - NOTIFY, so Hot Standby sessions operate under slightly tighter + In normal operation, read-only transactions are allowed to + use LISTEN, UNLISTEN, and + NOTIFY, so Hot Standby sessions operate under slightly tighter restrictions than ordinary read-only sessions. It is possible that some of these restrictions might be loosened in a future release. - During hot standby, the parameter transaction_read_only is always + During hot standby, the parameter transaction_read_only is always true and may not be changed. But as long as no attempt is made to modify the database, connections during hot standby will act much like any other database connection. If failover or switchover occurs, the database will @@ -1884,7 +1884,7 @@ if (!triggered) Users will be able to tell whether their session is read-only by - issuing SHOW transaction_read_only. In addition, a set of + issuing SHOW transaction_read_only. In addition, a set of functions () allow users to access information about the standby server. These allow you to write programs that are aware of the current state of the database. These @@ -1907,7 +1907,7 @@ if (!triggered) There are also additional types of conflict that can occur with Hot Standby. - These conflicts are hard conflicts in the sense that queries + These conflicts are hard conflicts in the sense that queries might need to be canceled and, in some cases, sessions disconnected to resolve them. The user is provided with several ways to handle these conflicts. Conflict cases include: @@ -1916,7 +1916,7 @@ if (!triggered) Access Exclusive locks taken on the primary server, including both - explicit LOCK commands and various DDL + explicit LOCK commands and various DDL actions, conflict with table accesses in standby queries. @@ -1935,7 +1935,7 @@ if (!triggered) Application of a vacuum cleanup record from WAL conflicts with - standby transactions whose snapshots can still see any of + standby transactions whose snapshots can still see any of the rows to be removed. 
@@ -1962,18 +1962,18 @@ if (!triggered) An example of the problem situation is an administrator on the primary - server running DROP TABLE on a table that is currently being + server running DROP TABLE on a table that is currently being queried on the standby server. Clearly the standby query cannot continue - if the DROP TABLE is applied on the standby. If this situation - occurred on the primary, the DROP TABLE would wait until the - other query had finished. But when DROP TABLE is run on the + if the DROP TABLE is applied on the standby. If this situation + occurred on the primary, the DROP TABLE would wait until the + other query had finished. But when DROP TABLE is run on the primary, the primary doesn't have information about what queries are running on the standby, so it will not wait for any such standby queries. The WAL change records come through to the standby while the standby query is still running, causing a conflict. The standby server must either delay application of the WAL records (and everything after them, too) or else cancel the conflicting query so that the DROP - TABLE can be applied. + TABLE can be applied. @@ -1986,7 +1986,7 @@ if (!triggered) once it has taken longer than the relevant delay setting to apply any newly-received WAL data. There are two parameters so that different delay values can be specified for the case of reading WAL data from an archive - (i.e., initial recovery from a base backup or catching up a + (i.e., initial recovery from a base backup or catching up a standby server that has fallen far behind) versus reading WAL data via streaming replication. @@ -2003,10 +2003,10 @@ if (!triggered) - Once the delay specified by max_standby_archive_delay or - max_standby_streaming_delay has been exceeded, conflicting + Once the delay specified by max_standby_archive_delay or + max_standby_streaming_delay has been exceeded, conflicting queries will be canceled. This usually results just in a cancellation - error, although in the case of replaying a DROP DATABASE + error, although in the case of replaying a DROP DATABASE the entire conflicting session will be terminated. Also, if the conflict is over a lock held by an idle transaction, the conflicting session is terminated (this behavior might change in the future). @@ -2030,7 +2030,7 @@ if (!triggered) The most common reason for conflict between standby queries and WAL replay - is early cleanup. Normally, PostgreSQL allows + is early cleanup. Normally, PostgreSQL allows cleanup of old row versions when there are no transactions that need to see them to ensure correct visibility of data according to MVCC rules. However, this rule can only be applied for transactions executing on the @@ -2041,7 +2041,7 @@ if (!triggered) Experienced users should note that both row version cleanup and row version freezing will potentially conflict with standby queries. Running a manual - VACUUM FREEZE is likely to cause conflicts even on tables with + VACUUM FREEZE is likely to cause conflicts even on tables with no updated or deleted rows. @@ -2049,15 +2049,15 @@ if (!triggered) Users should be clear that tables that are regularly and heavily updated on the primary server will quickly cause cancellation of longer running queries on the standby. In such cases the setting of a finite value for - max_standby_archive_delay or - max_standby_streaming_delay can be considered similar to - setting statement_timeout. + max_standby_archive_delay or + max_standby_streaming_delay can be considered similar to + setting statement_timeout. 
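As a concrete illustration of the trade-off just described, the delay parameters can be raised so that standby queries get more time before WAL replay cancels them. The values below are illustrative only, not recommendations:

-- On the standby: allow conflicting queries to run for up to five minutes
-- before they are canceled in favor of WAL replay.
ALTER SYSTEM SET max_standby_archive_delay = '5min';
ALTER SYSTEM SET max_standby_streaming_delay = '5min';
SELECT pg_reload_conf();   -- both settings take effect on configuration reload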
Remedial possibilities exist if the number of standby-query cancellations is found to be unacceptable. The first option is to set the parameter - hot_standby_feedback, which prevents VACUUM from + hot_standby_feedback, which prevents VACUUM from removing recently-dead rows and so cleanup conflicts do not occur. If you do this, you should note that this will delay cleanup of dead rows on the primary, @@ -2067,11 +2067,11 @@ if (!triggered) off-loading execution onto the standby. If standby servers connect and disconnect frequently, you might want to make adjustments to handle the period when - hot_standby_feedback feedback is not being provided. - For example, consider increasing max_standby_archive_delay + hot_standby_feedback feedback is not being provided. + For example, consider increasing max_standby_archive_delay so that queries are not rapidly canceled by conflicts in WAL archive files during disconnected periods. You should also consider increasing - max_standby_streaming_delay to avoid rapid cancellations + max_standby_streaming_delay to avoid rapid cancellations by newly-arrived streaming WAL entries after reconnection. @@ -2080,16 +2080,16 @@ if (!triggered) on the primary server, so that dead rows will not be cleaned up as quickly as they normally would be. This will allow more time for queries to execute before they are canceled on the standby, without having to set - a high max_standby_streaming_delay. However it is + a high max_standby_streaming_delay. However it is difficult to guarantee any specific execution-time window with this - approach, since vacuum_defer_cleanup_age is measured in + approach, since vacuum_defer_cleanup_age is measured in transactions executed on the primary server. The number of query cancels and the reason for them can be viewed using - the pg_stat_database_conflicts system view on the standby - server. The pg_stat_database system view also contains + the pg_stat_database_conflicts system view on the standby + server. The pg_stat_database system view also contains summary information. @@ -2098,8 +2098,8 @@ if (!triggered) Administrator's Overview - If hot_standby is on in postgresql.conf - (the default value) and there is a recovery.conf + If hot_standby is on in postgresql.conf + (the default value) and there is a recovery.conf file present, the server will run in Hot Standby mode. However, it may take some time for Hot Standby connections to be allowed, because the server will not accept connections until it has completed @@ -2120,8 +2120,8 @@ LOG: database system is ready to accept read only connections Consistency information is recorded once per checkpoint on the primary. It is not possible to enable hot standby when reading WAL - written during a period when wal_level was not set to - replica or logical on the primary. Reaching + written during a period when wal_level was not set to + replica or logical on the primary. Reaching a consistent state can also be delayed in the presence of both of these conditions: @@ -2140,7 +2140,7 @@ LOG: database system is ready to accept read only connections If you are running file-based log shipping ("warm standby"), you might need to wait until the next WAL file arrives, which could be as long as the - archive_timeout setting on the primary. + archive_timeout setting on the primary. 
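To verify from SQL that a server is currently running as a hot standby, and therefore read-only, the following checks can be used; pg_is_in_recovery() returns true for the entire recovery period:

-- True while the server is still replaying WAL, i.e., acting as a standby:
SELECT pg_is_in_recovery();

-- On a hot standby this always reports 'on' and cannot be changed:
SHOW transaction_read_only;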
@@ -2155,22 +2155,22 @@ LOG: database system is ready to accept read only connections - max_connections + max_connections - max_prepared_transactions + max_prepared_transactions - max_locks_per_transaction + max_locks_per_transaction - max_worker_processes + max_worker_processes @@ -2209,19 +2209,19 @@ LOG: database system is ready to accept read only connections - Data Definition Language (DDL) - e.g. CREATE INDEX + Data Definition Language (DDL) - e.g. CREATE INDEX - Privilege and Ownership - GRANT, REVOKE, - REASSIGN + Privilege and Ownership - GRANT, REVOKE, + REASSIGN - Maintenance commands - ANALYZE, VACUUM, - CLUSTER, REINDEX + Maintenance commands - ANALYZE, VACUUM, + CLUSTER, REINDEX @@ -2241,14 +2241,14 @@ LOG: database system is ready to accept read only connections - pg_cancel_backend() - and pg_terminate_backend() will work on user backends, + pg_cancel_backend() + and pg_terminate_backend() will work on user backends, but not the Startup process, which performs recovery. pg_stat_activity does not show recovering transactions as active. As a result, pg_prepared_xacts is always empty during recovery. If you wish to resolve in-doubt prepared transactions, view - pg_prepared_xacts on the primary and issue commands to + pg_prepared_xacts on the primary and issue commands to resolve transactions there or resolve them after the end of recovery. @@ -2256,17 +2256,17 @@ LOG: database system is ready to accept read only connections pg_locks will show locks held by backends, as normal. pg_locks also shows a virtual transaction managed by the Startup process that owns all - AccessExclusiveLocks held by transactions being replayed by recovery. + AccessExclusiveLocks held by transactions being replayed by recovery. Note that the Startup process does not acquire locks to - make database changes, and thus locks other than AccessExclusiveLocks + make database changes, and thus locks other than AccessExclusiveLocks do not show in pg_locks for the Startup process; they are just presumed to exist. - The Nagios plugin check_pgsql will + The Nagios plugin check_pgsql will work, because the simple information it checks for exists. - The check_postgres monitoring script will also work, + The check_postgres monitoring script will also work, though some reported values could give different or confusing results. For example, last vacuum time will not be maintained, since no vacuum occurs on the standby. Vacuums running on the primary @@ -2275,11 +2275,11 @@ LOG: database system is ready to accept read only connections WAL file control commands will not work during recovery, - e.g. pg_start_backup, pg_switch_wal etc. + e.g. pg_start_backup, pg_switch_wal etc. - Dynamically loadable modules work, including pg_stat_statements. + Dynamically loadable modules work, including pg_stat_statements. @@ -2292,8 +2292,8 @@ LOG: database system is ready to accept read only connections - Trigger-based replication systems such as Slony, - Londiste and Bucardo won't run on the + Trigger-based replication systems such as Slony, + Londiste and Bucardo won't run on the standby at all, though they will run happily on the primary server as long as the changes are not sent to standby servers to be applied. 
WAL replay is not trigger-based so you cannot relay from the @@ -2302,7 +2302,7 @@ LOG: database system is ready to accept read only connections - New OIDs cannot be assigned, though some UUID generators may still + New OIDs cannot be assigned, though some UUID generators may still work as long as they do not rely on writing new status to the database. @@ -2314,32 +2314,32 @@ LOG: database system is ready to accept read only connections - DROP TABLESPACE can only succeed if the tablespace is empty. + DROP TABLESPACE can only succeed if the tablespace is empty. Some standby users may be actively using the tablespace via their - temp_tablespaces parameter. If there are temporary files in the + temp_tablespaces parameter. If there are temporary files in the tablespace, all active queries are canceled to ensure that temporary files are removed, so the tablespace can be removed and WAL replay can continue. - Running DROP DATABASE or ALTER DATABASE ... SET - TABLESPACE on the primary + Running DROP DATABASE or ALTER DATABASE ... SET + TABLESPACE on the primary will generate a WAL entry that will cause all users connected to that database on the standby to be forcibly disconnected. This action occurs immediately, whatever the setting of - max_standby_streaming_delay. Note that - ALTER DATABASE ... RENAME does not disconnect users, which + max_standby_streaming_delay. Note that + ALTER DATABASE ... RENAME does not disconnect users, which in most cases will go unnoticed, though might in some cases cause a program confusion if it depends in some way upon database name. - In normal (non-recovery) mode, if you issue DROP USER or DROP ROLE + In normal (non-recovery) mode, if you issue DROP USER or DROP ROLE for a role with login capability while that user is still connected then nothing happens to the connected user - they remain connected. The user cannot reconnect however. This behavior applies in recovery also, so a - DROP USER on the primary does not disconnect that user on the standby. + DROP USER on the primary does not disconnect that user on the standby. @@ -2361,7 +2361,7 @@ LOG: database system is ready to accept read only connections restartpoints (similar to checkpoints on the primary) and normal block cleaning activities. This can include updates of the hint bit information stored on the standby server. - The CHECKPOINT command is accepted during recovery, + The CHECKPOINT command is accepted during recovery, though it performs a restartpoint rather than a new checkpoint. @@ -2427,15 +2427,15 @@ LOG: database system is ready to accept read only connections - At the end of recovery, AccessExclusiveLocks held by prepared transactions + At the end of recovery, AccessExclusiveLocks held by prepared transactions will require twice the normal number of lock table entries. If you plan on running either a large number of concurrent prepared transactions - that normally take AccessExclusiveLocks, or you plan on having one - large transaction that takes many AccessExclusiveLocks, you are - advised to select a larger value of max_locks_per_transaction, + that normally take AccessExclusiveLocks, or you plan on having one + large transaction that takes many AccessExclusiveLocks, you are + advised to select a larger value of max_locks_per_transaction, perhaps as much as twice the value of the parameter on the primary server. You need not consider this at all if - your setting of max_prepared_transactions is 0. + your setting of max_prepared_transactions is 0. 
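The advice above about roughly doubling max_locks_per_transaction on the standby can be applied as follows; the value 128 is hypothetical, chosen as twice a primary that uses the default of 64, and the parameter only takes effect after a server restart:

-- On the standby: reserve extra lock table space for AccessExclusiveLocks
-- held by prepared transactions at the end of recovery.
ALTER SYSTEM SET max_locks_per_transaction = 128;  -- restart required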
diff --git a/doc/src/sgml/history.sgml b/doc/src/sgml/history.sgml index a7f4b701ea..d1535469f9 100644 --- a/doc/src/sgml/history.sgml +++ b/doc/src/sgml/history.sgml @@ -132,7 +132,7 @@ (psql) was provided for interactive SQL queries, which used GNU Readline. This largely superseded - the old monitor program. + the old monitor program. @@ -215,7 +215,7 @@ - Details about what has happened in PostgreSQL since + Details about what has happened in PostgreSQL since then can be found in . diff --git a/doc/src/sgml/hstore.sgml b/doc/src/sgml/hstore.sgml index db5d4409a6..0264e4e532 100644 --- a/doc/src/sgml/hstore.sgml +++ b/doc/src/sgml/hstore.sgml @@ -8,21 +8,21 @@ - This module implements the hstore data type for storing sets of - key/value pairs within a single PostgreSQL value. + This module implements the hstore data type for storing sets of + key/value pairs within a single PostgreSQL value. This can be useful in various scenarios, such as rows with many attributes that are rarely examined, or semi-structured data. Keys and values are simply text strings. - <type>hstore</> External Representation + <type>hstore</type> External Representation - The text representation of an hstore, used for input and output, - includes zero or more key => - value pairs separated by commas. Some examples: + The text representation of an hstore, used for input and output, + includes zero or more key => + value pairs separated by commas. Some examples: k => v @@ -31,15 +31,15 @@ foo => bar, baz => whatever The order of the pairs is not significant (and may not be reproduced on - output). Whitespace between pairs or around the => sign is + output). Whitespace between pairs or around the => sign is ignored. Double-quote keys and values that include whitespace, commas, - =s or >s. To include a double quote or a + =s or >s. To include a double quote or a backslash in a key or value, escape it with a backslash. - Each key in an hstore is unique. If you declare an hstore - with duplicate keys, only one will be stored in the hstore and + Each key in an hstore is unique. If you declare an hstore + with duplicate keys, only one will be stored in the hstore and there is no guarantee as to which will be kept: @@ -51,24 +51,24 @@ SELECT 'a=>1,a=>2'::hstore; - A value (but not a key) can be an SQL NULL. For example: + A value (but not a key) can be an SQL NULL. For example: key => NULL - The NULL keyword is case-insensitive. Double-quote the - NULL to treat it as the ordinary string NULL. + The NULL keyword is case-insensitive. Double-quote the + NULL to treat it as the ordinary string NULL. - Keep in mind that the hstore text format, when used for input, - applies before any required quoting or escaping. If you are - passing an hstore literal via a parameter, then no additional + Keep in mind that the hstore text format, when used for input, + applies before any required quoting or escaping. If you are + passing an hstore literal via a parameter, then no additional processing is needed. But if you're passing it as a quoted literal constant, then any single-quote characters and (depending on the setting of - the standard_conforming_strings configuration parameter) + the standard_conforming_strings configuration parameter) backslash characters need to be escaped correctly. See for more on the handling of string constants. 
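As an example of the quoting rules just described, here is an hstore value written as a quoted SQL literal, assuming standard_conforming_strings is on so that backslashes in the single-quoted string reach the hstore parser unchanged:

-- The key contains a space and the value contains a double quote, so both are
-- double-quoted in hstore syntax and the embedded quote is backslash-escaped:
SELECT '"multi word key" => "say \"hi\"", plain => NULL'::hstore;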
@@ -83,7 +83,7 @@ key => NULL - <type>hstore</> Operators and Functions + <type>hstore</type> Operators and Functions The operators provided by the hstore module are @@ -92,7 +92,7 @@ key => NULL - <type>hstore</> Operators + <type>hstore</type> Operators @@ -106,99 +106,99 @@ key => NULL - hstore -> text - get value for key (NULL if not present) + hstore -> text + get value for key (NULL if not present) 'a=>x, b=>y'::hstore -> 'a' x - hstore -> text[] - get values for keys (NULL if not present) + hstore -> text[] + get values for keys (NULL if not present) 'a=>x, b=>y, c=>z'::hstore -> ARRAY['c','a'] {"z","x"} - hstore || hstore - concatenate hstores + hstore || hstore + concatenate hstores 'a=>b, c=>d'::hstore || 'c=>x, d=>q'::hstore "a"=>"b", "c"=>"x", "d"=>"q" - hstore ? text - does hstore contain key? + hstore ? text + does hstore contain key? 'a=>1'::hstore ? 'a' t - hstore ?& text[] - does hstore contain all specified keys? + hstore ?& text[] + does hstore contain all specified keys? 'a=>1,b=>2'::hstore ?& ARRAY['a','b'] t - hstore ?| text[] - does hstore contain any of the specified keys? + hstore ?| text[] + does hstore contain any of the specified keys? 'a=>1,b=>2'::hstore ?| ARRAY['b','c'] t - hstore @> hstore + hstore @> hstore does left operand contain right? 'a=>b, b=>1, c=>NULL'::hstore @> 'b=>1' t - hstore <@ hstore + hstore <@ hstore is left operand contained in right? 'a=>c'::hstore <@ 'a=>b, b=>1, c=>NULL' f - hstore - text + hstore - text delete key from left operand 'a=>1, b=>2, c=>3'::hstore - 'b'::text "a"=>"1", "c"=>"3" - hstore - text[] + hstore - text[] delete keys from left operand 'a=>1, b=>2, c=>3'::hstore - ARRAY['a','b'] "c"=>"3" - hstore - hstore + hstore - hstore delete matching pairs from left operand 'a=>1, b=>2, c=>3'::hstore - 'a=>4, b=>2'::hstore "a"=>"1", "c"=>"3" - record #= hstore - replace fields in record with matching values from hstore + record #= hstore + replace fields in record with matching values from hstore see Examples section - %% hstore - convert hstore to array of alternating keys and values + %% hstore + convert hstore to array of alternating keys and values %% 'a=>foo, b=>bar'::hstore {a,foo,b,bar} - %# hstore - convert hstore to two-dimensional key/value array + %# hstore + convert hstore to two-dimensional key/value array %# 'a=>foo, b=>bar'::hstore {{a,foo},{b,bar}} @@ -209,8 +209,8 @@ key => NULL - Prior to PostgreSQL 8.2, the containment operators @> - and <@ were called @ and ~, + Prior to PostgreSQL 8.2, the containment operators @> + and <@ were called @ and ~, respectively. These names are still available, but are deprecated and will eventually be removed. Notice that the old names are reversed from the convention formerly followed by the core geometric data types! @@ -218,7 +218,7 @@ key => NULL
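A few of the operators from the table above, shown as stand-alone queries; the results follow the examples given in the table:

SELECT 'a=>x, b=>y'::hstore -> 'a';                   -- x
SELECT 'a=>1, b=>2'::hstore ?& ARRAY['a','b'];        -- t
SELECT 'a=>b, b=>1, c=>NULL'::hstore @> 'b=>1';       -- t
SELECT 'a=>1, b=>2, c=>3'::hstore - ARRAY['a','b'];   -- "c"=>"3"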
- <type>hstore</> Functions + <type>hstore</type> Functions @@ -235,7 +235,7 @@ key => NULL hstore(record)hstore hstore - construct an hstore from a record or row + construct an hstore from a record or row hstore(ROW(1,2)) f1=>1,f2=>2 @@ -243,7 +243,7 @@ key => NULL hstore(text[]) hstore - construct an hstore from an array, which may be either + construct an hstore from an array, which may be either a key/value array, or a two-dimensional array hstore(ARRAY['a','1','b','2']) || hstore(ARRAY[['c','3'],['d','4']]) a=>1, b=>2, c=>3, d=>4 @@ -252,7 +252,7 @@ key => NULL hstore(text[], text[]) hstore - construct an hstore from separate key and value arrays + construct an hstore from separate key and value arrays hstore(ARRAY['a','b'], ARRAY['1','2']) "a"=>"1","b"=>"2" @@ -260,7 +260,7 @@ key => NULL hstore(text, text) hstore - make single-item hstore + make single-item hstore hstore('a', 'b') "a"=>"b" @@ -268,7 +268,7 @@ key => NULL akeys(hstore)akeys text[] - get hstore's keys as an array + get hstore's keys as an array akeys('a=>1,b=>2') {a,b} @@ -276,7 +276,7 @@ key => NULL skeys(hstore)skeys setof text - get hstore's keys as a set + get hstore's keys as a set skeys('a=>1,b=>2') @@ -288,7 +288,7 @@ b avals(hstore)avals text[] - get hstore's values as an array + get hstore's values as an array avals('a=>1,b=>2') {1,2} @@ -296,7 +296,7 @@ b svals(hstore)svals setof text - get hstore's values as a set + get hstore's values as a set svals('a=>1,b=>2') @@ -308,7 +308,7 @@ b hstore_to_array(hstore)hstore_to_array text[] - get hstore's keys and values as an array of alternating + get hstore's keys and values as an array of alternating keys and values hstore_to_array('a=>1,b=>2') {a,1,b,2} @@ -317,7 +317,7 @@ b hstore_to_matrix(hstore)hstore_to_matrix text[] - get hstore's keys and values as a two-dimensional array + get hstore's keys and values as a two-dimensional array hstore_to_matrix('a=>1,b=>2') {{a,1},{b,2}} @@ -359,7 +359,7 @@ b slice(hstore, text[])slice hstore - extract a subset of an hstore + extract a subset of an hstore slice('a=>1,b=>2,c=>3'::hstore, ARRAY['b','c','x']) "b"=>"2", "c"=>"3" @@ -367,7 +367,7 @@ b each(hstore)each setof(key text, value text) - get hstore's keys and values as a set + get hstore's keys and values as a set select * from each('a=>1,b=>2') @@ -381,7 +381,7 @@ b exist(hstore,text)exist boolean - does hstore contain key? + does hstore contain key? exist('a=>1','a') t @@ -389,7 +389,7 @@ b defined(hstore,text)defined boolean - does hstore contain non-NULL value for key? + does hstore contain non-NULL value for key? defined('a=>NULL','a') f @@ -421,7 +421,7 @@ b populate_record(record,hstore)populate_record record - replace fields in record with matching values from hstore + replace fields in record with matching values from hstore see Examples section @@ -442,7 +442,7 @@ b The function populate_record is actually declared - with anyelement, not record, as its first argument, + with anyelement, not record, as its first argument, but it will reject non-record types with a run-time error. @@ -452,8 +452,8 @@ b Indexes - hstore has GiST and GIN index support for the @>, - ?, ?& and ?| operators. For example: + hstore has GiST and GIN index support for the @>, + ?, ?& and ?| operators. For example: CREATE INDEX hidx ON testhstore USING GIST (h); @@ -462,12 +462,12 @@ CREATE INDEX hidx ON testhstore USING GIN (h); - hstore also supports btree or hash indexes for - the = operator. 
This allows hstore columns to be - declared UNIQUE, or to be used in GROUP BY, - ORDER BY or DISTINCT expressions. The sort ordering - for hstore values is not particularly useful, but these indexes - may be useful for equivalence lookups. Create indexes for = + hstore also supports btree or hash indexes for + the = operator. This allows hstore columns to be + declared UNIQUE, or to be used in GROUP BY, + ORDER BY or DISTINCT expressions. The sort ordering + for hstore values is not particularly useful, but these indexes + may be useful for equivalence lookups. Create indexes for = comparisons as follows: @@ -495,7 +495,7 @@ UPDATE tab SET h = delete(h, 'k1'); - Convert a record to an hstore: + Convert a record to an hstore: CREATE TABLE test (col1 integer, col2 text, col3 text); INSERT INTO test VALUES (123, 'foo', 'bar'); @@ -509,7 +509,7 @@ SELECT hstore(t) FROM test AS t; - Convert an hstore to a predefined record type: + Convert an hstore to a predefined record type: CREATE TABLE test (col1 integer, col2 text, col3 text); @@ -523,7 +523,7 @@ SELECT * FROM populate_record(null::test, - Modify an existing record using the values from an hstore: + Modify an existing record using the values from an hstore: CREATE TABLE test (col1 integer, col2 text, col3 text); INSERT INTO test VALUES (123, 'foo', 'bar'); @@ -541,7 +541,7 @@ SELECT (r).* FROM (SELECT t #= '"col3"=>"baz"' AS r FROM test t) s; Statistics - The hstore type, because of its intrinsic liberality, could + The hstore type, because of its intrinsic liberality, could contain a lot of different keys. Checking for valid keys is the task of the application. The following examples demonstrate several techniques for checking keys and obtaining statistics. @@ -588,7 +588,7 @@ SELECT key, count(*) FROM Compatibility - As of PostgreSQL 9.0, hstore uses a different internal + As of PostgreSQL 9.0, hstore uses a different internal representation than previous versions. This presents no obstacle for dump/restore upgrades since the text representation (used in the dump) is unchanged. @@ -599,7 +599,7 @@ SELECT key, count(*) FROM having the new code recognize old-format data. This will entail a slight performance penalty when processing data that has not yet been modified by the new code. It is possible to force an upgrade of all values in a table - column by doing an UPDATE statement as follows: + column by doing an UPDATE statement as follows: UPDATE tablename SET hstorecol = hstorecol || ''; @@ -610,7 +610,7 @@ UPDATE tablename SET hstorecol = hstorecol || ''; ALTER TABLE tablename ALTER hstorecol TYPE hstore USING hstorecol || ''; - The ALTER TABLE method requires an exclusive lock on the table, + The ALTER TABLE method requires an exclusive lock on the table, but does not result in bloating the table with old row versions. diff --git a/doc/src/sgml/indexam.sgml b/doc/src/sgml/indexam.sgml index aa3d371d2e..b06ffcdbff 100644 --- a/doc/src/sgml/indexam.sgml +++ b/doc/src/sgml/indexam.sgml @@ -6,17 +6,17 @@ This chapter defines the interface between the core PostgreSQL system and index access - methods, which manage individual index types. The core system + methods, which manage individual index types. The core system knows nothing about indexes beyond what is specified here, so it is possible to develop entirely new index types by writing add-on code. 
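As a point of reference for the catalog entries discussed below, the index access methods known to an installation can be listed, and new ones registered, from SQL. The names myam and myam_handler here are placeholders, not part of any shipped access method:

SELECT amname, amhandler FROM pg_am WHERE amtype = 'i';
CREATE ACCESS METHOD myam TYPE INDEX HANDLER myam_handler;  -- the handler function must already exist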
All indexes in PostgreSQL are what are known - technically as secondary indexes; that is, the index is + technically as secondary indexes; that is, the index is physically separate from the table file that it describes. Each index - is stored as its own physical relation and so is described - by an entry in the pg_class catalog. The contents of an + is stored as its own physical relation and so is described + by an entry in the pg_class catalog. The contents of an index are entirely under the control of its index access method. In practice, all index access methods divide indexes into standard-size pages so that they can use the regular storage manager and buffer manager @@ -28,7 +28,7 @@ An index is effectively a mapping from some data key values to - tuple identifiers, or TIDs, of row versions + tuple identifiers, or TIDs, of row versions (tuples) in the index's parent table. A TID consists of a block number and an item number within that block (see ). This is sufficient @@ -50,7 +50,7 @@ Each index access method is described by a row in the pg_am system catalog. The pg_am entry - specifies a name and a handler function for the access + specifies a name and a handler function for the access method. These entries can be created and deleted using the and SQL commands. @@ -58,14 +58,14 @@ An index access method handler function must be declared to accept a - single argument of type internal and to return the - pseudo-type index_am_handler. The argument is a dummy value that + single argument of type internal and to return the + pseudo-type index_am_handler. The argument is a dummy value that simply serves to prevent handler functions from being called directly from SQL commands. The result of the function must be a palloc'd struct of type IndexAmRoutine, which contains everything that the core code needs to know to make use of the index access method. The IndexAmRoutine struct, also called the access - method's API struct, includes fields specifying assorted + method's API struct, includes fields specifying assorted fixed properties of the access method, such as whether it can support multicolumn indexes. More importantly, it contains pointers to support functions for the access method, which do all of the real work to access @@ -144,8 +144,8 @@ typedef struct IndexAmRoutine To be useful, an index access method must also have one or more - operator families and - operator classes defined in + operator families and + operator classes defined in pg_opfamily, pg_opclass, pg_amop, and @@ -170,12 +170,12 @@ typedef struct IndexAmRoutine key values come from (it is always handed precomputed key values) but it will be very interested in the operator class information in pg_index. Both of these catalog entries can be - accessed as part of the Relation data structure that is + accessed as part of the Relation data structure that is passed to all operations on the index. - Some of the flag fields of IndexAmRoutine have nonobvious + Some of the flag fields of IndexAmRoutine have nonobvious implications. The requirements of amcanunique are discussed in . The amcanmulticol flag asserts that the @@ -185,7 +185,7 @@ typedef struct IndexAmRoutine When amcanmulticol is false, amoptionalkey essentially says whether the access method supports full-index scans without any restriction clause. 
- Access methods that support multiple index columns must + Access methods that support multiple index columns must support scans that omit restrictions on any or all of the columns after the first; however they are permitted to require some restriction to appear for the first index column, and this is signaled by setting @@ -201,17 +201,17 @@ typedef struct IndexAmRoutine indexes that have amoptionalkey true must index nulls, since the planner might decide to use such an index with no scan keys at all. A related restriction is that an index - access method that supports multiple index columns must + access method that supports multiple index columns must support indexing null values in columns after the first, because the planner will assume the index can be used for queries that do not restrict these columns. For example, consider an index on (a,b) and a query with WHERE a = 4. The system will assume the index can be used to scan for rows with a = 4, which is wrong if the - index omits rows where b is null. + index omits rows where b is null. It is, however, OK to omit rows where the first indexed column is null. An index access method that does index nulls may also set amsearchnulls, indicating that it supports - IS NULL and IS NOT NULL clauses as search + IS NULL and IS NOT NULL clauses as search conditions. @@ -235,8 +235,8 @@ ambuild (Relation heapRelation, Build a new index. The index relation has been physically created, but is empty. It must be filled in with whatever fixed data the access method requires, plus entries for all tuples already existing - in the table. Ordinarily the ambuild function will call - IndexBuildHeapScan() to scan the table for existing tuples + in the table. Ordinarily the ambuild function will call + IndexBuildHeapScan() to scan the table for existing tuples and compute the keys that need to be inserted into the index. The function must return a palloc'd struct containing statistics about the new index. @@ -264,22 +264,22 @@ aminsert (Relation indexRelation, IndexUniqueCheck checkUnique, IndexInfo *indexInfo); - Insert a new tuple into an existing index. The values and - isnull arrays give the key values to be indexed, and - heap_tid is the TID to be indexed. + Insert a new tuple into an existing index. The values and + isnull arrays give the key values to be indexed, and + heap_tid is the TID to be indexed. If the access method supports unique indexes (its - amcanunique flag is true) then - checkUnique indicates the type of uniqueness check to + amcanunique flag is true) then + checkUnique indicates the type of uniqueness check to perform. This varies depending on whether the unique constraint is deferrable; see for details. - Normally the access method only needs the heapRelation + Normally the access method only needs the heapRelation parameter when performing uniqueness checking (since then it will have to look into the heap to verify tuple liveness). The function's Boolean result value is significant only when - checkUnique is UNIQUE_CHECK_PARTIAL. + checkUnique is UNIQUE_CHECK_PARTIAL. In this case a TRUE result means the new entry is known unique, whereas FALSE means it might be non-unique (and a deferred uniqueness check must be scheduled). For other cases a constant FALSE result is recommended. @@ -287,7 +287,7 @@ aminsert (Relation indexRelation, Some indexes might not index all tuples. If the tuple is not to be - indexed, aminsert should just return without doing anything. + indexed, aminsert should just return without doing anything. 
@@ -306,26 +306,26 @@ ambulkdelete (IndexVacuumInfo *info, IndexBulkDeleteCallback callback, void *callback_state); - Delete tuple(s) from the index. This is a bulk delete operation + Delete tuple(s) from the index. This is a bulk delete operation that is intended to be implemented by scanning the whole index and checking each entry to see if it should be deleted. - The passed-in callback function must be called, in the style - callback(TID, callback_state) returns bool, + The passed-in callback function must be called, in the style + callback(TID, callback_state) returns bool, to determine whether any particular index entry, as identified by its referenced TID, is to be deleted. Must return either NULL or a palloc'd struct containing statistics about the effects of the deletion operation. It is OK to return NULL if no information needs to be passed on to - amvacuumcleanup. + amvacuumcleanup. - Because of limited maintenance_work_mem, - ambulkdelete might need to be called more than once when many - tuples are to be deleted. The stats argument is the result + Because of limited maintenance_work_mem, + ambulkdelete might need to be called more than once when many + tuples are to be deleted. The stats argument is the result of the previous call for this index (it is NULL for the first call within a - VACUUM operation). This allows the AM to accumulate statistics - across the whole operation. Typically, ambulkdelete will - modify and return the same struct if the passed stats is not + VACUUM operation). This allows the AM to accumulate statistics + across the whole operation. Typically, ambulkdelete will + modify and return the same struct if the passed stats is not null. @@ -336,14 +336,14 @@ amvacuumcleanup (IndexVacuumInfo *info, IndexBulkDeleteResult *stats); Clean up after a VACUUM operation (zero or more - ambulkdelete calls). This does not have to do anything + ambulkdelete calls). This does not have to do anything beyond returning index statistics, but it might perform bulk cleanup - such as reclaiming empty index pages. stats is whatever the - last ambulkdelete call returned, or NULL if - ambulkdelete was not called because no tuples needed to be + such as reclaiming empty index pages. stats is whatever the + last ambulkdelete call returned, or NULL if + ambulkdelete was not called because no tuples needed to be deleted. If the result is not NULL it must be a palloc'd struct. - The statistics it contains will be used to update pg_class, - and will be reported by VACUUM if VERBOSE is given. + The statistics it contains will be used to update pg_class, + and will be reported by VACUUM if VERBOSE is given. It is OK to return NULL if the index was not changed at all during the VACUUM operation, but otherwise correct stats should be returned. @@ -351,8 +351,8 @@ amvacuumcleanup (IndexVacuumInfo *info, As of PostgreSQL 8.4, - amvacuumcleanup will also be called at completion of an - ANALYZE operation. In this case stats is always + amvacuumcleanup will also be called at completion of an + ANALYZE operation. In this case stats is always NULL and any return value will be ignored. This case can be distinguished by checking info->analyze_only. 
It is recommended that the access method do nothing except post-insert cleanup in such a @@ -365,12 +365,12 @@ bool amcanreturn (Relation indexRelation, int attno); Check whether the index can support index-only scans on + linkend="indexes-index-only-scans">index-only scans on the given column, by returning the indexed column values for an index entry in the form of an IndexTuple. The attribute number is 1-based, i.e. the first column's attno is 1. Returns TRUE if supported, else FALSE. If the access method does not support index-only scans at all, - the amcanreturn field in its IndexAmRoutine + the amcanreturn field in its IndexAmRoutine struct can be set to NULL. @@ -397,18 +397,18 @@ amoptions (ArrayType *reloptions, Parse and validate the reloptions array for an index. This is called only when a non-null reloptions array exists for the index. - reloptions is a text array containing entries of the - form name=value. - The function should construct a bytea value, which will be copied - into the rd_options field of the index's relcache entry. - The data contents of the bytea value are open for the access + reloptions is a text array containing entries of the + form name=value. + The function should construct a bytea value, which will be copied + into the rd_options field of the index's relcache entry. + The data contents of the bytea value are open for the access method to define; most of the standard access methods use struct - StdRdOptions. - When validate is true, the function should report a suitable + StdRdOptions. + When validate is true, the function should report a suitable error message if any of the options are unrecognized or have invalid - values; when validate is false, invalid entries should be - silently ignored. (validate is false when loading options - already stored in pg_catalog; an invalid entry could only + values; when validate is false, invalid entries should be + silently ignored. (validate is false when loading options + already stored in pg_catalog; an invalid entry could only be found if the access method has changed its rules for options, and in that case ignoring obsolete entries is appropriate.) It is OK to return NULL if default behavior is wanted. @@ -421,44 +421,44 @@ amproperty (Oid index_oid, int attno, IndexAMProperty prop, const char *propname, bool *res, bool *isnull); - The amproperty method allows index access methods to override + The amproperty method allows index access methods to override the default behavior of pg_index_column_has_property and related functions. If the access method does not have any special behavior for index property - inquiries, the amproperty field in - its IndexAmRoutine struct can be set to NULL. - Otherwise, the amproperty method will be called with - index_oid and attno both zero for + inquiries, the amproperty field in + its IndexAmRoutine struct can be set to NULL. + Otherwise, the amproperty method will be called with + index_oid and attno both zero for pg_indexam_has_property calls, - or with index_oid valid and attno zero for + or with index_oid valid and attno zero for pg_index_has_property calls, - or with index_oid valid and attno greater than + or with index_oid valid and attno greater than zero for pg_index_column_has_property calls. - prop is an enum value identifying the property being tested, - while propname is the original property name string. + prop is an enum value identifying the property being tested, + while propname is the original property name string. 
If the core code does not recognize the property name - then prop is AMPROP_UNKNOWN. + then prop is AMPROP_UNKNOWN. Access methods can define custom property names by - checking propname for a match (use pg_strcasecmp + checking propname for a match (use pg_strcasecmp to match, for consistency with the core code); for names known to the core - code, it's better to inspect prop. - If the amproperty method returns true then - it has determined the property test result: it must set *res - to the boolean value to return, or set *isnull - to true to return a NULL. (Both of the referenced variables - are initialized to false before the call.) - If the amproperty method returns false then + code, it's better to inspect prop. + If the amproperty method returns true then + it has determined the property test result: it must set *res + to the boolean value to return, or set *isnull + to true to return a NULL. (Both of the referenced variables + are initialized to false before the call.) + If the amproperty method returns false then the core code will proceed with its normal logic for determining the property test result. Access methods that support ordering operators should - implement AMPROP_DISTANCE_ORDERABLE property testing, as the + implement AMPROP_DISTANCE_ORDERABLE property testing, as the core code does not know how to do that and will return NULL. It may - also be advantageous to implement AMPROP_RETURNABLE testing, + also be advantageous to implement AMPROP_RETURNABLE testing, if that can be done more cheaply than by opening the index and calling - amcanreturn, which is the core code's default behavior. + amcanreturn, which is the core code's default behavior. The default behavior should be satisfactory for all other standard properties. @@ -471,18 +471,18 @@ amvalidate (Oid opclassoid); Validate the catalog entries for the specified operator class, so far as the access method can reasonably do that. For example, this might include testing that all required support functions are provided. - The amvalidate function must return false if the opclass is - invalid. Problems should be reported with ereport messages. + The amvalidate function must return false if the opclass is + invalid. Problems should be reported with ereport messages. The purpose of an index, of course, is to support scans for tuples matching - an indexable WHERE condition, often called a - qualifier or scan key. The semantics of + an indexable WHERE condition, often called a + qualifier or scan key. The semantics of index scanning are described more fully in , - below. An index access method can support plain index scans, - bitmap index scans, or both. The scan-related functions that an + below. An index access method can support plain index scans, + bitmap index scans, or both. The scan-related functions that an index access method must or may provide are: @@ -493,17 +493,17 @@ ambeginscan (Relation indexRelation, int nkeys, int norderbys); - Prepare for an index scan. The nkeys and norderbys + Prepare for an index scan. The nkeys and norderbys parameters indicate the number of quals and ordering operators that will be used in the scan; these may be useful for space allocation purposes. Note that the actual values of the scan keys aren't provided yet. The result must be a palloc'd struct. For implementation reasons the index access method - must create this struct by calling - RelationGetIndexScan(). 
In most cases - ambeginscan does little beyond making that call and perhaps + must create this struct by calling + RelationGetIndexScan(). In most cases + ambeginscan does little beyond making that call and perhaps acquiring locks; - the interesting parts of index-scan startup are in amrescan. + the interesting parts of index-scan startup are in amrescan. @@ -516,10 +516,10 @@ amrescan (IndexScanDesc scan, int norderbys); Start or restart an index scan, possibly with new scan keys. (To restart - using previously-passed keys, NULL is passed for keys and/or - orderbys.) Note that it is not allowed for + using previously-passed keys, NULL is passed for keys and/or + orderbys.) Note that it is not allowed for the number of keys or order-by operators to be larger than - what was passed to ambeginscan. In practice the restart + what was passed to ambeginscan. In practice the restart feature is used when a new outer tuple is selected by a nested-loop join and so a new key comparison value is needed, but the scan key structure remains the same. @@ -534,42 +534,42 @@ amgettuple (IndexScanDesc scan, Fetch the next tuple in the given scan, moving in the given direction (forward or backward in the index). Returns TRUE if a tuple was obtained, FALSE if no matching tuples remain. In the TRUE case the tuple - TID is stored into the scan structure. Note that - success means only that the index contains an entry that matches + TID is stored into the scan structure. Note that + success means only that the index contains an entry that matches the scan keys, not that the tuple necessarily still exists in the heap or - will pass the caller's snapshot test. On success, amgettuple - must also set scan->xs_recheck to TRUE or FALSE. + will pass the caller's snapshot test. On success, amgettuple + must also set scan->xs_recheck to TRUE or FALSE. FALSE means it is certain that the index entry matches the scan keys. TRUE means this is not certain, and the conditions represented by the scan keys must be rechecked against the heap tuple after fetching it. - This provision supports lossy index operators. + This provision supports lossy index operators. Note that rechecking will extend only to the scan conditions; a partial - index predicate (if any) is never rechecked by amgettuple + index predicate (if any) is never rechecked by amgettuple callers. If the index supports index-only scans (i.e., amcanreturn returns TRUE for it), - then on success the AM must also check scan->xs_want_itup, + then on success the AM must also check scan->xs_want_itup, and if that is true it must return the originally indexed data for the index entry. The data can be returned in the form of an - IndexTuple pointer stored at scan->xs_itup, - with tuple descriptor scan->xs_itupdesc; or in the form of - a HeapTuple pointer stored at scan->xs_hitup, - with tuple descriptor scan->xs_hitupdesc. (The latter + IndexTuple pointer stored at scan->xs_itup, + with tuple descriptor scan->xs_itupdesc; or in the form of + a HeapTuple pointer stored at scan->xs_hitup, + with tuple descriptor scan->xs_hitupdesc. (The latter format should be used when reconstructing data that might possibly not fit - into an IndexTuple.) In either case, + into an IndexTuple.) In either case, management of the data referenced by the pointer is the access method's responsibility. The data must remain good at least until the next - amgettuple, amrescan, or amendscan + amgettuple, amrescan, or amendscan call for the scan. 
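Whether an access method can hand back the indexed data in this way is what makes index-only scans possible at the SQL level. A small sketch, using hypothetical table and index names:

CREATE TABLE tenk (id integer, payload text);
CREATE INDEX tenk_id_idx ON tenk (id);
VACUUM tenk;   -- keeps the visibility map current, which index-only scans rely on
EXPLAIN SELECT id FROM tenk WHERE id < 100;
-- with suitable data and statistics, this can produce an Index Only Scan using tenk_id_idx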
- The amgettuple function need only be provided if the access - method supports plain index scans. If it doesn't, the - amgettuple field in its IndexAmRoutine + The amgettuple function need only be provided if the access + method supports plain index scans. If it doesn't, the + amgettuple field in its IndexAmRoutine struct must be set to NULL. @@ -583,24 +583,24 @@ amgetbitmap (IndexScanDesc scan, TIDBitmap (that is, OR the set of tuple IDs into whatever set is already in the bitmap). The number of tuples fetched is returned (this might be just an approximate count, for instance some AMs do not detect duplicates). - While inserting tuple IDs into the bitmap, amgetbitmap can + While inserting tuple IDs into the bitmap, amgetbitmap can indicate that rechecking of the scan conditions is required for specific - tuple IDs. This is analogous to the xs_recheck output parameter - of amgettuple. Note: in the current implementation, support + tuple IDs. This is analogous to the xs_recheck output parameter + of amgettuple. Note: in the current implementation, support for this feature is conflated with support for lossy storage of the bitmap itself, and therefore callers recheck both the scan conditions and the partial index predicate (if any) for recheckable tuples. That might not always be true, however. - amgetbitmap and - amgettuple cannot be used in the same index scan; there - are other restrictions too when using amgetbitmap, as explained + amgetbitmap and + amgettuple cannot be used in the same index scan; there + are other restrictions too when using amgetbitmap, as explained in . - The amgetbitmap function need only be provided if the access - method supports bitmap index scans. If it doesn't, the - amgetbitmap field in its IndexAmRoutine + The amgetbitmap function need only be provided if the access + method supports bitmap index scans. If it doesn't, the + amgetbitmap field in its IndexAmRoutine struct must be set to NULL. @@ -609,7 +609,7 @@ amgetbitmap (IndexScanDesc scan, void amendscan (IndexScanDesc scan); - End a scan and release resources. The scan struct itself + End a scan and release resources. The scan struct itself should not be freed, but any locks or pins taken internally by the access method must be released. @@ -624,9 +624,9 @@ ammarkpos (IndexScanDesc scan); - The ammarkpos function need only be provided if the access + The ammarkpos function need only be provided if the access method supports ordered scans. If it doesn't, - the ammarkpos field in its IndexAmRoutine + the ammarkpos field in its IndexAmRoutine struct may be set to NULL. @@ -639,15 +639,15 @@ amrestrpos (IndexScanDesc scan); - The amrestrpos function need only be provided if the access + The amrestrpos function need only be provided if the access method supports ordered scans. If it doesn't, - the amrestrpos field in its IndexAmRoutine + the amrestrpos field in its IndexAmRoutine struct may be set to NULL. In addition to supporting ordinary index scans, some types of index - may wish to support parallel index scans, which allow + may wish to support parallel index scans, which allow multiple backends to cooperate in performing an index scan. The index access method should arrange things so that each cooperating process returns a subset of the tuples that would be performed by @@ -668,7 +668,7 @@ amestimateparallelscan (void); Estimate and return the number of bytes of dynamic shared memory which the access method will be needed to perform a parallel scan. 
(This number is in addition to, not in lieu of, the amount of space needed for - AM-independent data in ParallelIndexScanDescData.) + AM-independent data in ParallelIndexScanDescData.) @@ -683,9 +683,9 @@ void aminitparallelscan (void *target); This function will be called to initialize dynamic shared memory at the - beginning of a parallel scan. target will point to at least + beginning of a parallel scan. target will point to at least the number of bytes previously returned by - amestimateparallelscan, and this function may use that + amestimateparallelscan, and this function may use that amount of space to store whatever data it wishes. @@ -702,7 +702,7 @@ amparallelrescan (IndexScanDesc scan); This function, if implemented, will be called when a parallel index scan must be restarted. It should reset any shared state set up by - aminitparallelscan such that the scan will be restarted from + aminitparallelscan such that the scan will be restarted from the beginning. @@ -714,16 +714,16 @@ amparallelrescan (IndexScanDesc scan); In an index scan, the index access method is responsible for regurgitating the TIDs of all the tuples it has been told about that match the - scan keys. The access method is not involved in + scan keys. The access method is not involved in actually fetching those tuples from the index's parent table, nor in determining whether they pass the scan's time qualification test or other conditions. - A scan key is the internal representation of a WHERE clause of - the form index_key operator - constant, where the index key is one of the columns of the + A scan key is the internal representation of a WHERE clause of + the form index_key operator + constant, where the index key is one of the columns of the index and the operator is one of the members of the operator family associated with that index column. An index scan has zero or more scan keys, which are implicitly ANDed — the returned tuples are expected @@ -731,7 +731,7 @@ amparallelrescan (IndexScanDesc scan); - The access method can report that the index is lossy, or + The access method can report that the index is lossy, or requires rechecks, for a particular query. This implies that the index scan will return all the entries that pass the scan key, plus possibly additional entries that do not. The core system's index-scan machinery @@ -743,16 +743,16 @@ amparallelrescan (IndexScanDesc scan); Note that it is entirely up to the access method to ensure that it correctly finds all and only the entries passing all the given scan keys. - Also, the core system will simply hand off all the WHERE + Also, the core system will simply hand off all the WHERE clauses that match the index keys and operator families, without any semantic analysis to determine whether they are redundant or contradictory. As an example, given - WHERE x > 4 AND x > 14 where x is a b-tree - indexed column, it is left to the b-tree amrescan function + WHERE x > 4 AND x > 14 where x is a b-tree + indexed column, it is left to the b-tree amrescan function to realize that the first scan key is redundant and can be discarded. - The extent of preprocessing needed during amrescan will + The extent of preprocessing needed during amrescan will depend on the extent to which the index access method needs to reduce - the scan keys to a normalized form. + the scan keys to a normalized form. 
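For example, with the hypothetical tenk table above, both quals below are handed to the b-tree code as scan keys; discarding the redundant one is left to its amrescan-time preprocessing:

EXPLAIN SELECT * FROM tenk WHERE id > 4 AND id > 14;
-- if an index scan is chosen, the Index Cond typically shows both (id > 4) and (id > 14)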
@@ -765,7 +765,7 @@ amparallelrescan (IndexScanDesc scan); Access methods that always return entries in the natural ordering of their data (such as btree) should set - amcanorder to true. + amcanorder to true. Currently, such access methods must use btree-compatible strategy numbers for their equality and ordering operators. @@ -773,11 +773,11 @@ amparallelrescan (IndexScanDesc scan); Access methods that support ordering operators should set - amcanorderbyop to true. + amcanorderbyop to true. This indicates that the index is capable of returning entries in - an order satisfying ORDER BY index_key - operator constant. Scan modifiers - of that form can be passed to amrescan as described + an order satisfying ORDER BY index_key + operator constant. Scan modifiers + of that form can be passed to amrescan as described previously. @@ -785,29 +785,29 @@ amparallelrescan (IndexScanDesc scan); - The amgettuple function has a direction argument, - which can be either ForwardScanDirection (the normal case) - or BackwardScanDirection. If the first call after - amrescan specifies BackwardScanDirection, then the + The amgettuple function has a direction argument, + which can be either ForwardScanDirection (the normal case) + or BackwardScanDirection. If the first call after + amrescan specifies BackwardScanDirection, then the set of matching index entries is to be scanned back-to-front rather than in - the normal front-to-back direction, so amgettuple must return + the normal front-to-back direction, so amgettuple must return the last matching tuple in the index, rather than the first one as it normally would. (This will only occur for access - methods that set amcanorder to true.) After the - first call, amgettuple must be prepared to advance the scan in + methods that set amcanorder to true.) After the + first call, amgettuple must be prepared to advance the scan in either direction from the most recently returned entry. (But if - amcanbackward is false, all subsequent + amcanbackward is false, all subsequent calls will have the same direction as the first one.) - Access methods that support ordered scans must support marking a + Access methods that support ordered scans must support marking a position in a scan and later returning to the marked position. The same position might be restored multiple times. However, only one position need - be remembered per scan; a new ammarkpos call overrides the + be remembered per scan; a new ammarkpos call overrides the previously marked position. An access method that does not support ordered - scans need not provide ammarkpos and amrestrpos - functions in IndexAmRoutine; set those pointers to NULL + scans need not provide ammarkpos and amrestrpos + functions in IndexAmRoutine; set those pointers to NULL instead. @@ -835,29 +835,29 @@ amparallelrescan (IndexScanDesc scan); - Instead of using amgettuple, an index scan can be done with - amgetbitmap to fetch all tuples in one call. This can be - noticeably more efficient than amgettuple because it allows + Instead of using amgettuple, an index scan can be done with + amgetbitmap to fetch all tuples in one call. This can be + noticeably more efficient than amgettuple because it allows avoiding lock/unlock cycles within the access method. In principle - amgetbitmap should have the same effects as repeated - amgettuple calls, but we impose several restrictions to - simplify matters. 
First of all, amgetbitmap returns all + amgetbitmap should have the same effects as repeated + amgettuple calls, but we impose several restrictions to + simplify matters. First of all, amgetbitmap returns all tuples at once and marking or restoring scan positions isn't supported. Secondly, the tuples are returned in a bitmap which doesn't - have any specific ordering, which is why amgetbitmap doesn't - take a direction argument. (Ordering operators will never be + have any specific ordering, which is why amgetbitmap doesn't + take a direction argument. (Ordering operators will never be supplied for such a scan, either.) Also, there is no provision for index-only scans with - amgetbitmap, since there is no way to return the contents of + amgetbitmap, since there is no way to return the contents of index tuples. - Finally, amgetbitmap + Finally, amgetbitmap does not guarantee any locking of the returned tuples, with implications spelled out in . Note that it is permitted for an access method to implement only - amgetbitmap and not amgettuple, or vice versa, + amgetbitmap and not amgettuple, or vice versa, if its internal implementation is unsuited to one API or the other. @@ -870,26 +870,26 @@ amparallelrescan (IndexScanDesc scan); Index access methods must handle concurrent updates of the index by multiple processes. The core PostgreSQL system obtains - AccessShareLock on the index during an index scan, and - RowExclusiveLock when updating the index (including plain - VACUUM). Since these lock types do not conflict, the access + AccessShareLock on the index during an index scan, and + RowExclusiveLock when updating the index (including plain + VACUUM). Since these lock types do not conflict, the access method is responsible for handling any fine-grained locking it might need. An exclusive lock on the index as a whole will be taken only during index - creation, destruction, or REINDEX. + creation, destruction, or REINDEX. Building an index type that supports concurrent updates usually requires extensive and subtle analysis of the required behavior. For the b-tree and hash index types, you can read about the design decisions involved in - src/backend/access/nbtree/README and - src/backend/access/hash/README. + src/backend/access/nbtree/README and + src/backend/access/hash/README. Aside from the index's own internal consistency requirements, concurrent updates create issues about consistency between the parent table (the - heap) and the index. Because + heap) and the index. Because PostgreSQL separates accesses and updates of the heap from those of the index, there are windows in which the index might be inconsistent with the heap. We handle this problem @@ -906,7 +906,7 @@ amparallelrescan (IndexScanDesc scan); - When a heap entry is to be deleted (by VACUUM), all its + When a heap entry is to be deleted (by VACUUM), all its index entries must be removed first. @@ -914,7 +914,7 @@ amparallelrescan (IndexScanDesc scan); An index scan must maintain a pin on the index page holding the item last returned by - amgettuple, and ambulkdelete cannot delete + amgettuple, and ambulkdelete cannot delete entries from pages that are pinned by other backends. The need for this rule is explained below. 
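The lock levels mentioned above can be observed from an ordinary session while a scan is open, again using the hypothetical tenk table:

BEGIN;
DECLARE c CURSOR FOR SELECT id FROM tenk WHERE id < 100;
FETCH 1 FROM c;
SELECT relation::regclass, mode FROM pg_locks
  WHERE pid = pg_backend_pid() AND locktype = 'relation';
-- expect AccessShareLock on tenk, and on tenk_id_idx if the plan uses the index
COMMIT;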
@@ -922,33 +922,33 @@ amparallelrescan (IndexScanDesc scan); Without the third rule, it is possible for an index reader to - see an index entry just before it is removed by VACUUM, and + see an index entry just before it is removed by VACUUM, and then to arrive at the corresponding heap entry after that was removed by - VACUUM. + VACUUM. This creates no serious problems if that item number is still unused when the reader reaches it, since an empty - item slot will be ignored by heap_fetch(). But what if a + item slot will be ignored by heap_fetch(). But what if a third backend has already re-used the item slot for something else? When using an MVCC-compliant snapshot, there is no problem because the new occupant of the slot is certain to be too new to pass the snapshot test. However, with a non-MVCC-compliant snapshot (such as - SnapshotAny), it would be possible to accept and return + SnapshotAny), it would be possible to accept and return a row that does not in fact match the scan keys. We could defend against this scenario by requiring the scan keys to be rechecked against the heap row in all cases, but that is too expensive. Instead, we use a pin on an index page as a proxy to indicate that the reader - might still be in flight from the index entry to the matching - heap entry. Making ambulkdelete block on such a pin ensures - that VACUUM cannot delete the heap entry before the reader + might still be in flight from the index entry to the matching + heap entry. Making ambulkdelete block on such a pin ensures + that VACUUM cannot delete the heap entry before the reader is done with it. This solution costs little in run time, and adds blocking overhead only in the rare cases where there actually is a conflict. - This solution requires that index scans be synchronous: we have + This solution requires that index scans be synchronous: we have to fetch each heap tuple immediately after scanning the corresponding index entry. This is expensive for a number of reasons. An - asynchronous scan in which we collect many TIDs from the index, + asynchronous scan in which we collect many TIDs from the index, and only visit the heap tuples sometime later, requires much less index locking overhead and can allow a more efficient heap access pattern. Per the above analysis, we must use the synchronous approach for @@ -957,13 +957,13 @@ amparallelrescan (IndexScanDesc scan); - In an amgetbitmap index scan, the access method does not + In an amgetbitmap index scan, the access method does not keep an index pin on any of the returned tuples. Therefore it is only safe to use such scans with MVCC-compliant snapshots. - When the ampredlocks flag is not set, any scan using that + When the ampredlocks flag is not set, any scan using that index access method within a serializable transaction will acquire a nonblocking predicate lock on the full index. This will generate a read-write conflict with the insert of any tuple into that index by a @@ -982,9 +982,9 @@ amparallelrescan (IndexScanDesc scan); PostgreSQL enforces SQL uniqueness constraints - using unique indexes, which are indexes that disallow + using unique indexes, which are indexes that disallow multiple entries with identical keys. An access method that supports this - feature sets amcanunique true. + feature sets amcanunique true. (At present, only b-tree supports it.) @@ -1032,7 +1032,7 @@ amparallelrescan (IndexScanDesc scan); no violation should be reported. 
(This case cannot occur during the ordinary scenario of inserting a row that's just been created by the current transaction. It can happen during - CREATE UNIQUE INDEX CONCURRENTLY, however.) + CREATE UNIQUE INDEX CONCURRENTLY, however.) @@ -1057,32 +1057,32 @@ amparallelrescan (IndexScanDesc scan); are done. Otherwise, we schedule a recheck to occur when it is time to enforce the constraint. If, at the time of the recheck, both the inserted tuple and some other tuple with the same key are live, then the error - must be reported. (Note that for this purpose, live actually - means any tuple in the index entry's HOT chain is live.) - To implement this, the aminsert function is passed a - checkUnique parameter having one of the following values: + must be reported. (Note that for this purpose, live actually + means any tuple in the index entry's HOT chain is live.) + To implement this, the aminsert function is passed a + checkUnique parameter having one of the following values: - UNIQUE_CHECK_NO indicates that no uniqueness checking + UNIQUE_CHECK_NO indicates that no uniqueness checking should be done (this is not a unique index). - UNIQUE_CHECK_YES indicates that this is a non-deferrable + UNIQUE_CHECK_YES indicates that this is a non-deferrable unique index, and the uniqueness check must be done immediately, as described above. - UNIQUE_CHECK_PARTIAL indicates that the unique + UNIQUE_CHECK_PARTIAL indicates that the unique constraint is deferrable. PostgreSQL will use this mode to insert each row's index entry. The access method must allow duplicate entries into the index, and report any - potential duplicates by returning FALSE from aminsert. + potential duplicates by returning FALSE from aminsert. For each row for which FALSE is returned, a deferred recheck will be scheduled. @@ -1098,21 +1098,21 @@ amparallelrescan (IndexScanDesc scan); - UNIQUE_CHECK_EXISTING indicates that this is a deferred + UNIQUE_CHECK_EXISTING indicates that this is a deferred recheck of a row that was reported as a potential uniqueness violation. - Although this is implemented by calling aminsert, the - access method must not insert a new index entry in this + Although this is implemented by calling aminsert, the + access method must not insert a new index entry in this case. The index entry is already present. Rather, the access method must check to see if there is another live index entry. If so, and if the target row is also still live, report error. - It is recommended that in a UNIQUE_CHECK_EXISTING call, + It is recommended that in a UNIQUE_CHECK_EXISTING call, the access method further verify that the target row actually does have an existing entry in the index, and report error if not. This is a good idea because the index tuple values passed to - aminsert will have been recomputed. If the index + aminsert will have been recomputed. If the index definition involves functions that are not really immutable, we might be checking the wrong area of the index. Checking that the target row is found in the recheck verifies that we are scanning @@ -1128,20 +1128,20 @@ amparallelrescan (IndexScanDesc scan); Index Cost Estimation Functions - The amcostestimate function is given information describing + The amcostestimate function is given information describing a possible index scan, including lists of WHERE and ORDER BY clauses that have been determined to be usable with the index. 
It must return estimates of the cost of accessing the index and the selectivity of the WHERE clauses (that is, the fraction of parent-table rows that will be retrieved during the index scan). For simple cases, nearly all the work of the cost estimator can be done by calling standard routines - in the optimizer; the point of having an amcostestimate function is + in the optimizer; the point of having an amcostestimate function is to allow index access methods to provide index-type-specific knowledge, in case it is possible to improve on the standard estimates. - Each amcostestimate function must have the signature: + Each amcostestimate function must have the signature: void @@ -1158,7 +1158,7 @@ amcostestimate (PlannerInfo *root, - root + root The planner's information about the query being processed. @@ -1167,7 +1167,7 @@ amcostestimate (PlannerInfo *root, - path + path The index access path being considered. All fields except cost and @@ -1177,14 +1177,14 @@ amcostestimate (PlannerInfo *root, - loop_count + loop_count The number of repetitions of the index scan that should be factored into the cost estimates. This will typically be greater than one when considering a parameterized scan for use in the inside of a nestloop join. Note that the cost estimates should still be for just one scan; - a larger loop_count means that it may be appropriate + a larger loop_count means that it may be appropriate to allow for some caching effects across multiple scans. @@ -1197,7 +1197,7 @@ amcostestimate (PlannerInfo *root, - *indexStartupCost + *indexStartupCost Set to cost of index start-up processing @@ -1206,7 +1206,7 @@ amcostestimate (PlannerInfo *root, - *indexTotalCost + *indexTotalCost Set to total cost of index processing @@ -1215,7 +1215,7 @@ amcostestimate (PlannerInfo *root, - *indexSelectivity + *indexSelectivity Set to index selectivity @@ -1224,7 +1224,7 @@ amcostestimate (PlannerInfo *root, - *indexCorrelation + *indexCorrelation Set to correlation coefficient between index scan order and @@ -1244,17 +1244,17 @@ amcostestimate (PlannerInfo *root, The index access costs should be computed using the parameters used by src/backend/optimizer/path/costsize.c: a sequential - disk block fetch has cost seq_page_cost, a nonsequential fetch - has cost random_page_cost, and the cost of processing one index - row should usually be taken as cpu_index_tuple_cost. In - addition, an appropriate multiple of cpu_operator_cost should + disk block fetch has cost seq_page_cost, a nonsequential fetch + has cost random_page_cost, and the cost of processing one index + row should usually be taken as cpu_index_tuple_cost. In + addition, an appropriate multiple of cpu_operator_cost should be charged for any comparison operators invoked during index processing (especially evaluation of the indexquals themselves). The access costs should include all disk and CPU costs associated with - scanning the index itself, but not the costs of retrieving or + scanning the index itself, but not the costs of retrieving or processing the parent-table rows that are identified by the index. @@ -1266,21 +1266,21 @@ amcostestimate (PlannerInfo *root, - The indexSelectivity should be set to the estimated fraction of the parent + The indexSelectivity should be set to the estimated fraction of the parent table rows that will be retrieved during the index scan. In the case of a lossy query, this will typically be higher than the fraction of rows that actually pass the given qual conditions. 
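The cost parameters named above are ordinary configuration settings, so their effect on an estimate is easy to experiment with:

SHOW seq_page_cost;
SHOW random_page_cost;
SET random_page_cost = 1.1;                  -- e.g. when the data sits on fast SSD storage
EXPLAIN SELECT id FROM tenk WHERE id < 100;  -- the cost=... figures reflect these settings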
- The indexCorrelation should be set to the correlation (ranging between + The indexCorrelation should be set to the correlation (ranging between -1.0 and 1.0) between the index order and the table order. This is used to adjust the estimate for the cost of fetching rows from the parent table. - When loop_count is greater than one, the returned numbers + When loop_count is greater than one, the returned numbers should be averages expected for any one scan of the index. @@ -1307,17 +1307,17 @@ amcostestimate (PlannerInfo *root, Estimate the number of index rows that will be visited during the - scan. For many index types this is the same as indexSelectivity times + scan. For many index types this is the same as indexSelectivity times the number of rows in the index, but it might be more. (Note that the index's size in pages and rows is available from the - path->indexinfo struct.) + path->indexinfo struct.) Estimate the number of index pages that will be retrieved during the scan. - This might be just indexSelectivity times the index's size in pages. + This might be just indexSelectivity times the index's size in pages. diff --git a/doc/src/sgml/indices.sgml b/doc/src/sgml/indices.sgml index e40750e8ec..4cdd387b7b 100644 --- a/doc/src/sgml/indices.sgml +++ b/doc/src/sgml/indices.sgml @@ -147,14 +147,14 @@ CREATE INDEX test1_id_index ON test1 (id); Constructs equivalent to combinations of these operators, such as - BETWEEN and IN, can also be implemented with - a B-tree index search. Also, an IS NULL or IS NOT - NULL condition on an index column can be used with a B-tree index. + BETWEEN and IN, can also be implemented with + a B-tree index search. Also, an IS NULL or IS NOT + NULL condition on an index column can be used with a B-tree index. The optimizer can also use a B-tree index for queries involving the - pattern matching operators LIKE and ~ + pattern matching operators LIKE and ~ if the pattern is a constant and is anchored to the beginning of the string — for example, col LIKE 'foo%' or col ~ '^foo', but not @@ -206,7 +206,7 @@ CREATE INDEX name ON table within which many different indexing strategies can be implemented. Accordingly, the particular operators with which a GiST index can be used vary depending on the indexing strategy (the operator - class). As an example, the standard distribution of + class). As an example, the standard distribution of PostgreSQL includes GiST operator classes for several two-dimensional geometric data types, which support indexed queries using these operators: @@ -231,12 +231,12 @@ CREATE INDEX name ON table The GiST operator classes included in the standard distribution are documented in . Many other GiST operator - classes are available in the contrib collection or as separate + classes are available in the contrib collection or as separate projects. For more information see . - GiST indexes are also capable of optimizing nearest-neighbor + GiST indexes are also capable of optimizing nearest-neighbor searches, such as point '(101,456)' LIMIT 10; @@ -245,7 +245,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; which finds the ten places closest to a given target point. The ability to do this is again dependent on the particular operator class being used. In , operators that can be - used in this way are listed in the column Ordering Operators. + used in this way are listed in the column Ordering Operators. 
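A query like the one above needs a GiST index on the ordering column; assuming places.location is of type point:

CREATE INDEX places_location_idx ON places USING GIST (location);
SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10;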
@@ -290,7 +290,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; GIN index - GIN indexes are inverted indexes which are appropriate for + GIN indexes are inverted indexes which are appropriate for data values that contain multiple component values, such as arrays. An inverted index contains a separate entry for each component value, and can efficiently handle queries that test for the presence of specific @@ -318,7 +318,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; The GIN operator classes included in the standard distribution are documented in . Many other GIN operator - classes are available in the contrib collection or as separate + classes are available in the contrib collection or as separate projects. For more information see . @@ -407,13 +407,13 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); are checked in the index, so they save visits to the table proper, but they do not reduce the portion of the index that has to be scanned. For example, given an index on (a, b, c) and a - query condition WHERE a = 5 AND b >= 42 AND c < 77, + query condition WHERE a = 5 AND b >= 42 AND c < 77, the index would have to be scanned from the first entry with - a = 5 and b = 42 up through the last entry with - a = 5. Index entries with c >= 77 would be + a = 5 and b = 42 up through the last entry with + a = 5. Index entries with c >= 77 would be skipped, but they'd still have to be scanned through. This index could in principle be used for queries that have constraints - on b and/or c with no constraint on a + on b and/or c with no constraint on a — but the entire index would have to be scanned, so in most cases the planner would prefer a sequential table scan over using the index. @@ -462,17 +462,17 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); - Indexes and <literal>ORDER BY</> + Indexes and <literal>ORDER BY</literal> index - and ORDER BY + and ORDER BY In addition to simply finding the rows to be returned by a query, an index may be able to deliver them in a specific sorted order. - This allows a query's ORDER BY specification to be honored + This allows a query's ORDER BY specification to be honored without a separate sorting step. Of the index types currently supported by PostgreSQL, only B-tree can produce sorted output — the other index types return @@ -480,7 +480,7 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); - The planner will consider satisfying an ORDER BY specification + The planner will consider satisfying an ORDER BY specification either by scanning an available index that matches the specification, or by scanning the table in physical order and doing an explicit sort. For a query that requires scanning a large fraction of the @@ -488,50 +488,50 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); because it requires less disk I/O due to following a sequential access pattern. Indexes are more useful when only a few rows need be fetched. An important - special case is ORDER BY in combination with - LIMIT n: an explicit sort will have to process - all the data to identify the first n rows, but if there is - an index matching the ORDER BY, the first n + special case is ORDER BY in combination with + LIMIT n: an explicit sort will have to process + all the data to identify the first n rows, but if there is + an index matching the ORDER BY, the first n rows can be retrieved directly, without scanning the remainder at all. By default, B-tree indexes store their entries in ascending order with nulls last. 
This means that a forward scan of an index on - column x produces output satisfying ORDER BY x - (or more verbosely, ORDER BY x ASC NULLS LAST). The + column x produces output satisfying ORDER BY x + (or more verbosely, ORDER BY x ASC NULLS LAST). The index can also be scanned backward, producing output satisfying - ORDER BY x DESC - (or more verbosely, ORDER BY x DESC NULLS FIRST, since - NULLS FIRST is the default for ORDER BY DESC). + ORDER BY x DESC + (or more verbosely, ORDER BY x DESC NULLS FIRST, since + NULLS FIRST is the default for ORDER BY DESC). You can adjust the ordering of a B-tree index by including the - options ASC, DESC, NULLS FIRST, - and/or NULLS LAST when creating the index; for example: + options ASC, DESC, NULLS FIRST, + and/or NULLS LAST when creating the index; for example: CREATE INDEX test2_info_nulls_low ON test2 (info NULLS FIRST); CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); An index stored in ascending order with nulls first can satisfy - either ORDER BY x ASC NULLS FIRST or - ORDER BY x DESC NULLS LAST depending on which direction + either ORDER BY x ASC NULLS FIRST or + ORDER BY x DESC NULLS LAST depending on which direction it is scanned in. You might wonder why bother providing all four options, when two options together with the possibility of backward scan would cover - all the variants of ORDER BY. In single-column indexes + all the variants of ORDER BY. In single-column indexes the options are indeed redundant, but in multicolumn indexes they can be - useful. Consider a two-column index on (x, y): this can - satisfy ORDER BY x, y if we scan forward, or - ORDER BY x DESC, y DESC if we scan backward. + useful. Consider a two-column index on (x, y): this can + satisfy ORDER BY x, y if we scan forward, or + ORDER BY x DESC, y DESC if we scan backward. But it might be that the application frequently needs to use - ORDER BY x ASC, y DESC. There is no way to get that + ORDER BY x ASC, y DESC. There is no way to get that ordering from a plain index, but it is possible if the index is defined - as (x ASC, y DESC) or (x DESC, y ASC). + as (x ASC, y DESC) or (x DESC, y ASC). @@ -559,38 +559,38 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); A single index scan can only use query clauses that use the index's columns with operators of its operator class and are joined with - AND. For example, given an index on (a, b) - a query condition like WHERE a = 5 AND b = 6 could - use the index, but a query like WHERE a = 5 OR b = 6 could not + AND. For example, given an index on (a, b) + a query condition like WHERE a = 5 AND b = 6 could + use the index, but a query like WHERE a = 5 OR b = 6 could not directly use the index. Fortunately, - PostgreSQL has the ability to combine multiple indexes + PostgreSQL has the ability to combine multiple indexes (including multiple uses of the same index) to handle cases that cannot - be implemented by single index scans. The system can form AND - and OR conditions across several index scans. For example, - a query like WHERE x = 42 OR x = 47 OR x = 53 OR x = 99 - could be broken down into four separate scans of an index on x, + be implemented by single index scans. The system can form AND + and OR conditions across several index scans. For example, + a query like WHERE x = 42 OR x = 47 OR x = 53 OR x = 99 + could be broken down into four separate scans of an index on x, each scan using one of the query clauses. The results of these scans are then ORed together to produce the result. 
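To make the OR-combination behaviour just described concrete, a minimal sketch with hypothetical names (t, t_x_idx):

CREATE TABLE t (x int, y int);        -- hypothetical table
CREATE INDEX t_x_idx ON t (x);

-- The planner may satisfy the OR list with several scans of t_x_idx whose
-- results are ORed together (visible as a BitmapOr over bitmap index scans
-- in EXPLAIN output); on a tiny or unanalyzed table it may simply prefer
-- a sequential scan instead.
EXPLAIN SELECT * FROM t WHERE x = 42 OR x = 47 OR x = 53 OR x = 99;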
Another example is that if we - have separate indexes on x and y, one possible - implementation of a query like WHERE x = 5 AND y = 6 is to + have separate indexes on x and y, one possible + implementation of a query like WHERE x = 5 AND y = 6 is to use each index with the appropriate query clause and then AND together the index results to identify the result rows. To combine multiple indexes, the system scans each needed index and - prepares a bitmap in memory giving the locations of + prepares a bitmap in memory giving the locations of table rows that are reported as matching that index's conditions. The bitmaps are then ANDed and ORed together as needed by the query. Finally, the actual table rows are visited and returned. The table rows are visited in physical order, because that is how the bitmap is laid out; this means that any ordering of the original indexes is lost, and so a separate sort step will be needed if the query has an ORDER - BY clause. For this reason, and because each additional index scan + BY clause. For this reason, and because each additional index scan adds extra time, the planner will sometimes choose to use a simple index scan even though additional indexes are available that could have been used as well. @@ -603,19 +603,19 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); indexes are best, but sometimes it's better to create separate indexes and rely on the index-combination feature. For example, if your workload includes a mix of queries that sometimes involve only column - x, sometimes only column y, and sometimes both + x, sometimes only column y, and sometimes both columns, you might choose to create two separate indexes on - x and y, relying on index combination to + x and y, relying on index combination to process the queries that use both columns. You could also create a - multicolumn index on (x, y). This index would typically be + multicolumn index on (x, y). This index would typically be more efficient than index combination for queries involving both columns, but as discussed in , it - would be almost useless for queries involving only y, so it + would be almost useless for queries involving only y, so it should not be the only index. A combination of the multicolumn index - and a separate index on y would serve reasonably well. For - queries involving only x, the multicolumn index could be + and a separate index on y would serve reasonably well. For + queries involving only x, the multicolumn index could be used, though it would be larger and hence slower than an index on - x alone. The last alternative is to create all three + x alone. The last alternative is to create all three indexes, but this is probably only reasonable if the table is searched much more often than it is updated and all three types of query are common. If one of the types of query is much less common than the @@ -698,9 +698,9 @@ CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1)); - If we were to declare this index UNIQUE, it would prevent - creation of rows whose col1 values differ only in case, - as well as rows whose col1 values are actually identical. + If we were to declare this index UNIQUE, it would prevent + creation of rows whose col1 values differ only in case, + as well as rows whose col1 values are actually identical. Thus, indexes on expressions can be used to enforce constraints that are not definable as simple unique constraints. 
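A minimal sketch of the case-insensitive uniqueness constraint just described, reusing the test1, col1, and index names from the text; the column type is assumed:

CREATE TABLE test1 (col1 text);       -- column type assumed
CREATE UNIQUE INDEX test1_lower_col1_idx ON test1 (lower(col1));

INSERT INTO test1 VALUES ('Foo');     -- accepted
INSERT INTO test1 VALUES ('FOO');     -- rejected: lower('FOO') duplicates lower('Foo')
INSERT INTO test1 VALUES ('Foo');     -- rejected: an exact duplicate is caught as well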
@@ -717,7 +717,7 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name)); - The syntax of the CREATE INDEX command normally requires + The syntax of the CREATE INDEX command normally requires writing parentheses around index expressions, as shown in the second example. The parentheses can be omitted when the expression is just a function call, as in the first example. @@ -727,9 +727,9 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name)); Index expressions are relatively expensive to maintain, because the derived expression(s) must be computed for each row upon insertion and whenever it is updated. However, the index expressions are - not recomputed during an indexed search, since they are + not recomputed during an indexed search, since they are already stored in the index. In both examples above, the system - sees the query as just WHERE indexedcolumn = 'constant' + sees the query as just WHERE indexedcolumn = 'constant' and so the speed of the search is equivalent to any other simple index query. Thus, indexes on expressions are useful when retrieval speed is more important than insertion and update speed. @@ -856,12 +856,12 @@ CREATE INDEX orders_unbilled_index ON orders (order_nr) SELECT * FROM orders WHERE billed is not true AND order_nr < 10000; However, the index can also be used in queries that do not involve - order_nr at all, e.g.: + order_nr at all, e.g.: SELECT * FROM orders WHERE billed is not true AND amount > 5000.00; This is not as efficient as a partial index on the - amount column would be, since the system has to + amount column would be, since the system has to scan the entire index. Yet, if there are relatively few unbilled orders, using this partial index just to find the unbilled orders could be a win. @@ -886,7 +886,7 @@ SELECT * FROM orders WHERE order_nr = 3501; predicate must match the conditions used in the queries that are supposed to benefit from the index. To be precise, a partial index can be used in a query only if the system can recognize that - the WHERE condition of the query mathematically implies + the WHERE condition of the query mathematically implies the predicate of the index. PostgreSQL does not have a sophisticated theorem prover that can recognize mathematically equivalent @@ -896,7 +896,7 @@ SELECT * FROM orders WHERE order_nr = 3501; The system can recognize simple inequality implications, for example x < 1 implies x < 2; otherwise the predicate condition must exactly match part of the query's - WHERE condition + WHERE condition or the index will not be recognized as usable. Matching takes place at query planning time, not at run time. As a result, parameterized query clauses do not work with a partial index. For @@ -919,9 +919,9 @@ SELECT * FROM orders WHERE order_nr = 3501; Suppose that we have a table describing test outcomes. We wish - to ensure that there is only one successful entry for + to ensure that there is only one successful entry for a given subject and target combination, but there might be any number of - unsuccessful entries. Here is one way to do it: + unsuccessful entries. Here is one way to do it: CREATE TABLE tests ( subject text, @@ -944,7 +944,7 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) distributions might cause the system to use an index when it really should not. In that case the index can be set up so that it is not available for the offending query. 
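To illustrate the predicate-matching rule for partial indexes discussed above, a minimal sketch; it reuses the orders, order_nr, and billed names from the text, while the rest of the table layout is assumed:

CREATE TABLE orders (order_nr integer, amount numeric, billed boolean);   -- layout assumed
CREATE INDEX orders_unbilled_index ON orders (order_nr)
    WHERE billed is not true;

-- Usable: the query's WHERE clause implies the index predicate.
SELECT * FROM orders WHERE billed is not true AND order_nr < 10000;

-- Not usable: order_nr = 3501 says nothing about billed, so the planner
-- cannot prove the predicate holds and must use some other plan.
SELECT * FROM orders WHERE order_nr = 3501;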
Normally, - PostgreSQL makes reasonable choices about index + PostgreSQL makes reasonable choices about index usage (e.g., it avoids them when retrieving common values, so the earlier example really only saves index size, it is not required to avoid index usage), and grossly incorrect plan choices are cause @@ -956,7 +956,7 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) know at least as much as the query planner knows, in particular you know when an index might be profitable. Forming this knowledge requires experience and understanding of how indexes in - PostgreSQL work. In most cases, the advantage of a + PostgreSQL work. In most cases, the advantage of a partial index over a regular index will be minimal. @@ -998,8 +998,8 @@ CREATE INDEX name ON table the proper class when making an index. The operator class determines the basic sort ordering (which can then be modified by adding sort options COLLATE, - ASC/DESC and/or - NULLS FIRST/NULLS LAST). + ASC/DESC and/or + NULLS FIRST/NULLS LAST). @@ -1025,8 +1025,8 @@ CREATE INDEX name ON table CREATE INDEX test_index ON test_table (col varchar_pattern_ops); Note that you should also create an index with the default operator - class if you want queries involving ordinary <, - <=, >, or >= comparisons + class if you want queries involving ordinary <, + <=, >, or >= comparisons to use an index. Such queries cannot use the xxx_pattern_ops operator classes. (Ordinary equality comparisons can use these @@ -1057,7 +1057,7 @@ SELECT am.amname AS index_method, An operator class is actually just a subset of a larger structure called an - operator family. In cases where several data types have + operator family. In cases where several data types have similar behaviors, it is frequently useful to define cross-data-type operators and allow these to work with indexes. To do this, the operator classes for each of the types must be grouped into the same operator @@ -1147,13 +1147,13 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); - All indexes in PostgreSQL are secondary + All indexes in PostgreSQL are secondary indexes, meaning that each index is stored separately from the table's - main data area (which is called the table's heap - in PostgreSQL terminology). This means that in an + main data area (which is called the table's heap + in PostgreSQL terminology). This means that in an ordinary index scan, each row retrieval requires fetching data from both the index and the heap. Furthermore, while the index entries that match a - given indexable WHERE condition are usually close together in + given indexable WHERE condition are usually close together in the index, the table rows they reference might be anywhere in the heap. The heap-access portion of an index scan thus involves a lot of random access into the heap, which can be slow, particularly on traditional @@ -1163,8 +1163,8 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); - To solve this performance problem, PostgreSQL - supports index-only scans, which can answer queries from an + To solve this performance problem, PostgreSQL + supports index-only scans, which can answer queries from an index alone without any heap access. The basic idea is to return values directly out of each index entry instead of consulting the associated heap entry. 
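As a sketch of what the following paragraphs spell out in detail, assuming the tab, x, y, and z names used in the text with column types assumed; whether EXPLAIN actually chooses this plan depends on statistics and on the visibility map, so it is a possibility, not a guarantee:

CREATE TABLE tab (x text, y text, z int);   -- column types assumed
CREATE INDEX tab_x_y_idx ON tab (x, y);
VACUUM tab;                                 -- sets visibility map bits, which index-only scans rely on

-- Every column the query touches is stored in the index, so EXPLAIN may
-- report an Index Only Scan instead of an ordinary index or sequential scan.
EXPLAIN SELECT y FROM tab WHERE x = 'key';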
There are two fundamental restrictions on when this method can be @@ -1187,8 +1187,8 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); The query must reference only columns stored in the index. For - example, given an index on columns x and y of a - table that also has a column z, these queries could use + example, given an index on columns x and y of a + table that also has a column z, these queries could use index-only scans: SELECT x, y FROM tab WHERE x = 'key'; @@ -1210,17 +1210,17 @@ SELECT x FROM tab WHERE x = 'key' AND z < 42; If these two fundamental requirements are met, then all the data values required by the query are available from the index, so an index-only scan is physically possible. But there is an additional requirement for any - table scan in PostgreSQL: it must verify that each - retrieved row be visible to the query's MVCC snapshot, as + table scan in PostgreSQL: it must verify that each + retrieved row be visible to the query's MVCC snapshot, as discussed in . Visibility information is not stored in index entries, only in heap entries; so at first glance it would seem that every row retrieval would require a heap access anyway. And this is indeed the case, if the table row has been modified recently. However, for seldom-changing data there is a way around this - problem. PostgreSQL tracks, for each page in a table's + problem. PostgreSQL tracks, for each page in a table's heap, whether all rows stored in that page are old enough to be visible to all current and future transactions. This information is stored in a bit - in the table's visibility map. An index-only scan, after + in the table's visibility map. An index-only scan, after finding a candidate index entry, checks the visibility map bit for the corresponding heap page. If it's set, the row is known visible and so the data can be returned with no further work. If it's not set, the heap @@ -1243,48 +1243,48 @@ SELECT x FROM tab WHERE x = 'key' AND z < 42; To make effective use of the index-only scan feature, you might choose to create indexes in which only the leading columns are meant to - match WHERE clauses, while the trailing columns - hold payload data to be returned by a query. For example, if + match WHERE clauses, while the trailing columns + hold payload data to be returned by a query. For example, if you commonly run queries like SELECT y FROM tab WHERE x = 'key'; the traditional approach to speeding up such queries would be to create an - index on x only. However, an index on (x, y) + index on x only. However, an index on (x, y) would offer the possibility of implementing this query as an index-only scan. As previously discussed, such an index would be larger and hence - more expensive than an index on x alone, so this is attractive + more expensive than an index on x alone, so this is attractive only if the table is known to be mostly static. Note it's important that - the index be declared on (x, y) not (y, x), as for + the index be declared on (x, y) not (y, x), as for most index types (particularly B-trees) searches that do not constrain the leading index columns are not very efficient. In principle, index-only scans can be used with expression indexes. - For example, given an index on f(x) where x is a + For example, given an index on f(x) where x is a table column, it should be possible to execute SELECT f(x) FROM tab WHERE f(x) < 1; - as an index-only scan; and this is very attractive if f() is - an expensive-to-compute function. 
However, PostgreSQL's + as an index-only scan; and this is very attractive if f() is + an expensive-to-compute function. However, PostgreSQL's planner is currently not very smart about such cases. It considers a query to be potentially executable by index-only scan only when - all columns needed by the query are available from the index. - In this example, x is not needed except in the - context f(x), but the planner does not notice that and + all columns needed by the query are available from the index. + In this example, x is not needed except in the + context f(x), but the planner does not notice that and concludes that an index-only scan is not possible. If an index-only scan seems sufficiently worthwhile, this can be worked around by declaring the - index to be on (f(x), x), where the second column is not + index to be on (f(x), x), where the second column is not expected to be used in practice but is just there to convince the planner that an index-only scan is possible. An additional caveat, if the goal is - to avoid recalculating f(x), is that the planner won't - necessarily match uses of f(x) that aren't in - indexable WHERE clauses to the index column. It will usually + to avoid recalculating f(x), is that the planner won't + necessarily match uses of f(x) that aren't in + indexable WHERE clauses to the index column. It will usually get this right in simple queries such as shown above, but not in queries that involve joins. These deficiencies may be remedied in future versions - of PostgreSQL. + of PostgreSQL. @@ -1299,13 +1299,13 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) SELECT target FROM tests WHERE subject = 'some-subject' AND success; - But there's a problem: the WHERE clause refers - to success which is not available as a result column of the + But there's a problem: the WHERE clause refers + to success which is not available as a result column of the index. Nonetheless, an index-only scan is possible because the plan does - not need to recheck that part of the WHERE clause at run time: - all entries found in the index necessarily have success = true + not need to recheck that part of the WHERE clause at run time: + all entries found in the index necessarily have success = true so this need not be explicitly checked in the - plan. PostgreSQL versions 9.6 and later will recognize + plan. PostgreSQL versions 9.6 and later will recognize such cases and allow index-only scans to be generated, but older versions will not. @@ -1321,7 +1321,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; - Although indexes in PostgreSQL do not need + Although indexes in PostgreSQL do not need maintenance or tuning, it is still important to check which indexes are actually used by the real-life query workload. Examining index usage for an individual query is done with the @@ -1388,8 +1388,8 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; their use. There are run-time parameters that can turn off various plan types (see ). For instance, turning off sequential scans - (enable_seqscan) and nested-loop joins - (enable_nestloop), which are the most basic plans, + (enable_seqscan) and nested-loop joins + (enable_nestloop), which are the most basic plans, will force the system to use a different plan. 
If the system still chooses a sequential scan or nested-loop join then there is probably a more fundamental reason why the index is not being @@ -1428,7 +1428,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; If you do not succeed in adjusting the costs to be more appropriate, then you might have to resort to forcing index usage explicitly. You might also want to contact the - PostgreSQL developers to examine the issue. + PostgreSQL developers to examine the issue. diff --git a/doc/src/sgml/info.sgml b/doc/src/sgml/info.sgml index 233ba0e668..6b9f1b5d81 100644 --- a/doc/src/sgml/info.sgml +++ b/doc/src/sgml/info.sgml @@ -15,9 +15,9 @@ The PostgreSQL wiki contains the project's FAQ + url="https://wiki.postgresql.org/wiki/Frequently_Asked_Questions">FAQ (Frequently Asked Questions) list, TODO list, and + url="https://wiki.postgresql.org/wiki/Todo">TODO list, and detailed information about many more topics. @@ -42,7 +42,7 @@ The mailing lists are a good place to have your questions answered, to share experiences with other users, and to contact - the developers. Consult the PostgreSQL web site + the developers. Consult the PostgreSQL web site for details. diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml index e07ff35bca..58c54254d7 100644 --- a/doc/src/sgml/information_schema.sgml +++ b/doc/src/sgml/information_schema.sgml @@ -35,12 +35,12 @@ This problem can appear when querying information schema views such - as check_constraint_routine_usage, - check_constraints, domain_constraints, and - referential_constraints. Some other views have similar + as check_constraint_routine_usage, + check_constraints, domain_constraints, and + referential_constraints. Some other views have similar issues but contain the table name to help distinguish duplicate - rows, e.g., constraint_column_usage, - constraint_table_usage, table_constraints. + rows, e.g., constraint_column_usage, + constraint_table_usage, table_constraints. @@ -384,19 +384,19 @@ character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -535,25 +535,25 @@ scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -572,7 +572,7 @@ is_derived_reference_attribute yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -1256,7 +1256,7 @@ The view columns contains information about all table columns (or view columns) in the database. System columns - (oid, etc.) are not included. Only those columns are + (oid, etc.) are not included. 
Only those columns are shown that the current user has access to (by way of being the owner or having some privilege). @@ -1441,19 +1441,19 @@ character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -1540,25 +1540,25 @@ scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -1577,7 +1577,7 @@ is_self_referencing yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -1648,13 +1648,13 @@ is_generated character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL generation_expression character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -2152,19 +2152,19 @@ character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -2300,25 +2300,25 @@ scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -2442,31 +2442,31 @@ ORDER BY c.ordinal_position; character_maximum_length cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL character_octet_length cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in 
PostgreSQL + Applies to a feature not available in PostgreSQL @@ -2501,37 +2501,37 @@ ORDER BY c.ordinal_position; numeric_precision cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL numeric_precision_radix cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL numeric_scale cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL datetime_precision cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL interval_type character_data - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL interval_precision cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL @@ -2569,25 +2569,25 @@ ORDER BY c.ordinal_position; scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -3160,13 +3160,13 @@ ORDER BY c.ordinal_position; is_result yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL as_locator yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -3191,85 +3191,85 @@ ORDER BY c.ordinal_position; character_maximum_length cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL character_octet_length cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_catalog sql_identifier - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL collation_schema 
sql_identifier - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL collation_name sql_identifier - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL numeric_precision cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL numeric_precision_radix cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL numeric_scale cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL datetime_precision cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL interval_type character_data - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL interval_precision cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL @@ -3301,25 +3301,25 @@ ORDER BY c.ordinal_position; scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -4045,37 +4045,37 @@ ORDER BY c.ordinal_position; module_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL module_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL module_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL udt_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL udt_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL udt_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4094,85 +4094,85 @@ ORDER BY c.ordinal_position; character_maximum_length cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL character_octet_length cardinal_number - Always null, since this information is not applied to return 
data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_catalog sql_identifier - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL collation_schema sql_identifier - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL collation_name sql_identifier - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL numeric_precision cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL numeric_precision_radix cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL numeric_scale cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL datetime_precision cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL interval_type character_data - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL interval_precision cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL @@ -4204,25 +4204,25 @@ ORDER BY c.ordinal_position; scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -4283,7 +4283,7 @@ ORDER BY c.ordinal_position; character_data Always GENERAL (The SQL standard defines - other parameter styles, which are not available in PostgreSQL.) + other parameter styles, which are not available in PostgreSQL.) @@ -4294,7 +4294,7 @@ ORDER BY c.ordinal_position; If the function is declared immutable (called deterministic in the SQL standard), then YES, else NO. (You cannot query the other volatility - levels available in PostgreSQL through the information schema.) 
+ levels available in PostgreSQL through the information schema.) @@ -4304,7 +4304,7 @@ ORDER BY c.ordinal_position; Always MODIFIES, meaning that the function possibly modifies SQL data. This information is not useful for - PostgreSQL. + PostgreSQL. @@ -4321,7 +4321,7 @@ ORDER BY c.ordinal_position; sql_path character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4330,26 +4330,26 @@ ORDER BY c.ordinal_position; Always YES (The opposite would be a method of a user-defined type, which is a feature not available in - PostgreSQL.) + PostgreSQL.) max_dynamic_result_sets cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL is_user_defined_cast yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL is_implicitly_invocable yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4366,43 +4366,43 @@ ORDER BY c.ordinal_position; to_sql_specific_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL to_sql_specific_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL to_sql_specific_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL as_locator yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL created time_stamp - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL last_altered time_stamp - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL new_savepoint_level yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4411,152 +4411,152 @@ ORDER BY c.ordinal_position; Currently always NO. The alternative YES applies to a feature not available in - PostgreSQL. + PostgreSQL. 
result_cast_from_data_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_as_locator yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_max_length cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_octet_length character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_collation_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_collation_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_collation_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_numeric_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_numeric_precision_radix cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_numeric_scale cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_datetime_precision character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_interval_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_interval_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_type_udt_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_type_udt_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_type_udt_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_maximum_cardinality cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_dtd_identifier sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4606,25 +4606,25 @@ ORDER BY c.ordinal_position; 
default_character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL default_character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL default_character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL sql_path character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4808,7 +4808,7 @@ ORDER BY c.ordinal_position; yes_or_no YES if the feature is fully supported by the - current version of PostgreSQL, NO if not + current version of PostgreSQL, NO if not @@ -4816,7 +4816,7 @@ ORDER BY c.ordinal_position; is_verified_bycharacter_data - Always null, since the PostgreSQL development group does not + Always null, since the PostgreSQL development group does not perform formal testing of feature conformance @@ -4982,7 +4982,7 @@ ORDER BY c.ordinal_position; character_data The programming language, if the binding style is - EMBEDDED, else null. PostgreSQL only + EMBEDDED, else null. PostgreSQL only supports the language C. @@ -5031,7 +5031,7 @@ ORDER BY c.ordinal_position; yes_or_no YES if the package is fully supported by the - current version of PostgreSQL, NO if not + current version of PostgreSQL, NO if not @@ -5039,7 +5039,7 @@ ORDER BY c.ordinal_position; is_verified_bycharacter_data - Always null, since the PostgreSQL development group does not + Always null, since the PostgreSQL development group does not perform formal testing of feature conformance @@ -5093,7 +5093,7 @@ ORDER BY c.ordinal_position; yes_or_no YES if the part is fully supported by the - current version of PostgreSQL, + current version of PostgreSQL, NO if not @@ -5102,7 +5102,7 @@ ORDER BY c.ordinal_position; is_verified_bycharacter_data - Always null, since the PostgreSQL development group does not + Always null, since the PostgreSQL development group does not perform formal testing of feature conformance @@ -5182,7 +5182,7 @@ ORDER BY c.ordinal_position; The table sql_sizing_profiles contains information about the sql_sizing values that are - required by various profiles of the SQL standard. PostgreSQL does + required by various profiles of the SQL standard. PostgreSQL does not track any SQL profiles, so this table is empty. 
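As an aside, the conformance views described above can be inspected directly; a minimal sketch using the sql_features view and its standard column names:

SELECT feature_id, feature_name, is_supported
FROM information_schema.sql_features
ORDER BY feature_id;

The same style of query works for the other views in this group, such as sql_parts.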
@@ -5465,13 +5465,13 @@ ORDER BY c.ordinal_position; self_referencing_column_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL reference_generation character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -5806,31 +5806,31 @@ ORDER BY c.ordinal_position; action_reference_old_table sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL action_reference_new_table sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL action_reference_old_row sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL action_reference_new_row sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL created time_stamp - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -5864,7 +5864,7 @@ ORDER BY c.ordinal_position; - Prior to PostgreSQL 9.1, this view's columns + Prior to PostgreSQL 9.1, this view's columns action_timing, action_reference_old_table, action_reference_new_table, @@ -6113,151 +6113,151 @@ ORDER BY c.ordinal_position; is_instantiable yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL is_final yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_form character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_category character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_routine_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_routine_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_routine_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL reference_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL data_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_maximum_length cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_octet_length cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_name sql_identifier - 
Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL numeric_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL numeric_precision_radix cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL numeric_scale cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL datetime_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL interval_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL interval_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL source_dtd_identifier sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ref_dtd_identifier sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -6660,7 +6660,7 @@ ORDER BY c.ordinal_position; check_option character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -6686,8 +6686,8 @@ ORDER BY c.ordinal_position; is_trigger_updatable yes_or_no - YES if the view has an INSTEAD OF - UPDATE trigger defined on it, NO if not + YES if the view has an INSTEAD OF + UPDATE trigger defined on it, NO if not @@ -6695,8 +6695,8 @@ ORDER BY c.ordinal_position; is_trigger_deletable yes_or_no - YES if the view has an INSTEAD OF - DELETE trigger defined on it, NO if not + YES if the view has an INSTEAD OF + DELETE trigger defined on it, NO if not @@ -6704,8 +6704,8 @@ ORDER BY c.ordinal_position; is_trigger_insertable_intoyes_or_no - YES if the view has an INSTEAD OF - INSERT trigger defined on it, NO if not + YES if the view has an INSTEAD OF + INSERT trigger defined on it, NO if not diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml index 696c620b18..029e1dbc28 100644 --- a/doc/src/sgml/install-windows.sgml +++ b/doc/src/sgml/install-windows.sgml @@ -84,13 +84,13 @@ Microsoft Windows SDK version 6.0a to 8.1 or Visual Studio 2008 and above. Compilation is supported down to Windows XP and - Windows Server 2003 when building with - Visual Studio 2005 to + Windows Server 2003 when building with + Visual Studio 2005 to Visual Studio 2013. Building with Visual Studio 2015 is supported down to - Windows Vista and Windows Server 2008. + Windows Vista and Windows Server 2008. Building with Visual Studio 2017 is supported - down to Windows 7 SP1 and Windows Server 2008 R2 SP1. + down to Windows 7 SP1 and Windows Server 2008 R2 SP1. @@ -163,7 +163,7 @@ $ENV{MSBFLAGS}="/m"; Microsoft Windows SDK it is recommended that you upgrade to the latest version (currently version 7.1), available for download from - . + . You must always include the @@ -182,7 +182,7 @@ $ENV{MSBFLAGS}="/m"; ActiveState Perl is required to run the build generation scripts. MinGW or Cygwin Perl will not work. It must also be present in the PATH. Binaries can be downloaded from - + (Note: version 5.8.3 or later is required, the free Standard Distribution is sufficient). 
@@ -219,7 +219,7 @@ $ENV{MSBFLAGS}="/m"; Both Bison and Flex are included in the msys tool suite, available - from as part of the + from as part of the MinGW compiler suite. @@ -259,7 +259,7 @@ $ENV{MSBFLAGS}="/m"; Diff Diff is required to run the regression tests, and can be downloaded - from . + from . @@ -267,7 +267,7 @@ $ENV{MSBFLAGS}="/m"; Gettext Gettext is required to build with NLS support, and can be downloaded - from . Note that binaries, + from . Note that binaries, dependencies and developer files are all needed. @@ -277,7 +277,7 @@ $ENV{MSBFLAGS}="/m"; Required for GSSAPI authentication support. MIT Kerberos can be downloaded from - . + . @@ -286,8 +286,8 @@ $ENV{MSBFLAGS}="/m"; libxslt Required for XML support. Binaries can be downloaded from - or source from - . Note that libxml2 requires iconv, + or source from + . Note that libxml2 requires iconv, which is available from the same download location. @@ -296,8 +296,8 @@ $ENV{MSBFLAGS}="/m"; openssl Required for SSL support. Binaries can be downloaded from - - or source from . + + or source from . @@ -306,7 +306,7 @@ $ENV{MSBFLAGS}="/m"; Required for UUID-OSSP support (contrib only). Source can be downloaded from - . + . @@ -314,7 +314,7 @@ $ENV{MSBFLAGS}="/m"; Python Required for building PL/Python. Binaries can - be downloaded from . + be downloaded from . @@ -323,7 +323,7 @@ $ENV{MSBFLAGS}="/m"; Required for compression support in pg_dump and pg_restore. Binaries can be downloaded - from . + from . @@ -347,8 +347,8 @@ $ENV{MSBFLAGS}="/m"; - To use a server-side third party library such as python or - openssl, this library must also be + To use a server-side third party library such as python or + openssl, this library must also be 64-bit. There is no support for loading a 32-bit library in a 64-bit server. Several of the third party libraries that PostgreSQL supports may only be available in 32-bit versions, in which case they cannot be used with @@ -462,20 +462,20 @@ $ENV{CONFIG}="Debug"; Running the regression tests on client programs, with - vcregress bincheck, or on recovery tests, with - vcregress recoverycheck, requires an additional Perl module + vcregress bincheck, or on recovery tests, with + vcregress recoverycheck, requires an additional Perl module to be installed: IPC::Run - As of this writing, IPC::Run is not included in the + As of this writing, IPC::Run is not included in the ActiveState Perl installation, nor in the ActiveState Perl Package Manager (PPM) library. To install, download the - IPC-Run-<version>.tar.gz source archive from CPAN, - at , and - uncompress. Edit the buildenv.pl file, and add a PERL5LIB - variable to point to the lib subdirectory from the + IPC-Run-<version>.tar.gz source archive from CPAN, + at , and + uncompress. Edit the buildenv.pl file, and add a PERL5LIB + variable to point to the lib subdirectory from the extracted archive. For example: $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; @@ -498,7 +498,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; OpenJade 1.3.1-2 Download from - + and uncompress in the subdirectory openjade-1.3.1. @@ -507,7 +507,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; DocBook DTD 4.2 Download from - + and uncompress in the subdirectory docbook. @@ -516,7 +516,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; ISO character entities Download from - and + and uncompress in the subdirectory docbook. 
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index f4e4fc7c5e..f8e1d60356 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -52,17 +52,17 @@ su - postgres In general, a modern Unix-compatible platform should be able to run - PostgreSQL. + PostgreSQL. The platforms that had received specific testing at the time of release are listed in - below. In the doc subdirectory of the distribution - there are several platform-specific FAQ documents you + below. In the doc subdirectory of the distribution + there are several platform-specific FAQ documents you might wish to consult if you are having trouble. The following software packages are required for building - PostgreSQL: + PostgreSQL: @@ -71,9 +71,9 @@ su - postgres make - GNU make version 3.80 or newer is required; other - make programs or older GNU make versions will not work. - (GNU make is sometimes installed under + GNU make version 3.80 or newer is required; other + make programs or older GNU make versions will not work. + (GNU make is sometimes installed under the name gmake.) To test for GNU make enter: @@ -84,19 +84,19 @@ su - postgres - You need an ISO/ANSI C compiler (at least + You need an ISO/ANSI C compiler (at least C89-compliant). Recent - versions of GCC are recommended, but - PostgreSQL is known to build using a wide variety + versions of GCC are recommended, but + PostgreSQL is known to build using a wide variety of compilers from different vendors. - tar is required to unpack the source + tar is required to unpack the source distribution, in addition to either - gzip or bzip2. + gzip or bzip2. @@ -109,23 +109,23 @@ su - postgres libedit - The GNU Readline library is used by + The GNU Readline library is used by default. It allows psql (the PostgreSQL command line SQL interpreter) to remember each command you type, and allows you to use arrow keys to recall and edit previous commands. This is very helpful and is strongly recommended. If you don't want to use it then you must specify the option to - configure. As an alternative, you can often use the + configure. As an alternative, you can often use the BSD-licensed libedit library, originally developed on NetBSD. The libedit library is GNU Readline-compatible and is used if libreadline is not found, or if is used as an - option to configure. If you are using a package-based + option to configure. If you are using a package-based Linux distribution, be aware that you need both the - readline and readline-devel packages, if + readline and readline-devel packages, if those are separate in your distribution. @@ -140,8 +140,8 @@ su - postgres used by default. If you don't want to use it then you must specify the option to configure. Using this option disables - support for compressed archives in pg_dump and - pg_restore. + support for compressed archives in pg_dump and + pg_restore. @@ -179,14 +179,14 @@ su - postgres If you intend to make more than incidental use of PL/Perl, you should ensure that the Perl installation was built with the - usemultiplicity option enabled (perl -V + usemultiplicity option enabled (perl -V will show whether this is the case). - To build the PL/Python server programming + To build the PL/Python server programming language, you need a Python installation with the header files and the distutils module. The minimum @@ -209,15 +209,15 @@ su - postgres find a shared libpython. 
That might mean that you either have to install additional packages or rebuild (part of) your Python installation to provide this shared - library. When building from source, run Python's - configure with the --enable-shared flag. + library. When building from source, run Python's + configure with the --enable-shared flag. To build the PL/Tcl - procedural language, you of course need a Tcl + procedural language, you of course need a Tcl installation. The minimum required version is Tcl 8.4. @@ -228,13 +228,13 @@ su - postgres To enable Native Language Support (NLS), that is, the ability to display a program's messages in a language other than English, you need an implementation of the - Gettext API. Some operating + Gettext API. Some operating systems have this built-in (e.g., Linux, NetBSD, - Solaris), for other systems you + class="osname">Linux, NetBSD, + Solaris), for other systems you can download an add-on package from . - If you are using the Gettext implementation in + If you are using the Gettext implementation in the GNU C library then you will additionally need the GNU Gettext package for some utility programs. For any of the other implementations you will @@ -244,7 +244,7 @@ su - postgres - You need OpenSSL, if you want to support + You need OpenSSL, if you want to support encrypted client connections. The minimum required version is 0.9.8. @@ -252,8 +252,8 @@ su - postgres - You need Kerberos, OpenLDAP, - and/or PAM, if you want to support authentication + You need Kerberos, OpenLDAP, + and/or PAM, if you want to support authentication using those services. @@ -289,12 +289,12 @@ su - postgres yacc - GNU Flex and Bison + GNU Flex and Bison are needed to build from a Git checkout, or if you changed the actual scanner and parser definition files. If you need them, be sure - to get Flex 2.5.31 or later and - Bison 1.875 or later. Other lex - and yacc programs cannot be used. + to get Flex 2.5.31 or later and + Bison 1.875 or later. Other lex + and yacc programs cannot be used. @@ -303,10 +303,10 @@ su - postgres perl - Perl 5.8.3 or later is needed to build from a Git checkout, + Perl 5.8.3 or later is needed to build from a Git checkout, or if you changed the input files for any of the build steps that use Perl scripts. If building on Windows you will need - Perl in any case. Perl is + Perl in any case. Perl is also required to run some test suites. @@ -316,7 +316,7 @@ su - postgres If you need to get a GNU package, you can find it at your local GNU mirror site (see + url="http://www.gnu.org/order/ftp.html"> for a list) or at . @@ -337,7 +337,7 @@ su - postgres Getting The Source - The PostgreSQL &version; sources can be obtained from the + The PostgreSQL &version; sources can be obtained from the download section of our website: . You should get a file named postgresql-&version;.tar.gz @@ -351,7 +351,7 @@ su - postgres have the .bz2 file.) This will create a directory postgresql-&version; under the current directory - with the PostgreSQL sources. + with the PostgreSQL sources. Change into that directory for the rest of the installation procedure. @@ -377,7 +377,7 @@ su - postgres The first step of the installation procedure is to configure the source tree for your system and choose the options you would like. - This is done by running the configure script. For a + This is done by running the configure script. 
For a default installation simply enter: ./configure @@ -403,7 +403,7 @@ su - postgres The default configuration will build the server and utilities, as well as all client applications and interfaces that require only a C compiler. All files will be installed under - /usr/local/pgsql by default. + /usr/local/pgsql by default. @@ -413,14 +413,14 @@ su - postgres - + - Install all files under the directory PREFIX + Install all files under the directory PREFIX instead of /usr/local/pgsql. The actual files will be installed into various subdirectories; no files will ever be installed directly into the - PREFIX directory. + PREFIX directory. @@ -428,13 +428,13 @@ su - postgres individual subdirectories with the following options. However, if you leave these with their defaults, the installation will be relocatable, meaning you can move the directory after - installation. (The man and doc + installation. (The man and doc locations are not affected by this.) For relocatable installs, you might want to use - configure's --disable-rpath + configure's --disable-rpath option. Also, you will need to tell the operating system how to find the shared libraries. @@ -442,15 +442,15 @@ su - postgres - + You can install architecture-dependent files under a - different prefix, EXEC-PREFIX, than what - PREFIX was set to. This can be useful to + different prefix, EXEC-PREFIX, than what + PREFIX was set to. This can be useful to share architecture-independent files between hosts. If you - omit this, then EXEC-PREFIX is set equal to - PREFIX and both architecture-dependent and + omit this, then EXEC-PREFIX is set equal to + PREFIX and both architecture-dependent and independent files will be installed under the same tree, which is probably what you want. @@ -458,114 +458,114 @@ su - postgres - + Specifies the directory for executable programs. The default - is EXEC-PREFIX/bin, which - normally means /usr/local/pgsql/bin. + is EXEC-PREFIX/bin, which + normally means /usr/local/pgsql/bin. - + Sets the directory for various configuration files, - PREFIX/etc by default. + PREFIX/etc by default. - + Sets the location to install libraries and dynamically loadable modules. The default is - EXEC-PREFIX/lib. + EXEC-PREFIX/lib. - + Sets the directory for installing C and C++ header files. The - default is PREFIX/include. + default is PREFIX/include. - + Sets the root directory for various types of read-only data files. This only sets the default for some of the following options. The default is - PREFIX/share. + PREFIX/share. - + Sets the directory for read-only data files used by the installed programs. The default is - DATAROOTDIR. Note that this has + DATAROOTDIR. Note that this has nothing to do with where your database files will be placed. - + Sets the directory for installing locale data, in particular message translation catalog files. The default is - DATAROOTDIR/locale. + DATAROOTDIR/locale. - + - The man pages that come with PostgreSQL will be installed under + The man pages that come with PostgreSQL will be installed under this directory, in their respective - manx subdirectories. - The default is DATAROOTDIR/man. + manx subdirectories. + The default is DATAROOTDIR/man. - + Sets the root directory for installing documentation files, - except man pages. This only sets the default for + except man pages. This only sets the default for the following options. The default value for this option is - DATAROOTDIR/doc/postgresql. + DATAROOTDIR/doc/postgresql. 
- + The HTML-formatted documentation for PostgreSQL will be installed under this directory. The default is - DATAROOTDIR. + DATAROOTDIR. @@ -574,15 +574,15 @@ su - postgres Care has been taken to make it possible to install - PostgreSQL into shared installation locations + PostgreSQL into shared installation locations (such as /usr/local/include) without interfering with the namespace of the rest of the system. First, the string /postgresql is automatically appended to datadir, sysconfdir, and docdir, unless the fully expanded directory name already contains the - string postgres or - pgsql. For example, if you choose + string postgres or + pgsql. For example, if you choose /usr/local as prefix, the documentation will be installed in /usr/local/doc/postgresql, but if the prefix is /opt/postgres, then it @@ -602,10 +602,10 @@ su - postgres - + - Append STRING to the PostgreSQL version number. You + Append STRING to the PostgreSQL version number. You can use this, for example, to mark binaries built from unreleased Git snapshots or containing custom patches with an extra version string such as a git describe identifier or a @@ -615,35 +615,35 @@ su - postgres - + - DIRECTORIES is a colon-separated list of + DIRECTORIES is a colon-separated list of directories that will be added to the list the compiler searches for header files. If you have optional packages - (such as GNU Readline) installed in a non-standard + (such as GNU Readline) installed in a non-standard location, you have to use this option and probably also the corresponding - option. - Example: --with-includes=/opt/gnu/include:/usr/sup/include. + Example: --with-includes=/opt/gnu/include:/usr/sup/include. - + - DIRECTORIES is a colon-separated list of + DIRECTORIES is a colon-separated list of directories to search for libraries. You will probably have to use this option (and the corresponding - option) if you have packages installed in non-standard locations. - Example: --with-libraries=/opt/gnu/lib:/usr/sup/lib. + Example: --with-libraries=/opt/gnu/lib:/usr/sup/lib. @@ -657,7 +657,7 @@ su - postgres language other than English. LANGUAGES is an optional space-separated list of codes of the languages that you want supported, for - example --enable-nls='de fr'. (The intersection + example --enable-nls='de fr'. (The intersection between your list and the set of actually provided translations will be computed automatically.) If you do not specify a list, then all available translations are @@ -666,22 +666,22 @@ su - postgres To use this option, you will need an implementation of the - Gettext API; see above. + Gettext API; see above. - + - Set NUMBER as the default port number for + Set NUMBER as the default port number for server and clients. The default is 5432. The port can always be changed later on, but if you specify it here then both server and clients will have the same default compiled in, which can be very convenient. Usually the only good reason to select a non-default value is if you intend to run multiple - PostgreSQL servers on the same machine. + PostgreSQL servers on the same machine. @@ -690,7 +690,7 @@ su - postgres - Build the PL/Perl server-side language. + Build the PL/Perl server-side language. @@ -699,7 +699,7 @@ su - postgres - Build the PL/Python server-side language. + Build the PL/Python server-side language. @@ -708,7 +708,7 @@ su - postgres - Build the PL/Tcl server-side language. + Build the PL/Tcl server-side language. @@ -734,10 +734,10 @@ su - postgres Build with support for GSSAPI authentication. 
On many systems, the GSSAPI (usually a part of the Kerberos installation) system is not installed in a location that is searched by default (e.g., /usr/include, /usr/lib), so you must use the options --with-includes and --with-libraries in addition to this option.
- <filename>intarray</> Functions + <filename>intarray</filename> Functions @@ -59,7 +59,7 @@ sort(int[], text dir)sort int[] - sort array — dir must be asc or desc + sort array — dir must be asc or desc sort('{1,2,3}'::int[], 'desc') {3,2,1} @@ -99,7 +99,7 @@ idx(int[], int item)idx int - index of first element matching item (0 if none) + index of first element matching item (0 if none) idx(array[11,22,33,22,11], 22) 2 @@ -107,7 +107,7 @@ subarray(int[], int start, int len)subarray int[] - portion of array starting at position start, len elements + portion of array starting at position start, len elements subarray('{1,2,3,2,1}'::int[], 2, 3) {2,3,2} @@ -115,7 +115,7 @@ subarray(int[], int start) int[] - portion of array starting at position start + portion of array starting at position start subarray('{1,2,3,2,1}'::int[], 2) {2,3,2,1} @@ -133,7 +133,7 @@
- <filename>intarray</> Operators + <filename>intarray</filename> Operators @@ -148,17 +148,17 @@ int[] && int[] boolean - overlap — true if arrays have at least one common element + overlap — true if arrays have at least one common element int[] @> int[] boolean - contains — true if left array contains right array + contains — true if left array contains right array int[] <@ int[] boolean - contained — true if left array is contained in right array + contained — true if left array is contained in right array # int[] @@ -168,7 +168,7 @@ int[] # int int - index (same as idx function) + index (same as idx function) int[] + int @@ -208,28 +208,28 @@ int[] @@ query_int boolean - true if array satisfies query (see below) + true if array satisfies query (see below) query_int ~~ int[] boolean - true if array satisfies query (commutator of @@) + true if array satisfies query (commutator of @@)
- (Before PostgreSQL 8.2, the containment operators @> and - <@ were respectively called @ and ~. + (Before PostgreSQL 8.2, the containment operators @> and + <@ were respectively called @ and ~. These names are still available, but are deprecated and will eventually be retired. Notice that the old names are reversed from the convention formerly followed by the core geometric data types!) - The operators &&, @> and - <@ are equivalent to PostgreSQL's built-in + The operators &&, @> and + <@ are equivalent to PostgreSQL's built-in operators of the same names, except that they work only on integer arrays that do not contain nulls, while the built-in operators work for any array type. This restriction makes them faster than the built-in operators @@ -237,14 +237,14 @@ - The @@ and ~~ operators test whether an array - satisfies a query, which is expressed as a value of a - specialized data type query_int. A query + The @@ and ~~ operators test whether an array + satisfies a query, which is expressed as a value of a + specialized data type query_int. A query consists of integer values that are checked against the elements of - the array, possibly combined using the operators & - (AND), | (OR), and ! (NOT). Parentheses + the array, possibly combined using the operators & + (AND), | (OR), and ! (NOT). Parentheses can be used as needed. For example, - the query 1&(2|3) matches arrays that contain 1 + the query 1&(2|3) matches arrays that contain 1 and also contain either 2 or 3.
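 As a hedged illustration of the query_int semantics just described, the following minimal C/libpq sketch runs the same kind of expression from a client program and prints the boolean result. The connection string "dbname=test" is a placeholder, and the sketch assumes the intarray extension is already installed in the target database; it is not part of the intarray documentation itself.

/* Minimal sketch: evaluate an intarray query_int expression via libpq.
 * Assumes CREATE EXTENSION intarray has been run; "dbname=test" is a
 * placeholder connection string. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Does {1,2,4} satisfy 1&(2|3)?  Yes: 1 is present, and so is 2. */
    res = PQexec(conn, "SELECT '{1,2,4}'::int[] @@ '1&(2|3)'::query_int");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("result: %s\n", PQgetvalue(res, 0, 0));   /* prints "t" */
    else
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}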
@@ -253,16 +253,16 @@ Index Support - intarray provides index support for the - &&, @>, <@, - and @@ operators, as well as regular array equality. + intarray provides index support for the + &&, @>, <@, + and @@ operators, as well as regular array equality. Two GiST index operator classes are provided: - gist__int_ops (used by default) is suitable for + gist__int_ops (used by default) is suitable for small- to medium-size data sets, while - gist__intbig_ops uses a larger signature and is more + gist__intbig_ops uses a larger signature and is more suitable for indexing large data sets (i.e., columns containing a large number of distinct array values). The implementation uses an RD-tree data structure with @@ -271,7 +271,7 @@ There is also a non-default GIN operator class - gin__int_ops supporting the same operators. + gin__int_ops supporting the same operators. @@ -284,7 +284,7 @@ Example --- a message can be in one or more sections +-- a message can be in one or more sections CREATE TABLE message (mid INT PRIMARY KEY, sections INT[], ...); -- create specialized index @@ -305,9 +305,9 @@ SELECT message.mid FROM message WHERE message.sections @@ '1&2'::query_int; Benchmark - The source directory contrib/intarray/bench contains a + The source directory contrib/intarray/bench contains a benchmark test suite, which can be run against an installed - PostgreSQL server. (It also requires DBD::Pg + PostgreSQL server. (It also requires DBD::Pg to be installed.) To run: @@ -320,7 +320,7 @@ psql -c "CREATE EXTENSION intarray" TEST - The bench.pl script has numerous options, which + The bench.pl script has numerous options, which are displayed when it is run without any arguments. diff --git a/doc/src/sgml/intro.sgml b/doc/src/sgml/intro.sgml index f0dba6f56f..2fb19725f0 100644 --- a/doc/src/sgml/intro.sgml +++ b/doc/src/sgml/intro.sgml @@ -32,7 +32,7 @@ documents the SQL query language environment, including data types and functions, as well as user-level performance tuning. Every - PostgreSQL user should read this. + PostgreSQL user should read this. @@ -75,7 +75,7 @@ contains assorted information that might be of - use to PostgreSQL developers. + use to PostgreSQL developers.
diff --git a/doc/src/sgml/isn.sgml b/doc/src/sgml/isn.sgml index c1da702df6..329b7b2c54 100644 --- a/doc/src/sgml/isn.sgml +++ b/doc/src/sgml/isn.sgml @@ -123,7 +123,7 @@ UPC numbers are a subset of the EAN13 numbers (they are basically - EAN13 without the first 0 digit). + EAN13 without the first 0 digit). All UPC, ISBN, ISMN and ISSN numbers can be represented as EAN13 @@ -139,7 +139,7 @@ - The ISBN, ISMN, and ISSN types will display the + The ISBN, ISMN, and ISSN types will display the short version of the number (ISxN 10) whenever it's possible, and will show ISxN 13 format for numbers that do not fit in the short version. The EAN13, ISBN13, ISMN13 and @@ -152,7 +152,7 @@ Casts - The isn module provides the following pairs of type casts: + The isn module provides the following pairs of type casts: @@ -209,7 +209,7 @@ - When casting from EAN13 to another type, there is a run-time + When casting from EAN13 to another type, there is a run-time check that the value is within the domain of the other type, and an error is thrown if not. The other casts are simply relabelings that will always succeed. @@ -220,15 +220,15 @@ Functions and Operators - The isn module provides the standard comparison operators, + The isn module provides the standard comparison operators, plus B-tree and hash indexing support for all these data types. In addition there are several specialized functions; shown in . In this table, - isn means any one of the module's data types. + isn means any one of the module's data types. - <filename>isn</> Functions + <filename>isn</filename> Functions @@ -285,21 +285,21 @@ When you insert invalid numbers in a table using the weak mode, the number will be inserted with the corrected check digit, but it will be displayed - with an exclamation mark (!) at the end, for example - 0-11-000322-5!. This invalid marker can be checked with - the is_valid function and cleared with the - make_valid function. + with an exclamation mark (!) at the end, for example + 0-11-000322-5!. This invalid marker can be checked with + the is_valid function and cleared with the + make_valid function. You can also force the insertion of invalid numbers even when not in the - weak mode, by appending the ! character at the end of the + weak mode, by appending the ! character at the end of the number. Another special feature is that during input, you can write - ? in place of the check digit, and the correct check digit + ? in place of the check digit, and the correct check digit will be inserted automatically. @@ -384,7 +384,7 @@ SELECT isbn13(id) FROM test; This module was inspired by Garrett A. Wollman's - isbn_issn code. + isbn_issn code. diff --git a/doc/src/sgml/json.sgml b/doc/src/sgml/json.sgml index 7dfdf96764..05ecef2ffc 100644 --- a/doc/src/sgml/json.sgml +++ b/doc/src/sgml/json.sgml @@ -1,7 +1,7 @@ - <acronym>JSON</> Types + <acronym>JSON</acronym> Types JSON @@ -22,25 +22,25 @@ - There are two JSON data types: json and jsonb. - They accept almost identical sets of values as + There are two JSON data types: json and jsonb. + They accept almost identical sets of values as input. The major practical difference is one of efficiency. 
The - json data type stores an exact copy of the input text, + json data type stores an exact copy of the input text, which processing functions must reparse on each execution; while - jsonb data is stored in a decomposed binary format that + jsonb data is stored in a decomposed binary format that makes it slightly slower to input due to added conversion overhead, but significantly faster to process, since no reparsing - is needed. jsonb also supports indexing, which can be a + is needed. jsonb also supports indexing, which can be a significant advantage. - Because the json type stores an exact copy of the input text, it + Because the json type stores an exact copy of the input text, it will preserve semantically-insignificant white space between tokens, as well as the order of keys within JSON objects. Also, if a JSON object within the value contains the same key more than once, all the key/value pairs are kept. (The processing functions consider the last value as the - operative one.) By contrast, jsonb does not preserve white + operative one.) By contrast, jsonb does not preserve white space, does not preserve the order of object keys, and does not keep duplicate object keys. If duplicate keys are specified in the input, only the last value is kept. @@ -48,7 +48,7 @@ In general, most applications should prefer to store JSON data as - jsonb, unless there are quite specialized needs, such as + jsonb, unless there are quite specialized needs, such as legacy assumptions about ordering of object keys. @@ -64,15 +64,15 @@ RFC 7159 permits JSON strings to contain Unicode escape sequences - denoted by \uXXXX. In the input - function for the json type, Unicode escapes are allowed + denoted by \uXXXX. In the input + function for the json type, Unicode escapes are allowed regardless of the database encoding, and are checked only for syntactic - correctness (that is, that four hex digits follow \u). - However, the input function for jsonb is stricter: it disallows - Unicode escapes for non-ASCII characters (those above U+007F) - unless the database encoding is UTF8. The jsonb type also - rejects \u0000 (because that cannot be represented in - PostgreSQL's text type), and it insists + correctness (that is, that four hex digits follow \u). + However, the input function for jsonb is stricter: it disallows + Unicode escapes for non-ASCII characters (those above U+007F) + unless the database encoding is UTF8. The jsonb type also + rejects \u0000 (because that cannot be represented in + PostgreSQL's text type), and it insists that any use of Unicode surrogate pairs to designate characters outside the Unicode Basic Multilingual Plane be correct. Valid Unicode escapes are converted to the equivalent ASCII or UTF8 character for storage; @@ -84,8 +84,8 @@ Many of the JSON processing functions described in will convert Unicode escapes to regular characters, and will therefore throw the same types of errors - just described even if their input is of type json - not jsonb. The fact that the json input function does + just described even if their input is of type json + not jsonb. The fact that the json input function does not make these checks may be considered a historical artifact, although it does allow for simple storage (without processing) of JSON Unicode escapes in a non-UTF8 database encoding. 
In general, it is best to @@ -95,22 +95,22 @@ - When converting textual JSON input into jsonb, the primitive - types described by RFC 7159 are effectively mapped onto + When converting textual JSON input into jsonb, the primitive + types described by RFC 7159 are effectively mapped onto native PostgreSQL types, as shown in . Therefore, there are some minor additional constraints on what constitutes valid jsonb data that do not apply to the json type, nor to JSON in the abstract, corresponding to limits on what can be represented by the underlying data type. - Notably, jsonb will reject numbers that are outside the - range of the PostgreSQL numeric data - type, while json will not. Such implementation-defined - restrictions are permitted by RFC 7159. However, in + Notably, jsonb will reject numbers that are outside the + range of the PostgreSQL numeric data + type, while json will not. Such implementation-defined + restrictions are permitted by RFC 7159. However, in practice such problems are far more likely to occur in other - implementations, as it is common to represent JSON's number + implementations, as it is common to represent JSON's number primitive type as IEEE 754 double precision floating point - (which RFC 7159 explicitly anticipates and allows for). + (which RFC 7159 explicitly anticipates and allows for). When using JSON as an interchange format with such systems, the danger of losing numeric precision compared to data originally stored by PostgreSQL should be considered. @@ -134,23 +134,23 @@ - string - text - \u0000 is disallowed, as are non-ASCII Unicode + string + text + \u0000 is disallowed, as are non-ASCII Unicode escapes if database encoding is not UTF8 - number - numeric + number + numeric NaN and infinity values are disallowed - boolean - boolean + boolean + boolean Only lowercase true and false spellings are accepted - null + null (none) SQL NULL is a different concept @@ -162,10 +162,10 @@ JSON Input and Output Syntax The input/output syntax for the JSON data types is as specified in - RFC 7159. + RFC 7159. - The following are all valid json (or jsonb) expressions: + The following are all valid json (or jsonb) expressions: -- Simple scalar/primitive value -- Primitive values can be numbers, quoted strings, true, false, or null @@ -185,8 +185,8 @@ SELECT '{"foo": [true, "bar"], "tags": {"a": 1, "b": null}}'::json; As previously stated, when a JSON value is input and then printed without - any additional processing, json outputs the same text that was - input, while jsonb does not preserve semantically-insignificant + any additional processing, json outputs the same text that was + input, while jsonb does not preserve semantically-insignificant details such as whitespace. For example, note the differences here: SELECT '{"bar": "baz", "balance": 7.77, "active":false}'::json; @@ -202,9 +202,9 @@ SELECT '{"bar": "baz", "balance": 7.77, "active":false}'::jsonb; (1 row) One semantically-insignificant detail worth noting is that - in jsonb, numbers will be printed according to the behavior of the - underlying numeric type. In practice this means that numbers - entered with E notation will be printed without it, for + in jsonb, numbers will be printed according to the behavior of the + underlying numeric type. 
In practice this means that numbers + entered with E notation will be printed without it, for example: SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; @@ -213,7 +213,7 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; {"reading": 1.230e-5} | {"reading": 0.00001230} (1 row) - However, jsonb will preserve trailing fractional zeroes, as seen + However, jsonb will preserve trailing fractional zeroes, as seen in this example, even though those are semantically insignificant for purposes such as equality checks. @@ -231,7 +231,7 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; have a somewhat fixed structure. The structure is typically unenforced (though enforcing some business rules declaratively is possible), but having a predictable structure makes it easier to write - queries that usefully summarize a set of documents (datums) + queries that usefully summarize a set of documents (datums) in a table. @@ -249,7 +249,7 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; - <type>jsonb</> Containment and Existence + <type>jsonb</type> Containment and Existence jsonb containment @@ -259,10 +259,10 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; existence - Testing containment is an important capability of - jsonb. There is no parallel set of facilities for the - json type. Containment tests whether - one jsonb document has contained within it another one. + Testing containment is an important capability of + jsonb. There is no parallel set of facilities for the + json type. Containment tests whether + one jsonb document has contained within it another one. These examples return true except as noted: @@ -282,7 +282,7 @@ SELECT '[1, 2, 3]'::jsonb @> '[1, 2, 2]'::jsonb; -- within the object on the left side: SELECT '{"product": "PostgreSQL", "version": 9.4, "jsonb": true}'::jsonb @> '{"version": 9.4}'::jsonb; --- The array on the right side is not considered contained within the +-- The array on the right side is not considered contained within the -- array on the left, even though a similar array is nested within it: SELECT '[1, 2, [1, 3]]'::jsonb @> '[1, 3]'::jsonb; -- yields false @@ -319,10 +319,10 @@ SELECT '"bar"'::jsonb @> '["bar"]'::jsonb; -- yields false - jsonb also has an existence operator, which is + jsonb also has an existence operator, which is a variation on the theme of containment: it tests whether a string - (given as a text value) appears as an object key or array - element at the top level of the jsonb value. + (given as a text value) appears as an object key or array + element at the top level of the jsonb value. These examples return true except as noted: @@ -353,11 +353,11 @@ SELECT '"foo"'::jsonb ? 'foo'; Because JSON containment is nested, an appropriate query can skip explicit selection of sub-objects. As an example, suppose that we have - a doc column containing objects at the top level, with - most objects containing tags fields that contain arrays of + a doc column containing objects at the top level, with + most objects containing tags fields that contain arrays of sub-objects. 
This query finds entries in which sub-objects containing - both "term":"paris" and "term":"food" appear, - while ignoring any such keys outside the tags array: + both "term":"paris" and "term":"food" appear, + while ignoring any such keys outside the tags array: SELECT doc->'site_name' FROM websites WHERE doc @> '{"tags":[{"term":"paris"}, {"term":"food"}]}'; @@ -385,7 +385,7 @@ SELECT doc->'site_name' FROM websites - <type>jsonb</> Indexing + <type>jsonb</type> Indexing jsonb indexes on @@ -394,23 +394,23 @@ SELECT doc->'site_name' FROM websites GIN indexes can be used to efficiently search for keys or key/value pairs occurring within a large number of - jsonb documents (datums). - Two GIN operator classes are provided, offering different + jsonb documents (datums). + Two GIN operator classes are provided, offering different performance and flexibility trade-offs. - The default GIN operator class for jsonb supports queries with - top-level key-exists operators ?, ?& - and ?| operators and path/value-exists operator - @>. + The default GIN operator class for jsonb supports queries with + top-level key-exists operators ?, ?& + and ?| operators and path/value-exists operator + @>. (For details of the semantics that these operators implement, see .) An example of creating an index with this operator class is: CREATE INDEX idxgin ON api USING GIN (jdoc); - The non-default GIN operator class jsonb_path_ops - supports indexing the @> operator only. + The non-default GIN operator class jsonb_path_ops + supports indexing the @> operator only. An example of creating an index with this operator class is: CREATE INDEX idxginp ON api USING GIN (jdoc jsonb_path_ops); @@ -438,8 +438,8 @@ CREATE INDEX idxginp ON api USING GIN (jdoc jsonb_path_ops); ] } - We store these documents in a table named api, - in a jsonb column named jdoc. + We store these documents in a table named api, + in a jsonb column named jdoc. If a GIN index is created on this column, queries like the following can make use of the index: @@ -447,23 +447,23 @@ CREATE INDEX idxginp ON api USING GIN (jdoc jsonb_path_ops); SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company": "Magnafone"}'; However, the index could not be used for queries like the - following, because though the operator ? is indexable, - it is not applied directly to the indexed column jdoc: + following, because though the operator ? is indexable, + it is not applied directly to the indexed column jdoc: -- Find documents in which the key "tags" contains key or array element "qui" SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc -> 'tags' ? 'qui'; Still, with appropriate use of expression indexes, the above query can use an index. If querying for particular items within - the "tags" key is common, defining an index like this + the "tags" key is common, defining an index like this may be worthwhile: CREATE INDEX idxgintags ON api USING GIN ((jdoc -> 'tags')); - Now, the WHERE clause jdoc -> 'tags' ? 'qui' + Now, the WHERE clause jdoc -> 'tags' ? 'qui' will be recognized as an application of the indexable - operator ? to the indexed - expression jdoc -> 'tags'. + operator ? to the indexed + expression jdoc -> 'tags'. (More information on expression indexes can be found in .) 
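 For completeness, a client program could issue the containment query shown above with the pattern supplied as a query parameter rather than spliced into the SQL text. The sketch below is only an illustration: it assumes the api table and jsonb column jdoc from the example, uses a placeholder connection string, and relies on the standard libpq call PQexecParams.

/* Hypothetical client-side counterpart of the containment query above.
 * The pattern is passed as a text parameter and cast to jsonb in the SQL. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    const char *pattern = "{\"tags\": [\"qui\"]}";
    const char *params[1] = {pattern};
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    res = PQexecParams(conn,
                       "SELECT jdoc->'guid', jdoc->'name' FROM api "
                       "WHERE jdoc @> $1::jsonb",
                       1,      /* one parameter */
                       NULL,   /* let the server infer the parameter type */
                       params,
                       NULL, NULL,
                       0);     /* text-format results */

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s\t%s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    else
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}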
@@ -473,11 +473,11 @@ CREATE INDEX idxgintags ON api USING GIN ((jdoc -> 'tags')); -- Find documents in which the key "tags" contains array element "qui" SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qui"]}'; - A simple GIN index on the jdoc column can support this + A simple GIN index on the jdoc column can support this query. But note that such an index will store copies of every key and - value in the jdoc column, whereas the expression index + value in the jdoc column, whereas the expression index of the previous example stores only data found under - the tags key. While the simple-index approach is far more + the tags key. While the simple-index approach is far more flexible (since it supports queries about any key), targeted expression indexes are likely to be smaller and faster to search than a simple index. @@ -485,7 +485,7 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qu Although the jsonb_path_ops operator class supports - only queries with the @> operator, it has notable + only queries with the @> operator, it has notable performance advantages over the default operator class jsonb_ops. A jsonb_path_ops index is usually much smaller than a jsonb_ops @@ -503,7 +503,7 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qu data. - For this purpose, the term value includes array elements, + For this purpose, the term value includes array elements, though JSON terminology sometimes considers array elements distinct from values within objects. @@ -511,13 +511,13 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qu Basically, each jsonb_path_ops index item is a hash of the value and the key(s) leading to it; for example to index {"foo": {"bar": "baz"}}, a single index item would - be created incorporating all three of foo, bar, - and baz into the hash value. Thus a containment query + be created incorporating all three of foo, bar, + and baz into the hash value. Thus a containment query looking for this structure would result in an extremely specific index - search; but there is no way at all to find out whether foo + search; but there is no way at all to find out whether foo appears as a key. On the other hand, a jsonb_ops - index would create three index items representing foo, - bar, and baz separately; then to do the + index would create three index items representing foo, + bar, and baz separately; then to do the containment query, it would look for rows containing all three of these items. While GIN indexes can perform such an AND search fairly efficiently, it will still be less specific and slower than the @@ -531,15 +531,15 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qu that it produces no index entries for JSON structures not containing any values, such as {"a": {}}. If a search for documents containing such a structure is requested, it will require a - full-index scan, which is quite slow. jsonb_path_ops is + full-index scan, which is quite slow. jsonb_path_ops is therefore ill-suited for applications that often perform such searches. - jsonb also supports btree and hash + jsonb also supports btree and hash indexes. These are usually useful only if it's important to check equality of complete JSON documents. 
- The btree ordering for jsonb datums is seldom + The btree ordering for jsonb datums is seldom of great interest, but for completeness it is: Object > Array > Boolean > Number > String > Null diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 0aedd837dc..a7e2653371 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -13,23 +13,23 @@ libpq is the C - application programmer's interface to PostgreSQL. - libpq is a set of library functions that allow - client programs to pass queries to the PostgreSQL + application programmer's interface to PostgreSQL. + libpq is a set of library functions that allow + client programs to pass queries to the PostgreSQL backend server and to receive the results of these queries. - libpq is also the underlying engine for several - other PostgreSQL application interfaces, including - those written for C++, Perl, Python, Tcl and ECPG. - So some aspects of libpq's behavior will be + libpq is also the underlying engine for several + other PostgreSQL application interfaces, including + those written for C++, Perl, Python, Tcl and ECPG. + So some aspects of libpq's behavior will be important to you if you use one of those packages. In particular, , and describe behavior that is visible to the user of any application - that uses libpq. + that uses libpq. @@ -42,7 +42,7 @@ Client programs that use libpq must include the header file - libpq-fe.hlibpq-fe.h + libpq-fe.hlibpq-fe.h and must link with the libpq library. @@ -55,13 +55,13 @@ application program can have several backend connections open at one time. (One reason to do that is to access more than one database.) Each connection is represented by a - PGconnPGconn object, which - is obtained from the function PQconnectdb, - PQconnectdbParams, or - PQsetdbLogin. Note that these functions will always + PGconnPGconn object, which + is obtained from the function PQconnectdb, + PQconnectdbParams, or + PQsetdbLogin. Note that these functions will always return a non-null object pointer, unless perhaps there is too - little memory even to allocate the PGconn object. - The PQstatus function should be called to check + little memory even to allocate the PGconn object. + The PQstatus function should be called to check the return value for a successful connection before queries are sent via the connection object. @@ -70,7 +70,7 @@ On Unix, forking a process with open libpq connections can lead to unpredictable results because the parent and child processes share the same sockets and operating system resources. For this reason, - such usage is not recommended, though doing an exec from + such usage is not recommended, though doing an exec from the child process to load a new executable is safe. @@ -79,20 +79,20 @@ On Windows, there is a way to improve performance if a single database connection is repeatedly started and shutdown. Internally, - libpq calls WSAStartup() and WSACleanup() for connection startup - and shutdown, respectively. WSAStartup() increments an internal - Windows library reference count which is decremented by WSACleanup(). - When the reference count is just one, calling WSACleanup() frees + libpq calls WSAStartup() and WSACleanup() for connection startup + and shutdown, respectively. WSAStartup() increments an internal + Windows library reference count which is decremented by WSACleanup(). + When the reference count is just one, calling WSACleanup() frees all resources and all DLLs are unloaded. This is an expensive operation. 
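 Putting the connection-handling rules just described together (call a connect function, check PQstatus, and always free the PGconn with PQfinish, even after a failed attempt), a minimal client might look like the following sketch. The connection string and the compile command in the comment are placeholders, not part of the documented API.

/* A minimal libpq client; build with something like:
 *   cc testconn.c -o testconn -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres connect_timeout=5");

    /* PQconnectdb always returns a non-null pointer (barring out-of-memory),
     * so success must be checked with PQstatus. */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);     /* free the PGconn even on failure */
        return 1;
    }

    printf("connected to database %s\n", PQdb(conn));
    PQfinish(conn);
    return 0;
}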
To avoid this, an application can manually call - WSAStartup() so resources will not be freed when the last database + WSAStartup() so resources will not be freed when the last database connection is closed. - PQconnectdbParamsPQconnectdbParams + PQconnectdbParamsPQconnectdbParams Makes a new connection to the database server. @@ -109,9 +109,9 @@ PGconn *PQconnectdbParams(const char * const *keywords, from two NULL-terminated arrays. The first, keywords, is defined as an array of strings, each one being a key word. The second, values, gives the value - for each key word. Unlike PQsetdbLogin below, the parameter + for each key word. Unlike PQsetdbLogin below, the parameter set can be extended without changing the function signature, so use of - this function (or its nonblocking analogs PQconnectStartParams + this function (or its nonblocking analogs PQconnectStartParams and PQconnectPoll) is preferred for new application programming. @@ -157,7 +157,7 @@ PGconn *PQconnectdbParams(const char * const *keywords, - PQconnectdbPQconnectdb + PQconnectdbPQconnectdb Makes a new connection to the database server. @@ -184,7 +184,7 @@ PGconn *PQconnectdb(const char *conninfo); - PQsetdbLoginPQsetdbLogin + PQsetdbLoginPQsetdbLogin Makes a new connection to the database server. @@ -211,13 +211,13 @@ PGconn *PQsetdbLogin(const char *pghost, an = sign or has a valid connection URI prefix, it is taken as a conninfo string in exactly the same way as if it had been passed to PQconnectdb, and the remaining - parameters are then applied as specified for PQconnectdbParams. + parameters are then applied as specified for PQconnectdbParams. - PQsetdbPQsetdb + PQsetdbPQsetdb Makes a new connection to the database server. @@ -232,16 +232,16 @@ PGconn *PQsetdb(char *pghost, This is a macro that calls PQsetdbLogin with null pointers - for the login and pwd parameters. It is provided + for the login and pwd parameters. It is provided for backward compatibility with very old programs. - PQconnectStartParamsPQconnectStartParams - PQconnectStartPQconnectStart - PQconnectPollPQconnectPoll + PQconnectStartParamsPQconnectStartParams + PQconnectStartPQconnectStart + PQconnectPollPQconnectPoll nonblocking connection @@ -263,7 +263,7 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); that your application's thread of execution is not blocked on remote I/O whilst doing so. The point of this approach is that the waits for I/O to complete can occur in the application's main loop, rather than down inside - PQconnectdbParams or PQconnectdb, and so the + PQconnectdbParams or PQconnectdb, and so the application can manage this operation in parallel with other activities. @@ -287,7 +287,7 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); - The hostaddr and host parameters are used appropriately to ensure that + The hostaddr and host parameters are used appropriately to ensure that name and reverse name queries are not made. See the documentation of these parameters in for details. @@ -310,27 +310,27 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); - Note: use of PQconnectStartParams is analogous to - PQconnectStart shown below. + Note: use of PQconnectStartParams is analogous to + PQconnectStart shown below. - To begin a nonblocking connection request, call conn = PQconnectStart("connection_info_string"). - If conn is null, then libpq has been unable to allocate a new PGconn - structure. 
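 As a hedged sketch of the keyword/value style of connection shown above for PQconnectdbParams, the arrays below are parallel and terminated by a NULL keyword; the host and database names are placeholders.

/* Connecting with PQconnectdbParams using parallel keyword/value arrays. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    const char *const keywords[] = {"host", "dbname", "application_name", NULL};
    const char *const values[]   = {"db.example.com", "mydb", "demo", NULL};

    /* expand_dbname = 0: the dbname value is taken literally, not parsed
     * as a nested connection string. */
    PGconn *conn = PQconnectdbParams(keywords, values, 0);

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    else
        printf("connected as user %s\n", PQuser(conn));

    PQfinish(conn);
    return 0;
}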
Otherwise, a valid PGconn pointer is returned (though not yet + To begin a nonblocking connection request, call conn = PQconnectStart("connection_info_string"). + If conn is null, then libpq has been unable to allocate a new PGconn + structure. Otherwise, a valid PGconn pointer is returned (though not yet representing a valid connection to the database). On return from PQconnectStart, call status = PQstatus(conn). If status equals CONNECTION_BAD, PQconnectStart has failed. - If PQconnectStart succeeds, the next stage is to poll - libpq so that it can proceed with the connection sequence. + If PQconnectStart succeeds, the next stage is to poll + libpq so that it can proceed with the connection sequence. Use PQsocket(conn) to obtain the descriptor of the socket underlying the database connection. Loop thus: If PQconnectPoll(conn) last returned PGRES_POLLING_READING, wait until the socket is ready to - read (as indicated by select(), poll(), or + read (as indicated by select(), poll(), or similar system function). Then call PQconnectPoll(conn) again. Conversely, if PQconnectPoll(conn) last returned @@ -348,10 +348,10 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); At any time during connection, the status of the connection can be - checked by calling PQstatus. If this call returns CONNECTION_BAD, then the - connection procedure has failed; if the call returns CONNECTION_OK, then the + checked by calling PQstatus. If this call returns CONNECTION_BAD, then the + connection procedure has failed; if the call returns CONNECTION_OK, then the connection is ready. Both of these states are equally detectable - from the return value of PQconnectPoll, described above. Other states might also occur + from the return value of PQconnectPoll, described above. Other states might also occur during (and only during) an asynchronous connection procedure. These indicate the current stage of the connection procedure and might be useful to provide feedback to the user for example. These statuses are: @@ -472,7 +472,7 @@ switch(PQstatus(conn)) - PQconndefaultsPQconndefaults + PQconndefaultsPQconndefaults Returns the default connection options. @@ -501,7 +501,7 @@ typedef struct all possible PQconnectdb options and their current default values. The return value points to an array of PQconninfoOption structures, which ends - with an entry having a null keyword pointer. The + with an entry having a null keyword pointer. The null pointer is returned if memory could not be allocated. Note that the current default values (val fields) will depend on environment variables and other context. A @@ -519,7 +519,7 @@ typedef struct - PQconninfoPQconninfo + PQconninfoPQconninfo Returns the connection options used by a live connection. @@ -533,7 +533,7 @@ PQconninfoOption *PQconninfo(PGconn *conn); all possible PQconnectdb options and the values that were used to connect to the server. The return value points to an array of PQconninfoOption - structures, which ends with an entry having a null keyword + structures, which ends with an entry having a null keyword pointer. All notes above for PQconndefaults also apply to the result of PQconninfo. @@ -543,7 +543,7 @@ PQconninfoOption *PQconninfo(PGconn *conn); - PQconninfoParsePQconninfoParse + PQconninfoParsePQconninfoParse Returns parsed connection options from the provided connection string. 
@@ -555,12 +555,12 @@ PQconninfoOption *PQconninfoParse(const char *conninfo, char **errmsg); Parses a connection string and returns the resulting options as an - array; or returns NULL if there is a problem with the connection + array; or returns NULL if there is a problem with the connection string. This function can be used to extract the PQconnectdb options in the provided connection string. The return value points to an array of PQconninfoOption structures, which ends - with an entry having a null keyword pointer. + with an entry having a null keyword pointer. @@ -571,10 +571,10 @@ PQconninfoOption *PQconninfoParse(const char *conninfo, char **errmsg); - If errmsg is not NULL, then *errmsg is set - to NULL on success, else to a malloc'd error string explaining - the problem. (It is also possible for *errmsg to be - set to NULL and the function to return NULL; + If errmsg is not NULL, then *errmsg is set + to NULL on success, else to a malloc'd error string explaining + the problem. (It is also possible for *errmsg to be + set to NULL and the function to return NULL; this indicates an out-of-memory condition.) @@ -582,15 +582,15 @@ PQconninfoOption *PQconninfoParse(const char *conninfo, char **errmsg); After processing the options array, free it by passing it to PQconninfoFree. If this is not done, some memory is leaked for each call to PQconninfoParse. - Conversely, if an error occurs and errmsg is not NULL, - be sure to free the error string using PQfreemem. + Conversely, if an error occurs and errmsg is not NULL, + be sure to free the error string using PQfreemem. - PQfinishPQfinish + PQfinishPQfinish Closes the connection to the server. Also frees @@ -604,14 +604,14 @@ void PQfinish(PGconn *conn); Note that even if the server connection attempt fails (as indicated by PQstatus), the application should call PQfinish to free the memory used by the PGconn object. - The PGconn pointer must not be used again after + The PGconn pointer must not be used again after PQfinish has been called. - PQresetPQreset + PQresetPQreset Resets the communication channel to the server. @@ -631,8 +631,8 @@ void PQreset(PGconn *conn); - PQresetStartPQresetStart - PQresetPollPQresetPoll + PQresetStartPQresetStart + PQresetPollPQresetPoll Reset the communication channel to the server, in a nonblocking manner. @@ -650,8 +650,8 @@ PostgresPollingStatusType PQresetPoll(PGconn *conn); parameters previously used. This can be useful for error recovery if a working connection is lost. They differ from PQreset (above) in that they act in a nonblocking manner. These functions suffer from the same - restrictions as PQconnectStartParams, PQconnectStart - and PQconnectPoll. + restrictions as PQconnectStartParams, PQconnectStart + and PQconnectPoll. @@ -665,12 +665,12 @@ PostgresPollingStatusType PQresetPoll(PGconn *conn); - PQpingParamsPQpingParams + PQpingParamsPQpingParams PQpingParams reports the status of the server. It accepts connection parameters identical to those of - PQconnectdbParams, described above. It is not + PQconnectdbParams, described above. It is not necessary to supply correct user name, password, or database name values to obtain the server status; however, if incorrect values are provided, the server will log a failed connection attempt. @@ -734,12 +734,12 @@ PGPing PQpingParams(const char * const *keywords, - PQpingPQping + PQpingPQping PQping reports the status of the server. It accepts connection parameters identical to those of - PQconnectdb, described above. 
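 For illustration, a minimal, hypothetical use of PQping might look like the sketch below; the conninfo string is a placeholder, and the interpretation of each PGPing value given in the comments follows the standard libpq enum.

/* Check server reachability with PQping before attempting a real connection. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    switch (PQping("host=db.example.com port=5432"))
    {
        case PQPING_OK:
            printf("server is accepting connections\n");
            break;
        case PQPING_REJECT:
            printf("server is alive but rejecting connections\n");
            break;
        case PQPING_NO_RESPONSE:
            printf("server could not be contacted\n");
            break;
        case PQPING_NO_ATTEMPT:
            printf("no attempt made (bad parameters or client-side problem)\n");
            break;
    }
    return 0;
}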
It is not + PQconnectdb, described above. It is not necessary to supply correct user name, password, or database name values to obtain the server status; however, if incorrect values are provided, the server will log a failed connection attempt. @@ -750,7 +750,7 @@ PGPing PQping(const char *conninfo); - The return values are the same as for PQpingParams. + The return values are the same as for PQpingParams. @@ -771,7 +771,7 @@ PGPing PQping(const char *conninfo); - Several libpq functions parse a user-specified string to obtain + Several libpq functions parse a user-specified string to obtain connection parameters. There are two accepted formats for these strings: plain keyword = value strings and URIs. URIs generally follow @@ -840,8 +840,8 @@ postgresql:///mydb?host=localhost&port=5433 Percent-encoding may be used to include symbols with special meaning in any - of the URI parts, e.g. replace = with - %3D. + of the URI parts, e.g. replace = with + %3D. @@ -895,18 +895,18 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname It is possible to specify multiple hosts to connect to, so that they are - tried in the given order. In the Keyword/Value format, the host, - hostaddr, and port options accept a comma-separated + tried in the given order. In the Keyword/Value format, the host, + hostaddr, and port options accept a comma-separated list of values. The same number of elements must be given in each option, such - that e.g. the first hostaddr corresponds to the first host name, - the second hostaddr corresponds to the second host name, and so + that e.g. the first hostaddr corresponds to the first host name, + the second hostaddr corresponds to the second host name, and so forth. As an exception, if only one port is specified, it applies to all the hosts. - In the connection URI format, you can list multiple host:port pairs - separated by commas, in the host component of the URI. In either + In the connection URI format, you can list multiple host:port pairs + separated by commas, in the host component of the URI. In either format, a single hostname can also translate to multiple network addresses. A common example of this is a host that has both an IPv4 and an IPv6 address. @@ -939,17 +939,17 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname host - Name of host to connect to.host name + Name of host to connect to.host name If a host name begins with a slash, it specifies Unix-domain communication rather than TCP/IP communication; the value is the name of the directory in which the socket file is stored. If multiple host names are specified, each will be tried in turn in the order given. The default behavior when host is not specified is to connect to a Unix-domain - socketUnix domain socket in + socketUnix domain socket in /tmp (or whatever socket directory was specified - when PostgreSQL was built). On machines without - Unix-domain sockets, the default is to connect to localhost. + when PostgreSQL was built). On machines without + Unix-domain sockets, the default is to connect to localhost. A comma-separated list of host names is also accepted, in which case @@ -964,53 +964,53 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Numeric IP address of host to connect to. This should be in the - standard IPv4 address format, e.g., 172.28.40.9. If + standard IPv4 address format, e.g., 172.28.40.9. If your machine supports IPv6, you can also use those addresses. TCP/IP communication is always used when a nonempty string is specified for this parameter. 
- Using hostaddr instead of host allows the + Using hostaddr instead of host allows the application to avoid a host name look-up, which might be important in applications with time constraints. However, a host name is required for GSSAPI or SSPI authentication - methods, as well as for verify-full SSL + methods, as well as for verify-full SSL certificate verification. The following rules are used: - If host is specified without hostaddr, + If host is specified without hostaddr, a host name lookup occurs. - If hostaddr is specified without host, - the value for hostaddr gives the server network address. + If hostaddr is specified without host, + the value for hostaddr gives the server network address. The connection attempt will fail if the authentication method requires a host name. - If both host and hostaddr are specified, - the value for hostaddr gives the server network address. - The value for host is ignored unless the + If both host and hostaddr are specified, + the value for hostaddr gives the server network address. + The value for host is ignored unless the authentication method requires it, in which case it will be used as the host name. - Note that authentication is likely to fail if host - is not the name of the server at network address hostaddr. - Also, note that host rather than hostaddr + Note that authentication is likely to fail if host + is not the name of the server at network address hostaddr. + Also, note that host rather than hostaddr is used to identify the connection in a password file (see ). - A comma-separated list of hostaddrs is also accepted, in + A comma-separated list of hostaddrs is also accepted, in which case each host in the list is tried in order. See for details. @@ -1018,7 +1018,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Without either a host name or host address, libpq will connect using a local Unix-domain socket; or on machines without Unix-domain - sockets, it will attempt to connect to localhost. + sockets, it will attempt to connect to localhost. @@ -1029,9 +1029,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Port number to connect to at the server host, or socket file name extension for Unix-domain - connections.port + connections.port If multiple hosts were given in the host or - hostaddr parameters, this parameter may specify a list + hostaddr parameters, this parameter may specify a list of ports of equal length, or it may specify a single port number to be used for all hosts. @@ -1077,7 +1077,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Specifies the name of the file used to store passwords (see ). Defaults to ~/.pgpass, or - %APPDATA%\postgresql\pgpass.conf on Microsoft Windows. + %APPDATA%\postgresql\pgpass.conf on Microsoft Windows. (No error is reported if this file does not exist.) @@ -1091,7 +1091,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname string). Zero or not specified means wait indefinitely. It is not recommended to use a timeout of less than 2 seconds. This timeout applies separately to each connection attempt. - For example, if you specify two hosts and connect_timeout + For example, if you specify two hosts and connect_timeout is 5, each host will time out if no connection is made within 5 seconds, so the total time spent waiting for a connection might be up to 10 seconds. @@ -1119,11 +1119,11 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Specifies command-line options to send to the server at connection - start. 
For example, setting this to -c geqo=off sets the - session's value of the geqo parameter to - off. Spaces within this string are considered to + start. For example, setting this to -c geqo=off sets the + session's value of the geqo parameter to + off. Spaces within this string are considered to separate command-line arguments, unless escaped with a backslash - (\); write \\ to represent a literal + (\); write \\ to represent a literal backslash. For a detailed discussion of the available options, consult . @@ -1147,7 +1147,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Specifies a fallback value for the configuration parameter. This value will be used if no value has been given for - application_name via a connection parameter or the + application_name via a connection parameter or the PGAPPNAME environment variable. Specifying a fallback name is useful in generic utility programs that wish to set a default application name but allow it to be @@ -1176,7 +1176,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname send a keepalive message to the server. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. - It is only supported on systems where TCP_KEEPIDLE or + It is only supported on systems where TCP_KEEPIDLE or an equivalent socket option is available, and on Windows; on other systems, it has no effect. @@ -1191,7 +1191,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname that is not acknowledged by the server should be retransmitted. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. - It is only supported on systems where TCP_KEEPINTVL or + It is only supported on systems where TCP_KEEPINTVL or an equivalent socket option is available, and on Windows; on other systems, it has no effect. @@ -1206,7 +1206,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname client's connection to the server is considered dead. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. - It is only supported on systems where TCP_KEEPCNT or + It is only supported on systems where TCP_KEEPCNT or an equivalent socket option is available; on other systems, it has no effect. @@ -1227,7 +1227,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname This option determines whether or with what priority a secure - SSL TCP/IP connection will be negotiated with the + SSL TCP/IP connection will be negotiated with the server. There are six modes: @@ -1235,7 +1235,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname disable - only try a non-SSL connection + only try a non-SSL connection @@ -1244,8 +1244,8 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname allow - first try a non-SSL connection; if that - fails, try an SSL connection + first try a non-SSL connection; if that + fails, try an SSL connection @@ -1254,8 +1254,8 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname prefer (default) - first try an SSL connection; if that fails, - try a non-SSL connection + first try an SSL connection; if that fails, + try a non-SSL connection @@ -1264,7 +1264,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname require - only try an SSL connection. If a root CA + only try an SSL connection. 
If a root CA file is present, verify the certificate in the same way as if verify-ca was specified @@ -1275,9 +1275,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname verify-ca - only try an SSL connection, and verify that + only try an SSL connection, and verify that the server certificate is issued by a trusted - certificate authority (CA) + certificate authority (CA) @@ -1286,9 +1286,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname verify-full - only try an SSL connection, verify that the + only try an SSL connection, verify that the server certificate is issued by a - trusted CA and that the requested server host name + trusted CA and that the requested server host name matches that in the certificate @@ -1300,16 +1300,16 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname - sslmode is ignored for Unix domain socket + sslmode is ignored for Unix domain socket communication. - If PostgreSQL is compiled without SSL support, - using options require, verify-ca, or - verify-full will cause an error, while - options allow and prefer will be - accepted but libpq will not actually attempt - an SSL - connection.SSLwith libpq + If PostgreSQL is compiled without SSL support, + using options require, verify-ca, or + verify-full will cause an error, while + options allow and prefer will be + accepted but libpq will not actually attempt + an SSL + connection.SSLwith libpq @@ -1318,20 +1318,20 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname requiressl - This option is deprecated in favor of the sslmode + This option is deprecated in favor of the sslmode setting. If set to 1, an SSL connection to the server - is required (this is equivalent to sslmode - require). libpq will then refuse + is required (this is equivalent to sslmode + require). libpq will then refuse to connect if the server does not accept an SSL connection. If set to 0 (default), - libpq will negotiate the connection type with - the server (equivalent to sslmode - prefer). This option is only available if - PostgreSQL is compiled with SSL support. + libpq will negotiate the connection type with + the server (equivalent to sslmode + prefer). This option is only available if + PostgreSQL is compiled with SSL support. @@ -1343,9 +1343,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname If set to 1 (default), data sent over SSL connections will be compressed. If set to 0, compression will be disabled (this requires - OpenSSL 1.0.0 or later). + OpenSSL 1.0.0 or later). This parameter is ignored if a connection without SSL is made, - or if the version of OpenSSL used does not support + or if the version of OpenSSL used does not support it. @@ -1363,7 +1363,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname This parameter specifies the file name of the client SSL certificate, replacing the default - ~/.postgresql/postgresql.crt. + ~/.postgresql/postgresql.crt. This parameter is ignored if an SSL connection is not made. @@ -1376,9 +1376,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname This parameter specifies the location for the secret key used for the client certificate. It can either specify a file name that will be used instead of the default - ~/.postgresql/postgresql.key, or it can specify a key - obtained from an external engine (engines are - OpenSSL loadable modules). An external engine + ~/.postgresql/postgresql.key, or it can specify a key + obtained from an external engine (engines are + OpenSSL loadable modules). 
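A hedged sketch of the SSL-related keywords just described (sslmode, sslcert, sslkey); the certificate paths are hypothetical, and the defaults under ~/.postgresql/ apply when they are omitted:

#include <stdio.h>
#include "libpq-fe.h"

/* Request full certificate verification with a client certificate. */
PGconn *connect_verify_full(void)
{
    PGconn *conn = PQconnectdb(
        "host=db.example.com dbname=mydb sslmode=verify-full "
        "sslcert=/etc/myapp/client.crt sslkey=/etc/myapp/client.key");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    return conn;
}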
An external engine specification should consist of a colon-separated engine name and an engine-specific key identifier. This parameter is ignored if an SSL connection is not made. @@ -1391,10 +1391,10 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname This parameter specifies the name of a file containing SSL - certificate authority (CA) certificate(s). + certificate authority (CA) certificate(s). If the file exists, the server's certificate will be verified to be signed by one of these authorities. The default is - ~/.postgresql/root.crt. + ~/.postgresql/root.crt. @@ -1407,7 +1407,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname revocation list (CRL). Certificates listed in this file, if it exists, will be rejected while attempting to authenticate the server's certificate. The default is - ~/.postgresql/root.crl. + ~/.postgresql/root.crl. @@ -1429,7 +1429,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname any user could start a server listening there. Use this parameter to ensure that you are connected to a server run by a trusted user.) This option is only supported on platforms for which the - peer authentication method is implemented; see + peer authentication method is implemented; see . @@ -1478,11 +1478,11 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname connection in which read-write transactions are accepted by default is considered acceptable. The query SHOW transaction_read_only will be sent upon any - successful connection; if it returns on, the connection + successful connection; if it returns on, the connection will be closed. If multiple hosts were specified in the connection string, any remaining servers will be tried just as if the connection attempt had failed. The default value of this parameter, - any, regards all connections as acceptable. + any, regards all connections as acceptable. @@ -1501,13 +1501,13 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname - libpq-fe.h - libpq-int.h + libpq-fe.h + libpq-int.h libpq application programmers should be careful to maintain the PGconn abstraction. Use the accessor functions described below to get at the contents of PGconn. Reference to internal PGconn fields using - libpq-int.h is not recommended because they are subject to change + libpq-int.h is not recommended because they are subject to change in the future. @@ -1515,10 +1515,10 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname The following functions return parameter values established at connection. These values are fixed for the life of the connection. If a multi-host - connection string is used, the values of PQhost, - PQport, and PQpass can change if a new connection - is established using the same PGconn object. Other values - are fixed for the lifetime of the PGconn object. + connection string is used, the values of PQhost, + PQport, and PQpass can change if a new connection + is established using the same PGconn object. Other values + are fixed for the lifetime of the PGconn object. @@ -1589,7 +1589,7 @@ char *PQpass(const PGconn *conn); This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning - with /.) + with /.) char *PQhost(const PGconn *conn); @@ -1660,7 +1660,7 @@ char *PQoptions(const PGconn *conn); The following functions return status data that can change as operations - are executed on the PGconn object. + are executed on the PGconn object. 
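A minimal sketch using the connection accessors described here; PQdb and PQuser are the standard companion accessors for the database and user name. With a multi-host connection string, PQhost and PQport reflect the server that was actually chosen:

#include <stdio.h>
#include "libpq-fe.h"

/* Report where an established connection actually went. */
void report_connection(const PGconn *conn)
{
    if (conn == NULL)
        return;
    printf("connected to db=%s host=%s port=%s user=%s\n",
           PQdb(conn), PQhost(conn), PQport(conn), PQuser(conn));
}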
@@ -1695,8 +1695,8 @@ ConnStatusType PQstatus(const PGconn *conn); - See the entry for PQconnectStartParams, PQconnectStart - and PQconnectPoll with regards to other status codes that + See the entry for PQconnectStartParams, PQconnectStart + and PQconnectPoll with regards to other status codes that might be returned. @@ -1747,62 +1747,62 @@ const char *PQparameterStatus(const PGconn *conn, const char *paramName); Certain parameter values are reported by the server automatically at connection startup or whenever their values change. - PQparameterStatus can be used to interrogate these settings. + PQparameterStatus can be used to interrogate these settings. It returns the current value of a parameter if known, or NULL if the parameter is not known. Parameters reported as of the current release include - server_version, - server_encoding, - client_encoding, - application_name, - is_superuser, - session_authorization, - DateStyle, - IntervalStyle, - TimeZone, - integer_datetimes, and - standard_conforming_strings. - (server_encoding, TimeZone, and - integer_datetimes were not reported by releases before 8.0; - standard_conforming_strings was not reported by releases + server_version, + server_encoding, + client_encoding, + application_name, + is_superuser, + session_authorization, + DateStyle, + IntervalStyle, + TimeZone, + integer_datetimes, and + standard_conforming_strings. + (server_encoding, TimeZone, and + integer_datetimes were not reported by releases before 8.0; + standard_conforming_strings was not reported by releases before 8.1; - IntervalStyle was not reported by releases before 8.4; - application_name was not reported by releases before 9.0.) + IntervalStyle was not reported by releases before 8.4; + application_name was not reported by releases before 9.0.) Note that - server_version, - server_encoding and - integer_datetimes + server_version, + server_encoding and + integer_datetimes cannot change after startup. Pre-3.0-protocol servers do not report parameter settings, but - libpq includes logic to obtain values for - server_version and client_encoding anyway. - Applications are encouraged to use PQparameterStatus - rather than ad hoc code to determine these values. + libpq includes logic to obtain values for + server_version and client_encoding anyway. + Applications are encouraged to use PQparameterStatus + rather than ad hoc code to determine these values. (Beware however that on a pre-3.0 connection, changing - client_encoding via SET after connection - startup will not be reflected by PQparameterStatus.) - For server_version, see also - PQserverVersion, which returns the information in a + client_encoding via SET after connection + startup will not be reflected by PQparameterStatus.) + For server_version, see also + PQserverVersion, which returns the information in a numeric form that is much easier to compare against. - If no value for standard_conforming_strings is reported, - applications can assume it is off, that is, backslashes + If no value for standard_conforming_strings is reported, + applications can assume it is off, that is, backslashes are treated as escapes in string literals. Also, the presence of this parameter can be taken as an indication that the escape string - syntax (E'...') is accepted. + syntax (E'...') is accepted. - Although the returned pointer is declared const, it in fact - points to mutable storage associated with the PGconn structure. 
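A hedged sketch of interrogating server-reported settings with PQparameterStatus, including the "assume off" rule for standard_conforming_strings mentioned above:

#include <stdio.h>
#include "libpq-fe.h"

/* NULL means the parameter was not reported (e.g. very old servers). */
void show_reported_settings(const PGconn *conn)
{
    const char *ver = PQparameterStatus(conn, "server_version");
    const char *scs = PQparameterStatus(conn, "standard_conforming_strings");

    printf("server_version = %s\n", ver ? ver : "(unknown)");
    /* If unreported, assume "off", as described above. */
    printf("standard_conforming_strings = %s\n", scs ? scs : "off");
}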
+ Although the returned pointer is declared const, it in fact + points to mutable storage associated with the PGconn structure. It is unwise to assume the pointer will remain valid across queries. @@ -1829,7 +1829,7 @@ int PQprotocolVersion(const PGconn *conn); not change after connection startup is complete, but it could theoretically change during a connection reset. The 3.0 protocol will normally be used when communicating with - PostgreSQL 7.4 or later servers; pre-7.4 servers + PostgreSQL 7.4 or later servers; pre-7.4 servers support only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.) @@ -1862,17 +1862,17 @@ int PQserverVersion(const PGconn *conn); - Prior to major version 10, PostgreSQL used + Prior to major version 10, PostgreSQL used three-part version numbers in which the first two parts together represented the major version. For those - versions, PQserverVersion uses two digits for each + versions, PQserverVersion uses two digits for each part; for example version 9.1.5 will be returned as 90105, and version 9.2.0 will be returned as 90200. Therefore, for purposes of determining feature compatibility, - applications should divide the result of PQserverVersion + applications should divide the result of PQserverVersion by 100 not 10000 to determine a logical major version number. In all release series, only the last two digits differ between minor releases (bug-fix releases). @@ -1890,7 +1890,7 @@ int PQserverVersion(const PGconn *conn); - error message Returns the error message + error message Returns the error message most recently generated by an operation on the connection. @@ -1900,22 +1900,22 @@ char *PQerrorMessage(const PGconn *conn); - Nearly all libpq functions will set a message for + Nearly all libpq functions will set a message for PQerrorMessage if they fail. Note that by libpq convention, a nonempty PQerrorMessage result can consist of multiple lines, and will include a trailing newline. The caller should not free the result directly. It will be freed when the associated - PGconn handle is passed to + PGconn handle is passed to PQfinish. The result string should not be expected to remain the same across operations on the - PGconn structure. + PGconn structure. - PQsocketPQsocket + PQsocketPQsocket Obtains the file descriptor number of the connection socket to @@ -1933,13 +1933,13 @@ int PQsocket(const PGconn *conn); - PQbackendPIDPQbackendPID + PQbackendPIDPQbackendPID Returns the process ID (PID) - PID - determining PID of server process - in libpq + PID + determining PID of server process + in libpq of the backend process handling this connection. @@ -1960,7 +1960,7 @@ int PQbackendPID(const PGconn *conn); - PQconnectionNeedsPasswordPQconnectionNeedsPassword + PQconnectionNeedsPasswordPQconnectionNeedsPassword Returns true (1) if the connection authentication method @@ -1980,7 +1980,7 @@ int PQconnectionNeedsPassword(const PGconn *conn); - PQconnectionUsedPasswordPQconnectionUsedPassword + PQconnectionUsedPasswordPQconnectionUsedPassword Returns true (1) if the connection authentication method @@ -2006,7 +2006,7 @@ int PQconnectionUsedPassword(const PGconn *conn); - PQsslInUsePQsslInUse + PQsslInUsePQsslInUse Returns true (1) if the connection uses SSL, false (0) if not. @@ -2020,7 +2020,7 @@ int PQsslInUse(const PGconn *conn); - PQsslAttributePQsslAttribute + PQsslAttributePQsslAttribute Returns SSL-related information about the connection. 
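A small sketch of the PQserverVersion arithmetic described above: divide by 10000 for version 10 and later, by 100 for the older three-part version numbers:

#include "libpq-fe.h"

/* Derive a logical major version number from PQserverVersion. */
int logical_major_version(const PGconn *conn)
{
    int v = PQserverVersion(conn);   /* e.g. 90105 or 100001 */

    if (v == 0)
        return 0;                    /* bad connection */
    if (v >= 100000)
        return v / 10000;            /* 10 and later: single-part major */
    return v / 100;                  /* pre-10: e.g. 901 means "9.1" */
}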
@@ -2093,7 +2093,7 @@ const char *PQsslAttribute(const PGconn *conn, const char *attribute_name); - PQsslAttributeNamesPQsslAttributeNames + PQsslAttributeNamesPQsslAttributeNames Return an array of SSL attribute names available. The array is terminated by a NULL pointer. @@ -2105,7 +2105,7 @@ const char * const * PQsslAttributeNames(const PGconn *conn); - PQsslStructPQsslStruct + PQsslStructPQsslStruct Return a pointer to an SSL-implementation-specific object describing @@ -2139,17 +2139,17 @@ void *PQsslStruct(const PGconn *conn, const char *struct_name); This structure can be used to verify encryption levels, check server - certificates, and more. Refer to the OpenSSL + certificates, and more. Refer to the OpenSSL documentation for information about this structure. - PQgetsslPQgetssl + PQgetsslPQgetssl - SSLin libpq + SSLin libpq Returns the SSL structure used in the connection, or null if SSL is not in use. @@ -2163,8 +2163,8 @@ void *PQgetssl(const PGconn *conn); not be used in new applications, because the returned struct is specific to OpenSSL and will not be available if another SSL implementation is used. To check if a connection uses SSL, call - PQsslInUse instead, and for more details about the - connection, use PQsslAttribute. + PQsslInUse instead, and for more details about the + connection, use PQsslAttribute. @@ -2209,7 +2209,7 @@ PGresult *PQexec(PGconn *conn, const char *command); Returns a PGresult pointer or possibly a null pointer. A non-null pointer will generally be returned except in out-of-memory conditions or serious errors such as inability to send - the command to the server. The PQresultStatus function + the command to the server. The PQresultStatus function should be called to check the return value for any errors (including the value of a null pointer, in which case it will return PGRES_FATAL_ERROR). Use @@ -2222,7 +2222,7 @@ PGresult *PQexec(PGconn *conn, const char *command); The command string can include multiple SQL commands (separated by semicolons). Multiple queries sent in a single - PQexec call are processed in a single transaction, unless + PQexec call are processed in a single transaction, unless there are explicit BEGIN/COMMIT commands included in the query string to divide it into multiple transactions. (See @@ -2263,10 +2263,10 @@ PGresult *PQexecParams(PGconn *conn, - PQexecParams is like PQexec, but offers additional + PQexecParams is like PQexec, but offers additional functionality: parameter values can be specified separately from the command string proper, and query results can be requested in either text or binary - format. PQexecParams is supported only in protocol 3.0 and later + format. PQexecParams is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. @@ -2289,8 +2289,8 @@ PGresult *PQexecParams(PGconn *conn, The SQL command string to be executed. If parameters are used, - they are referred to in the command string as $1, - $2, etc. + they are referred to in the command string as $1, + $2, etc. @@ -2300,9 +2300,9 @@ PGresult *PQexecParams(PGconn *conn, The number of parameters supplied; it is the length of the arrays - paramTypes[], paramValues[], - paramLengths[], and paramFormats[]. (The - array pointers can be NULL when nParams + paramTypes[], paramValues[], + paramLengths[], and paramFormats[]. (The + array pointers can be NULL when nParams is zero.) @@ -2313,7 +2313,7 @@ PGresult *PQexecParams(PGconn *conn, Specifies, by OID, the data types to be assigned to the - parameter symbols. 
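A hedged sketch of PQexecParams with a single text-format parameter and a server-inferred type; the table and column names are hypothetical:

#include <stdio.h>
#include "libpq-fe.h"

void lookup_by_name(PGconn *conn, const char *name)
{
    const char *values[1] = { name };

    PGresult *res = PQexecParams(conn,
                                 "SELECT id FROM mytable WHERE name = $1",
                                 1,        /* nParams */
                                 NULL,     /* paramTypes: let the server infer */
                                 values,
                                 NULL,     /* paramLengths: not needed for text */
                                 NULL,     /* paramFormats: all text */
                                 0);       /* ask for text-format results */

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "%s", PQresultErrorMessage(res));
    PQclear(res);
}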
If paramTypes is + parameter symbols. If paramTypes is NULL, or any particular element in the array is zero, the server infers a data type for the parameter symbol in the same way it would do for an untyped literal string. @@ -2359,11 +2359,11 @@ PGresult *PQexecParams(PGconn *conn, Values passed in binary format require knowledge of the internal representation expected by the backend. For example, integers must be passed in network byte - order. Passing numeric values requires + order. Passing numeric values requires knowledge of the server storage format, as implemented in - src/backend/utils/adt/numeric.c::numeric_send() and - src/backend/utils/adt/numeric.c::numeric_recv(). + src/backend/utils/adt/numeric.c::numeric_send() and + src/backend/utils/adt/numeric.c::numeric_recv(). @@ -2387,14 +2387,14 @@ PGresult *PQexecParams(PGconn *conn, - The primary advantage of PQexecParams over - PQexec is that parameter values can be separated from the + The primary advantage of PQexecParams over + PQexec is that parameter values can be separated from the command string, thus avoiding the need for tedious and error-prone quoting and escaping. - Unlike PQexec, PQexecParams allows at most + Unlike PQexec, PQexecParams allows at most one SQL command in the given string. (There can be semicolons in it, but not more than one nonempty command.) This is a limitation of the underlying protocol, but has some usefulness as an extra defense against @@ -2412,8 +2412,8 @@ PGresult *PQexecParams(PGconn *conn, SELECT * FROM mytable WHERE x = $1::bigint; - This forces parameter $1 to be treated as bigint, whereas - by default it would be assigned the same type as x. Forcing the + This forces parameter $1 to be treated as bigint, whereas + by default it would be assigned the same type as x. Forcing the parameter type decision, either this way or by specifying a numeric type OID, is strongly recommended when sending parameter values in binary format, because binary format has less redundancy than text format and so there is less chance @@ -2444,40 +2444,40 @@ PGresult *PQprepare(PGconn *conn, - PQprepare creates a prepared statement for later - execution with PQexecPrepared. This feature allows + PQprepare creates a prepared statement for later + execution with PQexecPrepared. This feature allows commands to be executed repeatedly without being parsed and planned each time; see for details. - PQprepare is supported only in protocol 3.0 and later + PQprepare is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. The function creates a prepared statement named - stmtName from the query string, which - must contain a single SQL command. stmtName can be - "" to create an unnamed statement, in which case any + stmtName from the query string, which + must contain a single SQL command. stmtName can be + "" to create an unnamed statement, in which case any pre-existing unnamed statement is automatically replaced; otherwise it is an error if the statement name is already defined in the current session. If any parameters are used, they are referred - to in the query as $1, $2, etc. - nParams is the number of parameters for which types - are pre-specified in the array paramTypes[]. (The + to in the query as $1, $2, etc. + nParams is the number of parameters for which types + are pre-specified in the array paramTypes[]. (The array pointer can be NULL when - nParams is zero.) paramTypes[] + nParams is zero.) 
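A minimal sketch of the prepare-once, execute-many pattern using PQprepare and PQexecPrepared; the statement name and table are hypothetical:

#include <stdio.h>
#include "libpq-fe.h"

void insert_many(PGconn *conn, const char *const *names, int n)
{
    PGresult *res = PQprepare(conn, "ins_name",
                              "INSERT INTO mytable (name) VALUES ($1)",
                              1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQresultErrorMessage(res));
    PQclear(res);

    for (int i = 0; i < n; i++)
    {
        const char *values[1] = { names[i] };

        res = PQexecPrepared(conn, "ins_name", 1, values, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQresultErrorMessage(res));
        PQclear(res);
    }
}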
paramTypes[] specifies, by OID, the data types to be assigned to the parameter - symbols. If paramTypes is NULL, + symbols. If paramTypes is NULL, or any particular element in the array is zero, the server assigns a data type to the parameter symbol in the same way it would do for an untyped literal string. Also, the query can use parameter - symbols with numbers higher than nParams; data types + symbols with numbers higher than nParams; data types will be inferred for these symbols as well. (See PQdescribePrepared for a means to find out what data types were inferred.) - As with PQexec, the result is normally a + As with PQexec, the result is normally a PGresult object whose contents indicate server-side success or failure. A null result indicates out-of-memory or inability to send the command at all. Use @@ -2488,9 +2488,9 @@ PGresult *PQprepare(PGconn *conn, - Prepared statements for use with PQexecPrepared can also + Prepared statements for use with PQexecPrepared can also be created by executing SQL - statements. Also, although there is no libpq + statements. Also, although there is no libpq function for deleting a prepared statement, the SQL statement can be used for that purpose. @@ -2522,21 +2522,21 @@ PGresult *PQexecPrepared(PGconn *conn, - PQexecPrepared is like PQexecParams, + PQexecPrepared is like PQexecParams, but the command to be executed is specified by naming a previously-prepared statement, instead of giving a query string. This feature allows commands that will be used repeatedly to be parsed and planned just once, rather than each time they are executed. The statement must have been prepared previously in - the current session. PQexecPrepared is supported + the current session. PQexecPrepared is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. - The parameters are identical to PQexecParams, except that the + The parameters are identical to PQexecParams, except that the name of a prepared statement is given instead of a query string, and the - paramTypes[] parameter is not present (it is not needed since + paramTypes[] parameter is not present (it is not needed since the prepared statement's parameter types were determined when it was created). @@ -2560,20 +2560,20 @@ PGresult *PQdescribePrepared(PGconn *conn, const char *stmtName); - PQdescribePrepared allows an application to obtain + PQdescribePrepared allows an application to obtain information about a previously prepared statement. - PQdescribePrepared is supported only in protocol 3.0 + PQdescribePrepared is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. - stmtName can be "" or NULL to reference + stmtName can be "" or NULL to reference the unnamed statement, otherwise it must be the name of an existing - prepared statement. On success, a PGresult with + prepared statement. On success, a PGresult with status PGRES_COMMAND_OK is returned. The functions PQnparams and PQparamtype can be applied to this - PGresult to obtain information about the parameters + PGresult to obtain information about the parameters of the prepared statement, and the functions PQnfields, PQfname, PQftype, etc provide information about the @@ -2600,23 +2600,23 @@ PGresult *PQdescribePortal(PGconn *conn, const char *portalName); - PQdescribePortal allows an application to obtain + PQdescribePortal allows an application to obtain information about a previously created portal. 
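A hedged sketch of inspecting a prepared statement's inferred parameter types with PQdescribePrepared, PQnparams, and PQparamtype:

#include <stdio.h>
#include "libpq-fe.h"

void describe_statement(PGconn *conn, const char *stmtName)
{
    PGresult *res = PQdescribePrepared(conn, stmtName);

    if (PQresultStatus(res) == PGRES_COMMAND_OK)
    {
        for (int i = 0; i < PQnparams(res); i++)
            printf("parameter %d has type OID %u\n",
                   i + 1, PQparamtype(res, i));
    }
    else
        fprintf(stderr, "%s", PQresultErrorMessage(res));

    PQclear(res);
}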
- (libpq does not provide any direct access to + (libpq does not provide any direct access to portals, but you can use this function to inspect the properties - of a cursor created with a DECLARE CURSOR SQL command.) - PQdescribePortal is supported only in protocol 3.0 + of a cursor created with a DECLARE CURSOR SQL command.) + PQdescribePortal is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. - portalName can be "" or NULL to reference + portalName can be "" or NULL to reference the unnamed portal, otherwise it must be the name of an existing - portal. On success, a PGresult with status + portal. On success, a PGresult with status PGRES_COMMAND_OK is returned. The functions PQnfields, PQfname, PQftype, etc can be applied to the - PGresult to obtain information about the result + PGresult to obtain information about the result columns (if any) of the portal. @@ -2625,7 +2625,7 @@ PGresult *PQdescribePortal(PGconn *conn, const char *portalName); - The PGresultPGresult + The PGresultPGresult structure encapsulates the result returned by the server. libpq application programmers should be careful to maintain the PGresult abstraction. @@ -2678,7 +2678,7 @@ ExecStatusType PQresultStatus(const PGresult *res); Successful completion of a command returning data (such as - a SELECT or SHOW). + a SELECT or SHOW). @@ -2743,7 +2743,7 @@ ExecStatusType PQresultStatus(const PGresult *res); PGRES_SINGLE_TUPLE - The PGresult contains a single result tuple + The PGresult contains a single result tuple from the current command. This status occurs only when single-row mode has been selected for the query (see ). @@ -2786,7 +2786,7 @@ ExecStatusType PQresultStatus(const PGresult *res); Converts the enumerated type returned by - PQresultStatus into a string constant describing the + PQresultStatus into a string constant describing the status code. The caller should not free the result. @@ -2813,7 +2813,7 @@ char *PQresultErrorMessage(const PGresult *res); If there was an error, the returned string will include a trailing newline. The caller should not free the result directly. It will - be freed when the associated PGresult handle is + be freed when the associated PGresult handle is passed to PQclear. @@ -2845,7 +2845,7 @@ char *PQresultErrorMessage(const PGresult *res); Returns a reformatted version of the error message associated with - a PGresult object. + a PGresult object. char *PQresultVerboseErrorMessage(const PGresult *res, PGVerbosity verbosity, @@ -2857,17 +2857,17 @@ char *PQresultVerboseErrorMessage(const PGresult *res, by computing the message that would have been produced by PQresultErrorMessage if the specified verbosity settings had been in effect for the connection when the - given PGresult was generated. If - the PGresult is not an error result, - PGresult is not an error result is reported instead. + given PGresult was generated. If + the PGresult is not an error result, + PGresult is not an error result is reported instead. The returned string includes a trailing newline. Unlike most other functions for extracting data from - a PGresult, the result of this function is a freshly + a PGresult, the result of this function is a freshly allocated string. The caller must free it - using PQfreemem() when the string is no longer needed. + using PQfreemem() when the string is no longer needed. 
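A small sketch of the usual status check after PQexec, using PQresStatus and PQresultErrorMessage as described above:

#include <stdio.h>
#include "libpq-fe.h"

int run_command(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    ExecStatusType st = PQresultStatus(res);

    if (st != PGRES_COMMAND_OK && st != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "%s: %s", PQresStatus(st), PQresultErrorMessage(res));
        PQclear(res);
        return -1;
    }
    PQclear(res);
    return 0;
}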
@@ -2877,20 +2877,20 @@ char *PQresultVerboseErrorMessage(const PGresult *res, - PQresultErrorFieldPQresultErrorField + PQresultErrorFieldPQresultErrorField Returns an individual field of an error report. char *PQresultErrorField(const PGresult *res, int fieldcode); - fieldcode is an error field identifier; see the symbols + fieldcode is an error field identifier; see the symbols listed below. NULL is returned if the PGresult is not an error or warning result, or does not include the specified field. Field values will normally not include a trailing newline. The caller should not free the result directly. It will be freed when the - associated PGresult handle is passed to + associated PGresult handle is passed to PQclear. @@ -2898,29 +2898,29 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); The following field codes are available: - PG_DIAG_SEVERITY + PG_DIAG_SEVERITY - The severity; the field contents are ERROR, - FATAL, or PANIC (in an error message), - or WARNING, NOTICE, DEBUG, - INFO, or LOG (in a notice message), or + The severity; the field contents are ERROR, + FATAL, or PANIC (in an error message), + or WARNING, NOTICE, DEBUG, + INFO, or LOG (in a notice message), or a localized translation of one of these. Always present. - PG_DIAG_SEVERITY_NONLOCALIZED + PG_DIAG_SEVERITY_NONLOCALIZED - The severity; the field contents are ERROR, - FATAL, or PANIC (in an error message), - or WARNING, NOTICE, DEBUG, - INFO, or LOG (in a notice message). - This is identical to the PG_DIAG_SEVERITY field except + The severity; the field contents are ERROR, + FATAL, or PANIC (in an error message), + or WARNING, NOTICE, DEBUG, + INFO, or LOG (in a notice message). + This is identical to the PG_DIAG_SEVERITY field except that the contents are never localized. This is present only in - reports generated by PostgreSQL versions 9.6 + reports generated by PostgreSQL versions 9.6 and later. @@ -2928,7 +2928,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SQLSTATE + PG_DIAG_SQLSTATE error codes libpq @@ -2948,7 +2948,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_MESSAGE_PRIMARY + PG_DIAG_MESSAGE_PRIMARY The primary human-readable error message (typically one line). @@ -2958,7 +2958,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_MESSAGE_DETAIL + PG_DIAG_MESSAGE_DETAIL Detail: an optional secondary error message carrying more @@ -2968,7 +2968,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_MESSAGE_HINT + PG_DIAG_MESSAGE_HINT Hint: an optional suggestion what to do about the problem. @@ -2980,7 +2980,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_STATEMENT_POSITION + PG_DIAG_STATEMENT_POSITION A string containing a decimal integer indicating an error cursor @@ -2992,21 +2992,21 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_INTERNAL_POSITION + PG_DIAG_INTERNAL_POSITION This is defined the same as the - PG_DIAG_STATEMENT_POSITION field, but it is used + PG_DIAG_STATEMENT_POSITION field, but it is used when the cursor position refers to an internally generated command rather than the one submitted by the client. The - PG_DIAG_INTERNAL_QUERY field will always appear when + PG_DIAG_INTERNAL_QUERY field will always appear when this field appears. - PG_DIAG_INTERNAL_QUERY + PG_DIAG_INTERNAL_QUERY The text of a failed internally-generated command. 
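A hedged sketch of pulling individual diagnostic fields out of an error result with PQresultErrorField; fields other than the severity, SQLSTATE, and primary message may be absent:

#include <stdio.h>
#include "libpq-fe.h"

void print_diagnostics(const PGresult *res)
{
    const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
    const char *primary  = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
    const char *detail   = PQresultErrorField(res, PG_DIAG_MESSAGE_DETAIL);
    const char *hint     = PQresultErrorField(res, PG_DIAG_MESSAGE_HINT);

    fprintf(stderr, "SQLSTATE %s: %s\n",
            sqlstate ? sqlstate : "?????",
            primary ? primary : "(no message)");
    if (detail)
        fprintf(stderr, "DETAIL: %s\n", detail);
    if (hint)
        fprintf(stderr, "HINT: %s\n", hint);
}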
This could @@ -3016,7 +3016,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_CONTEXT + PG_DIAG_CONTEXT An indication of the context in which the error occurred. @@ -3028,7 +3028,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SCHEMA_NAME + PG_DIAG_SCHEMA_NAME If the error was associated with a specific database object, @@ -3038,7 +3038,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_TABLE_NAME + PG_DIAG_TABLE_NAME If the error was associated with a specific table, the name of the @@ -3049,7 +3049,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_COLUMN_NAME + PG_DIAG_COLUMN_NAME If the error was associated with a specific table column, the name @@ -3060,7 +3060,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_DATATYPE_NAME + PG_DIAG_DATATYPE_NAME If the error was associated with a specific data type, the name of @@ -3071,7 +3071,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_CONSTRAINT_NAME + PG_DIAG_CONSTRAINT_NAME If the error was associated with a specific constraint, the name @@ -3084,7 +3084,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SOURCE_FILE + PG_DIAG_SOURCE_FILE The file name of the source-code location where the error was @@ -3094,7 +3094,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SOURCE_LINE + PG_DIAG_SOURCE_LINE The line number of the source-code location where the error @@ -3104,7 +3104,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SOURCE_FUNCTION + PG_DIAG_SOURCE_FUNCTION The name of the source-code function reporting the error. @@ -3151,7 +3151,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PQclearPQclear + PQclearPQclear Frees the storage associated with a @@ -3184,7 +3184,7 @@ void PQclear(PGresult *res); These functions are used to extract information from a PGresult object that represents a successful query result (that is, one that has status - PGRES_TUPLES_OK or PGRES_SINGLE_TUPLE). + PGRES_TUPLES_OK or PGRES_SINGLE_TUPLE). They can also be used to extract information from a successful Describe operation: a Describe's result has all the same column information that actual execution of the query @@ -3204,8 +3204,8 @@ void PQclear(PGresult *res); Returns the number of rows (tuples) in the query result. - (Note that PGresult objects are limited to no more - than INT_MAX rows, so an int result is + (Note that PGresult objects are limited to no more + than INT_MAX rows, so an int result is sufficient.) @@ -3249,7 +3249,7 @@ int PQnfields(const PGresult *res); Returns the column name associated with the given column number. Column numbers start at 0. The caller should not free the result directly. It will be freed when the associated - PGresult handle is passed to + PGresult handle is passed to PQclear. char *PQfname(const PGresult *res, @@ -3323,7 +3323,7 @@ Oid PQftable(const PGresult *res, - InvalidOid is returned if the column number is out of range, + InvalidOid is returned if the column number is out of range, or if the specified column is not a simple reference to a table column, or when using pre-3.0 protocol. You can query the system table pg_class to determine @@ -3442,7 +3442,7 @@ int PQfmod(const PGresult *res, The interpretation of modifier values is type-specific; they typically indicate precision or size limits. The value -1 is - used to indicate no information available. 
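A minimal sketch of listing result-column metadata with PQnfields, PQfname, PQftype, and PQfmod (a typmod of -1 means no modifier information):

#include <stdio.h>
#include "libpq-fe.h"

void describe_columns(const PGresult *res)
{
    for (int col = 0; col < PQnfields(res); col++)
        printf("column %d: name=%s type_oid=%u typmod=%d\n",
               col, PQfname(res, col), PQftype(res, col), PQfmod(res, col));
}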
Most data + used to indicate no information available. Most data types do not use modifiers, in which case the value is always -1. @@ -3468,7 +3468,7 @@ int PQfsize(const PGresult *res, - PQfsize returns the space allocated for this column + PQfsize returns the space allocated for this column in a database row, in other words the size of the server's internal representation of the data type. (Accordingly, it is not really very useful to clients.) A negative value indicates @@ -3487,7 +3487,7 @@ int PQfsize(const PGresult *res, - Returns 1 if the PGresult contains binary data + Returns 1 if the PGresult contains binary data and 0 if it contains text data. int PQbinaryTuples(const PGresult *res); @@ -3496,10 +3496,10 @@ int PQbinaryTuples(const PGresult *res); This function is deprecated (except for its use in connection with - COPY), because it is possible for a single - PGresult to contain text data in some columns and - binary data in others. PQfformat is preferred. - PQbinaryTuples returns 1 only if all columns of the + COPY), because it is possible for a single + PGresult to contain text data in some columns and + binary data in others. PQfformat is preferred. + PQbinaryTuples returns 1 only if all columns of the result are binary (format 1). @@ -3518,7 +3518,7 @@ int PQbinaryTuples(const PGresult *res); Returns a single field value of one row of a PGresult. Row and column numbers start at 0. The caller should not free the result directly. It will - be freed when the associated PGresult handle is + be freed when the associated PGresult handle is passed to PQclear. char *PQgetvalue(const PGresult *res, @@ -3532,7 +3532,7 @@ char *PQgetvalue(const PGresult *res, PQgetvalue is a null-terminated character string representation of the field value. For data in binary format, the value is in the binary representation determined by - the data type's typsend and typreceive + the data type's typsend and typreceive functions. (The value is actually followed by a zero byte in this case too, but that is not ordinarily useful, since the value is likely to contain embedded nulls.) @@ -3540,7 +3540,7 @@ char *PQgetvalue(const PGresult *res, An empty string is returned if the field value is null. See - PQgetisnull to distinguish null values from + PQgetisnull to distinguish null values from empty-string values. @@ -3609,8 +3609,8 @@ int PQgetlength(const PGresult *res, This is the actual data length for the particular data value, that is, the size of the object pointed to by PQgetvalue. For text data format this is - the same as strlen(). For binary format this is - essential information. Note that one should not + the same as strlen(). For binary format this is + essential information. Note that one should not rely on PQfsize to obtain the actual data length. @@ -3635,7 +3635,7 @@ int PQnparams(const PGresult *res); This function is only useful when inspecting the result of - PQdescribePrepared. For other types of queries it + PQdescribePrepared. For other types of queries it will return zero. @@ -3660,7 +3660,7 @@ Oid PQparamtype(const PGresult *res, int param_number); This function is only useful when inspecting the result of - PQdescribePrepared. For other types of queries it + PQdescribePrepared. For other types of queries it will return zero. @@ -3738,7 +3738,7 @@ char *PQcmdStatus(PGresult *res); Commonly this is just the name of the command, but it might include additional data such as the number of rows processed. The caller should not free the result directly. 
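A small sketch of walking a text-format result with PQntuples, PQnfields, PQgetvalue, and PQgetisnull to distinguish SQL NULLs from empty strings:

#include <stdio.h>
#include "libpq-fe.h"

void print_result(const PGresult *res)
{
    for (int row = 0; row < PQntuples(res); row++)
    {
        for (int col = 0; col < PQnfields(res); col++)
        {
            if (PQgetisnull(res, row, col))
                printf("NULL\t");
            else
                printf("%s\t", PQgetvalue(res, row, col));
        }
        printf("\n");
    }
}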
It will be freed when the - associated PGresult handle is passed to + associated PGresult handle is passed to PQclear. @@ -3762,17 +3762,17 @@ char *PQcmdTuples(PGresult *res); This function returns a string containing the number of rows - affected by the SQL statement that generated the - PGresult. This function can only be used following - the execution of a SELECT, CREATE TABLE AS, - INSERT, UPDATE, DELETE, - MOVE, FETCH, or COPY statement, - or an EXECUTE of a prepared query that contains an - INSERT, UPDATE, or DELETE statement. - If the command that generated the PGresult was anything - else, PQcmdTuples returns an empty string. The caller + affected by the SQL statement that generated the + PGresult. This function can only be used following + the execution of a SELECT, CREATE TABLE AS, + INSERT, UPDATE, DELETE, + MOVE, FETCH, or COPY statement, + or an EXECUTE of a prepared query that contains an + INSERT, UPDATE, or DELETE statement. + If the command that generated the PGresult was anything + else, PQcmdTuples returns an empty string. The caller should not free the return value directly. It will be freed when - the associated PGresult handle is passed to + the associated PGresult handle is passed to PQclear. @@ -3788,14 +3788,14 @@ char *PQcmdTuples(PGresult *res); - Returns the OIDOIDin libpq - of the inserted row, if the SQL command was an - INSERT that inserted exactly one row into a table that - has OIDs, or a EXECUTE of a prepared query containing - a suitable INSERT statement. Otherwise, this function + Returns the OIDOIDin libpq + of the inserted row, if the SQL command was an + INSERT that inserted exactly one row into a table that + has OIDs, or a EXECUTE of a prepared query containing + a suitable INSERT statement. Otherwise, this function returns InvalidOid. This function will also return InvalidOid if the table affected by the - INSERT statement does not contain OIDs. + INSERT statement does not contain OIDs. Oid PQoidValue(const PGresult *res); @@ -3858,19 +3858,19 @@ char *PQescapeLiteral(PGconn *conn, const char *str, size_t length); values as literal constants in SQL commands. Certain characters (such as quotes and backslashes) must be escaped to prevent them from being interpreted specially by the SQL parser. - PQescapeLiteral performs this operation. + PQescapeLiteral performs this operation. - PQescapeLiteral returns an escaped version of the + PQescapeLiteral returns an escaped version of the str parameter in memory allocated with - malloc(). This memory should be freed using - PQfreemem() when the result is no longer needed. + malloc(). This memory should be freed using + PQfreemem() when the result is no longer needed. A terminating zero byte is not required, and should not be - counted in length. (If a terminating zero byte is found - before length bytes are processed, - PQescapeLiteral stops at the zero; the behavior is - thus rather like strncpy.) The + counted in length. (If a terminating zero byte is found + before length bytes are processed, + PQescapeLiteral stops at the zero; the behavior is + thus rather like strncpy.) The return string has all special characters replaced so that they can be properly processed by the PostgreSQL string literal parser. A terminating zero byte is also added. The @@ -3879,8 +3879,8 @@ char *PQescapeLiteral(PGconn *conn, const char *str, size_t length); - On error, PQescapeLiteral returns NULL and a suitable - message is stored in the conn object. 
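A hedged sketch of reading the affected-row count from PQcmdTuples after a data-modifying command; the SQL text is supplied by the caller:

#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

/* Returns the number of rows affected, or -1 if none was reported. */
long rows_affected(PGconn *conn, const char *update_sql)
{
    PGresult *res = PQexec(conn, update_sql);
    long n = -1;

    if (PQresultStatus(res) == PGRES_COMMAND_OK)
    {
        const char *s = PQcmdTuples(res);   /* empty string if no count */
        if (s[0] != '\0')
            n = atol(s);
    }
    PQclear(res);
    return n;
}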
+ On error, PQescapeLiteral returns NULL and a suitable + message is stored in the conn object. @@ -3888,14 +3888,14 @@ char *PQescapeLiteral(PGconn *conn, const char *str, size_t length); It is especially important to do proper escaping when handling strings that were received from an untrustworthy source. Otherwise there is a security risk: you are vulnerable to - SQL injection attacks wherein unwanted SQL commands are + SQL injection attacks wherein unwanted SQL commands are fed to your database. Note that it is not necessary nor correct to do escaping when a data - value is passed as a separate parameter in PQexecParams or + value is passed as a separate parameter in PQexecParams or its sibling routines. @@ -3926,15 +3926,15 @@ char *PQescapeIdentifier(PGconn *conn, const char *str, size_t length); - PQescapeIdentifier returns a version of the + PQescapeIdentifier returns a version of the str parameter escaped as an SQL identifier - in memory allocated with malloc(). This memory must be - freed using PQfreemem() when the result is no longer + in memory allocated with malloc(). This memory must be + freed using PQfreemem() when the result is no longer needed. A terminating zero byte is not required, and should not be - counted in length. (If a terminating zero byte is found - before length bytes are processed, - PQescapeIdentifier stops at the zero; the behavior is - thus rather like strncpy.) The + counted in length. (If a terminating zero byte is found + before length bytes are processed, + PQescapeIdentifier stops at the zero; the behavior is + thus rather like strncpy.) The return string has all special characters replaced so that it will be properly processed as an SQL identifier. A terminating zero byte is also added. The return string will also be surrounded by double @@ -3942,8 +3942,8 @@ char *PQescapeIdentifier(PGconn *conn, const char *str, size_t length); - On error, PQescapeIdentifier returns NULL and a suitable - message is stored in the conn object. + On error, PQescapeIdentifier returns NULL and a suitable + message is stored in the conn object. @@ -3974,39 +3974,39 @@ size_t PQescapeStringConn(PGconn *conn, - PQescapeStringConn escapes string literals, much like - PQescapeLiteral. Unlike PQescapeLiteral, + PQescapeStringConn escapes string literals, much like + PQescapeLiteral. Unlike PQescapeLiteral, the caller is responsible for providing an appropriately sized buffer. - Furthermore, PQescapeStringConn does not generate the - single quotes that must surround PostgreSQL string + Furthermore, PQescapeStringConn does not generate the + single quotes that must surround PostgreSQL string literals; they should be provided in the SQL command that the - result is inserted into. The parameter from points to + result is inserted into. The parameter from points to the first character of the string that is to be escaped, and the - length parameter gives the number of bytes in this + length parameter gives the number of bytes in this string. A terminating zero byte is not required, and should not be - counted in length. (If a terminating zero byte is found - before length bytes are processed, - PQescapeStringConn stops at the zero; the behavior is - thus rather like strncpy.) to shall point + counted in length. (If a terminating zero byte is found + before length bytes are processed, + PQescapeStringConn stops at the zero; the behavior is + thus rather like strncpy.) 
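A minimal sketch of building a query from untrusted input with PQescapeIdentifier and PQescapeLiteral, freeing both results with PQfreemem; the query shape is hypothetical:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libpq-fe.h"

char *build_lookup_query(PGconn *conn, const char *table, const char *name)
{
    char *qtable = PQescapeIdentifier(conn, table, strlen(table));
    char *qname  = PQescapeLiteral(conn, name, strlen(name));
    char *query  = NULL;

    if (qtable != NULL && qname != NULL)
    {
        size_t len = strlen(qtable) + strlen(qname) + 64;
        query = malloc(len);
        if (query != NULL)
            snprintf(query, len, "SELECT * FROM %s WHERE name = %s",
                     qtable, qname);
    }
    else
        fprintf(stderr, "escaping failed: %s", PQerrorMessage(conn));

    if (qtable != NULL)
        PQfreemem(qtable);
    if (qname != NULL)
        PQfreemem(qname);
    return query;                /* caller frees with free(); NULL on failure */
}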
to shall point to a buffer that is able to hold at least one more byte than twice - the value of length, otherwise the behavior is undefined. - Behavior is likewise undefined if the to and - from strings overlap. + the value of length, otherwise the behavior is undefined. + Behavior is likewise undefined if the to and + from strings overlap. - If the error parameter is not NULL, then - *error is set to zero on success, nonzero on error. + If the error parameter is not NULL, then + *error is set to zero on success, nonzero on error. Presently the only possible error conditions involve invalid multibyte encoding in the source string. The output string is still generated on error, but it can be expected that the server will reject it as malformed. On error, a suitable message is stored in the - conn object, whether or not error is NULL. + conn object, whether or not error is NULL. - PQescapeStringConn returns the number of bytes written - to to, not including the terminating zero byte. + PQescapeStringConn returns the number of bytes written + to to, not including the terminating zero byte. @@ -4021,30 +4021,30 @@ size_t PQescapeStringConn(PGconn *conn, - PQescapeString is an older, deprecated version of - PQescapeStringConn. + PQescapeString is an older, deprecated version of + PQescapeStringConn. size_t PQescapeString (char *to, const char *from, size_t length); - The only difference from PQescapeStringConn is that - PQescapeString does not take PGconn - or error parameters. + The only difference from PQescapeStringConn is that + PQescapeString does not take PGconn + or error parameters. Because of this, it cannot adjust its behavior depending on the connection properties (such as character encoding) and therefore - it might give the wrong results. Also, it has no way + it might give the wrong results. Also, it has no way to report error conditions. - PQescapeString can be used safely in - client programs that work with only one PostgreSQL + PQescapeString can be used safely in + client programs that work with only one PostgreSQL connection at a time (in this case it can find out what it needs to - know behind the scenes). In other contexts it is a security + know behind the scenes). In other contexts it is a security hazard and should be avoided in favor of - PQescapeStringConn. + PQescapeStringConn. @@ -4090,10 +4090,10 @@ unsigned char *PQescapeByteaConn(PGconn *conn, - PQescapeByteaConn returns an escaped version of the + PQescapeByteaConn returns an escaped version of the from parameter binary string in memory - allocated with malloc(). This memory should be freed using - PQfreemem() when the result is no longer needed. The + allocated with malloc(). This memory should be freed using + PQfreemem() when the result is no longer needed. The return string has all special characters replaced so that they can be properly processed by the PostgreSQL string literal parser, and the bytea input function. A @@ -4104,7 +4104,7 @@ unsigned char *PQescapeByteaConn(PGconn *conn, On error, a null pointer is returned, and a suitable error message - is stored in the conn object. Currently, the only + is stored in the conn object. Currently, the only possible error is insufficient memory for the result string. @@ -4120,8 +4120,8 @@ unsigned char *PQescapeByteaConn(PGconn *conn, - PQescapeBytea is an older, deprecated version of - PQescapeByteaConn. + PQescapeBytea is an older, deprecated version of + PQescapeByteaConn. 
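A hedged sketch of PQescapeStringConn with the caller-supplied buffer of at least 2*length + 1 bytes described above; note that the surrounding single quotes are not added:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libpq-fe.h"

char *escape_into_buffer(PGconn *conn, const char *from)
{
    size_t len = strlen(from);
    char *to = malloc(2 * len + 1);
    int err = 0;

    if (to == NULL)
        return NULL;

    PQescapeStringConn(conn, to, from, len, &err);
    if (err)
    {
        fprintf(stderr, "escape failed: %s", PQerrorMessage(conn));
        free(to);
        return NULL;
    }
    return to;                   /* caller frees with free() */
}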
unsigned char *PQescapeBytea(const unsigned char *from, size_t from_length, @@ -4130,15 +4130,15 @@ unsigned char *PQescapeBytea(const unsigned char *from, - The only difference from PQescapeByteaConn is that - PQescapeBytea does not take a PGconn - parameter. Because of this, PQescapeBytea can + The only difference from PQescapeByteaConn is that + PQescapeBytea does not take a PGconn + parameter. Because of this, PQescapeBytea can only be used safely in client programs that use a single - PostgreSQL connection at a time (in this case + PostgreSQL connection at a time (in this case it can find out what it needs to know behind the - scenes). It might give the wrong results if + scenes). It might give the wrong results if used in programs that use multiple database connections (use - PQescapeByteaConn in such cases). + PQescapeByteaConn in such cases). @@ -4169,17 +4169,17 @@ unsigned char *PQunescapeBytea(const unsigned char *from, size_t *to_length); to a bytea column. PQunescapeBytea converts this string representation into its binary representation. It returns a pointer to a buffer allocated with - malloc(), or NULL on error, and puts the size of + malloc(), or NULL on error, and puts the size of the buffer in to_length. The result must be - freed using PQfreemem when it is no longer needed. + freed using PQfreemem when it is no longer needed. This conversion is not exactly the inverse of PQescapeBytea, because the string is not expected - to be escaped when received from PQgetvalue. + to be escaped when received from PQgetvalue. In particular this means there is no need for string quoting considerations, - and so no need for a PGconn parameter. + and so no need for a PGconn parameter. @@ -4273,7 +4273,7 @@ unsigned char *PQunescapeBytea(const unsigned char *from, size_t *to_length); Submits a command to the server without waiting for the result(s). 1 is returned if the command was successfully dispatched and 0 if - not (in which case, use PQerrorMessage to get more + not (in which case, use PQerrorMessage to get more information about the failure). int PQsendQuery(PGconn *conn, const char *command); @@ -4323,7 +4323,7 @@ int PQsendQueryParams(PGconn *conn, - PQsendPrepare + PQsendPrepare PQsendPrepare @@ -4341,7 +4341,7 @@ int PQsendPrepare(PGconn *conn, const Oid *paramTypes); - This is an asynchronous version of PQprepare: it + This is an asynchronous version of PQprepare: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to determine whether the server successfully created the prepared @@ -4388,7 +4388,7 @@ int PQsendQueryPrepared(PGconn *conn, - PQsendDescribePrepared + PQsendDescribePrepared PQsendDescribePrepared @@ -4402,7 +4402,7 @@ int PQsendQueryPrepared(PGconn *conn, int PQsendDescribePrepared(PGconn *conn, const char *stmtName); - This is an asynchronous version of PQdescribePrepared: + This is an asynchronous version of PQdescribePrepared: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to obtain the results. 
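A small sketch of escaping binary data with PQescapeByteaConn (the non-deprecated variant) and embedding it in a textual INSERT; the table and column names are hypothetical:

#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

void insert_blob(PGconn *conn, const unsigned char *data, size_t size)
{
    size_t esc_len = 0;
    unsigned char *esc = PQescapeByteaConn(conn, data, size, &esc_len);

    if (esc == NULL)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return;
    }

    /* esc_len includes the terminating zero byte; leave room for the SQL. */
    size_t sql_len = esc_len + 64;
    char *sql = malloc(sql_len);
    if (sql != NULL)
    {
        snprintf(sql, sql_len,
                 "INSERT INTO blobs (payload) VALUES ('%s')", esc);
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s", PQresultErrorMessage(res));
        PQclear(res);
        free(sql);
    }
    PQfreemem(esc);
}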
The function's parameters are handled @@ -4415,7 +4415,7 @@ int PQsendDescribePrepared(PGconn *conn, const char *stmtName); - PQsendDescribePortal + PQsendDescribePortal PQsendDescribePortal @@ -4429,7 +4429,7 @@ int PQsendDescribePrepared(PGconn *conn, const char *stmtName); int PQsendDescribePortal(PGconn *conn, const char *portalName); - This is an asynchronous version of PQdescribePortal: + This is an asynchronous version of PQdescribePortal: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to obtain the results. The function's parameters are handled @@ -4472,7 +4472,7 @@ PGresult *PQgetResult(PGconn *conn); PQgetResult will just return a null pointer at once.) Each non-null result from PQgetResult should be processed using the - same PGresult accessor functions previously + same PGresult accessor functions previously described. Don't forget to free each result object with PQclear when done with it. Note that PQgetResult will block only if a command is @@ -4484,7 +4484,7 @@ PGresult *PQgetResult(PGconn *conn); Even when PQresultStatus indicates a fatal error, PQgetResult should be called until it - returns a null pointer, to allow libpq to + returns a null pointer, to allow libpq to process the error information completely. @@ -4589,7 +4589,7 @@ int PQisBusy(PGconn *conn); A typical application using these functions will have a main loop that - uses select() or poll() to wait for + uses select() or poll() to wait for all the conditions that it must respond to. One of the conditions will be input available from the server, which in terms of select() means readable data on the file @@ -4599,7 +4599,7 @@ int PQisBusy(PGconn *conn); call PQisBusy, followed by PQgetResult if PQisBusy returns false (0). It can also call PQnotifies - to detect NOTIFY messages (see NOTIFY messages (see ). @@ -4737,12 +4737,12 @@ int PQflush(PGconn *conn); - Ordinarily, libpq collects a SQL command's + Ordinarily, libpq collects a SQL command's entire result and returns it to the application as a single PGresult. This can be unworkable for commands that return a large number of rows. For such cases, applications can use PQsendQuery and PQgetResult in - single-row mode. In this mode, the result row(s) are + single-row mode. In this mode, the result row(s) are returned to the application one at a time, as they are received from the server. @@ -4807,7 +4807,7 @@ int PQsetSingleRowMode(PGconn *conn); While processing a query, the server may return some rows and then encounter an error, causing the query to be aborted. Ordinarily, - libpq discards any such rows and reports only the + libpq discards any such rows and reports only the error. But in single-row mode, those rows will have already been returned to the application. Hence, the application will see some PGRES_SINGLE_TUPLE PGresult @@ -4853,10 +4853,10 @@ PGcancel *PQgetCancel(PGconn *conn); PQgetCancel creates a - PGcancelPGcancel object - given a PGconn connection object. It will return - NULL if the given conn is NULL or an invalid - connection. The PGcancel object is an opaque + PGcancelPGcancel object + given a PGconn connection object. It will return + NULL if the given conn is NULL or an invalid + connection. The PGcancel object is an opaque structure that is not meant to be accessed directly by the application; it can only be passed to PQcancel or PQfreeCancel. 
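A hedged sketch of the asynchronous pattern described here, assuming POSIX select(): dispatch with PQsendQuery, wait on the socket, absorb input with PQconsumeInput, and drain results with PQgetResult until it returns NULL:

#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

int run_async(PGconn *conn, const char *sql)
{
    if (!PQsendQuery(conn, sql))
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return -1;
    }

    int sock = PQsocket(conn);

    for (;;)
    {
        /* Wait for readable data whenever libpq would otherwise block. */
        while (PQisBusy(conn))
        {
            fd_set rfds;

            FD_ZERO(&rfds);
            FD_SET(sock, &rfds);
            if (select(sock + 1, &rfds, NULL, NULL, NULL) < 0)
                return -1;
            if (!PQconsumeInput(conn))
            {
                fprintf(stderr, "%s", PQerrorMessage(conn));
                return -1;
            }
        }

        PGresult *res = PQgetResult(conn);
        if (res == NULL)
            break;               /* the command is completely done */

        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("%d row(s)\n", PQntuples(res));
        else if (PQresultStatus(res) == PGRES_FATAL_ERROR)
            fprintf(stderr, "%s", PQresultErrorMessage(res));
        PQclear(res);
    }
    return 0;
}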
@@ -4905,9 +4905,9 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize); The return value is 1 if the cancel request was successfully - dispatched and 0 if not. If not, errbuf is filled - with an explanatory error message. errbuf - must be a char array of size errbufsize (the + dispatched and 0 if not. If not, errbuf is filled + with an explanatory error message. errbuf + must be a char array of size errbufsize (the recommended size is 256 bytes). @@ -4922,11 +4922,11 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize); PQcancel can safely be invoked from a signal - handler, if the errbuf is a local variable in the - signal handler. The PGcancel object is read-only + handler, if the errbuf is a local variable in the + signal handler. The PGcancel object is read-only as far as PQcancel is concerned, so it can also be invoked from a thread that is separate from the one - manipulating the PGconn object. + manipulating the PGconn object. @@ -4953,12 +4953,12 @@ int PQrequestCancel(PGconn *conn); Requests that the server abandon processing of the current command. It operates directly on the - PGconn object, and in case of failure stores the - error message in the PGconn object (whence it can + PGconn object, and in case of failure stores the + error message in the PGconn object (whence it can be retrieved by PQerrorMessage). Although the functionality is the same, this approach creates hazards for multiple-thread programs and signal handlers, since it is possible - that overwriting the PGconn's error message will + that overwriting the PGconn's error message will mess up the operation currently in progress on the connection. @@ -4991,7 +4991,7 @@ int PQrequestCancel(PGconn *conn); - The function PQfnPQfn + The function PQfnPQfn requests execution of a server function via the fast-path interface: PGresult *PQfn(PGconn *conn, @@ -5016,19 +5016,19 @@ typedef struct - The fnid argument is the OID of the function to be - executed. args and nargs define the + The fnid argument is the OID of the function to be + executed. args and nargs define the parameters to be passed to the function; they must match the declared - function argument list. When the isint field of a - parameter structure is true, the u.integer value is sent + function argument list. When the isint field of a + parameter structure is true, the u.integer value is sent to the server as an integer of the indicated length (this must be - 2 or 4 bytes); proper byte-swapping occurs. When isint - is false, the indicated number of bytes at *u.ptr are + 2 or 4 bytes); proper byte-swapping occurs. When isint + is false, the indicated number of bytes at *u.ptr are sent with no processing; the data must be in the format expected by the server for binary transmission of the function's argument data - type. (The declaration of u.ptr as being of - type int * is historical; it would be better to consider - it void *.) + type. (The declaration of u.ptr as being of + type int * is historical; it would be better to consider + it void *.) result_buf points to the buffer in which to place the function's return value. The caller must have allocated sufficient space to store the return value. (There is no check!) The actual result @@ -5036,14 +5036,14 @@ typedef struct result_len. If a 2- or 4-byte integer result is expected, set result_is_int to 1, otherwise set it to 0. 
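A minimal sketch of issuing a cancel request with PQgetCancel, PQcancel, and PQfreeCancel, using the recommended 256-byte error buffer:

#include <stdio.h>
#include "libpq-fe.h"

void cancel_current_command(PGconn *conn)
{
    char errbuf[256];
    PGcancel *cancel = PQgetCancel(conn);

    if (cancel == NULL)
        return;                          /* NULL or invalid connection */

    if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
        fprintf(stderr, "could not send cancel request: %s\n", errbuf);

    PQfreeCancel(cancel);
}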
Setting result_is_int to 1 causes - libpq to byte-swap the value if necessary, so that it + libpq to byte-swap the value if necessary, so that it is delivered as a proper int value for the client machine; - note that a 4-byte integer is delivered into *result_buf + note that a 4-byte integer is delivered into *result_buf for either allowed result size. - When result_is_int is 0, the binary-format byte string + When result_is_int is 0, the binary-format byte string sent by the server is returned unmodified. (In this case it's better to consider result_buf as being of - type void *.) + type void *.) @@ -5077,7 +5077,7 @@ typedef struct can stop listening with the UNLISTEN command). All sessions listening on a particular channel will be notified asynchronously when a NOTIFY command with that - channel name is executed by any session. A payload string can + channel name is executed by any session. A payload string can be passed to communicate additional data to the listeners. @@ -5087,14 +5087,14 @@ typedef struct and NOTIFY commands as ordinary SQL commands. The arrival of NOTIFY messages can subsequently be detected by calling - PQnotifies.PQnotifies + PQnotifies.PQnotifies The function PQnotifies returns the next notification from a list of unhandled notification messages received from the server. It returns a null pointer if there are no pending notifications. Once a - notification is returned from PQnotifies, it is considered + notification is returned from PQnotifies, it is considered handled and will be removed from the list of notifications. @@ -5128,14 +5128,14 @@ typedef struct pgNotify server; it just returns messages previously absorbed by another libpq function. In prior releases of libpq, the only way to ensure timely receipt - of NOTIFY messages was to constantly submit commands, even + of NOTIFY messages was to constantly submit commands, even empty ones, and then check PQnotifies after each PQexec. While this still works, it is deprecated as a waste of processing power. - A better way to check for NOTIFY messages when you have no + A better way to check for NOTIFY messages when you have no useful commands to execute is to call PQconsumeInput, then check PQnotifies. You can use @@ -5173,12 +5173,12 @@ typedef struct pgNotify The overall process is that the application first issues the SQL COPY command via PQexec or one of the equivalent functions. The response to this (if there is no - error in the command) will be a PGresult object bearing + error in the command) will be a PGresult object bearing a status code of PGRES_COPY_OUT or PGRES_COPY_IN (depending on the specified copy direction). The application should then use the functions of this section to receive or transmit data rows. When the data transfer is - complete, another PGresult object is returned to indicate + complete, another PGresult object is returned to indicate success or failure of the transfer. Its status will be PGRES_COMMAND_OK for success or PGRES_FATAL_ERROR if some problem was encountered. @@ -5192,8 +5192,8 @@ typedef struct pgNotify If a COPY command is issued via PQexec in a string that could contain additional commands, the application must continue fetching results via - PQgetResult after completing the COPY - sequence. Only when PQgetResult returns + PQgetResult after completing the COPY + sequence. Only when PQgetResult returns NULL is it certain that the PQexec command string is done and it is safe to issue more commands. 
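The notification-checking pattern recommended earlier in this section (wait for the socket, call PQconsumeInput, then drain PQnotifies) might look like this minimal sketch; the channel name my_channel is only a placeholder and error handling is abbreviated:

#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

static void
listen_for_notifies(PGconn *conn)
{
    PGresult   *res = PQexec(conn, "LISTEN my_channel");

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "LISTEN failed: %s", PQerrorMessage(conn));
    PQclear(res);

    for (;;)
    {
        int         sock = PQsocket(conn);
        fd_set      rd;
        PGnotify   *note;

        FD_ZERO(&rd);
        FD_SET(sock, &rd);
        if (select(sock + 1, &rd, NULL, NULL, NULL) < 0)
            break;
        if (!PQconsumeInput(conn))
            break;

        /* Drain all pending notifications; each one must be freed. */
        while ((note = PQnotifies(conn)) != NULL)
        {
            printf("NOTIFY from backend %d on \"%s\": %s\n",
                   note->be_pid, note->relname,
                   note->extra ? note->extra : "");
            PQfreemem(note);
        }
    }
}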
@@ -5206,7 +5206,7 @@ typedef struct pgNotify - A PGresult object bearing one of these status values + A PGresult object bearing one of these status values carries some additional data about the COPY operation that is starting. This additional data is available using functions that are also used in connection with query results: @@ -5262,7 +5262,7 @@ typedef struct pgNotify each column of the copy operation. The per-column format codes will always be zero when the overall copy format is textual, but the binary format can support both text and binary columns. - (However, as of the current implementation of COPY, + (However, as of the current implementation of COPY, only binary columns appear in a binary copy; so the per-column formats always match the overall format at present.) @@ -5283,8 +5283,8 @@ typedef struct pgNotify These functions are used to send data during COPY FROM - STDIN. They will fail if called when the connection is not in - COPY_IN state. + STDIN. They will fail if called when the connection is not in + COPY_IN state. @@ -5298,7 +5298,7 @@ typedef struct pgNotify - Sends data to the server during COPY_IN state. + Sends data to the server during COPY_IN state. int PQputCopyData(PGconn *conn, const char *buffer, @@ -5308,7 +5308,7 @@ int PQputCopyData(PGconn *conn, Transmits the COPY data in the specified - buffer, of length nbytes, to the server. + buffer, of length nbytes, to the server. The result is 1 if the data was queued, zero if it was not queued because of full buffers (this will only happen in nonblocking mode), or -1 if an error occurred. @@ -5322,7 +5322,7 @@ int PQputCopyData(PGconn *conn, into buffer loads of any convenient size. Buffer-load boundaries have no semantic significance when sending. The contents of the data stream must match the data format expected by the - COPY command; see for details. + COPY command; see for details. @@ -5337,7 +5337,7 @@ int PQputCopyData(PGconn *conn, - Sends end-of-data indication to the server during COPY_IN state. + Sends end-of-data indication to the server during COPY_IN state. int PQputCopyEnd(PGconn *conn, const char *errormsg); @@ -5345,14 +5345,14 @@ int PQputCopyEnd(PGconn *conn, - Ends the COPY_IN operation successfully if - errormsg is NULL. If - errormsg is not NULL then the - COPY is forced to fail, with the string pointed to by - errormsg used as the error message. (One should not + Ends the COPY_IN operation successfully if + errormsg is NULL. If + errormsg is not NULL then the + COPY is forced to fail, with the string pointed to by + errormsg used as the error message. (One should not assume that this exact error message will come back from the server, however, as the server might have already failed the - COPY for its own reasons. Also note that the option + COPY for its own reasons. Also note that the option to force failure does not work when using pre-3.0-protocol connections.) @@ -5362,19 +5362,19 @@ int PQputCopyEnd(PGconn *conn, nonblocking mode, this may only indicate that the termination message was successfully queued. (In nonblocking mode, to be certain that the data has been sent, you should next wait for - write-ready and call PQflush, repeating until it + write-ready and call PQflush, repeating until it returns zero.) Zero indicates that the function could not queue the termination message because of full buffers; this will only happen in nonblocking mode. (In this case, wait for - write-ready and try the PQputCopyEnd call + write-ready and try the PQputCopyEnd call again.) 
If a hard error occurs, -1 is returned; you can use PQerrorMessage to retrieve details. - After successfully calling PQputCopyEnd, call - PQgetResult to obtain the final result status of the - COPY command. One can wait for this result to be + After successfully calling PQputCopyEnd, call + PQgetResult to obtain the final result status of the + COPY command. One can wait for this result to be available in the usual way. Then return to normal operation. @@ -5388,8 +5388,8 @@ int PQputCopyEnd(PGconn *conn, These functions are used to receive data during COPY TO - STDOUT. They will fail if called when the connection is not in - COPY_OUT state. + STDOUT. They will fail if called when the connection is not in + COPY_OUT state. @@ -5403,7 +5403,7 @@ int PQputCopyEnd(PGconn *conn, - Receives data from the server during COPY_OUT state. + Receives data from the server during COPY_OUT state. int PQgetCopyData(PGconn *conn, char **buffer, @@ -5416,11 +5416,11 @@ int PQgetCopyData(PGconn *conn, COPY. Data is always returned one data row at a time; if only a partial row is available, it is not returned. Successful return of a data row involves allocating a chunk of - memory to hold the data. The buffer parameter must - be non-NULL. *buffer is set to + memory to hold the data. The buffer parameter must + be non-NULL. *buffer is set to point to the allocated memory, or to NULL in cases where no buffer is returned. A non-NULL result - buffer should be freed using PQfreemem when no longer + buffer should be freed using PQfreemem when no longer needed. @@ -5431,26 +5431,26 @@ int PQgetCopyData(PGconn *conn, probably only useful for textual COPY. A result of zero indicates that the COPY is still in progress, but no row is yet available (this is only possible when - async is true). A result of -1 indicates that the + async is true). A result of -1 indicates that the COPY is done. A result of -2 indicates that an - error occurred (consult PQerrorMessage for the reason). + error occurred (consult PQerrorMessage for the reason). - When async is true (not zero), - PQgetCopyData will not block waiting for input; it + When async is true (not zero), + PQgetCopyData will not block waiting for input; it will return zero if the COPY is still in progress but no complete row is available. (In this case wait for read-ready - and then call PQconsumeInput before calling - PQgetCopyData again.) When async is - false (zero), PQgetCopyData will block until data is + and then call PQconsumeInput before calling + PQgetCopyData again.) When async is + false (zero), PQgetCopyData will block until data is available or the operation completes. - After PQgetCopyData returns -1, call - PQgetResult to obtain the final result status of the - COPY command. One can wait for this result to be + After PQgetCopyData returns -1, call + PQgetResult to obtain the final result status of the + COPY command. One can wait for this result to be available in the usual way. Then return to normal operation. @@ -5463,7 +5463,7 @@ int PQgetCopyData(PGconn *conn, Obsolete Functions for <command>COPY</command> - These functions represent older methods of handling COPY. + These functions represent older methods of handling COPY. Although they still work, they are deprecated due to poor error handling, inconvenient methods of detecting end-of-data, and lack of support for binary or nonblocking transfers. 
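Before turning to the obsolete interfaces, here is a minimal sketch of the modern receive path described above: start COPY TO STDOUT, pull rows with PQgetCopyData, then collect the final status with PQgetResult. The table name mytab is only an illustration and error handling is abbreviated.

#include <stdio.h>
#include <libpq-fe.h>

static void
copy_table_out(PGconn *conn)
{
    PGresult   *res = PQexec(conn, "COPY mytab TO STDOUT");
    char       *row;
    int         len;

    if (PQresultStatus(res) != PGRES_COPY_OUT)
        fprintf(stderr, "COPY did not start: %s", PQerrorMessage(conn));
    PQclear(res);

    /* Blocking mode (async = 0): each call returns one complete data row. */
    while ((len = PQgetCopyData(conn, &row, 0)) > 0)
    {
        fwrite(row, 1, len, stdout);
        PQfreemem(row);
    }
    if (len == -2)
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));

    /* len == -1 means end of data; now fetch the COPY command's status. */
    res = PQgetResult(conn);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "COPY ended badly: %s", PQerrorMessage(conn));
    PQclear(res);

    /* Keep calling PQgetResult until it returns NULL, as noted earlier. */
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);
}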
@@ -5481,7 +5481,7 @@ int PQgetCopyData(PGconn *conn, Reads a newline-terminated line of characters (transmitted - by the server) into a buffer string of size length. + by the server) into a buffer string of size length. int PQgetline(PGconn *conn, char *buffer, @@ -5490,7 +5490,7 @@ int PQgetline(PGconn *conn, - This function copies up to length-1 characters into + This function copies up to length-1 characters into the buffer and converts the terminating newline into a zero byte. PQgetline returns EOF at the end of input, 0 if the entire line has been read, and 1 if the @@ -5501,7 +5501,7 @@ int PQgetline(PGconn *conn, of the two characters \., which indicates that the server has finished sending the results of the COPY command. If the application might receive - lines that are more than length-1 characters long, + lines that are more than length-1 characters long, care is needed to be sure it recognizes the \. line correctly (and does not, for example, mistake the end of a long data line for a terminator line). @@ -5545,7 +5545,7 @@ int PQgetlineAsync(PGconn *conn, On each call, PQgetlineAsync will return data if a - complete data row is available in libpq's input buffer. + complete data row is available in libpq's input buffer. Otherwise, no data is returned until the rest of the row arrives. The function returns -1 if the end-of-copy-data marker has been recognized, or 0 if no data is available, or a positive number giving the number of @@ -5559,7 +5559,7 @@ int PQgetlineAsync(PGconn *conn, the caller is too small to hold a row sent by the server, then a partial data row will be returned. With textual data this can be detected by testing whether the last returned byte is \n or not. (In a binary - COPY, actual parsing of the COPY data format will be needed to make the + COPY, actual parsing of the COPY data format will be needed to make the equivalent determination.) The returned string is not null-terminated. (If you want to add a terminating null, be sure to pass a bufsize one smaller @@ -5600,7 +5600,7 @@ int PQputline(PGconn *conn, Before PostgreSQL protocol 3.0, it was necessary for the application to explicitly send the two characters \. as a final line to indicate to the server that it had - finished sending COPY data. While this still works, it is deprecated and the + finished sending COPY data. While this still works, it is deprecated and the special meaning of \. can be expected to be removed in a future release. It is sufficient to call PQendcopy after having sent the actual data. @@ -5696,7 +5696,7 @@ int PQendcopy(PGconn *conn); Control Functions - These functions control miscellaneous details of libpq's + These functions control miscellaneous details of libpq's behavior. @@ -5747,7 +5747,7 @@ int PQsetClientEncoding(PGconn *conn, const char *encoding is the encoding you want to use. If the function successfully sets the encoding, it returns 0, otherwise -1. The current encoding for this connection can be - determined by using PQclientEncoding. + determined by using PQclientEncoding. @@ -5763,7 +5763,7 @@ int PQsetClientEncoding(PGconn *conn, const char * Determines the verbosity of messages returned by - PQerrorMessage and PQresultErrorMessage. + PQerrorMessage and PQresultErrorMessage. typedef enum { @@ -5775,15 +5775,15 @@ typedef enum PGVerbosity PQsetErrorVerbosity(PGconn *conn, PGVerbosity verbosity); - PQsetErrorVerbosity sets the verbosity mode, returning - the connection's previous setting. 
In TERSE mode, + PQsetErrorVerbosity sets the verbosity mode, returning + the connection's previous setting. In TERSE mode, returned messages include severity, primary text, and position only; this will normally fit on a single line. The default mode produces messages that include the above plus any detail, hint, or context - fields (these might span multiple lines). The VERBOSE + fields (these might span multiple lines). The VERBOSE mode includes all available fields. Changing the verbosity does not affect the messages available from already-existing - PGresult objects, only subsequently-created ones. + PGresult objects, only subsequently-created ones. (But see PQresultVerboseErrorMessage if you want to print a previous error with a different verbosity.) @@ -5800,9 +5800,9 @@ PGVerbosity PQsetErrorVerbosity(PGconn *conn, PGVerbosity verbosity); - Determines the handling of CONTEXT fields in messages - returned by PQerrorMessage - and PQresultErrorMessage. + Determines the handling of CONTEXT fields in messages + returned by PQerrorMessage + and PQresultErrorMessage. typedef enum { @@ -5814,17 +5814,17 @@ typedef enum PGContextVisibility PQsetErrorContextVisibility(PGconn *conn, PGContextVisibility show_context); - PQsetErrorContextVisibility sets the context display mode, + PQsetErrorContextVisibility sets the context display mode, returning the connection's previous setting. This mode controls whether the CONTEXT field is included in messages - (unless the verbosity setting is TERSE, in which - case CONTEXT is never shown). The NEVER mode - never includes CONTEXT, while ALWAYS always - includes it if available. In ERRORS mode (the - default), CONTEXT fields are included only for error + (unless the verbosity setting is TERSE, in which + case CONTEXT is never shown). The NEVER mode + never includes CONTEXT, while ALWAYS always + includes it if available. In ERRORS mode (the + default), CONTEXT fields are included only for error messages, not for notices and warnings. Changing this mode does not affect the messages available from - already-existing PGresult objects, only + already-existing PGresult objects, only subsequently-created ones. (But see PQresultVerboseErrorMessage if you want to print a previous error with a different display mode.) @@ -5850,9 +5850,9 @@ void PQtrace(PGconn *conn, FILE *stream); - On Windows, if the libpq library and an application are + On Windows, if the libpq library and an application are compiled with different flags, this function call will crash the - application because the internal representation of the FILE + application because the internal representation of the FILE pointers differ. Specifically, multithreaded/single-threaded, release/debug, and static/dynamic flags should be the same for the library and all applications using that library. @@ -5901,25 +5901,25 @@ void PQuntrace(PGconn *conn); - Frees memory allocated by libpq. + Frees memory allocated by libpq. void PQfreemem(void *ptr); - Frees memory allocated by libpq, particularly + Frees memory allocated by libpq, particularly PQescapeByteaConn, PQescapeBytea, PQunescapeBytea, and PQnotifies. It is particularly important that this function, rather than - free(), be used on Microsoft Windows. This is because + free(), be used on Microsoft Windows. This is because allocating memory in a DLL and releasing it in the application works only if multithreaded/single-threaded, release/debug, and static/dynamic flags are the same for the DLL and the application. 
On non-Microsoft Windows platforms, this function is the same as the standard library - function free(). + function free(). @@ -5935,7 +5935,7 @@ void PQfreemem(void *ptr); Frees the data structures allocated by - PQconndefaults or PQconninfoParse. + PQconndefaults or PQconninfoParse. void PQconninfoFree(PQconninfoOption *connOptions); @@ -5958,44 +5958,44 @@ void PQconninfoFree(PQconninfoOption *connOptions); - Prepares the encrypted form of a PostgreSQL password. + Prepares the encrypted form of a PostgreSQL password. char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm); This function is intended to be used by client applications that wish to send commands like ALTER USER joe PASSWORD - 'pwd'. It is good practice not to send the original cleartext + 'pwd'. It is good practice not to send the original cleartext password in such a command, because it might be exposed in command logs, activity displays, and so on. Instead, use this function to convert the password to encrypted form before it is sent. - The passwd and user arguments + The passwd and user arguments are the cleartext password, and the SQL name of the user it is for. - algorithm specifies the encryption algorithm + algorithm specifies the encryption algorithm to use to encrypt the password. Currently supported algorithms are - md5 and scram-sha-256 (on and - off are also accepted as aliases for md5, for + md5 and scram-sha-256 (on and + off are also accepted as aliases for md5, for compatibility with older server versions). Note that support for - scram-sha-256 was introduced in PostgreSQL + scram-sha-256 was introduced in PostgreSQL version 10, and will not work correctly with older server versions. If - algorithm is NULL, this function will query + algorithm is NULL, this function will query the server for the current value of the setting. That can block, and will fail if the current transaction is aborted, or if the connection is busy executing another query. If you wish to use the default algorithm for the server but want to avoid blocking, query - password_encryption yourself before calling - PQencryptPasswordConn, and pass that value as the - algorithm. + password_encryption yourself before calling + PQencryptPasswordConn, and pass that value as the + algorithm. - The return value is a string allocated by malloc. + The return value is a string allocated by malloc. The caller can assume the string doesn't contain any special characters - that would require escaping. Use PQfreemem to free the - result when done with it. On error, returns NULL, and + that would require escaping. Use PQfreemem to free the + result when done with it. On error, returns NULL, and a suitable message is stored in the connection object. @@ -6012,14 +6012,14 @@ char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, - Prepares the md5-encrypted form of a PostgreSQL password. + Prepares the md5-encrypted form of a PostgreSQL password. char *PQencryptPassword(const char *passwd, const char *user); - PQencryptPassword is an older, deprecated version of - PQencryptPasswodConn. The difference is that - PQencryptPassword does not - require a connection object, and md5 is always used as the + PQencryptPassword is an older, deprecated version of + PQencryptPasswodConn. The difference is that + PQencryptPassword does not + require a connection object, and md5 is always used as the encryption algorithm. 
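Putting PQencryptPasswordConn to use might look like the following minimal sketch. The role name and new password are caller-supplied placeholders, NULL is passed for the algorithm so that the server's password_encryption setting is used, and error handling is abbreviated:

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

static void
set_role_password(PGconn *conn, const char *role, const char *newpass)
{
    char       *encrypted = PQencryptPasswordConn(conn, newpass, role, NULL);
    char       *ident;
    char       *literal;
    char        command[1024];
    PGresult   *res;

    if (encrypted == NULL)
    {
        fprintf(stderr, "encryption failed: %s", PQerrorMessage(conn));
        return;
    }

    /* Quote the role name and the encrypted password before building SQL. */
    ident = PQescapeIdentifier(conn, role, strlen(role));
    literal = PQescapeLiteral(conn, encrypted, strlen(encrypted));
    snprintf(command, sizeof(command), "ALTER USER %s PASSWORD %s",
             ident, literal);

    res = PQexec(conn, command);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "ALTER USER failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfreemem(literal);
    PQfreemem(ident);
    PQfreemem(encrypted);
}

This way the cleartext password never appears in the command text that the server might log.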
@@ -6042,18 +6042,18 @@ PGresult *PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status); - This is libpq's internal function to allocate and + This is libpq's internal function to allocate and initialize an empty PGresult object. This - function returns NULL if memory could not be allocated. It is + function returns NULL if memory could not be allocated. It is exported because some applications find it useful to generate result objects (particularly objects with error status) themselves. If - conn is not null and status + conn is not null and status indicates an error, the current error message of the specified connection is copied into the PGresult. Also, if conn is not null, any event procedures registered in the connection are copied into the PGresult. (They do not get - PGEVT_RESULTCREATE calls, but see + PGEVT_RESULTCREATE calls, but see PQfireResultCreateEvents.) Note that PQclear should eventually be called on the object, just as with a PGresult @@ -6082,14 +6082,14 @@ int PQfireResultCreateEvents(PGconn *conn, PGresult *res); - The conn argument is passed through to event procedures - but not used directly. It can be NULL if the event + The conn argument is passed through to event procedures + but not used directly. It can be NULL if the event procedures won't use it. Event procedures that have already received a - PGEVT_RESULTCREATE or PGEVT_RESULTCOPY event + PGEVT_RESULTCREATE or PGEVT_RESULTCOPY event for this object are not fired again. @@ -6115,7 +6115,7 @@ int PQfireResultCreateEvents(PGconn *conn, PGresult *res); Makes a copy of a PGresult object. The copy is not linked to the source result in any way and PQclear must be called when the copy is no longer - needed. If the function fails, NULL is returned. + needed. If the function fails, NULL is returned. PGresult *PQcopyResult(const PGresult *src, int flags); @@ -6159,7 +6159,7 @@ int PQsetResultAttrs(PGresult *res, int numAttributes, PGresAttDesc *attDescs); The provided attDescs are copied into the result. - If the attDescs pointer is NULL or + If the attDescs pointer is NULL or numAttributes is less than one, the request is ignored and the function succeeds. If res already contains attributes, the function will fail. If the function @@ -6193,7 +6193,7 @@ int PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len); field of any existing tuple can be modified in any order. If a value at field_num already exists, it will be overwritten. If len is -1 or - value is NULL, the field value + value is NULL, the field value will be set to an SQL null value. The value is copied into the result's private storage, thus is no longer needed after the function @@ -6222,9 +6222,9 @@ void *PQresultAlloc(PGresult *res, size_t nBytes); Any memory allocated with this function will be freed when res is cleared. If the function fails, - the return value is NULL. The result is + the return value is NULL. The result is guaranteed to be adequately aligned for any type of data, - just as for malloc. + just as for malloc. @@ -6240,7 +6240,7 @@ void *PQresultAlloc(PGresult *res, size_t nBytes); - Return the version of libpq that is being used. + Return the version of libpq that is being used. int PQlibVersion(void); @@ -6251,7 +6251,7 @@ int PQlibVersion(void); run time, whether specific functionality is available in the currently loaded version of libpq. The function can be used, for example, to determine which connection options are available in - PQconnectdb. + PQconnectdb. 
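A typical run-time check might look like this minimal sketch; the 9.6 cutoff is only an illustration of the version-number encoding:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    int         v = PQlibVersion();

    printf("running against libpq %d\n", v);
    /* 90600 encodes 9.6.0; 100000 and up encode version 10 and later. */
    if (v < 90600)
        fprintf(stderr, "this libpq predates 9.6; "
                "some connection options may be unavailable\n");
    return 0;
}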
@@ -6262,17 +6262,17 @@ int PQlibVersion(void); - Prior to major version 10, PostgreSQL used + Prior to major version 10, PostgreSQL used three-part version numbers in which the first two parts together represented the major version. For those - versions, PQlibVersion uses two digits for each + versions, PQlibVersion uses two digits for each part; for example version 9.1.5 will be returned as 90105, and version 9.2.0 will be returned as 90200. Therefore, for purposes of determining feature compatibility, - applications should divide the result of PQlibVersion + applications should divide the result of PQlibVersion by 100 not 10000 to determine a logical major version number. In all release series, only the last two digits differ between minor releases (bug-fix releases). @@ -6280,7 +6280,7 @@ int PQlibVersion(void); - This function appeared in PostgreSQL version 9.1, so + This function appeared in PostgreSQL version 9.1, so it cannot be used to detect required functionality in earlier versions, since calling it will create a link dependency on version 9.1 or later. @@ -6322,12 +6322,12 @@ int PQlibVersion(void); The function PQsetNoticeReceiver - notice receiver - PQsetNoticeReceiver sets or + notice receiver + PQsetNoticeReceiver sets or examines the current notice receiver for a connection object. Similarly, PQsetNoticeProcessor - notice processor - PQsetNoticeProcessor sets or + notice processor + PQsetNoticeProcessor sets or examines the current notice processor. @@ -6358,9 +6358,9 @@ PQsetNoticeProcessor(PGconn *conn, receiver function is called. It is passed the message in the form of a PGRES_NONFATAL_ERROR PGresult. (This allows the receiver to extract - individual fields using PQresultErrorField, or obtain a - complete preformatted message using PQresultErrorMessage - or PQresultVerboseErrorMessage.) The same + individual fields using PQresultErrorField, or obtain a + complete preformatted message using PQresultErrorMessage + or PQresultVerboseErrorMessage.) The same void pointer passed to PQsetNoticeReceiver is also passed. (This pointer can be used to access application-specific state if needed.) @@ -6368,7 +6368,7 @@ PQsetNoticeProcessor(PGconn *conn, The default notice receiver simply extracts the message (using - PQresultErrorMessage) and passes it to the notice + PQresultErrorMessage) and passes it to the notice processor. @@ -6394,10 +6394,10 @@ defaultNoticeProcessor(void *arg, const char *message) Once you have set a notice receiver or processor, you should expect that that function could be called as long as either the - PGconn object or PGresult objects made - from it exist. At creation of a PGresult, the - PGconn's current notice handling pointers are copied - into the PGresult for possible use by functions like + PGconn object or PGresult objects made + from it exist. At creation of a PGresult, the + PGconn's current notice handling pointers are copied + into the PGresult for possible use by functions like PQgetvalue. @@ -6419,21 +6419,21 @@ defaultNoticeProcessor(void *arg, const char *message) Each registered event handler is associated with two pieces of data, - known to libpq only as opaque void * - pointers. There is a passthrough pointer that is provided + known to libpq only as opaque void * + pointers. There is a passthrough pointer that is provided by the application when the event handler is registered with a - PGconn. The passthrough pointer never changes for the - life of the PGconn and all PGresults + PGconn. 
The passthrough pointer never changes for the + life of the PGconn and all PGresults generated from it; so if used, it must point to long-lived data. - In addition there is an instance data pointer, which starts - out NULL in every PGconn and PGresult. + In addition there is an instance data pointer, which starts + out NULL in every PGconn and PGresult. This pointer can be manipulated using the PQinstanceData, PQsetInstanceData, PQresultInstanceData and PQsetResultInstanceData functions. Note that - unlike the passthrough pointer, instance data of a PGconn - is not automatically inherited by PGresults created from + unlike the passthrough pointer, instance data of a PGconn + is not automatically inherited by PGresults created from it. libpq does not know what passthrough and instance data pointers point to (if anything) and will never attempt to free them — that is the responsibility of the event handler. @@ -6443,7 +6443,7 @@ defaultNoticeProcessor(void *arg, const char *message) Event Types - The enum PGEventId names the types of events handled by + The enum PGEventId names the types of events handled by the event system. All its values have names beginning with PGEVT. For each event type, there is a corresponding event info structure that carries the parameters passed to the event @@ -6507,8 +6507,8 @@ typedef struct PGconn was just reset, all event data remains unchanged. This event should be used to reset/reload/requery any associated instanceData. Note that even if the - event procedure fails to process PGEVT_CONNRESET, it will - still receive a PGEVT_CONNDESTROY event when the connection + event procedure fails to process PGEVT_CONNRESET, it will + still receive a PGEVT_CONNDESTROY event when the connection is closed. @@ -6568,7 +6568,7 @@ typedef struct instanceData that needs to be associated with the result. If the event procedure fails, the result will be cleared and the failure will be propagated. The event procedure must not try to - PQclear the result object for itself. When returning a + PQclear the result object for itself. When returning a failure code, all cleanup must be performed as no PGEVT_RESULTDESTROY event will be sent. @@ -6675,7 +6675,7 @@ int eventproc(PGEventId evtId, void *evtInfo, void *passThrough) A particular event procedure can be registered only once in any - PGconn. This is because the address of the procedure + PGconn. This is because the address of the procedure is used as a lookup key to identify the associated instance data. @@ -6684,9 +6684,9 @@ int eventproc(PGEventId evtId, void *evtInfo, void *passThrough) On Windows, functions can have two different addresses: one visible from outside a DLL and another visible from inside the DLL. One should be careful that only one of these addresses is used with - libpq's event-procedure functions, else confusion will + libpq's event-procedure functions, else confusion will result. The simplest rule for writing code that will work is to - ensure that event procedures are declared static. If the + ensure that event procedures are declared static. If the procedure's address must be available outside its own source file, expose a separate function to return the address. @@ -6720,7 +6720,7 @@ int PQregisterEventProc(PGconn *conn, PGEventProc proc, An event procedure must be registered once on each - PGconn you want to receive events about. There is no + PGconn you want to receive events about. There is no limit, other than memory, on the number of event procedures that can be registered with a connection. 
The function returns a non-zero value if it succeeds and zero if it fails. @@ -6731,11 +6731,11 @@ int PQregisterEventProc(PGconn *conn, PGEventProc proc, event is fired. Its memory address is also used to lookup instanceData. The name argument is used to refer to the event procedure in error messages. - This value cannot be NULL or a zero-length string. The name string is - copied into the PGconn, so what is passed need not be + This value cannot be NULL or a zero-length string. The name string is + copied into the PGconn, so what is passed need not be long-lived. The passThrough pointer is passed to the proc whenever an event occurs. This - argument can be NULL. + argument can be NULL. @@ -6749,11 +6749,11 @@ int PQregisterEventProc(PGconn *conn, PGEventProc proc, - Sets the connection conn's instanceData - for procedure proc to data. This + Sets the connection conn's instanceData + for procedure proc to data. This returns non-zero for success and zero for failure. (Failure is - only possible if proc has not been properly - registered in conn.) + only possible if proc has not been properly + registered in conn.) int PQsetInstanceData(PGconn *conn, PGEventProc proc, void *data); @@ -6772,8 +6772,8 @@ int PQsetInstanceData(PGconn *conn, PGEventProc proc, void *data); Returns the - connection conn's instanceData - associated with procedure proc, + connection conn's instanceData + associated with procedure proc, or NULL if there is none. @@ -6792,10 +6792,10 @@ void *PQinstanceData(const PGconn *conn, PGEventProc proc); - Sets the result's instanceData - for proc to data. This returns + Sets the result's instanceData + for proc to data. This returns non-zero for success and zero for failure. (Failure is only - possible if proc has not been properly registered + possible if proc has not been properly registered in the result.) @@ -6814,7 +6814,7 @@ int PQresultSetInstanceData(PGresult *res, PGEventProc proc, void *data); - Returns the result's instanceData associated with proc, or NULL + Returns the result's instanceData associated with proc, or NULL if there is none. @@ -6992,8 +6992,8 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) The following environment variables can be used to select default connection parameter values, which will be used by - PQconnectdb, PQsetdbLogin and - PQsetdb if no value is directly specified by the calling + PQconnectdb, PQsetdbLogin and + PQsetdb if no value is directly specified by the calling code. These are useful to avoid hard-coding database connection information into simple client applications, for example. @@ -7060,7 +7060,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via - ps; instead consider using a password file + ps; instead consider using a password file (see ). @@ -7092,7 +7092,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSERVICEFILE specifies the name of the per-user connection service file. If not set, it defaults - to ~/.pg_service.conf + to ~/.pg_service.conf (see ). @@ -7309,7 +7309,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSYSCONFDIR PGSYSCONFDIR sets the directory containing the - pg_service.conf file and in a future version + pg_service.conf file and in a future version possibly other system-wide configuration files. 
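For example, a client can rely entirely on these environment variables by passing an empty connection string; this minimal sketch assumes PGHOST, PGDATABASE, and related variables are set in the environment:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* All connection parameters come from the environment or built-in defaults. */
    PGconn     *conn = PQconnectdb("");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    printf("connected to database \"%s\"\n", PQdb(conn));
    PQfinish(conn);
    return 0;
}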
@@ -7320,7 +7320,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGLOCALEDIR PGLOCALEDIR sets the directory containing the - locale files for message localization. + locale files for message localization. @@ -7344,8 +7344,8 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) contain passwords to be used if the connection requires a password (and no password has been specified otherwise). On Microsoft Windows the file is named - %APPDATA%\postgresql\pgpass.conf (where - %APPDATA% refers to the Application Data subdirectory in + %APPDATA%\postgresql\pgpass.conf (where + %APPDATA% refers to the Application Data subdirectory in the user's profile). Alternatively, a password file can be specified using the connection parameter @@ -7358,19 +7358,19 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) hostname:port:database:username:password (You can add a reminder comment to the file by copying the line above and - preceding it with #.) + preceding it with #.) Each of the first four fields can be a literal value, or *, which matches anything. The password field from the first line that matches the current connection parameters will be used. (Therefore, put more-specific entries first when you are using wildcards.) If an entry needs to contain : or \, escape this character with \. - A host name of localhost matches both TCP (host name - localhost) and Unix domain socket (pghost empty + A host name of localhost matches both TCP (host name + localhost) and Unix domain socket (pghost empty or the default socket directory) connections coming from the local - machine. In a standby server, a database name of replication + machine. In a standby server, a database name of replication matches streaming replication connections made to the master server. - The database field is of limited usefulness because + The database field is of limited usefulness because users have the same password for all databases in the same cluster. @@ -7526,17 +7526,17 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - PostgreSQL has native support for using SSL + PostgreSQL has native support for using SSL connections to encrypt client/server communications for increased security. See for details about the server-side - SSL functionality. + SSL functionality. libpq reads the system-wide OpenSSL configuration file. By default, this file is named openssl.cnf and is located in the - directory reported by openssl version -d. This default + directory reported by openssl version -d. This default can be overridden by setting environment variable OPENSSL_CONF to the name of the desired configuration file. @@ -7546,43 +7546,43 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) Client Verification of Server Certificates - By default, PostgreSQL will not perform any verification of + By default, PostgreSQL will not perform any verification of the server certificate. This means that it is possible to spoof the server identity (for example by modifying a DNS record or by taking over the server IP address) without the client knowing. In order to prevent spoofing, - SSL certificate verification must be used. + SSL certificate verification must be used. - If the parameter sslmode is set to verify-ca, + If the parameter sslmode is set to verify-ca, libpq will verify that the server is trustworthy by checking the certificate chain up to a trusted certificate authority - (CA). 
If sslmode is set to verify-full, - libpq will also verify that the server host name matches its + (CA). If sslmode is set to verify-full, + libpq will also verify that the server host name matches its certificate. The SSL connection will fail if the server certificate cannot - be verified. verify-full is recommended in most + be verified. verify-full is recommended in most security-sensitive environments. - In verify-full mode, the host name is matched against the + In verify-full mode, the host name is matched against the certificate's Subject Alternative Name attribute(s), or against the Common Name attribute if no Subject Alternative Name of type dNSName is present. If the certificate's name attribute starts with an asterisk - (*), the asterisk will be treated as - a wildcard, which will match all characters except a dot - (.). This means the certificate will not match subdomains. + (*), the asterisk will be treated as + a wildcard, which will match all characters except a dot + (.). This means the certificate will not match subdomains. If the connection is made using an IP address instead of a host name, the IP address will be matched (without doing any DNS lookups). To allow server certificate verification, the certificate(s) of one or more - trusted CAs must be - placed in the file ~/.postgresql/root.crt in the user's home - directory. If intermediate CAs appear in + trusted CAs must be + placed in the file ~/.postgresql/root.crt in the user's home + directory. If intermediate CAs appear in root.crt, the file must also contain certificate - chains to their root CAs. (On Microsoft Windows the file is named + chains to their root CAs. (On Microsoft Windows the file is named %APPDATA%\postgresql\root.crt.) @@ -7596,8 +7596,8 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) The location of the root certificate file and the CRL can be changed by setting - the connection parameters sslrootcert and sslcrl - or the environment variables PGSSLROOTCERT and PGSSLCRL. + the connection parameters sslrootcert and sslcrl + or the environment variables PGSSLROOTCERT and PGSSLCRL. @@ -7619,10 +7619,10 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) If the server requests a trusted client certificate, libpq will send the certificate stored in - file ~/.postgresql/postgresql.crt in the user's home + file ~/.postgresql/postgresql.crt in the user's home directory. The certificate must be signed by one of the certificate authorities (CA) trusted by the server. A matching - private key file ~/.postgresql/postgresql.key must also + private key file ~/.postgresql/postgresql.key must also be present. The private key file must not allow any access to world or group; achieve this by the command chmod 0600 ~/.postgresql/postgresql.key. @@ -7631,23 +7631,23 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) %APPDATA%\postgresql\postgresql.key, and there is no special permissions check since the directory is presumed secure. The location of the certificate and key files can be overridden by the - connection parameters sslcert and sslkey or the - environment variables PGSSLCERT and PGSSLKEY. + connection parameters sslcert and sslkey or the + environment variables PGSSLCERT and PGSSLKEY. In some cases, the client certificate might be signed by an - intermediate certificate authority, rather than one that is + intermediate certificate authority, rather than one that is directly trusted by the server. 
To use such a certificate, append the - certificate of the signing authority to the postgresql.crt + certificate of the signing authority to the postgresql.crt file, then its parent authority's certificate, and so on up to a certificate - authority, root or intermediate, that is trusted by + authority, root or intermediate, that is trusted by the server, i.e. signed by a certificate in the server's root CA file (). - Note that the client's ~/.postgresql/root.crt lists the top-level CAs + Note that the client's ~/.postgresql/root.crt lists the top-level CAs that are considered trusted for signing server certificates. In principle it need not list the CA that signed the client's certificate, though in most cases that CA would also be trusted for server certificates. @@ -7659,7 +7659,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) Protection Provided in Different Modes - The different values for the sslmode parameter provide different + The different values for the sslmode parameter provide different levels of protection. SSL can provide protection against three types of attacks: @@ -7669,23 +7669,23 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) If a third party can examine the network traffic between the client and the server, it can read both connection information (including - the user name and password) and the data that is passed. SSL + the user name and password) and the data that is passed. SSL uses encryption to prevent this. - Man in the middle (MITM) + Man in the middle (MITM) If a third party can modify the data while passing between the client and server, it can pretend to be the server and therefore see and - modify data even if it is encrypted. The third party can then + modify data even if it is encrypted. The third party can then forward the connection information and data to the original server, making it impossible to detect this attack. Common vectors to do this include DNS poisoning and address hijacking, whereby the client is directed to a different server than intended. There are also several other - attack methods that can accomplish this. SSL uses certificate + attack methods that can accomplish this. SSL uses certificate verification to prevent this, by authenticating the server to the client. @@ -7696,7 +7696,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) If a third party can pretend to be an authorized client, it can simply access data it should not have access to. Typically this can - happen through insecure password management. SSL uses + happen through insecure password management. SSL uses client certificates to prevent this, by making sure that only holders of valid certificates can access the server. @@ -7707,15 +7707,15 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) For a connection to be known secure, SSL usage must be configured - on both the client and the server before the connection + on both the client and the server before the connection is made. If it is only configured on the server, the client may end up sending sensitive information (e.g. passwords) before it knows that the server requires high security. In libpq, secure connections can be ensured - by setting the sslmode parameter to verify-full or - verify-ca, and providing the system with a root certificate to - verify against. This is analogous to using an https - URL for encrypted web browsing. 
+ by setting the sslmode parameter to verify-full or + verify-ca, and providing the system with a root certificate to + verify against. This is analogous to using an https + URL for encrypted web browsing. @@ -7726,10 +7726,10 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - All SSL options carry overhead in the form of encryption and + All SSL options carry overhead in the form of encryption and key-exchange, so there is a trade-off that has to be made between performance and security. - illustrates the risks the different sslmode values + illustrates the risks the different sslmode values protect against, and what statement they make about security and overhead. @@ -7738,16 +7738,16 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - sslmode + sslmode Eavesdropping protection - MITM protection + MITM protection Statement - disable + disable No No I don't care about security, and I don't want to pay the overhead @@ -7756,7 +7756,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - allow + allow Maybe No I don't care about security, but I will pay the overhead of @@ -7765,7 +7765,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - prefer + prefer Maybe No I don't care about encryption, but I wish to pay the overhead of @@ -7774,7 +7774,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - require + require Yes No I want my data to be encrypted, and I accept the overhead. I trust @@ -7783,16 +7783,16 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - verify-ca + verify-ca Yes - Depends on CA-policy + Depends on CA-policy I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server that I trust. - verify-full + verify-full Yes Yes I want my data encrypted, and I accept the overhead. I want to be @@ -7806,17 +7806,17 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*)
- The difference between verify-ca and verify-full - depends on the policy of the root CA. If a public - CA is used, verify-ca allows connections to a server - that somebody else may have registered with the CA. - In this case, verify-full should always be used. If - a local CA is used, or even a self-signed certificate, using - verify-ca often provides enough protection. + The difference between verify-ca and verify-full + depends on the policy of the root CA. If a public + CA is used, verify-ca allows connections to a server + that somebody else may have registered with the CA. + In this case, verify-full should always be used. If + a local CA is used, or even a self-signed certificate, using + verify-ca often provides enough protection. - The default value for sslmode is prefer. As is shown + The default value for sslmode is prefer. As is shown in the table, this makes no sense from a security point of view, and it only promises performance overhead if possible. It is only provided as the default for backward compatibility, and is not recommended in secure deployments. @@ -7846,27 +7846,27 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - ~/.postgresql/postgresql.crt + ~/.postgresql/postgresql.crt client certificate requested by server - ~/.postgresql/postgresql.key + ~/.postgresql/postgresql.key client private key proves client certificate sent by owner; does not indicate certificate owner is trustworthy - ~/.postgresql/root.crt + ~/.postgresql/root.crt trusted certificate authorities checks that server certificate is signed by a trusted certificate authority - ~/.postgresql/root.crl + ~/.postgresql/root.crl certificates revoked by certificate authorities server certificate must not be on this list @@ -7880,11 +7880,11 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) SSL Library Initialization - If your application initializes libssl and/or - libcrypto libraries and libpq - is built with SSL support, you should call - PQinitOpenSSL to tell libpq - that the libssl and/or libcrypto libraries + If your application initializes libssl and/or + libcrypto libraries and libpq + is built with SSL support, you should call + PQinitOpenSSL to tell libpq + that the libssl and/or libcrypto libraries have been initialized by your application, so that libpq will not also initialize those libraries. @@ -7912,18 +7912,18 @@ void PQinitOpenSSL(int do_ssl, int do_crypto); - When do_ssl is non-zero, libpq - will initialize the OpenSSL library before first - opening a database connection. When do_crypto is - non-zero, the libcrypto library will be initialized. By - default (if PQinitOpenSSL is not called), both libraries + When do_ssl is non-zero, libpq + will initialize the OpenSSL library before first + opening a database connection. When do_crypto is + non-zero, the libcrypto library will be initialized. By + default (if PQinitOpenSSL is not called), both libraries are initialized. When SSL support is not compiled in, this function is present but does nothing. - If your application uses and initializes either OpenSSL - or its underlying libcrypto library, you must + If your application uses and initializes either OpenSSL + or its underlying libcrypto library, you must call this function with zeroes for the appropriate parameter(s) before first opening a database connection. Also be sure that you have done that initialization before opening a database connection. 
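For instance, an application that performs its own OpenSSL setup might tell libpq to skip both initializations, roughly as in this sketch; the OpenSSL calls shown are just one common way of initializing the library (on OpenSSL 1.1.0 and later they are compatibility macros):

#include <openssl/ssl.h>
#include <libpq-fe.h>

void
init_crypto_before_any_connection(void)
{
    /* Application-side OpenSSL initialization. */
    SSL_library_init();
    SSL_load_error_strings();

    /* Both libssl and libcrypto are handled by the application, so pass
     * zero for both; this must happen before the first database
     * connection is opened. */
    PQinitOpenSSL(0, 0);
}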
@@ -7949,15 +7949,15 @@ void PQinitSSL(int do_ssl); This function is equivalent to - PQinitOpenSSL(do_ssl, do_ssl). + PQinitOpenSSL(do_ssl, do_ssl). It is sufficient for applications that initialize both or neither - of OpenSSL and libcrypto. + of OpenSSL and libcrypto. - PQinitSSL has been present since - PostgreSQL 8.0, while PQinitOpenSSL - was added in PostgreSQL 8.4, so PQinitSSL + PQinitSSL has been present since + PostgreSQL 8.0, while PQinitOpenSSL + was added in PostgreSQL 8.4, so PQinitSSL might be preferable for applications that need to work with older versions of libpq. @@ -7984,8 +7984,8 @@ void PQinitSSL(int do_ssl); options when you compile your application code. Refer to your system's documentation for information about how to build thread-enabled applications, or look in - src/Makefile.global for PTHREAD_CFLAGS - and PTHREAD_LIBS. This function allows the querying of + src/Makefile.global for PTHREAD_CFLAGS + and PTHREAD_LIBS. This function allows the querying of libpq's thread-safe status: @@ -8017,18 +8017,18 @@ int PQisthreadsafe(); One thread restriction is that no two threads attempt to manipulate - the same PGconn object at the same time. In particular, + the same PGconn object at the same time. In particular, you cannot issue concurrent commands from different threads through the same connection object. (If you need to run concurrent commands, use multiple connections.) - PGresult objects are normally read-only after creation, + PGresult objects are normally read-only after creation, and so can be passed around freely between threads. However, if you use - any of the PGresult-modifying functions described in + any of the PGresult-modifying functions described in or , it's up - to you to avoid concurrent operations on the same PGresult, + to you to avoid concurrent operations on the same PGresult, too. @@ -8045,14 +8045,14 @@ int PQisthreadsafe(); If you are using Kerberos inside your application (in addition to inside libpq), you will need to do locking around Kerberos calls because Kerberos functions are not thread-safe. See - function PQregisterThreadLock in the + function PQregisterThreadLock in the libpq source code for a way to do cooperative locking between libpq and your application. If you experience problems with threaded applications, run the program - in src/tools/thread to see if your platform has + in src/tools/thread to see if your platform has thread-unsafe functions. This program is run by configure, but for binary distributions your library might not match the library used to build the binaries. @@ -8095,7 +8095,7 @@ foo.c:95: `PGRES_TUPLES_OK' undeclared (first use in this function) - Point your compiler to the directory where the PostgreSQL header + Point your compiler to the directory where the PostgreSQL header files were installed, by supplying the -Idirectory option to your compiler. (In some cases the compiler will look into @@ -8116,8 +8116,8 @@ CPPFLAGS += -I/usr/local/pgsql/include If there is any chance that your program might be compiled by other users then you should not hardcode the directory location like that. 
Instead, you can run the utility - pg_configpg_configwith libpq to find out where the header + pg_configpg_configwith libpq to find out where the header files are on the local system: $ pg_config --includedir diff --git a/doc/src/sgml/lo.sgml b/doc/src/sgml/lo.sgml index 9c318f1c98..8d8ee82722 100644 --- a/doc/src/sgml/lo.sgml +++ b/doc/src/sgml/lo.sgml @@ -8,9 +8,9 @@ - The lo module provides support for managing Large Objects - (also called LOs or BLOBs). This includes a data type lo - and a trigger lo_manage. + The lo module provides support for managing Large Objects + (also called LOs or BLOBs). This includes a data type lo + and a trigger lo_manage. @@ -24,7 +24,7 @@ - As PostgreSQL stands, this doesn't occur. Large objects + As PostgreSQL stands, this doesn't occur. Large objects are treated as objects in their own right; a table entry can reference a large object by OID, but there can be multiple table entries referencing the same large object OID, so the system doesn't delete the large object @@ -32,30 +32,30 @@ - Now this is fine for PostgreSQL-specific applications, but + Now this is fine for PostgreSQL-specific applications, but standard code using JDBC or ODBC won't delete the objects, resulting in orphan objects — objects that are not referenced by anything, and simply occupy disk space. - The lo module allows fixing this by attaching a trigger + The lo module allows fixing this by attaching a trigger to tables that contain LO reference columns. The trigger essentially just - does a lo_unlink whenever you delete or modify a value + does a lo_unlink whenever you delete or modify a value referencing a large object. When you use this trigger, you are assuming that there is only one database reference to any large object that is referenced in a trigger-controlled column! - The module also provides a data type lo, which is really just - a domain of the oid type. This is useful for differentiating + The module also provides a data type lo, which is really just + a domain of the oid type. This is useful for differentiating database columns that hold large object references from those that are - OIDs of other things. You don't have to use the lo type to + OIDs of other things. You don't have to use the lo type to use the trigger, but it may be convenient to use it to keep track of which columns in your database represent large objects that you are managing with the trigger. It is also rumored that the ODBC driver gets confused if you - don't use lo for BLOB columns. + don't use lo for BLOB columns. @@ -75,11 +75,11 @@ CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image For each column that will contain unique references to large objects, - create a BEFORE UPDATE OR DELETE trigger, and give the column + create a BEFORE UPDATE OR DELETE trigger, and give the column name as the sole trigger argument. You can also restrict the trigger to only execute on updates to the column by using BEFORE UPDATE OF column_name. - If you need multiple lo + If you need multiple lo columns in the same table, create a separate trigger for each one, remembering to give a different name to each trigger on the same table. @@ -93,18 +93,18 @@ CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image Dropping a table will still orphan any objects it contains, as the trigger is not executed. You can avoid this by preceding the DROP - TABLE with DELETE FROM table. + TABLE with DELETE FROM table. - TRUNCATE has the same hazard. + TRUNCATE has the same hazard. 
If you already have, or suspect you have, orphaned large objects, see the module to help - you clean them up. It's a good idea to run vacuumlo - occasionally as a back-stop to the lo_manage trigger. + you clean them up. It's a good idea to run vacuumlo + occasionally as a back-stop to the lo_manage trigger. diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index 7757e1e441..2e930ac240 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -3,11 +3,11 @@ Large Objects - large object - BLOBlarge object + large object + BLOBlarge object - PostgreSQL has a large object + PostgreSQL has a large object facility, which provides stream-style access to user data that is stored in a special large-object structure. Streaming access is useful when working with data values that are too large to manipulate @@ -76,12 +76,12 @@ of 1000000 bytes worth of storage; only of chunks covering the range of data bytes actually written. A read operation will, however, read out zeroes for any unallocated locations preceding the last existing chunk. - This corresponds to the common behavior of sparsely allocated + This corresponds to the common behavior of sparsely allocated files in Unix file systems. - As of PostgreSQL 9.0, large objects have an owner + As of PostgreSQL 9.0, large objects have an owner and a set of access permissions, which can be managed using and . @@ -101,7 +101,7 @@ This section describes the facilities that - PostgreSQL's libpq + PostgreSQL's libpq client interface library provides for accessing large objects. The PostgreSQL large object interface is modeled after the Unix file-system interface, with @@ -121,7 +121,7 @@ If an error occurs while executing any one of these functions, the function will return an otherwise-impossible value, typically 0 or -1. A message describing the error is stored in the connection object and - can be retrieved with PQerrorMessage. + can be retrieved with PQerrorMessage. @@ -134,7 +134,7 @@ Creating a Large Object - lo_creat + lo_creat The function Oid lo_creat(PGconn *conn, int mode); @@ -147,7 +147,7 @@ Oid lo_creat(PGconn *conn, int mode); ignored as of PostgreSQL 8.1; however, for backward compatibility with earlier releases it is best to set it to INV_READ, INV_WRITE, - or INV_READ | INV_WRITE. + or INV_READ | INV_WRITE. (These symbolic constants are defined in the header file libpq/libpq-fs.h.) @@ -160,7 +160,7 @@ inv_oid = lo_creat(conn, INV_READ|INV_WRITE); - lo_create + lo_create The function Oid lo_create(PGconn *conn, Oid lobjId); @@ -169,14 +169,14 @@ Oid lo_create(PGconn *conn, Oid lobjId); specified by lobjId; if so, failure occurs if that OID is already in use for some large object. If lobjId - is InvalidOid (zero) then lo_create assigns an unused - OID (this is the same behavior as lo_creat). + is InvalidOid (zero) then lo_create assigns an unused + OID (this is the same behavior as lo_creat). The return value is the OID that was assigned to the new large object, or InvalidOid (zero) on failure. - lo_create is new as of PostgreSQL + lo_create is new as of PostgreSQL 8.1; if this function is run against an older server version, it will fail and return InvalidOid. 
@@ -193,7 +193,7 @@ inv_oid = lo_create(conn, desired_oid); Importing a Large Object - lo_import + lo_import To import an operating system file as a large object, call Oid lo_import(PGconn *conn, const char *filename); @@ -209,7 +209,7 @@ Oid lo_import(PGconn *conn, const char *filename); - lo_import_with_oid + lo_import_with_oid The function Oid lo_import_with_oid(PGconn *conn, const char *filename, Oid lobjId); @@ -218,14 +218,14 @@ Oid lo_import_with_oid(PGconn *conn, const char *filename, Oid lobjId); specified by lobjId; if so, failure occurs if that OID is already in use for some large object. If lobjId - is InvalidOid (zero) then lo_import_with_oid assigns an unused - OID (this is the same behavior as lo_import). + is InvalidOid (zero) then lo_import_with_oid assigns an unused + OID (this is the same behavior as lo_import). The return value is the OID that was assigned to the new large object, or InvalidOid (zero) on failure. - lo_import_with_oid is new as of PostgreSQL + lo_import_with_oid is new as of PostgreSQL 8.4 and uses lo_create internally which is new in 8.1; if this function is run against 8.0 or before, it will fail and return InvalidOid. @@ -235,7 +235,7 @@ Oid lo_import_with_oid(PGconn *conn, const char *filename, Oid lobjId); Exporting a Large Object - lo_export + lo_export To export a large object into an operating system file, call @@ -253,14 +253,14 @@ int lo_export(PGconn *conn, Oid lobjId, const char *filename); Opening an Existing Large Object - lo_open + lo_open To open an existing large object for reading or writing, call int lo_open(PGconn *conn, Oid lobjId, int mode); The lobjId argument specifies the OID of the large object to open. The mode bits control whether the - object is opened for reading (INV_READ), writing + object is opened for reading (INV_READ), writing (INV_WRITE), or both. (These symbolic constants are defined in the header file libpq/libpq-fs.h.) @@ -277,19 +277,19 @@ int lo_open(PGconn *conn, Oid lobjId, int mode); The server currently does not distinguish between modes - INV_WRITE and INV_READ | + INV_WRITE and INV_READ | INV_WRITE: you are allowed to read from the descriptor in either case. However there is a significant difference between - these modes and INV_READ alone: with INV_READ + these modes and INV_READ alone: with INV_READ you cannot write on the descriptor, and the data read from it will reflect the contents of the large object at the time of the transaction - snapshot that was active when lo_open was executed, + snapshot that was active when lo_open was executed, regardless of later writes by this or other transactions. Reading from a descriptor opened with INV_WRITE returns data that reflects all writes of other committed transactions as well as writes of the current transaction. This is similar to the behavior - of REPEATABLE READ versus READ COMMITTED transaction - modes for ordinary SQL SELECT commands. + of REPEATABLE READ versus READ COMMITTED transaction + modes for ordinary SQL SELECT commands. @@ -304,14 +304,14 @@ inv_fd = lo_open(conn, inv_oid, INV_READ|INV_WRITE); Writing Data to a Large Object - lo_write + lo_write The function int lo_write(PGconn *conn, int fd, const char *buf, size_t len); writes len bytes from buf (which must be of size len) to large object - descriptor fd. The fd argument must + descriptor fd. The fd argument must have been returned by a previous lo_open. 
The number of bytes actually written is returned (in the current implementation, this will always equal len unless @@ -320,8 +320,8 @@ int lo_write(PGconn *conn, int fd, const char *buf, size_t len); Although the len parameter is declared as - size_t, this function will reject length values larger than - INT_MAX. In practice, it's best to transfer data in chunks + size_t, this function will reject length values larger than + INT_MAX. In practice, it's best to transfer data in chunks of at most a few megabytes anyway. @@ -330,7 +330,7 @@ int lo_write(PGconn *conn, int fd, const char *buf, size_t len); Reading Data from a Large Object - lo_read + lo_read The function int lo_read(PGconn *conn, int fd, char *buf, size_t len); @@ -347,8 +347,8 @@ int lo_read(PGconn *conn, int fd, char *buf, size_t len); Although the len parameter is declared as - size_t, this function will reject length values larger than - INT_MAX. In practice, it's best to transfer data in chunks + size_t, this function will reject length values larger than + INT_MAX. In practice, it's best to transfer data in chunks of at most a few megabytes anyway. @@ -357,7 +357,7 @@ int lo_read(PGconn *conn, int fd, char *buf, size_t len); Seeking in a Large Object - lo_lseek + lo_lseek To change the current read or write location associated with a large object descriptor, call @@ -365,16 +365,16 @@ int lo_lseek(PGconn *conn, int fd, int offset, int whence); This function moves the current location pointer for the large object descriptor identified by - fd to the new location specified by - offset. The valid values for whence - are SEEK_SET (seek from object start), - SEEK_CUR (seek from current position), and - SEEK_END (seek from object end). The return value is + fd to the new location specified by + offset. The valid values for whence + are SEEK_SET (seek from object start), + SEEK_CUR (seek from current position), and + SEEK_END (seek from object end). The return value is the new location pointer, or -1 on error. - lo_lseek64 + lo_lseek64 When dealing with large objects that might exceed 2GB in size, instead use @@ -382,14 +382,14 @@ pg_int64 lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence); This function has the same behavior as lo_lseek, but it can accept an - offset larger than 2GB and/or deliver a result larger + offset larger than 2GB and/or deliver a result larger than 2GB. Note that lo_lseek will fail if the new location pointer would be greater than 2GB. - lo_lseek64 is new as of PostgreSQL + lo_lseek64 is new as of PostgreSQL 9.3. If this function is run against an older server version, it will fail and return -1. @@ -400,7 +400,7 @@ pg_int64 lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence); Obtaining the Seek Position of a Large Object - lo_tell + lo_tell To obtain the current read or write location of a large object descriptor, call @@ -410,7 +410,7 @@ int lo_tell(PGconn *conn, int fd); - lo_tell64 + lo_tell64 When dealing with large objects that might exceed 2GB in size, instead use @@ -424,7 +424,7 @@ pg_int64 lo_tell64(PGconn *conn, int fd); - lo_tell64 is new as of PostgreSQL + lo_tell64 is new as of PostgreSQL 9.3. If this function is run against an older server version, it will fail and return -1. @@ -434,15 +434,15 @@ pg_int64 lo_tell64(PGconn *conn, int fd); Truncating a Large Object - lo_truncate + lo_truncate To truncate a large object to a given length, call int lo_truncate(PGcon *conn, int fd, size_t len); This function truncates the large object - descriptor fd to length len. 
The + descriptor fd to length len. The fd argument must have been returned by a - previous lo_open. If len is + previous lo_open. If len is greater than the large object's current length, the large object is extended to the specified length with null bytes ('\0'). On success, lo_truncate returns @@ -456,12 +456,12 @@ int lo_truncate(PGcon *conn, int fd, size_t len); Although the len parameter is declared as - size_t, lo_truncate will reject length - values larger than INT_MAX. + size_t, lo_truncate will reject length + values larger than INT_MAX. - lo_truncate64 + lo_truncate64 When dealing with large objects that might exceed 2GB in size, instead use @@ -469,17 +469,17 @@ int lo_truncate64(PGcon *conn, int fd, pg_int64 len); This function has the same behavior as lo_truncate, but it can accept a - len value exceeding 2GB. + len value exceeding 2GB. - lo_truncate is new as of PostgreSQL + lo_truncate is new as of PostgreSQL 8.3; if this function is run against an older server version, it will fail and return -1. - lo_truncate64 is new as of PostgreSQL + lo_truncate64 is new as of PostgreSQL 9.3; if this function is run against an older server version, it will fail and return -1. @@ -489,12 +489,12 @@ int lo_truncate64(PGcon *conn, int fd, pg_int64 len); Closing a Large Object Descriptor - lo_close + lo_close A large object descriptor can be closed by calling int lo_close(PGconn *conn, int fd); - where fd is a + where fd is a large object descriptor returned by lo_open. On success, lo_close returns zero. On error, the return value is -1. @@ -510,7 +510,7 @@ int lo_close(PGconn *conn, int fd); Removing a Large Object - lo_unlink + lo_unlink To remove a large object from the database, call int lo_unlink(PGconn *conn, Oid lobjId); @@ -554,7 +554,7 @@ int lo_unlink(PGconn *conn, Oid lobjId); oid Create a large object and store data there, returning its OID. - Pass 0 to have the system choose an OID. + Pass 0 to have the system choose an OID. lo_from_bytea(0, E'\\xffffff00') 24528 @@ -599,11 +599,11 @@ int lo_unlink(PGconn *conn, Oid lobjId); client-side functions described earlier; indeed, for the most part the client-side functions are simply interfaces to the equivalent server-side functions. The ones just as convenient to call via SQL commands are - lo_creatlo_creat, + lo_creatlo_creat, lo_create, - lo_unlinklo_unlink, - lo_importlo_import, and - lo_exportlo_export. + lo_unlinklo_unlink, + lo_importlo_import, and + lo_exportlo_export. Here are examples of their use: @@ -645,7 +645,7 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image lo_write is also available via server-side calls, but the names of the server-side functions differ from the client side interfaces in that they do not contain underscores. You must call - these functions as loread and lowrite. + these functions as loread and lowrite. @@ -656,7 +656,7 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image is a sample program which shows how the large object interface - in libpq can be used. Parts of the program are + in libpq can be used. Parts of the program are commented out but are left in the source for the reader's benefit. This program can also be found in src/test/examples/testlo.c in the source distribution. 
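As a quick sketch of the server-side interface described above, a large object can be created, exported, and removed entirely from SQL; the OID shown in the comments and the file path are only illustrative, and the path must be writable by the server's operating-system user:

SELECT lo_from_bytea(0, E'\\xffffff00');  -- returns the new object's OID, e.g. 24528
SELECT lo_export(24528, '/tmp/motd');     -- write its contents to a server-side file
SELECT lo_unlink(24528);                  -- remove the large object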
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml index 35ac5abbe5..c02f6e9765 100644 --- a/doc/src/sgml/logicaldecoding.sgml +++ b/doc/src/sgml/logicaldecoding.sgml @@ -156,13 +156,13 @@ postgres=# SELECT pg_drop_replication_slot('regression_slot'); $ pg_recvlogical -d postgres --slot test --create-slot $ pg_recvlogical -d postgres --slot test --start -f - -ControlZ +ControlZ $ psql -d postgres -c "INSERT INTO data(data) VALUES('4');" $ fg BEGIN 693 table public.data: INSERT: id[integer]:4 data[text]:'4' COMMIT 693 -ControlC +ControlC $ pg_recvlogical -d postgres --slot test --drop-slot @@ -286,7 +286,7 @@ $ pg_recvlogical -d postgres --slot test --drop-slot Creation of a snapshot is not always possible. In particular, it will fail when connected to a hot standby. Applications that do not require - snapshot export may suppress it with the NOEXPORT_SNAPSHOT + snapshot export may suppress it with the NOEXPORT_SNAPSHOT option. @@ -303,7 +303,7 @@ $ pg_recvlogical -d postgres --slot test --drop-slot
- DROP_REPLICATION_SLOT slot_name WAIT + DROP_REPLICATION_SLOT slot_name WAIT @@ -426,12 +426,12 @@ CREATE TABLE another_catalog_table(data text) WITH (user_catalog_table = true); data in a data type that can contain arbitrary data (e.g., bytea) is cumbersome. If the output plugin only outputs textual data in the server's encoding, it can declare that by - setting OutputPluginOptions.output_type - to OUTPUT_PLUGIN_TEXTUAL_OUTPUT instead - of OUTPUT_PLUGIN_BINARY_OUTPUT in + setting OutputPluginOptions.output_type + to OUTPUT_PLUGIN_TEXTUAL_OUTPUT instead + of OUTPUT_PLUGIN_BINARY_OUTPUT in the startup - callback. In that case, all the data has to be in the server's encoding - so that a text datum can contain it. This is checked in assertion-enabled + callback. In that case, all the data has to be in the server's encoding + so that a text datum can contain it. This is checked in assertion-enabled builds. diff --git a/doc/src/sgml/ltree.sgml b/doc/src/sgml/ltree.sgml index fccfd320f5..602d9403f7 100644 --- a/doc/src/sgml/ltree.sgml +++ b/doc/src/sgml/ltree.sgml @@ -8,7 +8,7 @@ - This module implements a data type ltree for representing + This module implements a data type ltree for representing labels of data stored in a hierarchical tree-like structure. Extensive facilities for searching through label trees are provided. @@ -19,17 +19,17 @@ A label is a sequence of alphanumeric characters and underscores (for example, in C locale the characters - A-Za-z0-9_ are allowed). Labels must be less than 256 bytes + A-Za-z0-9_ are allowed). Labels must be less than 256 bytes long. - Examples: 42, Personal_Services + Examples: 42, Personal_Services A label path is a sequence of zero or more - labels separated by dots, for example L1.L2.L3, representing + labels separated by dots, for example L1.L2.L3, representing a path from the root of a hierarchical tree to a particular node. The length of a label path must be less than 65kB, but keeping it under 2kB is preferable. In practice this is not a major limitation; for example, @@ -42,7 +42,7 @@ - The ltree module provides several data types: + The ltree module provides several data types: @@ -55,13 +55,13 @@ lquery represents a regular-expression-like pattern - for matching ltree values. A simple word matches that - label within a path. A star symbol (*) matches zero + for matching ltree values. A simple word matches that + label within a path. A star symbol (*) matches zero or more labels. 
For example: -foo Match the exact label path foo -*.foo.* Match any label path containing the label foo -*.foo Match any label path whose last label is foo +foo Match the exact label path foo +*.foo.* Match any label path containing the label foo +*.foo Match any label path whose last label is foo @@ -69,34 +69,34 @@ foo Match the exact label path foo -*{n} Match exactly n labels -*{n,} Match at least n labels -*{n,m} Match at least n but not more than m labels -*{,m} Match at most m labels — same as *{0,m} +*{n} Match exactly n labels +*{n,} Match at least n labels +*{n,m} Match at least n but not more than m labels +*{,m} Match at most m labels — same as *{0,m} There are several modifiers that can be put at the end of a non-star - label in lquery to make it match more than just the exact match: + label in lquery to make it match more than just the exact match: -@ Match case-insensitively, for example a@ matches A -* Match any label with this prefix, for example foo* matches foobar +@ Match case-insensitively, for example a@ matches A +* Match any label with this prefix, for example foo* matches foobar % Match initial underscore-separated words - The behavior of % is a bit complicated. It tries to match + The behavior of % is a bit complicated. It tries to match words rather than the entire label. For example - foo_bar% matches foo_bar_baz but not - foo_barbaz. If combined with *, prefix + foo_bar% matches foo_bar_baz but not + foo_barbaz. If combined with *, prefix matching applies to each word separately, for example - foo_bar%* matches foo1_bar2_baz but - not foo1_br2_baz. + foo_bar%* matches foo1_bar2_baz but + not foo1_br2_baz. Also, you can write several possibly-modified labels separated with - | (OR) to match any of those labels, and you can put - ! (NOT) at the start to match any label that doesn't + | (OR) to match any of those labels, and you can put + ! (NOT) at the start to match any label that doesn't match any of the alternatives. @@ -141,14 +141,14 @@ a. b. c. d. e. ltxtquery represents a full-text-search-like - pattern for matching ltree values. An + pattern for matching ltree values. An ltxtquery value contains words, possibly with the - modifiers @, *, % at the end; - the modifiers have the same meanings as in lquery. - Words can be combined with & (AND), - | (OR), ! (NOT), and parentheses. + modifiers @, *, % at the end; + the modifiers have the same meanings as in lquery. + Words can be combined with & (AND), + | (OR), ! (NOT), and parentheses. The key difference from - lquery is that ltxtquery matches words without + lquery is that ltxtquery matches words without regard to their position in the label path. @@ -161,7 +161,7 @@ Europe & Russia*@ & !Transportation any label beginning with Russia (case-insensitive), but not paths containing the label Transportation. The location of these words within the path is not important. - Also, when % is used, the word can be matched to any + Also, when % is used, the word can be matched to any underscore-separated word within a label, regardless of position. @@ -169,8 +169,8 @@ Europe & Russia*@ & !Transportation - Note: ltxtquery allows whitespace between symbols, but - ltree and lquery do not. + Note: ltxtquery allows whitespace between symbols, but + ltree and lquery do not. @@ -178,16 +178,16 @@ Europe & Russia*@ & !Transportation Operators and Functions - Type ltree has the usual comparison operators - =, <>, - <, >, <=, >=. + Type ltree has the usual comparison operators + =, <>, + <, >, <=, >=. 
Comparison sorts in the order of a tree traversal, with the children of a node sorted by label text. In addition, the specialized operators shown in are available. - <type>ltree</> Operators + <type>ltree</type> Operators @@ -200,153 +200,153 @@ Europe & Russia*@ & !Transportation - ltree @> ltree + ltree @> ltree boolean is left argument an ancestor of right (or equal)? - ltree <@ ltree + ltree <@ ltree boolean is left argument a descendant of right (or equal)? - ltree ~ lquery + ltree ~ lquery boolean - does ltree match lquery? + does ltree match lquery? - lquery ~ ltree + lquery ~ ltree boolean - does ltree match lquery? + does ltree match lquery? - ltree ? lquery[] + ltree ? lquery[] boolean - does ltree match any lquery in array? + does ltree match any lquery in array? - lquery[] ? ltree + lquery[] ? ltree boolean - does ltree match any lquery in array? + does ltree match any lquery in array? - ltree @ ltxtquery + ltree @ ltxtquery boolean - does ltree match ltxtquery? + does ltree match ltxtquery? - ltxtquery @ ltree + ltxtquery @ ltree boolean - does ltree match ltxtquery? + does ltree match ltxtquery? - ltree || ltree + ltree || ltree ltree - concatenate ltree paths + concatenate ltree paths - ltree || text + ltree || text ltree - convert text to ltree and concatenate + convert text to ltree and concatenate - text || ltree + text || ltree ltree - convert text to ltree and concatenate + convert text to ltree and concatenate - ltree[] @> ltree + ltree[] @> ltree boolean - does array contain an ancestor of ltree? + does array contain an ancestor of ltree? - ltree <@ ltree[] + ltree <@ ltree[] boolean - does array contain an ancestor of ltree? + does array contain an ancestor of ltree? - ltree[] <@ ltree + ltree[] <@ ltree boolean - does array contain a descendant of ltree? + does array contain a descendant of ltree? - ltree @> ltree[] + ltree @> ltree[] boolean - does array contain a descendant of ltree? + does array contain a descendant of ltree? - ltree[] ~ lquery + ltree[] ~ lquery boolean - does array contain any path matching lquery? + does array contain any path matching lquery? - lquery ~ ltree[] + lquery ~ ltree[] boolean - does array contain any path matching lquery? + does array contain any path matching lquery? - ltree[] ? lquery[] + ltree[] ? lquery[] boolean - does ltree array contain any path matching any lquery? + does ltree array contain any path matching any lquery? - lquery[] ? ltree[] + lquery[] ? ltree[] boolean - does ltree array contain any path matching any lquery? + does ltree array contain any path matching any lquery? - ltree[] @ ltxtquery + ltree[] @ ltxtquery boolean - does array contain any path matching ltxtquery? + does array contain any path matching ltxtquery? - ltxtquery @ ltree[] + ltxtquery @ ltree[] boolean - does array contain any path matching ltxtquery? + does array contain any path matching ltxtquery? 
- ltree[] ?@> ltree + ltree[] ?@> ltree ltree - first array entry that is an ancestor of ltree; NULL if none + first array entry that is an ancestor of ltree; NULL if none - ltree[] ?<@ ltree + ltree[] ?<@ ltree ltree - first array entry that is a descendant of ltree; NULL if none + first array entry that is a descendant of ltree; NULL if none - ltree[] ?~ lquery + ltree[] ?~ lquery ltree - first array entry that matches lquery; NULL if none + first array entry that matches lquery; NULL if none - ltree[] ?@ ltxtquery + ltree[] ?@ ltxtquery ltree - first array entry that matches ltxtquery; NULL if none + first array entry that matches ltxtquery; NULL if none @@ -356,7 +356,7 @@ Europe & Russia*@ & !Transportation The operators <@, @>, @ and ~ have analogues - ^<@, ^@>, ^@, + ^<@, ^@>, ^@, ^~, which are the same except they do not use indexes. These are useful only for testing purposes. @@ -366,7 +366,7 @@ Europe & Russia*@ & !Transportation
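As a sketch of how these operators are typically used, the following queries assume the test table with an ltree column path that is set up in the Example section below; the label paths are illustrative:

SELECT path FROM test WHERE path <@ 'Top.Science';         -- descendants of Top.Science (or that path itself)
SELECT path FROM test WHERE path ~ '*.Astronomy.*';        -- lquery match anywhere in the path
SELECT path FROM test WHERE path @ 'Astro* & !pictures@';  -- ltxtquery match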
- <type>ltree</> Functions + <type>ltree</type> Functions @@ -383,8 +383,8 @@ Europe & Russia*@ & !Transportation subltree(ltree, int start, int end)subltree ltree - subpath of ltree from position start to - position end-1 (counting from 0) + subpath of ltree from position start to + position end-1 (counting from 0) subltree('Top.Child1.Child2',1,2) Child1 @@ -392,10 +392,10 @@ Europe & Russia*@ & !Transportation subpath(ltree, int offset, int len)subpath ltree - subpath of ltree starting at position - offset, length len. - If offset is negative, subpath starts that far from the - end of the path. If len is negative, leaves that many + subpath of ltree starting at position + offset, length len. + If offset is negative, subpath starts that far from the + end of the path. If len is negative, leaves that many labels off the end of the path. subpath('Top.Child1.Child2',0,2) Top.Child1 @@ -404,9 +404,9 @@ Europe & Russia*@ & !Transportation subpath(ltree, int offset) ltree - subpath of ltree starting at position - offset, extending to end of path. - If offset is negative, subpath starts that far from the + subpath of ltree starting at position + offset, extending to end of path. + If offset is negative, subpath starts that far from the end of the path. subpath('Top.Child1.Child2',1) Child1.Child2 @@ -423,8 +423,8 @@ Europe & Russia*@ & !Transportation index(ltree a, ltree b)index integer - position of first occurrence of b in - a; -1 if not found + position of first occurrence of b in + a; -1 if not found index('0.1.2.3.5.4.5.6.8.5.6.8','5.6') 6 @@ -432,9 +432,9 @@ Europe & Russia*@ & !Transportation index(ltree a, ltree b, int offset) integer - position of first occurrence of b in - a, searching starting at offset; - negative offset means start -offset + position of first occurrence of b in + a, searching starting at offset; + negative offset means start -offset labels from the end of the path index('0.1.2.3.5.4.5.6.8.5.6.8','5.6',-4) 9 @@ -443,7 +443,7 @@ Europe & Russia*@ & !Transportation text2ltree(text)text2ltree ltree - cast text to ltree + cast text to ltree @@ -451,7 +451,7 @@ Europe & Russia*@ & !Transportation ltree2text(ltree)ltree2text text - cast ltree to text + cast ltree to text @@ -481,25 +481,25 @@ Europe & Russia*@ & !Transportation Indexes - ltree supports several types of indexes that can speed + ltree supports several types of indexes that can speed up the indicated operators: - B-tree index over ltree: - <, <=, =, - >=, > + B-tree index over ltree: + <, <=, =, + >=, > - GiST index over ltree: - <, <=, =, - >=, >, - @>, <@, - @, ~, ? + GiST index over ltree: + <, <=, =, + >=, >, + @>, <@, + @, ~, ? Example of creating such an index: @@ -510,9 +510,9 @@ CREATE INDEX path_gist_idx ON test USING GIST (path); - GiST index over ltree[]: - ltree[] <@ ltree, ltree @> ltree[], - @, ~, ? + GiST index over ltree[]: + ltree[] <@ ltree, ltree @> ltree[], + @, ~, ? 
Example of creating such an index: @@ -532,7 +532,7 @@ CREATE INDEX path_gist_idx ON test USING GIST (array_path); This example uses the following data (also available in file - contrib/ltree/ltreetest.sql in the source distribution): + contrib/ltree/ltreetest.sql in the source distribution): @@ -555,7 +555,7 @@ CREATE INDEX path_idx ON test USING BTREE (path); - Now, we have a table test populated with data describing + Now, we have a table test populated with data describing the hierarchy shown below: diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml index 616aece6c0..1952bc9178 100644 --- a/doc/src/sgml/maintenance.sgml +++ b/doc/src/sgml/maintenance.sgml @@ -12,12 +12,12 @@ - PostgreSQL, like any database software, requires that certain tasks + PostgreSQL, like any database software, requires that certain tasks be performed regularly to achieve optimum performance. The tasks discussed here are required, but they are repetitive in nature and can easily be automated using standard tools such as cron scripts or - Windows' Task Scheduler. It is the database + Windows' Task Scheduler. It is the database administrator's responsibility to set up appropriate scripts, and to check that they execute successfully. @@ -32,7 +32,7 @@ - The other main category of maintenance task is periodic vacuuming + The other main category of maintenance task is periodic vacuuming of the database. This activity is discussed in . Closely related to this is updating the statistics that will be used by the query planner, as discussed in @@ -46,9 +46,9 @@ check_postgres + url="http://bucardo.org/wiki/Check_postgres">check_postgres is available for monitoring database health and reporting unusual - conditions. check_postgres integrates with + conditions. check_postgres integrates with Nagios and MRTG, but can be run standalone too. @@ -68,15 +68,15 @@ PostgreSQL databases require periodic - maintenance known as vacuuming. For many installations, it + maintenance known as vacuuming. For many installations, it is sufficient to let vacuuming be performed by the autovacuum - daemon, which is described in . You might + daemon, which is described in . You might need to adjust the autovacuuming parameters described there to obtain best results for your situation. Some database administrators will want to supplement or replace the daemon's activities with manually-managed - VACUUM commands, which typically are executed according to a + VACUUM commands, which typically are executed according to a schedule by cron or Task - Scheduler scripts. To set up manually-managed vacuuming properly, + Scheduler scripts. To set up manually-managed vacuuming properly, it is essential to understand the issues discussed in the next few subsections. Administrators who rely on autovacuuming may still wish to skim this material to help them understand and adjust autovacuuming. @@ -109,30 +109,30 @@ To protect against loss of very old data due to - transaction ID wraparound or - multixact ID wraparound. + transaction ID wraparound or + multixact ID wraparound. - Each of these reasons dictates performing VACUUM operations + Each of these reasons dictates performing VACUUM operations of varying frequency and scope, as explained in the following subsections. - There are two variants of VACUUM: standard VACUUM - and VACUUM FULL. VACUUM FULL can reclaim more + There are two variants of VACUUM: standard VACUUM + and VACUUM FULL. VACUUM FULL can reclaim more disk space but runs much more slowly. 
Also, - the standard form of VACUUM can run in parallel with production + the standard form of VACUUM can run in parallel with production database operations. (Commands such as SELECT, INSERT, UPDATE, and DELETE will continue to function normally, though you will not be able to modify the definition of a table with commands such as ALTER TABLE while it is being vacuumed.) - VACUUM FULL requires exclusive lock on the table it is + VACUUM FULL requires exclusive lock on the table it is working on, and therefore cannot be done in parallel with other use of the table. Generally, therefore, - administrators should strive to use standard VACUUM and - avoid VACUUM FULL. + administrators should strive to use standard VACUUM and + avoid VACUUM FULL. @@ -153,15 +153,15 @@ In PostgreSQL, an - UPDATE or DELETE of a row does not + UPDATE or DELETE of a row does not immediately remove the old version of the row. This approach is necessary to gain the benefits of multiversion - concurrency control (MVCC, see ): the row version + concurrency control (MVCC, see ): the row version must not be deleted while it is still potentially visible to other transactions. But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it occupies must then be reclaimed for reuse by new rows, to avoid unbounded growth of disk - space requirements. This is done by running VACUUM. + space requirements. This is done by running VACUUM. @@ -170,7 +170,7 @@ future reuse. However, it will not return the space to the operating system, except in the special case where one or more pages at the end of a table become entirely free and an exclusive table lock can be - easily obtained. In contrast, VACUUM FULL actively compacts + easily obtained. In contrast, VACUUM FULL actively compacts tables by writing a complete new version of the table file with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until @@ -178,18 +178,18 @@ - The usual goal of routine vacuuming is to do standard VACUUMs - often enough to avoid needing VACUUM FULL. The + The usual goal of routine vacuuming is to do standard VACUUMs + often enough to avoid needing VACUUM FULL. The autovacuum daemon attempts to work this way, and in fact will - never issue VACUUM FULL. In this approach, the idea + never issue VACUUM FULL. In this approach, the idea is not to keep tables at their minimum size, but to maintain steady-state usage of disk space: each table occupies space equivalent to its minimum size plus however much space gets used up between vacuumings. - Although VACUUM FULL can be used to shrink a table back + Although VACUUM FULL can be used to shrink a table back to its minimum size and return the disk space to the operating system, there is not much point in this if the table will just grow again in the - future. Thus, moderately-frequent standard VACUUM runs are a - better approach than infrequent VACUUM FULL runs for + future. Thus, moderately-frequent standard VACUUM runs are a + better approach than infrequent VACUUM FULL runs for maintaining heavily-updated tables. @@ -198,20 +198,20 @@ doing all the work at night when load is low. The difficulty with doing vacuuming according to a fixed schedule is that if a table has an unexpected spike in update activity, it may - get bloated to the point that VACUUM FULL is really necessary + get bloated to the point that VACUUM FULL is really necessary to reclaim space. 
Using the autovacuum daemon alleviates this problem, since the daemon schedules vacuuming dynamically in response to update activity. It is unwise to disable the daemon completely unless you have an extremely predictable workload. One possible compromise is to set the daemon's parameters so that it will only react to unusually heavy update activity, thus keeping things from getting out of hand, - while scheduled VACUUMs are expected to do the bulk of the + while scheduled VACUUMs are expected to do the bulk of the work when the load is typical. For those not using autovacuum, a typical approach is to schedule a - database-wide VACUUM once a day during a low-usage period, + database-wide VACUUM once a day during a low-usage period, supplemented by more frequent vacuuming of heavily-updated tables as necessary. (Some installations with extremely high update rates vacuum their busiest tables as often as once every few minutes.) If you have @@ -222,11 +222,11 @@ - Plain VACUUM may not be satisfactory when + Plain VACUUM may not be satisfactory when a table contains large numbers of dead row versions as a result of massive update or delete activity. If you have such a table and you need to reclaim the excess disk space it occupies, you will need - to use VACUUM FULL, or alternatively + to use VACUUM FULL, or alternatively or one of the table-rewriting variants of . @@ -271,19 +271,19 @@ generate good plans for queries. These statistics are gathered by the command, which can be invoked by itself or - as an optional step in VACUUM. It is important to have + as an optional step in VACUUM. It is important to have reasonably accurate statistics, otherwise poor choices of plans might degrade database performance. The autovacuum daemon, if enabled, will automatically issue - ANALYZE commands whenever the content of a table has + ANALYZE commands whenever the content of a table has changed sufficiently. However, administrators might prefer to rely - on manually-scheduled ANALYZE operations, particularly + on manually-scheduled ANALYZE operations, particularly if it is known that update activity on a table will not affect the - statistics of interesting columns. The daemon schedules - ANALYZE strictly as a function of the number of rows + statistics of interesting columns. The daemon schedules + ANALYZE strictly as a function of the number of rows inserted or updated; it has no knowledge of whether that will lead to meaningful statistical changes. @@ -305,24 +305,24 @@ - It is possible to run ANALYZE on specific tables and even + It is possible to run ANALYZE on specific tables and even just specific columns of a table, so the flexibility exists to update some statistics more frequently than others if your application requires it. In practice, however, it is usually best to just analyze the entire - database, because it is a fast operation. ANALYZE uses a + database, because it is a fast operation. ANALYZE uses a statistically random sampling of the rows of a table rather than reading every single row. - Although per-column tweaking of ANALYZE frequency might not be + Although per-column tweaking of ANALYZE frequency might not be very productive, you might find it worthwhile to do per-column adjustment of the level of detail of the statistics collected by - ANALYZE. Columns that are heavily used in WHERE + ANALYZE. Columns that are heavily used in WHERE clauses and have highly irregular data distributions might require a finer-grain data histogram than other columns. 
See ALTER TABLE - SET STATISTICS, or change the database-wide default using the , or change the database-wide default using the configuration parameter. @@ -337,11 +337,11 @@ - The autovacuum daemon does not issue ANALYZE commands for + The autovacuum daemon does not issue ANALYZE commands for foreign tables, since it has no means of determining how often that might be useful. If your queries require statistics on foreign tables for proper planning, it's a good idea to run manually-managed - ANALYZE commands on those tables on a suitable schedule. + ANALYZE commands on those tables on a suitable schedule. @@ -350,7 +350,7 @@ Updating The Visibility Map - Vacuum maintains a visibility map for each + Vacuum maintains a visibility map for each table to keep track of which pages contain only tuples that are known to be visible to all active transactions (and all future transactions, until the page is again modified). This has two purposes. First, vacuum @@ -366,7 +366,7 @@ matching index entry, to check whether it should be seen by the current transaction. An index-only - scan, on the other hand, checks the visibility map first. + scan, on the other hand, checks the visibility map first. If it's known that all tuples on the page are visible, the heap fetch can be skipped. This is most useful on large data sets where the visibility map can prevent disk accesses. @@ -391,13 +391,13 @@ PostgreSQL's MVCC transaction semantics - depend on being able to compare transaction ID (XID) + depend on being able to compare transaction ID (XID) numbers: a row version with an insertion XID greater than the current - transaction's XID is in the future and should not be visible + transaction's XID is in the future and should not be visible to the current transaction. But since transaction IDs have limited size (32 bits) a cluster that runs for a long time (more than 4 billion transactions) would suffer transaction ID - wraparound: the XID counter wraps around to zero, and all of a sudden + wraparound: the XID counter wraps around to zero, and all of a sudden transactions that were in the past appear to be in the future — which means their output become invisible. In short, catastrophic data loss. (Actually the data is still there, but that's cold comfort if you cannot @@ -407,47 +407,47 @@ The reason that periodic vacuuming solves the problem is that - VACUUM will mark rows as frozen, indicating that + VACUUM will mark rows as frozen, indicating that they were inserted by a transaction that committed sufficiently far in the past that the effects of the inserting transaction are certain to be visible to all current and future transactions. Normal XIDs are - compared using modulo-232 arithmetic. This means + compared using modulo-232 arithmetic. This means that for every normal XID, there are two billion XIDs that are - older and two billion that are newer; another + older and two billion that are newer; another way to say it is that the normal XID space is circular with no endpoint. Therefore, once a row version has been created with a particular - normal XID, the row version will appear to be in the past for + normal XID, the row version will appear to be in the past for the next two billion transactions, no matter which normal XID we are talking about. If the row version still exists after more than two billion transactions, it will suddenly appear to be in the future. 
To - prevent this, PostgreSQL reserves a special XID, - FrozenTransactionId, which does not follow the normal XID + prevent this, PostgreSQL reserves a special XID, + FrozenTransactionId, which does not follow the normal XID comparison rules and is always considered older than every normal XID. Frozen row versions are treated as if the inserting XID were - FrozenTransactionId, so that they will appear to be - in the past to all normal transactions regardless of wraparound + FrozenTransactionId, so that they will appear to be + in the past to all normal transactions regardless of wraparound issues, and so such row versions will be valid until deleted, no matter how long that is. - In PostgreSQL versions before 9.4, freezing was + In PostgreSQL versions before 9.4, freezing was implemented by actually replacing a row's insertion XID - with FrozenTransactionId, which was visible in the - row's xmin system column. Newer versions just set a flag - bit, preserving the row's original xmin for possible - forensic use. However, rows with xmin equal - to FrozenTransactionId (2) may still be found - in databases pg_upgrade'd from pre-9.4 versions. + with FrozenTransactionId, which was visible in the + row's xmin system column. Newer versions just set a flag + bit, preserving the row's original xmin for possible + forensic use. However, rows with xmin equal + to FrozenTransactionId (2) may still be found + in databases pg_upgrade'd from pre-9.4 versions. - Also, system catalogs may contain rows with xmin equal - to BootstrapTransactionId (1), indicating that they were - inserted during the first phase of initdb. - Like FrozenTransactionId, this special XID is treated as + Also, system catalogs may contain rows with xmin equal + to BootstrapTransactionId (1), indicating that they were + inserted during the first phase of initdb. + Like FrozenTransactionId, this special XID is treated as older than every normal XID. @@ -463,26 +463,26 @@ - VACUUM uses the visibility map + VACUUM uses the visibility map to determine which pages of a table must be scanned. Normally, it will skip pages that don't have any dead row versions even if those pages might still have row versions with old XID values. Therefore, normal - VACUUMs won't always freeze every old row version in the table. - Periodically, VACUUM will perform an aggressive - vacuum, skipping only those pages which contain neither dead rows nor + VACUUMs won't always freeze every old row version in the table. + Periodically, VACUUM will perform an aggressive + vacuum, skipping only those pages which contain neither dead rows nor any unfrozen XID or MXID values. - controls when VACUUM does that: all-visible but not all-frozen + controls when VACUUM does that: all-visible but not all-frozen pages are scanned if the number of transactions that have passed since the - last such scan is greater than vacuum_freeze_table_age minus - vacuum_freeze_min_age. Setting - vacuum_freeze_table_age to 0 forces VACUUM to + last such scan is greater than vacuum_freeze_table_age minus + vacuum_freeze_min_age. Setting + vacuum_freeze_table_age to 0 forces VACUUM to use this more aggressive strategy for all scans. The maximum time that a table can go unvacuumed is two billion - transactions minus the vacuum_freeze_min_age value at + transactions minus the vacuum_freeze_min_age value at the time of the last aggressive vacuum. If it were to go unvacuumed for longer than that, data loss could result. 
To ensure that this does not happen, @@ -495,29 +495,29 @@ This implies that if a table is not otherwise vacuumed, autovacuum will be invoked on it approximately once every - autovacuum_freeze_max_age minus - vacuum_freeze_min_age transactions. + autovacuum_freeze_max_age minus + vacuum_freeze_min_age transactions. For tables that are regularly vacuumed for space reclamation purposes, this is of little importance. However, for static tables (including tables that receive inserts, but no updates or deletes), there is no need to vacuum for space reclamation, so it can be useful to try to maximize the interval between forced autovacuums on very large static tables. Obviously one can do this either by - increasing autovacuum_freeze_max_age or decreasing - vacuum_freeze_min_age. + increasing autovacuum_freeze_max_age or decreasing + vacuum_freeze_min_age. - The effective maximum for vacuum_freeze_table_age is 0.95 * - autovacuum_freeze_max_age; a setting higher than that will be + The effective maximum for vacuum_freeze_table_age is 0.95 * + autovacuum_freeze_max_age; a setting higher than that will be capped to the maximum. A value higher than - autovacuum_freeze_max_age wouldn't make sense because an + autovacuum_freeze_max_age wouldn't make sense because an anti-wraparound autovacuum would be triggered at that point anyway, and the 0.95 multiplier leaves some breathing room to run a manual - VACUUM before that happens. As a rule of thumb, - vacuum_freeze_table_age should be set to a value somewhat - below autovacuum_freeze_max_age, leaving enough gap so that - a regularly scheduled VACUUM or an autovacuum triggered by + VACUUM before that happens. As a rule of thumb, + vacuum_freeze_table_age should be set to a value somewhat + below autovacuum_freeze_max_age, leaving enough gap so that + a regularly scheduled VACUUM or an autovacuum triggered by normal delete and update activity is run in that window. Setting it too close could lead to anti-wraparound autovacuums, even though the table was recently vacuumed to reclaim space, whereas lower values lead to more @@ -525,29 +525,29 @@ - The sole disadvantage of increasing autovacuum_freeze_max_age - (and vacuum_freeze_table_age along with it) is that - the pg_xact and pg_commit_ts + The sole disadvantage of increasing autovacuum_freeze_max_age + (and vacuum_freeze_table_age along with it) is that + the pg_xact and pg_commit_ts subdirectories of the database cluster will take more space, because it - must store the commit status and (if track_commit_timestamp is + must store the commit status and (if track_commit_timestamp is enabled) timestamp of all transactions back to - the autovacuum_freeze_max_age horizon. The commit status uses + the autovacuum_freeze_max_age horizon. The commit status uses two bits per transaction, so if - autovacuum_freeze_max_age is set to its maximum allowed value - of two billion, pg_xact can be expected to grow to about half + autovacuum_freeze_max_age is set to its maximum allowed value + of two billion, pg_xact can be expected to grow to about half a gigabyte and pg_commit_ts to about 20GB. If this is trivial compared to your total database size, - setting autovacuum_freeze_max_age to its maximum allowed value + setting autovacuum_freeze_max_age to its maximum allowed value is recommended. Otherwise, set it depending on what you are willing to - allow for pg_xact and pg_commit_ts storage. + allow for pg_xact and pg_commit_ts storage. 
(The default, 200 million transactions, translates to about 50MB - of pg_xact storage and about 2GB of pg_commit_ts + of pg_xact storage and about 2GB of pg_commit_ts storage.) - One disadvantage of decreasing vacuum_freeze_min_age is that - it might cause VACUUM to do useless work: freezing a row + One disadvantage of decreasing vacuum_freeze_min_age is that + it might cause VACUUM to do useless work: freezing a row version is a waste of time if the row is modified soon thereafter (causing it to acquire a new XID). So the setting should be large enough that rows are not frozen until they are unlikely to change @@ -556,18 +556,18 @@ To track the age of the oldest unfrozen XIDs in a database, - VACUUM stores XID - statistics in the system tables pg_class and - pg_database. In particular, - the relfrozenxid column of a table's - pg_class row contains the freeze cutoff XID that was used - by the last aggressive VACUUM for that table. All rows + VACUUM stores XID + statistics in the system tables pg_class and + pg_database. In particular, + the relfrozenxid column of a table's + pg_class row contains the freeze cutoff XID that was used + by the last aggressive VACUUM for that table. All rows inserted by transactions with XIDs older than this cutoff XID are guaranteed to have been frozen. Similarly, - the datfrozenxid column of a database's - pg_database row is a lower bound on the unfrozen XIDs + the datfrozenxid column of a database's + pg_database row is a lower bound on the unfrozen XIDs appearing in that database — it is just the minimum of the - per-table relfrozenxid values within the database. + per-table relfrozenxid values within the database. A convenient way to examine this information is to execute queries such as: @@ -581,27 +581,27 @@ WHERE c.relkind IN ('r', 'm'); SELECT datname, age(datfrozenxid) FROM pg_database; - The age column measures the number of transactions from the + The age column measures the number of transactions from the cutoff XID to the current transaction's XID. - VACUUM normally only scans pages that have been modified - since the last vacuum, but relfrozenxid can only be + VACUUM normally only scans pages that have been modified + since the last vacuum, but relfrozenxid can only be advanced when every page of the table that might contain unfrozen XIDs is scanned. This happens when - relfrozenxid is more than - vacuum_freeze_table_age transactions old, when - VACUUM's FREEZE option is used, or when all + relfrozenxid is more than + vacuum_freeze_table_age transactions old, when + VACUUM's FREEZE option is used, or when all pages that are not already all-frozen happen to - require vacuuming to remove dead row versions. When VACUUM + require vacuuming to remove dead row versions. When VACUUM scans every page in the table that is not already all-frozen, it should - set age(relfrozenxid) to a value just a little more than the - vacuum_freeze_min_age setting + set age(relfrozenxid) to a value just a little more than the + vacuum_freeze_min_age setting that was used (more by the number of transactions started since the - VACUUM started). If no relfrozenxid-advancing - VACUUM is issued on the table until - autovacuum_freeze_max_age is reached, an autovacuum will soon + VACUUM started). If no relfrozenxid-advancing + VACUUM is issued on the table until + autovacuum_freeze_max_age is reached, an autovacuum will soon be forced for the table. 
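For example, a table whose age(relfrozenxid) is approaching the limit can be frozen ahead of the forced autovacuum with a manual command; mytable is only a placeholder:

SELECT age(relfrozenxid) FROM pg_class WHERE oid = 'mytable'::regclass;
VACUUM (FREEZE, VERBOSE) mytable;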
@@ -616,10 +616,10 @@ WARNING: database "mydb" must be vacuumed within 177009986 transactions HINT: To avoid a database shutdown, execute a database-wide VACUUM in "mydb". - (A manual VACUUM should fix the problem, as suggested by the - hint; but note that the VACUUM must be performed by a + (A manual VACUUM should fix the problem, as suggested by the + hint; but note that the VACUUM must be performed by a superuser, else it will fail to process system catalogs and thus not - be able to advance the database's datfrozenxid.) + be able to advance the database's datfrozenxid.) If these warnings are ignored, the system will shut down and refuse to start any new transactions once there are fewer than 1 million transactions left @@ -632,10 +632,10 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. The 1-million-transaction safety margin exists to let the administrator recover without data loss, by manually executing the - required VACUUM commands. However, since the system will not + required VACUUM commands. However, since the system will not execute commands once it has gone into the safety shutdown mode, the only way to do this is to stop the server and start the server in single-user - mode to execute VACUUM. The shutdown mode is not enforced + mode to execute VACUUM. The shutdown mode is not enforced in single-user mode. See the reference page for details about using single-user mode. @@ -653,15 +653,15 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. - Multixact IDs are used to support row locking by + Multixact IDs are used to support row locking by multiple transactions. Since there is only limited space in a tuple header to store lock information, that information is encoded as - a multiple transaction ID, or multixact ID for short, + a multiple transaction ID, or multixact ID for short, whenever there is more than one transaction concurrently locking a row. Information about which transaction IDs are included in any particular multixact ID is stored separately in - the pg_multixact subdirectory, and only the multixact ID - appears in the xmax field in the tuple header. + the pg_multixact subdirectory, and only the multixact ID + appears in the xmax field in the tuple header. Like transaction IDs, multixact IDs are implemented as a 32-bit counter and corresponding storage, all of which requires careful aging management, storage cleanup, and wraparound handling. @@ -671,23 +671,23 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. - Whenever VACUUM scans any part of a table, it will replace + Whenever VACUUM scans any part of a table, it will replace any multixact ID it encounters which is older than by a different value, which can be the zero value, a single transaction ID, or a newer multixact ID. For each table, - pg_class.relminmxid stores the oldest + pg_class.relminmxid stores the oldest possible multixact ID still appearing in any tuple of that table. If this value is older than , an aggressive vacuum is forced. As discussed in the previous section, an aggressive vacuum means that only those pages which are known to be all-frozen will - be skipped. mxid_age() can be used on - pg_class.relminmxid to find its age. + be skipped. mxid_age() can be used on + pg_class.relminmxid to find its age. - Aggressive VACUUM scans, regardless of + Aggressive VACUUM scans, regardless of what causes them, enable advancing the value for that table. 
Eventually, as all tables in all databases are scanned and their oldest multixact values are advanced, on-disk storage for older @@ -729,21 +729,21 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. - The autovacuum daemon actually consists of multiple processes. + The autovacuum daemon actually consists of multiple processes. There is a persistent daemon process, called the autovacuum launcher, which is in charge of starting autovacuum worker processes for all databases. The launcher will distribute the work across time, attempting to start one worker within each database every - seconds. (Therefore, if the installation has N databases, + seconds. (Therefore, if the installation has N databases, a new worker will be launched every - autovacuum_naptime/N seconds.) + autovacuum_naptime/N seconds.) A maximum of worker processes are allowed to run at the same time. If there are more than - autovacuum_max_workers databases to be processed, + autovacuum_max_workers databases to be processed, the next database will be processed as soon as the first worker finishes. Each worker process will check each table within its database and - execute VACUUM and/or ANALYZE as needed. + execute VACUUM and/or ANALYZE as needed. can be set to monitor autovacuum workers' activity. @@ -761,7 +761,7 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. - Tables whose relfrozenxid value is more than + Tables whose relfrozenxid value is more than transactions old are always vacuumed (this also applies to those tables whose freeze max age has been modified via storage parameters; see below). Otherwise, if the @@ -781,10 +781,10 @@ vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuple collector; it is a semi-accurate count updated by each UPDATE and DELETE operation. (It is only semi-accurate because some information might be lost under heavy - load.) If the relfrozenxid value of the table is more - than vacuum_freeze_table_age transactions old, an aggressive + load.) If the relfrozenxid value of the table is more + than vacuum_freeze_table_age transactions old, an aggressive vacuum is performed to freeze old tuples and advance - relfrozenxid; otherwise, only pages that have been modified + relfrozenxid; otherwise, only pages that have been modified since the last vacuum are scanned. @@ -821,8 +821,8 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu balanced among all the running workers, so that the total I/O impact on the system is the same regardless of the number of workers actually running. However, any workers processing tables whose - per-table autovacuum_vacuum_cost_delay or - autovacuum_vacuum_cost_limit storage parameters have been set + per-table autovacuum_vacuum_cost_delay or + autovacuum_vacuum_cost_limit storage parameters have been set are not considered in the balancing algorithm. @@ -872,7 +872,7 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu But since the command requires an exclusive table lock, it is often preferable to execute an index rebuild with a sequence of creation and replacement steps. Index types that support - with the CONCURRENTLY + with the CONCURRENTLY option can instead be recreated that way. 
If that is successful and the resulting index is valid, the original index can then be replaced by the newly built one using a combination of @@ -896,17 +896,17 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu It is a good idea to save the database server's log output - somewhere, rather than just discarding it via /dev/null. + somewhere, rather than just discarding it via /dev/null. The log output is invaluable when diagnosing problems. However, the log output tends to be voluminous (especially at higher debug levels) so you won't want to save it - indefinitely. You need to rotate the log files so that + indefinitely. You need to rotate the log files so that new log files are started and old ones removed after a reasonable period of time. - If you simply direct the stderr of + If you simply direct the stderr of postgres into a file, you will have log output, but the only way to truncate the log file is to stop and restart @@ -917,13 +917,13 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu A better approach is to send the server's - stderr output to some type of log rotation program. + stderr output to some type of log rotation program. There is a built-in log rotation facility, which you can use by - setting the configuration parameter logging_collector to - true in postgresql.conf. The control + setting the configuration parameter logging_collector to + true in postgresql.conf. The control parameters for this program are described in . You can also use this approach - to capture the log data in machine readable CSV + to capture the log data in machine readable CSV (comma-separated values) format. @@ -934,10 +934,10 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu tool included in the Apache distribution can be used with PostgreSQL. To do this, just pipe the server's - stderr output to the desired program. + stderr output to the desired program. If you start the server with - pg_ctl, then stderr - is already redirected to stdout, so you just need a + pg_ctl, then stderr + is already redirected to stdout, so you just need a pipe command, for example: @@ -947,12 +947,12 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400 Another production-grade approach to managing log output is to - send it to syslog and let - syslog deal with file rotation. To do this, set the - configuration parameter log_destination to syslog - (to log to syslog only) in - postgresql.conf. Then you can send a SIGHUP - signal to the syslog daemon whenever you want to force it + send it to syslog and let + syslog deal with file rotation. To do this, set the + configuration parameter log_destination to syslog + (to log to syslog only) in + postgresql.conf. Then you can send a SIGHUP + signal to the syslog daemon whenever you want to force it to start writing a new log file. If you want to automate log rotation, the logrotate program can be configured to work with log files from @@ -960,12 +960,12 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400 - On many systems, however, syslog is not very reliable, + On many systems, however, syslog is not very reliable, particularly with large log messages; it might truncate or drop messages - just when you need them the most. Also, on Linux, - syslog will flush each message to disk, yielding poor - performance. (You can use a - at the start of the file name - in the syslog configuration file to disable syncing.) + just when you need them the most. 
Also, on Linux, + syslog will flush each message to disk, yielding poor + performance. (You can use a - at the start of the file name + in the syslog configuration file to disable syncing.) diff --git a/doc/src/sgml/manage-ag.sgml b/doc/src/sgml/manage-ag.sgml index fe1a6355c4..f005538220 100644 --- a/doc/src/sgml/manage-ag.sgml +++ b/doc/src/sgml/manage-ag.sgml @@ -3,7 +3,7 @@ Managing Databases - database + database Every instance of a running PostgreSQL @@ -26,7 +26,7 @@ (database objects). Generally, every database object (tables, functions, etc.) belongs to one and only one database. (However there are a few system catalogs, for example - pg_database, that belong to a whole cluster and + pg_database, that belong to a whole cluster and are accessible from each database within the cluster.) More accurately, a database is a collection of schemas and the schemas contain the tables, functions, etc. So the full hierarchy is: @@ -41,7 +41,7 @@ connection. However, an application is not restricted in the number of connections it opens to the same or other databases. Databases are physically separated and access control is managed at the - connection level. If one PostgreSQL server + connection level. If one PostgreSQL server instance is to house projects or users that should be separate and for the most part unaware of each other, it is therefore recommended to put them into separate databases. If the projects @@ -53,23 +53,23 @@ - Databases are created with the CREATE DATABASE command + Databases are created with the CREATE DATABASE command (see ) and destroyed with the - DROP DATABASE command + DROP DATABASE command (see ). To determine the set of existing databases, examine the - pg_database system catalog, for example + pg_database system catalog, for example SELECT datname FROM pg_database; - The program's \l meta-command - and - The SQL standard calls databases catalogs, but there + The SQL standard calls databases catalogs, but there is no difference in practice. @@ -78,10 +78,10 @@ SELECT datname FROM pg_database; Creating a Database - CREATE DATABASE + CREATE DATABASE - In order to create a database, the PostgreSQL + In order to create a database, the PostgreSQL server must be up and running (see ). @@ -90,9 +90,9 @@ SELECT datname FROM pg_database; Databases are created with the SQL command : -CREATE DATABASE name; +CREATE DATABASE name; - where name follows the usual rules for + where name follows the usual rules for SQL identifiers. The current role automatically becomes the owner of the new database. It is the privilege of the owner of a database to remove it later (which also removes all @@ -107,25 +107,25 @@ CREATE DATABASE name; Since you need to be connected to the database server in order to execute the CREATE DATABASE command, the - question remains how the first database at any given + question remains how the first database at any given site can be created. The first database is always created by the - initdb command when the data storage area is + initdb command when the data storage area is initialized. (See .) This database is called - postgres.postgres So to - create the first ordinary database you can connect to - postgres. + postgres.postgres So to + create the first ordinary database you can connect to + postgres. A second database, - template1,template1 + template1,template1 is also created during database cluster initialization. Whenever a new database is created within the cluster, template1 is essentially cloned. 
- This means that any changes you make in template1 are + This means that any changes you make in template1 are propagated to all subsequently created databases. Because of this, - avoid creating objects in template1 unless you want them + avoid creating objects in template1 unless you want them propagated to every newly created database. More details appear in . @@ -133,17 +133,17 @@ CREATE DATABASE name; As a convenience, there is a program you can execute from the shell to create new databases, - createdb.createdb + createdb.createdb createdb dbname - createdb does no magic. It connects to the postgres - database and issues the CREATE DATABASE command, + createdb does no magic. It connects to the postgres + database and issues the CREATE DATABASE command, exactly as described above. The reference page contains the invocation - details. Note that createdb without any arguments will create + details. Note that createdb without any arguments will create a database with the current user name. @@ -160,11 +160,11 @@ createdb dbname configure and manage it themselves. To achieve that, use one of the following commands: -CREATE DATABASE dbname OWNER rolename; +CREATE DATABASE dbname OWNER rolename; from the SQL environment, or: -createdb -O rolename dbname +createdb -O rolename dbname from the shell. Only the superuser is allowed to create a database for @@ -176,55 +176,55 @@ createdb -O rolename dbname Template Databases - CREATE DATABASE actually works by copying an existing + CREATE DATABASE actually works by copying an existing database. By default, it copies the standard system database named - template1.template1 Thus that - database is the template from which new databases are - made. If you add objects to template1, these objects + template1.template1 Thus that + database is the template from which new databases are + made. If you add objects to template1, these objects will be copied into subsequently created user databases. This behavior allows site-local modifications to the standard set of objects in databases. For example, if you install the procedural - language PL/Perl in template1, it will + language PL/Perl in template1, it will automatically be available in user databases without any extra action being taken when those databases are created. There is a second standard system database named - template0.template0 This + template0.template0 This database contains the same data as the initial contents of - template1, that is, only the standard objects + template1, that is, only the standard objects predefined by your version of - PostgreSQL. template0 + PostgreSQL. template0 should never be changed after the database cluster has been initialized. By instructing - CREATE DATABASE to copy template0 instead - of template1, you can create a virgin user + CREATE DATABASE to copy template0 instead + of template1, you can create a virgin user database that contains none of the site-local additions in - template1. This is particularly handy when restoring a - pg_dump dump: the dump script should be restored in a + template1. This is particularly handy when restoring a + pg_dump dump: the dump script should be restored in a virgin database to ensure that one recreates the correct contents of the dumped database, without conflicting with objects that - might have been added to template1 later on. + might have been added to template1 later on. 
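As a concrete sketch of that restore workflow (the database and file names are purely illustrative): once an empty database has been created from template0, as shown just below, the dump script produced by pg_dump can be replayed into it from the shell:

psql -d restored_db -f dump.sql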
- Another common reason for copying template0 instead - of template1 is that new encoding and locale settings - can be specified when copying template0, whereas a copy - of template1 must use the same settings it does. - This is because template1 might contain encoding-specific - or locale-specific data, while template0 is known not to. + Another common reason for copying template0 instead + of template1 is that new encoding and locale settings + can be specified when copying template0, whereas a copy + of template1 must use the same settings it does. + This is because template1 might contain encoding-specific + or locale-specific data, while template0 is known not to. To create a database by copying template0, use: -CREATE DATABASE dbname TEMPLATE template0; +CREATE DATABASE dbname TEMPLATE template0; from the SQL environment, or: -createdb -T template0 dbname +createdb -T template0 dbname from the shell. @@ -232,49 +232,49 @@ createdb -T template0 dbname It is possible to create additional template databases, and indeed one can copy any database in a cluster by specifying its name - as the template for CREATE DATABASE. It is important to + as the template for CREATE DATABASE. It is important to understand, however, that this is not (yet) intended as a general-purpose COPY DATABASE facility. The principal limitation is that no other sessions can be connected to the source database while it is being copied. CREATE - DATABASE will fail if any other connection exists when it starts; + DATABASE will fail if any other connection exists when it starts; during the copy operation, new connections to the source database are prevented. - Two useful flags exist in pg_databasepg_database for each + Two useful flags exist in pg_databasepg_database for each database: the columns datistemplate and datallowconn. datistemplate can be set to indicate that a database is intended as a template for - CREATE DATABASE. If this flag is set, the database can be - cloned by any user with CREATEDB privileges; if it is not set, + CREATE DATABASE. If this flag is set, the database can be + cloned by any user with CREATEDB privileges; if it is not set, only superusers and the owner of the database can clone it. If datallowconn is false, then no new connections to that database will be allowed (but existing sessions are not terminated simply by setting the flag false). The template0 - database is normally marked datallowconn = false to prevent its modification. + database is normally marked datallowconn = false to prevent its modification. Both template0 and template1 - should always be marked with datistemplate = true. + should always be marked with datistemplate = true. - template1 and template0 do not have any special - status beyond the fact that the name template1 is the default - source database name for CREATE DATABASE. - For example, one could drop template1 and recreate it from - template0 without any ill effects. This course of action + template1 and template0 do not have any special + status beyond the fact that the name template1 is the default + source database name for CREATE DATABASE. + For example, one could drop template1 and recreate it from + template0 without any ill effects. This course of action might be advisable if one has carelessly added a bunch of junk in - template1. (To delete template1, - it must have pg_database.datistemplate = false.) + template1. (To delete template1, + it must have pg_database.datistemplate = false.) 
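A quick way to inspect these flags, and (assuming a suitably privileged role; the database name is illustrative) to adjust them, is:

SELECT datname, datistemplate, datallowconn
FROM pg_database;

ALTER DATABASE mytemplate IS_TEMPLATE true;  -- allow this database to be cloned by users with CREATEDB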
- The postgres database is also created when a database + The postgres database is also created when a database cluster is initialized. This database is meant as a default database for users and applications to connect to. It is simply a copy of - template1 and can be dropped and recreated if necessary. + template1 and can be dropped and recreated if necessary. @@ -284,7 +284,7 @@ createdb -T template0 dbname Recall from that the - PostgreSQL server provides a large number of + PostgreSQL server provides a large number of run-time configuration variables. You can set database-specific default values for many of these settings. @@ -305,8 +305,8 @@ ALTER DATABASE mydb SET geqo TO off; session started. Note that users can still alter this setting during their sessions; it will only be the default. To undo any such setting, use - ALTER DATABASE dbname RESET - varname. + ALTER DATABASE dbname RESET + varname. @@ -315,9 +315,9 @@ ALTER DATABASE mydb SET geqo TO off; Databases are destroyed with the command - :DROP DATABASE + :DROP DATABASE -DROP DATABASE name; +DROP DATABASE name; Only the owner of the database, or a superuser, can drop a database. Dropping a database removes all objects @@ -329,19 +329,19 @@ DROP DATABASE name; You cannot execute the DROP DATABASE command while connected to the victim database. You can, however, be - connected to any other database, including the template1 + connected to any other database, including the template1 database. - template1 would be the only option for dropping the last user database of a + template1 would be the only option for dropping the last user database of a given cluster. For convenience, there is also a shell program to drop - databases, :dropdb + databases, :dropdb dropdb dbname - (Unlike createdb, it is not the default action to drop + (Unlike createdb, it is not the default action to drop the database with the current user name.) @@ -354,7 +354,7 @@ dropdb dbname - Tablespaces in PostgreSQL allow database administrators to + Tablespaces in PostgreSQL allow database administrators to define locations in the file system where the files representing database objects can be stored. Once created, a tablespace can be referred to by name when creating database objects. @@ -362,7 +362,7 @@ dropdb dbname By using tablespaces, an administrator can control the disk layout - of a PostgreSQL installation. This is useful in at + of a PostgreSQL installation. This is useful in at least two ways. First, if the partition or volume on which the cluster was initialized runs out of space and cannot be extended, a tablespace can be created on a different partition and used @@ -397,12 +397,12 @@ dropdb dbname To define a tablespace, use the - command, for example:CREATE TABLESPACE: + command, for example:CREATE TABLESPACE: CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/data'; The location must be an existing, empty directory that is owned by - the PostgreSQL operating system user. All objects subsequently + the PostgreSQL operating system user. All objects subsequently created within the tablespace will be stored in files underneath this directory. The location must not be on removable or transient storage, as the cluster might fail to function if the tablespace is missing @@ -414,7 +414,7 @@ CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/data'; There is usually not much point in making more than one tablespace per logical file system, since you cannot control the location of individual files within a logical file system. 
However, - PostgreSQL does not enforce any such limitation, and + PostgreSQL does not enforce any such limitation, and indeed it is not directly aware of the file system boundaries on your system. It just stores files in the directories you tell it to use. @@ -423,15 +423,15 @@ CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/data'; Creation of the tablespace itself must be done as a database superuser, but after that you can allow ordinary database users to use it. - To do that, grant them the CREATE privilege on it. + To do that, grant them the CREATE privilege on it. Tables, indexes, and entire databases can be assigned to - particular tablespaces. To do so, a user with the CREATE + particular tablespaces. To do so, a user with the CREATE privilege on a given tablespace must pass the tablespace name as a parameter to the relevant command. For example, the following creates - a table in the tablespace space1: + a table in the tablespace space1: CREATE TABLE foo(i int) TABLESPACE space1; @@ -443,9 +443,9 @@ CREATE TABLE foo(i int) TABLESPACE space1; SET default_tablespace = space1; CREATE TABLE foo(i int); - When default_tablespace is set to anything but an empty - string, it supplies an implicit TABLESPACE clause for - CREATE TABLE and CREATE INDEX commands that + When default_tablespace is set to anything but an empty + string, it supplies an implicit TABLESPACE clause for + CREATE TABLE and CREATE INDEX commands that do not have an explicit one. @@ -463,9 +463,9 @@ CREATE TABLE foo(i int); The tablespace associated with a database is used to store the system catalogs of that database. Furthermore, it is the default tablespace used for tables, indexes, and temporary files created within the database, - if no TABLESPACE clause is given and no other selection is - specified by default_tablespace or - temp_tablespaces (as appropriate). + if no TABLESPACE clause is given and no other selection is + specified by default_tablespace or + temp_tablespaces (as appropriate). If a database is created without specifying a tablespace for it, it uses the same tablespace as the template database it is copied from. @@ -473,12 +473,12 @@ CREATE TABLE foo(i int); Two tablespaces are automatically created when the database cluster is initialized. The - pg_global tablespace is used for shared system catalogs. The - pg_default tablespace is the default tablespace of the - template1 and template0 databases (and, therefore, + pg_global tablespace is used for shared system catalogs. The + pg_default tablespace is the default tablespace of the + template1 and template0 databases (and, therefore, will be the default tablespace for other databases as well, unless - overridden by a TABLESPACE clause in CREATE - DATABASE). + overridden by a TABLESPACE clause in CREATE + DATABASE). @@ -501,25 +501,25 @@ CREATE TABLE foo(i int); SELECT spcname FROM pg_tablespace; - The program's \db meta-command + The program's \db meta-command is also useful for listing the existing tablespaces. - PostgreSQL makes use of symbolic links + PostgreSQL makes use of symbolic links to simplify the implementation of tablespaces. This - means that tablespaces can be used only on systems + means that tablespaces can be used only on systems that support symbolic links. - The directory $PGDATA/pg_tblspc contains symbolic links that + The directory $PGDATA/pg_tblspc contains symbolic links that point to each of the non-built-in tablespaces defined in the cluster. 
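One way to see where those links point, without touching the file system, is to ask the server itself; a small sketch:

SELECT spcname,
       pg_tablespace_location(oid) AS location  -- built-in tablespaces report an empty path here, since they live inside the data directory
FROM pg_tablespace;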
Although not recommended, it is possible to adjust the tablespace layout by hand by redefining these links. Under no circumstances perform this operation while the server is running. Note that in PostgreSQL 9.1 - and earlier you will also need to update the pg_tablespace - catalog with the new locations. (If you do not, pg_dump will + and earlier you will also need to update the pg_tablespace + catalog with the new locations. (If you do not, pg_dump will continue to output the old tablespace locations.) diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index 18fb9c2aa6..6f8203355e 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -24,11 +24,11 @@ analyzing performance. Most of this chapter is devoted to describing PostgreSQL's statistics collector, but one should not neglect regular Unix monitoring programs such as - ps, top, iostat, and vmstat. + ps, top, iostat, and vmstat. Also, once one has identified a poorly-performing query, further investigation might be needed using PostgreSQL's command. - discusses EXPLAIN + discusses EXPLAIN and other methods for understanding the behavior of an individual query. @@ -43,7 +43,7 @@ On most Unix platforms, PostgreSQL modifies its - command title as reported by ps, so that individual server + command title as reported by ps, so that individual server processes can readily be identified. A sample display is @@ -59,29 +59,29 @@ postgres 15606 0.0 0.0 58772 3052 ? Ss 18:07 0:00 postgres: tgl postgres 15610 0.0 0.0 58772 3056 ? Ss 18:07 0:00 postgres: tgl regression [local] idle in transaction - (The appropriate invocation of ps varies across different + (The appropriate invocation of ps varies across different platforms, as do the details of what is shown. This example is from a recent Linux system.) The first process listed here is the master server process. The command arguments shown for it are the same ones used when it was launched. The next five processes are background worker processes automatically launched by the - master process. (The stats collector process will not be present + master process. (The stats collector process will not be present if you have set the system not to start the statistics collector; likewise - the autovacuum launcher process can be disabled.) + the autovacuum launcher process can be disabled.) Each of the remaining processes is a server process handling one client connection. Each such process sets its command line display in the form -postgres: user database host activity +postgres: user database host activity The user, database, and (client) host items remain the same for the life of the client connection, but the activity indicator changes. - The activity can be idle (i.e., waiting for a client command), - idle in transaction (waiting for client inside a BEGIN block), - or a command type name such as SELECT. Also, - waiting is appended if the server process is presently waiting + The activity can be idle (i.e., waiting for a client command), + idle in transaction (waiting for client inside a BEGIN block), + or a command type name such as SELECT. Also, + waiting is appended if the server process is presently waiting on a lock held by another session. In the above example we can infer that process 15606 is waiting for process 15610 to complete its transaction and thereby release some lock. 
(Process 15610 must be the blocker, because @@ -93,7 +93,7 @@ postgres: user database host If has been configured the - cluster name will also be shown in ps output: + cluster name will also be shown in ps output: $ psql -c 'SHOW cluster_name' cluster_name @@ -122,8 +122,8 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser flags, not just one. In addition, your original invocation of the postgres command must have a shorter ps status display than that provided by each - server process. If you fail to do all three things, the ps - output for each server process will be the original postgres + server process. If you fail to do all three things, the ps + output for each server process will be the original postgres command line. @@ -137,7 +137,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - PostgreSQL's statistics collector + PostgreSQL's statistics collector is a subsystem that supports collection and reporting of information about server activity. Presently, the collector can count accesses to tables and indexes in both disk-block and individual-row terms. It also tracks @@ -161,7 +161,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Since collection of statistics adds some overhead to query execution, the system can be configured to collect or not collect information. This is controlled by configuration parameters that are normally set in - postgresql.conf. (See for + postgresql.conf. (See for details about setting configuration parameters.) @@ -186,13 +186,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - Normally these parameters are set in postgresql.conf so + Normally these parameters are set in postgresql.conf so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with - SET.) + SET.) @@ -201,7 +201,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser These files are stored in the directory named by the parameter, pg_stat_tmp by default. - For better performance, stats_temp_directory can be + For better performance, stats_temp_directory can be pointed at a RAM-based file system, decreasing physical I/O requirements. When the server shuts down cleanly, a permanent copy of the statistics data is stored in the pg_stat subdirectory, so that @@ -261,10 +261,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser A transaction can also see its own statistics (as yet untransmitted to the - collector) in the views pg_stat_xact_all_tables, - pg_stat_xact_sys_tables, - pg_stat_xact_user_tables, and - pg_stat_xact_user_functions. These numbers do not act as + collector) in the views pg_stat_xact_all_tables, + pg_stat_xact_sys_tables, + pg_stat_xact_user_tables, and + pg_stat_xact_user_functions. These numbers do not act as stated above; instead they update continuously throughout the transaction. @@ -293,7 +293,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_replicationpg_stat_replication + pg_stat_replicationpg_stat_replication One row per WAL sender process, showing statistics about replication to that sender's connected standby server. See for details. @@ -301,7 +301,7 @@ postgres 27093 0.0 0.0 30096 2752 ? 
Ss 11:34 0:00 postgres: ser - pg_stat_wal_receiverpg_stat_wal_receiver + pg_stat_wal_receiverpg_stat_wal_receiver Only one row, showing statistics about the WAL receiver from that receiver's connected server. See for details. @@ -309,7 +309,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_subscriptionpg_stat_subscription + pg_stat_subscriptionpg_stat_subscription At least one row per subscription, showing information about the subscription workers. See for details. @@ -317,7 +317,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_sslpg_stat_ssl + pg_stat_sslpg_stat_ssl One row per connection (regular and replication), showing information about SSL used on this connection. See for details. @@ -325,9 +325,9 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_progress_vacuumpg_stat_progress_vacuum + pg_stat_progress_vacuumpg_stat_progress_vacuum One row for each backend (including autovacuum worker processes) running - VACUUM, showing current progress. + VACUUM, showing current progress. See . @@ -349,7 +349,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_archiverpg_stat_archiver + pg_stat_archiverpg_stat_archiver One row only, showing statistics about the WAL archiver process's activity. See for details. @@ -357,7 +357,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_bgwriterpg_stat_bgwriter + pg_stat_bgwriterpg_stat_bgwriter One row only, showing statistics about the background writer process's activity. See for details. @@ -365,14 +365,14 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_databasepg_stat_database + pg_stat_databasepg_stat_database One row per database, showing database-wide statistics. See for details. - pg_stat_database_conflictspg_stat_database_conflicts + pg_stat_database_conflictspg_stat_database_conflicts One row per database, showing database-wide statistics about query cancels due to conflict with recovery on standby servers. @@ -381,7 +381,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_all_tablespg_stat_all_tables + pg_stat_all_tablespg_stat_all_tables One row for each table in the current database, showing statistics about accesses to that specific table. @@ -390,40 +390,40 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_sys_tablespg_stat_sys_tables - Same as pg_stat_all_tables, except that only + pg_stat_sys_tablespg_stat_sys_tables + Same as pg_stat_all_tables, except that only system tables are shown. - pg_stat_user_tablespg_stat_user_tables - Same as pg_stat_all_tables, except that only user + pg_stat_user_tablespg_stat_user_tables + Same as pg_stat_all_tables, except that only user tables are shown. - pg_stat_xact_all_tablespg_stat_xact_all_tables - Similar to pg_stat_all_tables, but counts actions - taken so far within the current transaction (which are not - yet included in pg_stat_all_tables and related views). + pg_stat_xact_all_tablespg_stat_xact_all_tables + Similar to pg_stat_all_tables, but counts actions + taken so far within the current transaction (which are not + yet included in pg_stat_all_tables and related views). The columns for numbers of live and dead rows and vacuum and analyze actions are not present in this view. 
- pg_stat_xact_sys_tablespg_stat_xact_sys_tables - Same as pg_stat_xact_all_tables, except that only + pg_stat_xact_sys_tablespg_stat_xact_sys_tables + Same as pg_stat_xact_all_tables, except that only system tables are shown. - pg_stat_xact_user_tablespg_stat_xact_user_tables - Same as pg_stat_xact_all_tables, except that only + pg_stat_xact_user_tablespg_stat_xact_user_tables + Same as pg_stat_xact_all_tables, except that only user tables are shown. - pg_stat_all_indexespg_stat_all_indexes + pg_stat_all_indexespg_stat_all_indexes One row for each index in the current database, showing statistics about accesses to that specific index. @@ -432,19 +432,19 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_sys_indexespg_stat_sys_indexes - Same as pg_stat_all_indexes, except that only + pg_stat_sys_indexespg_stat_sys_indexes + Same as pg_stat_all_indexes, except that only indexes on system tables are shown. - pg_stat_user_indexespg_stat_user_indexes - Same as pg_stat_all_indexes, except that only + pg_stat_user_indexespg_stat_user_indexes + Same as pg_stat_all_indexes, except that only indexes on user tables are shown. - pg_statio_all_tablespg_statio_all_tables + pg_statio_all_tablespg_statio_all_tables One row for each table in the current database, showing statistics about I/O on that specific table. @@ -453,19 +453,19 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_statio_sys_tablespg_statio_sys_tables - Same as pg_statio_all_tables, except that only + pg_statio_sys_tablespg_statio_sys_tables + Same as pg_statio_all_tables, except that only system tables are shown. - pg_statio_user_tablespg_statio_user_tables - Same as pg_statio_all_tables, except that only + pg_statio_user_tablespg_statio_user_tables + Same as pg_statio_all_tables, except that only user tables are shown. - pg_statio_all_indexespg_statio_all_indexes + pg_statio_all_indexespg_statio_all_indexes One row for each index in the current database, showing statistics about I/O on that specific index. @@ -474,19 +474,19 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_statio_sys_indexespg_statio_sys_indexes - Same as pg_statio_all_indexes, except that only + pg_statio_sys_indexespg_statio_sys_indexes + Same as pg_statio_all_indexes, except that only indexes on system tables are shown. - pg_statio_user_indexespg_statio_user_indexes - Same as pg_statio_all_indexes, except that only + pg_statio_user_indexespg_statio_user_indexes + Same as pg_statio_all_indexes, except that only indexes on user tables are shown. - pg_statio_all_sequencespg_statio_all_sequences + pg_statio_all_sequencespg_statio_all_sequences One row for each sequence in the current database, showing statistics about I/O on that specific sequence. @@ -495,20 +495,20 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_statio_sys_sequencespg_statio_sys_sequences - Same as pg_statio_all_sequences, except that only + pg_statio_sys_sequencespg_statio_sys_sequences + Same as pg_statio_all_sequences, except that only system sequences are shown. (Presently, no system sequences are defined, so this view is always empty.) - pg_statio_user_sequencespg_statio_user_sequences - Same as pg_statio_all_sequences, except that only + pg_statio_user_sequencespg_statio_user_sequences + Same as pg_statio_all_sequences, except that only user sequences are shown. 
- pg_stat_user_functionspg_stat_user_functions + pg_stat_user_functionspg_stat_user_functions One row for each tracked function, showing statistics about executions of that function. See @@ -517,10 +517,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_xact_user_functionspg_stat_xact_user_functions - Similar to pg_stat_user_functions, but counts only - calls during the current transaction (which are not - yet included in pg_stat_user_functions). + pg_stat_xact_user_functionspg_stat_xact_user_functions + Similar to pg_stat_user_functions, but counts only + calls during the current transaction (which are not + yet included in pg_stat_user_functions). @@ -533,18 +533,18 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - The pg_statio_ views are primarily useful to + The pg_statio_ views are primarily useful to determine the effectiveness of the buffer cache. When the number of actual disk reads is much smaller than the number of buffer hits, then the cache is satisfying most read requests without invoking a kernel call. However, these statistics do not give the - entire story: due to the way in which PostgreSQL + entire story: due to the way in which PostgreSQL handles disk I/O, data that is not in the - PostgreSQL buffer cache might still reside in the + PostgreSQL buffer cache might still reside in the kernel's I/O cache, and might therefore still be fetched without requiring a physical read. Users interested in obtaining more - detailed information on PostgreSQL I/O behavior are - advised to use the PostgreSQL statistics collector + detailed information on PostgreSQL I/O behavior are + advised to use the PostgreSQL statistics collector in combination with operating system utilities that allow insight into the kernel's handling of I/O. @@ -564,39 +564,39 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - datid - oid + datid + oid OID of the database this backend is connected to - datname - name + datname + name Name of the database this backend is connected to - pid - integer + pid + integer Process ID of this backend - usesysid - oid + usesysid + oid OID of the user logged into this backend - usename - name + usename + name Name of the user logged into this backend - application_name - text + application_name + text Name of the application that is connected to this backend - client_addr - inet + client_addr + inet IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an @@ -604,78 +604,78 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - client_hostname - text + client_hostname + text Host name of the connected client, as reported by a - reverse DNS lookup of client_addr. This field will + reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when is enabled. - client_port - integer + client_port + integer TCP port number that the client is using for communication - with this backend, or -1 if a Unix socket is used + with this backend, or -1 if a Unix socket is used - backend_start - timestamp with time zone + backend_start + timestamp with time zone Time when this process was started. For client backends, this is the time the client connected to the server. - xact_start - timestamp with time zone + xact_start + timestamp with time zone Time when this process' current transaction was started, or null if no transaction is active. 
If the current query is the first of its transaction, this column is equal to the - query_start column. + query_start column. - query_start - timestamp with time zone + query_start + timestamp with time zone Time when the currently active query was started, or if - state is not active, when the last query + state is not active, when the last query was started - state_change - timestamp with time zone - Time when the state was last changed + state_change + timestamp with time zone + Time when the state was last changed - wait_event_type - text + wait_event_type + text The type of event for which the backend is waiting, if any; otherwise NULL. Possible values are: - LWLock: The backend is waiting for a lightweight lock. + LWLock: The backend is waiting for a lightweight lock. Each such lock protects a particular data structure in shared memory. - wait_event will contain a name identifying the purpose + wait_event will contain a name identifying the purpose of the lightweight lock. (Some locks have specific names; others are part of a group of locks each with a similar purpose.) - Lock: The backend is waiting for a heavyweight lock. + Lock: The backend is waiting for a heavyweight lock. Heavyweight locks, also known as lock manager locks or simply locks, primarily protect SQL-visible objects such as tables. However, they are also used to ensure mutual exclusion for certain internal - operations such as relation extension. wait_event will + operations such as relation extension. wait_event will identify the type of lock awaited. - BufferPin: The server process is waiting to access to + BufferPin: The server process is waiting to access to a data buffer during a period when no other process can be examining that buffer. Buffer pin waits can be protracted if another process holds an open cursor which last read data from the @@ -684,94 +684,94 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - Activity: The server process is idle. This is used by + Activity: The server process is idle. This is used by system processes waiting for activity in their main processing loop. - wait_event will identify the specific wait point. + wait_event will identify the specific wait point. - Extension: The server process is waiting for activity + Extension: The server process is waiting for activity in an extension module. This category is useful for modules to track custom waiting points. - Client: The server process is waiting for some activity + Client: The server process is waiting for some activity on a socket from user applications, and that the server expects something to happen that is independent from its internal processes. - wait_event will identify the specific wait point. + wait_event will identify the specific wait point. - IPC: The server process is waiting for some activity - from another process in the server. wait_event will + IPC: The server process is waiting for some activity + from another process in the server. wait_event will identify the specific wait point. - Timeout: The server process is waiting for a timeout - to expire. wait_event will identify the specific wait + Timeout: The server process is waiting for a timeout + to expire. wait_event will identify the specific wait point. - IO: The server process is waiting for a IO to complete. - wait_event will identify the specific wait point. + IO: The server process is waiting for a IO to complete. + wait_event will identify the specific wait point. 
- wait_event - text + wait_event + text Wait event name if backend is currently waiting, otherwise NULL. See for details. - state - text + state + text Current overall state of this backend. Possible values are: - active: The backend is executing a query. + active: The backend is executing a query. - idle: The backend is waiting for a new client command. + idle: The backend is waiting for a new client command. - idle in transaction: The backend is in a transaction, + idle in transaction: The backend is in a transaction, but is not currently executing a query. - idle in transaction (aborted): This state is similar to - idle in transaction, except one of the statements in + idle in transaction (aborted): This state is similar to + idle in transaction, except one of the statements in the transaction caused an error. - fastpath function call: The backend is executing a + fastpath function call: The backend is executing a fast-path function. - disabled: This state is reported if disabled: This state is reported if is disabled in this backend. @@ -786,13 +786,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser backend_xmin xid - The current backend's xmin horizon. + The current backend's xmin horizon. - query - text + query + text Text of this backend's most recent query. If - state is active this field shows the + state is active this field shows the currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 characters; this value can be changed via the parameter @@ -803,11 +803,11 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser backend_type text Type of current backend. Possible types are - autovacuum launcher, autovacuum worker, - background worker, background writer, - client backend, checkpointer, - startup, walreceiver, - walsender and walwriter. + autovacuum launcher, autovacuum worker, + background worker, background writer, + client backend, checkpointer, + startup, walreceiver, + walsender and walwriter. @@ -822,10 +822,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - The wait_event and state columns are - independent. If a backend is in the active state, - it may or may not be waiting on some event. If the state - is active and wait_event is non-null, it + The wait_event and state columns are + independent. If a backend is in the active state, + it may or may not be waiting on some event. If the state + is active and wait_event is non-null, it means that a query is being executed, but is being blocked somewhere in the system. @@ -845,767 +845,767 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - LWLock - ShmemIndexLock + LWLock + ShmemIndexLock Waiting to find or allocate space in shared memory. - OidGenLock + OidGenLock Waiting to allocate or assign an OID. - XidGenLock + XidGenLock Waiting to allocate or assign a transaction id. - ProcArrayLock + ProcArrayLock Waiting to get a snapshot or clearing a transaction id at transaction end. - SInvalReadLock + SInvalReadLock Waiting to retrieve or remove messages from shared invalidation queue. - SInvalWriteLock + SInvalWriteLock Waiting to add a message in shared invalidation queue. - WALBufMappingLock + WALBufMappingLock Waiting to replace a page in WAL buffers. - WALWriteLock + WALWriteLock Waiting for WAL buffers to be written to disk. - ControlFileLock + ControlFileLock Waiting to read or update the control file or creation of a new WAL file. 
- CheckpointLock + CheckpointLock Waiting to perform checkpoint. - CLogControlLock + CLogControlLock Waiting to read or update transaction status. - SubtransControlLock + SubtransControlLock Waiting to read or update sub-transaction information. - MultiXactGenLock + MultiXactGenLock Waiting to read or update shared multixact state. - MultiXactOffsetControlLock + MultiXactOffsetControlLock Waiting to read or update multixact offset mappings. - MultiXactMemberControlLock + MultiXactMemberControlLock Waiting to read or update multixact member mappings. - RelCacheInitLock + RelCacheInitLock Waiting to read or write relation cache initialization file. - CheckpointerCommLock + CheckpointerCommLock Waiting to manage fsync requests. - TwoPhaseStateLock + TwoPhaseStateLock Waiting to read or update the state of prepared transactions. - TablespaceCreateLock + TablespaceCreateLock Waiting to create or drop the tablespace. - BtreeVacuumLock + BtreeVacuumLock Waiting to read or update vacuum-related information for a B-tree index. - AddinShmemInitLock + AddinShmemInitLock Waiting to manage space allocation in shared memory. - AutovacuumLock + AutovacuumLock Autovacuum worker or launcher waiting to update or read the current state of autovacuum workers. - AutovacuumScheduleLock + AutovacuumScheduleLock Waiting to ensure that the table it has selected for a vacuum still needs vacuuming. - SyncScanLock + SyncScanLock Waiting to get the start location of a scan on a table for synchronized scans. - RelationMappingLock + RelationMappingLock Waiting to update the relation map file used to store catalog to filenode mapping. - AsyncCtlLock + AsyncCtlLock Waiting to read or update shared notification state. - AsyncQueueLock + AsyncQueueLock Waiting to read or update notification messages. - SerializableXactHashLock + SerializableXactHashLock Waiting to retrieve or store information about serializable transactions. - SerializableFinishedListLock + SerializableFinishedListLock Waiting to access the list of finished serializable transactions. - SerializablePredicateLockListLock + SerializablePredicateLockListLock Waiting to perform an operation on a list of locks held by serializable transactions. - OldSerXidLock + OldSerXidLock Waiting to read or record conflicting serializable transactions. - SyncRepLock + SyncRepLock Waiting to read or update information about synchronous replicas. - BackgroundWorkerLock + BackgroundWorkerLock Waiting to read or update background worker state. - DynamicSharedMemoryControlLock + DynamicSharedMemoryControlLock Waiting to read or update dynamic shared memory state. - AutoFileLock - Waiting to update the postgresql.auto.conf file. + AutoFileLock + Waiting to update the postgresql.auto.conf file. - ReplicationSlotAllocationLock + ReplicationSlotAllocationLock Waiting to allocate or free a replication slot. - ReplicationSlotControlLock + ReplicationSlotControlLock Waiting to read or update replication slot state. - CommitTsControlLock + CommitTsControlLock Waiting to read or update transaction commit timestamps. - CommitTsLock + CommitTsLock Waiting to read or update the last value set for the transaction timestamp. - ReplicationOriginLock + ReplicationOriginLock Waiting to setup, drop or use replication origin. - MultiXactTruncationLock + MultiXactTruncationLock Waiting to read or truncate multixact information. - OldSnapshotTimeMapLock + OldSnapshotTimeMapLock Waiting to read or update old snapshot control information. 
- BackendRandomLock + BackendRandomLock Waiting to generate a random number. - LogicalRepWorkerLock + LogicalRepWorkerLock Waiting for action on logical replication worker to finish. - CLogTruncationLock + CLogTruncationLock Waiting to truncate the write-ahead log or waiting for write-ahead log truncation to finish. - clog + clog Waiting for I/O on a clog (transaction status) buffer. - commit_timestamp + commit_timestamp Waiting for I/O on commit timestamp buffer. - subtrans + subtrans Waiting for I/O a subtransaction buffer. - multixact_offset + multixact_offset Waiting for I/O on a multixact offset buffer. - multixact_member + multixact_member Waiting for I/O on a multixact_member buffer. - async + async Waiting for I/O on an async (notify) buffer. - oldserxid + oldserxid Waiting to I/O on an oldserxid buffer. - wal_insert + wal_insert Waiting to insert WAL into a memory buffer. - buffer_content + buffer_content Waiting to read or write a data page in memory. - buffer_io + buffer_io Waiting for I/O on a data page. - replication_origin + replication_origin Waiting to read or update the replication progress. - replication_slot_io + replication_slot_io Waiting for I/O on a replication slot. - proc + proc Waiting to read or update the fast-path lock information. - buffer_mapping + buffer_mapping Waiting to associate a data block with a buffer in the buffer pool. - lock_manager + lock_manager Waiting to add or examine locks for backends, or waiting to join or exit a locking group (used by parallel query). - predicate_lock_manager + predicate_lock_manager Waiting to add or examine predicate lock information. - parallel_query_dsa + parallel_query_dsa Waiting for parallel query dynamic shared memory allocation lock. - tbm + tbm Waiting for TBM shared iterator lock. - Lock - relation + Lock + relation Waiting to acquire a lock on a relation. - extend + extend Waiting to extend a relation. - page + page Waiting to acquire a lock on page of a relation. - tuple + tuple Waiting to acquire a lock on a tuple. - transactionid + transactionid Waiting for a transaction to finish. - virtualxid + virtualxid Waiting to acquire a virtual xid lock. - speculative token + speculative token Waiting to acquire a speculative insertion lock. - object + object Waiting to acquire a lock on a non-relation database object. - userlock + userlock Waiting to acquire a user lock. - advisory + advisory Waiting to acquire an advisory user lock. - BufferPin - BufferPin + BufferPin + BufferPin Waiting to acquire a pin on a buffer. - Activity - ArchiverMain + Activity + ArchiverMain Waiting in main loop of the archiver process. - AutoVacuumMain + AutoVacuumMain Waiting in main loop of autovacuum launcher process. - BgWriterHibernate + BgWriterHibernate Waiting in background writer process, hibernating. - BgWriterMain + BgWriterMain Waiting in main loop of background writer process background worker. - CheckpointerMain + CheckpointerMain Waiting in main loop of checkpointer process. - LogicalLauncherMain + LogicalLauncherMain Waiting in main loop of logical launcher process. - LogicalApplyMain + LogicalApplyMain Waiting in main loop of logical apply process. - PgStatMain + PgStatMain Waiting in main loop of the statistics collector process. - RecoveryWalAll + RecoveryWalAll Waiting for WAL from any kind of source (local, archive or stream) at recovery. - RecoveryWalStream + RecoveryWalStream Waiting for WAL from a stream at recovery. - SysLoggerMain + SysLoggerMain Waiting in main loop of syslogger process. 
- WalReceiverMain + WalReceiverMain Waiting in main loop of WAL receiver process. - WalSenderMain + WalSenderMain Waiting in main loop of WAL sender process. - WalWriterMain + WalWriterMain Waiting in main loop of WAL writer process. - Client - ClientRead + Client + ClientRead Waiting to read data from the client. - ClientWrite + ClientWrite Waiting to write data from the client. - LibPQWalReceiverConnect + LibPQWalReceiverConnect Waiting in WAL receiver to establish connection to remote server. - LibPQWalReceiverReceive + LibPQWalReceiverReceive Waiting in WAL receiver to receive data from remote server. - SSLOpenServer + SSLOpenServer Waiting for SSL while attempting connection. - WalReceiverWaitStart + WalReceiverWaitStart Waiting for startup process to send initial data for streaming replication. - WalSenderWaitForWAL + WalSenderWaitForWAL Waiting for WAL to be flushed in WAL sender process. - WalSenderWriteData + WalSenderWriteData Waiting for any activity when processing replies from WAL receiver in WAL sender process. - Extension - Extension + Extension + Extension Waiting in an extension. - IPC - BgWorkerShutdown + IPC + BgWorkerShutdown Waiting for background worker to shut down. - BgWorkerStartup + BgWorkerStartup Waiting for background worker to start up. - BtreePage + BtreePage Waiting for the page number needed to continue a parallel B-tree scan to become available. - ExecuteGather - Waiting for activity from child process when executing Gather node. + ExecuteGather + Waiting for activity from child process when executing Gather node. - LogicalSyncData + LogicalSyncData Waiting for logical replication remote server to send data for initial table synchronization. - LogicalSyncStateChange + LogicalSyncStateChange Waiting for logical replication remote server to change state. - MessageQueueInternal + MessageQueueInternal Waiting for other process to be attached in shared message queue. - MessageQueuePutMessage + MessageQueuePutMessage Waiting to write a protocol message to a shared message queue. - MessageQueueReceive + MessageQueueReceive Waiting to receive bytes from a shared message queue. - MessageQueueSend + MessageQueueSend Waiting to send bytes to a shared message queue. - ParallelFinish + ParallelFinish Waiting for parallel workers to finish computing. - ParallelBitmapScan + ParallelBitmapScan Waiting for parallel bitmap scan to become initialized. - ProcArrayGroupUpdate + ProcArrayGroupUpdate Waiting for group leader to clear transaction id at transaction end. - ClogGroupUpdate + ClogGroupUpdate Waiting for group leader to update transaction status at transaction end. - ReplicationOriginDrop + ReplicationOriginDrop Waiting for a replication origin to become inactive to be dropped. - ReplicationSlotDrop + ReplicationSlotDrop Waiting for a replication slot to become inactive to be dropped. - SafeSnapshot - Waiting for a snapshot for a READ ONLY DEFERRABLE transaction. + SafeSnapshot + Waiting for a snapshot for a READ ONLY DEFERRABLE transaction. - SyncRep + SyncRep Waiting for confirmation from remote server during synchronous replication. - Timeout - BaseBackupThrottle + Timeout + BaseBackupThrottle Waiting during base backup when throttling activity. - PgSleep - Waiting in process that called pg_sleep. + PgSleep + Waiting in process that called pg_sleep. - RecoveryApplyDelay + RecoveryApplyDelay Waiting to apply WAL at recovery because it is delayed. - IO - BufFileRead + IO + BufFileRead Waiting for a read from a buffered file. 
- BufFileWrite + BufFileWrite Waiting for a write to a buffered file. - ControlFileRead + ControlFileRead Waiting for a read from the control file. - ControlFileSync + ControlFileSync Waiting for the control file to reach stable storage. - ControlFileSyncUpdate + ControlFileSyncUpdate Waiting for an update to the control file to reach stable storage. - ControlFileWrite + ControlFileWrite Waiting for a write to the control file. - ControlFileWriteUpdate + ControlFileWriteUpdate Waiting for a write to update the control file. - CopyFileRead + CopyFileRead Waiting for a read during a file copy operation. - CopyFileWrite + CopyFileWrite Waiting for a write during a file copy operation. - DataFileExtend + DataFileExtend Waiting for a relation data file to be extended. - DataFileFlush + DataFileFlush Waiting for a relation data file to reach stable storage. - DataFileImmediateSync + DataFileImmediateSync Waiting for an immediate synchronization of a relation data file to stable storage. - DataFilePrefetch + DataFilePrefetch Waiting for an asynchronous prefetch from a relation data file. - DataFileRead + DataFileRead Waiting for a read from a relation data file. - DataFileSync + DataFileSync Waiting for changes to a relation data file to reach stable storage. - DataFileTruncate + DataFileTruncate Waiting for a relation data file to be truncated. - DataFileWrite + DataFileWrite Waiting for a write to a relation data file. - DSMFillZeroWrite + DSMFillZeroWrite Waiting to write zero bytes to a dynamic shared memory backing file. - LockFileAddToDataDirRead + LockFileAddToDataDirRead Waiting for a read while adding a line to the data directory lock file. - LockFileAddToDataDirSync + LockFileAddToDataDirSync Waiting for data to reach stable storage while adding a line to the data directory lock file. - LockFileAddToDataDirWrite + LockFileAddToDataDirWrite Waiting for a write while adding a line to the data directory lock file. - LockFileCreateRead + LockFileCreateRead Waiting to read while creating the data directory lock file. - LockFileCreateSync + LockFileCreateSync Waiting for data to reach stable storage while creating the data directory lock file. - LockFileCreateWrite + LockFileCreateWrite Waiting for a write while creating the data directory lock file. - LockFileReCheckDataDirRead + LockFileReCheckDataDirRead Waiting for a read during recheck of the data directory lock file. - LogicalRewriteCheckpointSync + LogicalRewriteCheckpointSync Waiting for logical rewrite mappings to reach stable storage during a checkpoint. - LogicalRewriteMappingSync + LogicalRewriteMappingSync Waiting for mapping data to reach stable storage during a logical rewrite. - LogicalRewriteMappingWrite + LogicalRewriteMappingWrite Waiting for a write of mapping data during a logical rewrite. - LogicalRewriteSync + LogicalRewriteSync Waiting for logical rewrite mappings to reach stable storage. - LogicalRewriteWrite + LogicalRewriteWrite Waiting for a write of logical rewrite mappings. - RelationMapRead + RelationMapRead Waiting for a read of the relation map file. - RelationMapSync + RelationMapSync Waiting for the relation map file to reach stable storage. - RelationMapWrite + RelationMapWrite Waiting for a write to the relation map file. - ReorderBufferRead + ReorderBufferRead Waiting for a read during reorder buffer management. - ReorderBufferWrite + ReorderBufferWrite Waiting for a write during reorder buffer management. 
- ReorderLogicalMappingRead + ReorderLogicalMappingRead Waiting for a read of a logical mapping during reorder buffer management. - ReplicationSlotRead + ReplicationSlotRead Waiting for a read from a replication slot control file. - ReplicationSlotRestoreSync + ReplicationSlotRestoreSync Waiting for a replication slot control file to reach stable storage while restoring it to memory. - ReplicationSlotSync + ReplicationSlotSync Waiting for a replication slot control file to reach stable storage. - ReplicationSlotWrite + ReplicationSlotWrite Waiting for a write to a replication slot control file. - SLRUFlushSync + SLRUFlushSync Waiting for SLRU data to reach stable storage during a checkpoint or database shutdown. - SLRURead + SLRURead Waiting for a read of an SLRU page. - SLRUSync + SLRUSync Waiting for SLRU data to reach stable storage following a page write. - SLRUWrite + SLRUWrite Waiting for a write of an SLRU page. - SnapbuildRead + SnapbuildRead Waiting for a read of a serialized historical catalog snapshot. - SnapbuildSync + SnapbuildSync Waiting for a serialized historical catalog snapshot to reach stable storage. - SnapbuildWrite + SnapbuildWrite Waiting for a write of a serialized historical catalog snapshot. - TimelineHistoryFileSync + TimelineHistoryFileSync Waiting for a timeline history file received via streaming replication to reach stable storage. - TimelineHistoryFileWrite + TimelineHistoryFileWrite Waiting for a write of a timeline history file received via streaming replication. - TimelineHistoryRead + TimelineHistoryRead Waiting for a read of a timeline history file. - TimelineHistorySync + TimelineHistorySync Waiting for a newly created timeline history file to reach stable storage. - TimelineHistoryWrite + TimelineHistoryWrite Waiting for a write of a newly created timeline history file. - TwophaseFileRead + TwophaseFileRead Waiting for a read of a two phase state file. - TwophaseFileSync + TwophaseFileSync Waiting for a two phase state file to reach stable storage. - TwophaseFileWrite + TwophaseFileWrite Waiting for a write of a two phase state file. - WALBootstrapSync + WALBootstrapSync Waiting for WAL to reach stable storage during bootstrapping. - WALBootstrapWrite + WALBootstrapWrite Waiting for a write of a WAL page during bootstrapping. - WALCopyRead + WALCopyRead Waiting for a read when creating a new WAL segment by copying an existing one. - WALCopySync + WALCopySync Waiting a new WAL segment created by copying an existing one to reach stable storage. - WALCopyWrite + WALCopyWrite Waiting for a write when creating a new WAL segment by copying an existing one. - WALInitSync + WALInitSync Waiting for a newly initialized WAL file to reach stable storage. - WALInitWrite + WALInitWrite Waiting for a write while initializing a new WAL file. - WALRead + WALRead Waiting for a read from a WAL file. - WALSenderTimelineHistoryRead + WALSenderTimelineHistoryRead Waiting for a read from a timeline history file during walsender timeline command. - WALSyncMethodAssign + WALSyncMethodAssign Waiting for data to reach stable storage while assigning WAL sync method. - WALWrite + WALWrite Waiting for a write to a WAL file. @@ -1615,10 +1615,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser For tranches registered by extensions, the name is specified by extension - and this will be displayed as wait_event. It is quite + and this will be displayed as wait_event. 
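For the pg_stat_replication columns being retagged here, a minimal monitoring sketch (assuming it is run on the primary, where this view is populated) compares each standby's replay position with the current WAL position:

    SELECT application_name,
           state,
           sync_state,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_byte_lag,
           replay_lag                       -- time-based lag; may be null
    FROM pg_stat_replication;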
It is quite possible that user has registered the tranche in one of the backends (by having allocation in dynamic shared memory) in which case other backends - won't have that information, so we display extension for such + won't have that information, so we display extension for such cases. @@ -1649,53 +1649,53 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - pid - integer + pid + integer Process ID of a WAL sender process - usesysid - oid + usesysid + oid OID of the user logged into this WAL sender process - usename - name + usename + name Name of the user logged into this WAL sender process - application_name - text + application_name + text Name of the application that is connected to this WAL sender - client_addr - inet + client_addr + inet IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine. - client_hostname - text + client_hostname + text Host name of the connected client, as reported by a - reverse DNS lookup of client_addr. This field will + reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when is enabled. - client_port - integer + client_port + integer TCP port number that the client is using for communication - with this WAL sender, or -1 if a Unix socket is used + with this WAL sender, or -1 if a Unix socket is used - backend_start - timestamp with time zone + backend_start + timestamp with time zone Time when this process was started, i.e., when the client connected to this WAL sender @@ -1703,71 +1703,71 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i backend_xmin xid - This standby's xmin horizon reported + This standby's xmin horizon reported by . - state - text + state + text Current WAL sender state. Possible values are: - startup: This WAL sender is starting up. + startup: This WAL sender is starting up. - catchup: This WAL sender's connected standby is + catchup: This WAL sender's connected standby is catching up with the primary. - streaming: This WAL sender is streaming changes + streaming: This WAL sender is streaming changes after its connected standby server has caught up with the primary. - backup: This WAL sender is sending a backup. + backup: This WAL sender is sending a backup. - stopping: This WAL sender is stopping. + stopping: This WAL sender is stopping. - sent_lsn - pg_lsn + sent_lsn + pg_lsn Last write-ahead log location sent on this connection - write_lsn - pg_lsn + write_lsn + pg_lsn Last write-ahead log location written to disk by this standby server - flush_lsn - pg_lsn + flush_lsn + pg_lsn Last write-ahead log location flushed to disk by this standby server - replay_lsn - pg_lsn + replay_lsn + pg_lsn Last write-ahead log location replayed into the database on this standby server - write_lag - interval + write_lag + interval Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that @@ -1776,8 +1776,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i server was configured as a synchronous standby. - flush_lag - interval + flush_lag + interval Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). 
This can be used to gauge the delay that @@ -1786,8 +1786,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i server was configured as a synchronous standby. - replay_lag - interval + replay_lag + interval Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. This can be used to gauge the delay that @@ -1796,38 +1796,38 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i server was configured as a synchronous standby. - sync_priority - integer + sync_priority + integer Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication. - sync_state - text + sync_state + text Synchronous state of this standby server. Possible values are: - async: This standby server is asynchronous. + async: This standby server is asynchronous. - potential: This standby server is now asynchronous, + potential: This standby server is now asynchronous, but can potentially become synchronous if one of current synchronous ones fails. - sync: This standby server is synchronous. + sync: This standby server is synchronous. - quorum: This standby server is considered as a candidate + quorum: This standby server is considered as a candidate for quorum standbys. @@ -1897,69 +1897,69 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - pid - integer + pid + integer Process ID of the WAL receiver process - status - text + status + text Activity status of the WAL receiver process - receive_start_lsn - pg_lsn + receive_start_lsn + pg_lsn First write-ahead log location used when WAL receiver is started - receive_start_tli - integer + receive_start_tli + integer First timeline number used when WAL receiver is started - received_lsn - pg_lsn + received_lsn + pg_lsn Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started - received_tli - integer + received_tli + integer Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started - last_msg_send_time - timestamp with time zone + last_msg_send_time + timestamp with time zone Send time of last message received from origin WAL sender - last_msg_receipt_time - timestamp with time zone + last_msg_receipt_time + timestamp with time zone Receipt time of last message received from origin WAL sender - latest_end_lsn - pg_lsn + latest_end_lsn + pg_lsn Last write-ahead log location reported to origin WAL sender - latest_end_time - timestamp with time zone + latest_end_time + timestamp with time zone Time of last write-ahead log location reported to origin WAL sender - slot_name - text + slot_name + text Replication slot name used by this WAL receiver - conninfo - text + conninfo + text Connection string used by this WAL receiver, with security-sensitive fields obfuscated. 
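On a standby, the corresponding receiver-side information can be read back with a query along these lines (a sketch; the view has at most one row and is empty when no WAL receiver is running):

    SELECT status,
           received_lsn,
           latest_end_lsn,
           now() - last_msg_receipt_time AS time_since_last_message
    FROM pg_stat_wal_receiver;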
@@ -1988,52 +1988,52 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - subid - oid + subid + oid OID of the subscription - subname - text + subname + text Name of the subscription - pid - integer + pid + integer Process ID of the subscription worker process - relid - Oid + relid + Oid OID of the relation that the worker is synchronizing; null for the main apply worker - received_lsn - pg_lsn + received_lsn + pg_lsn Last write-ahead log location received, the initial value of this field being 0 - last_msg_send_time - timestamp with time zone + last_msg_send_time + timestamp with time zone Send time of last message received from origin WAL sender - last_msg_receipt_time - timestamp with time zone + last_msg_receipt_time + timestamp with time zone Receipt time of last message received from origin WAL sender - latest_end_lsn - pg_lsn + latest_end_lsn + pg_lsn Last write-ahead log location reported to origin WAL sender - latest_end_time - timestamp with time zone + latest_end_time + timestamp with time zone Time of last write-ahead log location reported to origin WAL sender @@ -2061,42 +2061,42 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - pid - integer + pid + integer Process ID of a backend or WAL sender process - ssl - boolean + ssl + boolean True if SSL is used on this connection - version - text + version + text Version of SSL in use, or NULL if SSL is not in use on this connection - cipher - text + cipher + text Name of SSL cipher in use, or NULL if SSL is not in use on this connection - bits - integer + bits + integer Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection - compression - boolean + compression + boolean True if SSL compression is in use, false if not, or NULL if SSL is not in use on this connection - clientdn - text + clientdn + text Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. 
This field is truncated if the @@ -2132,37 +2132,37 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - archived_count + archived_count bigint Number of WAL files that have been successfully archived - last_archived_wal + last_archived_wal text Name of the last WAL file successfully archived - last_archived_time + last_archived_time timestamp with time zone Time of the last successful archive operation - failed_count + failed_count bigint Number of failed attempts for archiving WAL files - last_failed_wal + last_failed_wal text Name of the WAL file of the last failed archival operation - last_failed_time + last_failed_time timestamp with time zone Time of the last failed archival operation - stats_reset + stats_reset timestamp with time zone Time at which these statistics were last reset @@ -2189,17 +2189,17 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - checkpoints_timed + checkpoints_timed bigint Number of scheduled checkpoints that have been performed - checkpoints_req + checkpoints_req bigint Number of requested checkpoints that have been performed - checkpoint_write_time + checkpoint_write_time double precision Total amount of time that has been spent in the portion of @@ -2207,7 +2207,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - checkpoint_sync_time + checkpoint_sync_time double precision Total amount of time that has been spent in the portion of @@ -2216,40 +2216,40 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - buffers_checkpoint + buffers_checkpoint bigint Number of buffers written during checkpoints - buffers_clean + buffers_clean bigint Number of buffers written by the background writer - maxwritten_clean + maxwritten_clean bigint Number of times the background writer stopped a cleaning scan because it had written too many buffers - buffers_backend + buffers_backend bigint Number of buffers written directly by a backend - buffers_backend_fsync + buffers_backend_fsync bigint Number of times a backend had to execute its own - fsync call (normally the background writer handles those + fsync call (normally the background writer handles those even when the backend does its own write) - buffers_alloc + buffers_alloc bigint Number of buffers allocated - stats_reset + stats_reset timestamp with time zone Time at which these statistics were last reset @@ -2275,84 +2275,84 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - datid - oid + datid + oid OID of a database - datname - name + datname + name Name of this database - numbackends - integer + numbackends + integer Number of backends currently connected to this database. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset. 
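As a hedged illustration of how the archiver and background writer counters above are typically consumed (not part of the documented examples):

    -- Has WAL archiving started failing or fallen behind?
    SELECT archived_count,
           failed_count,
           last_archived_wal,
           last_failed_wal,
           now() - last_archived_time AS since_last_success
    FROM pg_stat_archiver;

    -- Share of checkpoints that were requested rather than scheduled
    SELECT checkpoints_req,
           checkpoints_timed,
           round(100.0 * checkpoints_req
                 / nullif(checkpoints_req + checkpoints_timed, 0), 1) AS pct_requested
    FROM pg_stat_bgwriter;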
- xact_commit - bigint + xact_commit + bigint Number of transactions in this database that have been committed - xact_rollback - bigint + xact_rollback + bigint Number of transactions in this database that have been rolled back - blks_read - bigint + blks_read + bigint Number of disk blocks read in this database - blks_hit - bigint + blks_hit + bigint Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache) - tup_returned - bigint + tup_returned + bigint Number of rows returned by queries in this database - tup_fetched - bigint + tup_fetched + bigint Number of rows fetched by queries in this database - tup_inserted - bigint + tup_inserted + bigint Number of rows inserted by queries in this database - tup_updated - bigint + tup_updated + bigint Number of rows updated by queries in this database - tup_deleted - bigint + tup_deleted + bigint Number of rows deleted by queries in this database - conflicts - bigint + conflicts + bigint Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see for details.) - temp_files - bigint + temp_files + bigint Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the @@ -2360,8 +2360,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - temp_bytes - bigint + temp_bytes + bigint Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and @@ -2369,25 +2369,25 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - deadlocks - bigint + deadlocks + bigint Number of deadlocks detected in this database - blk_read_time - double precision + blk_read_time + double precision Time spent reading data file blocks by backends in this database, in milliseconds - blk_write_time - double precision + blk_write_time + double precision Time spent writing data file blocks by backends in this database, in milliseconds - stats_reset - timestamp with time zone + stats_reset + timestamp with time zone Time at which these statistics were last reset @@ -2412,42 +2412,42 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - datid - oid + datid + oid OID of a database - datname - name + datname + name Name of this database - confl_tablespace - bigint + confl_tablespace + bigint Number of queries in this database that have been canceled due to dropped tablespaces - confl_lock - bigint + confl_lock + bigint Number of queries in this database that have been canceled due to lock timeouts - confl_snapshot - bigint + confl_snapshot + bigint Number of queries in this database that have been canceled due to old snapshots - confl_bufferpin - bigint + confl_bufferpin + bigint Number of queries in this database that have been canceled due to pinned buffers - confl_deadlock - bigint + confl_deadlock + bigint Number of queries in this database that have been canceled due to deadlocks @@ -2476,119 +2476,119 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of a table - schemaname - name + schemaname + name Name of the schema that this table is in - relname - name + relname + name Name of this 
table - seq_scan - bigint + seq_scan + bigint Number of sequential scans initiated on this table - seq_tup_read - bigint + seq_tup_read + bigint Number of live rows fetched by sequential scans - idx_scan - bigint + idx_scan + bigint Number of index scans initiated on this table - idx_tup_fetch - bigint + idx_tup_fetch + bigint Number of live rows fetched by index scans - n_tup_ins - bigint + n_tup_ins + bigint Number of rows inserted - n_tup_upd - bigint + n_tup_upd + bigint Number of rows updated (includes HOT updated rows) - n_tup_del - bigint + n_tup_del + bigint Number of rows deleted - n_tup_hot_upd - bigint + n_tup_hot_upd + bigint Number of rows HOT updated (i.e., with no separate index update required) - n_live_tup - bigint + n_live_tup + bigint Estimated number of live rows - n_dead_tup - bigint + n_dead_tup + bigint Estimated number of dead rows - n_mod_since_analyze - bigint + n_mod_since_analyze + bigint Estimated number of rows modified since this table was last analyzed - last_vacuum - timestamp with time zone + last_vacuum + timestamp with time zone Last time at which this table was manually vacuumed - (not counting VACUUM FULL) + (not counting VACUUM FULL) - last_autovacuum - timestamp with time zone + last_autovacuum + timestamp with time zone Last time at which this table was vacuumed by the autovacuum daemon - last_analyze - timestamp with time zone + last_analyze + timestamp with time zone Last time at which this table was manually analyzed - last_autoanalyze - timestamp with time zone + last_autoanalyze + timestamp with time zone Last time at which this table was analyzed by the autovacuum daemon - vacuum_count - bigint + vacuum_count + bigint Number of times this table has been manually vacuumed - (not counting VACUUM FULL) + (not counting VACUUM FULL) - autovacuum_count - bigint + autovacuum_count + bigint Number of times this table has been vacuumed by the autovacuum daemon - analyze_count - bigint + analyze_count + bigint Number of times this table has been manually analyzed - autoanalyze_count - bigint + autoanalyze_count + bigint Number of times this table has been analyzed by the autovacuum daemon @@ -2619,43 +2619,43 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of the table for this index - indexrelid - oid + indexrelid + oid OID of this index - schemaname - name + schemaname + name Name of the schema this index is in - relname - name + relname + name Name of the table for this index - indexrelname - name + indexrelname + name Name of this index - idx_scan - bigint + idx_scan + bigint Number of index scans initiated on this index - idx_tup_read - bigint + idx_tup_read + bigint Number of index entries returned by scans on this index - idx_tup_fetch - bigint + idx_tup_fetch + bigint Number of live table rows fetched by simple index scans using this index @@ -2674,17 +2674,17 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - Indexes can be used by simple index scans, bitmap index scans, + Indexes can be used by simple index scans, bitmap index scans, and the optimizer. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. 
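The per-database and per-table counters above are usually combined into derived figures; an illustrative sketch (not from the patched sources):

    -- Buffer cache hit ratio per database
    SELECT datname,
           blks_hit,
           blks_read,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
    FROM pg_stat_database;

    -- Tables carrying the most dead rows, with their last autovacuum time
    SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_all_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;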
Therefore, a bitmap scan increments the - pg_stat_all_indexes.idx_tup_read + pg_stat_all_indexes.idx_tup_read count(s) for the index(es) it uses, and it increments the - pg_stat_all_tables.idx_tup_fetch + pg_stat_all_tables.idx_tup_fetch count for the table, but it does not affect - pg_stat_all_indexes.idx_tup_fetch. + pg_stat_all_indexes.idx_tup_fetch. The optimizer also accesses indexes to check for supplied constants whose values are outside the recorded range of the optimizer statistics because the optimizer statistics might be stale. @@ -2692,10 +2692,10 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - The idx_tup_read and idx_tup_fetch counts + The idx_tup_read and idx_tup_fetch counts can be different even without any use of bitmap scans, - because idx_tup_read counts - index entries retrieved from the index while idx_tup_fetch + because idx_tup_read counts + index entries retrieved from the index while idx_tup_fetch counts live rows fetched from the table. The latter will be less if any dead or not-yet-committed rows are fetched using the index, or if any heap fetches are avoided by means of an index-only scan. @@ -2715,58 +2715,58 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of a table - schemaname - name + schemaname + name Name of the schema that this table is in - relname - name + relname + name Name of this table - heap_blks_read - bigint + heap_blks_read + bigint Number of disk blocks read from this table - heap_blks_hit - bigint + heap_blks_hit + bigint Number of buffer hits in this table - idx_blks_read - bigint + idx_blks_read + bigint Number of disk blocks read from all indexes on this table - idx_blks_hit - bigint + idx_blks_hit + bigint Number of buffer hits in all indexes on this table - toast_blks_read - bigint + toast_blks_read + bigint Number of disk blocks read from this table's TOAST table (if any) - toast_blks_hit - bigint + toast_blks_hit + bigint Number of buffer hits in this table's TOAST table (if any) - tidx_blks_read - bigint + tidx_blks_read + bigint Number of disk blocks read from this table's TOAST table indexes (if any) - tidx_blks_hit - bigint + tidx_blks_hit + bigint Number of buffer hits in this table's TOAST table indexes (if any) @@ -2796,38 +2796,38 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of the table for this index - indexrelid - oid + indexrelid + oid OID of this index - schemaname - name + schemaname + name Name of the schema this index is in - relname - name + relname + name Name of the table for this index - indexrelname - name + indexrelname + name Name of this index - idx_blks_read - bigint + idx_blks_read + bigint Number of disk blocks read from this index - idx_blks_hit - bigint + idx_blks_hit + bigint Number of buffer hits in this index @@ -2857,28 +2857,28 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of a sequence - schemaname - name + schemaname + name Name of the schema this sequence is in - relname - name + relname + name Name of this sequence - blks_read - bigint + blks_read + bigint Number of disk blocks read from this sequence - blks_hit - bigint + blks_hit + bigint Number of buffer hits in this sequence @@ -2904,34 +2904,34 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - funcid - oid + funcid + oid OID of a function - schemaname - name + 
schemaname + name Name of the schema this function is in - funcname - name + funcname + name Name of this function - calls - bigint + calls + bigint Number of times this function has been called - total_time - double precision + total_time + double precision Total time spent in this function and all other functions called by it, in milliseconds - self_time - double precision + self_time + double precision Total time spent in this function itself, not including other functions called by it, in milliseconds @@ -2956,7 +2956,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i queries that use the same underlying statistics access functions used by the standard views shown above. For details such as the functions' names, consult the definitions of the standard views. (For example, in - psql you could issue \d+ pg_stat_activity.) + psql you could issue \d+ pg_stat_activity.) The access functions for per-database statistics take a database OID as an argument to identify which database to report on. The per-table and per-index functions take a table or index OID. @@ -3037,10 +3037,10 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i Reset some cluster-wide statistics counters to zero, depending on the argument (requires superuser privileges by default, but EXECUTE for this function can be granted to others). - Calling pg_stat_reset_shared('bgwriter') will zero all the - counters shown in the pg_stat_bgwriter view. - Calling pg_stat_reset_shared('archiver') will zero all the - counters shown in the pg_stat_archiver view. + Calling pg_stat_reset_shared('bgwriter') will zero all the + counters shown in the pg_stat_bgwriter view. + Calling pg_stat_reset_shared('archiver') will zero all the + counters shown in the pg_stat_archiver view. @@ -3069,7 +3069,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i pg_stat_get_activity, the underlying function of - the pg_stat_activity view, returns a set of records + the pg_stat_activity view, returns a set of records containing all the available information about each backend process. Sometimes it may be more convenient to obtain just a subset of this information. In such cases, an older set of per-backend statistics @@ -3079,7 +3079,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i to the number of currently active backends. The function pg_stat_get_backend_idset provides a convenient way to generate one row for each active backend for - invoking these functions. For example, to show the PIDs and + invoking these functions. For example, to show the PIDs and current queries of all backends: @@ -3113,7 +3113,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, pg_stat_get_backend_activity(integer) text - Text of this backend's most recent query + Text of this backend's most recent query @@ -3240,9 +3240,9 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, Progress Reporting - PostgreSQL has the ability to report the progress of + PostgreSQL has the ability to report the progress of certain commands during command execution. Currently, the only command - which supports progress reporting is VACUUM. This may be + which supports progress reporting is VACUUM. This may be expanded in the future. 
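A brief sketch of using the function statistics and the reset function mentioned above (pg_stat_user_functions is only populated when track_functions is enabled):

    -- Most expensive tracked functions by time spent in the function itself
    SELECT schemaname, funcname, calls, total_time, self_time
    FROM pg_stat_user_functions
    ORDER BY self_time DESC
    LIMIT 10;

    -- Zero the cluster-wide background writer counters (superuser by default)
    SELECT pg_stat_reset_shared('bgwriter');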
@@ -3250,13 +3250,13 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, VACUUM Progress Reporting - Whenever VACUUM is running, the + Whenever VACUUM is running, the pg_stat_progress_vacuum view will contain one row for each backend (including autovacuum worker processes) that is currently vacuuming. The tables below describe the information that will be reported and provide information about how to interpret it. - Progress reporting is not currently supported for VACUUM FULL - and backends running VACUUM FULL will not be listed in this + Progress reporting is not currently supported for VACUUM FULL + and backends running VACUUM FULL will not be listed in this view. @@ -3273,73 +3273,73 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, - pid - integer + pid + integer Process ID of backend. - datid - oid + datid + oid OID of the database to which this backend is connected. - datname - name + datname + name Name of the database to which this backend is connected. - relid - oid + relid + oid OID of the table being vacuumed. - phase - text + phase + text Current processing phase of vacuum. See . - heap_blks_total - bigint + heap_blks_total + bigint Total number of heap blocks in the table. This number is reported as of the beginning of the scan; blocks added later will not be (and - need not be) visited by this VACUUM. + need not be) visited by this VACUUM. - heap_blks_scanned - bigint + heap_blks_scanned + bigint Number of heap blocks scanned. Because the - visibility map is used to optimize scans, + visibility map is used to optimize scans, some blocks will be skipped without inspection; skipped blocks are included in this total, so that this number will eventually become - equal to heap_blks_total when the vacuum is complete. - This counter only advances when the phase is scanning heap. + equal to heap_blks_total when the vacuum is complete. + This counter only advances when the phase is scanning heap. - heap_blks_vacuumed - bigint + heap_blks_vacuumed + bigint Number of heap blocks vacuumed. Unless the table has no indexes, this - counter only advances when the phase is vacuuming heap. + counter only advances when the phase is vacuuming heap. Blocks that contain no dead tuples are skipped, so the counter may sometimes skip forward in large increments. - index_vacuum_count - bigint + index_vacuum_count + bigint Number of completed index vacuum cycles. - max_dead_tuples - bigint + max_dead_tuples + bigint Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on @@ -3347,8 +3347,8 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, - num_dead_tuples - bigint + num_dead_tuples + bigint Number of dead tuples collected since the last index vacuum cycle. @@ -3371,23 +3371,23 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, initializing - VACUUM is preparing to begin scanning the heap. This + VACUUM is preparing to begin scanning the heap. This phase is expected to be very brief. scanning heap - VACUUM is currently scanning the heap. It will prune and + VACUUM is currently scanning the heap. It will prune and defragment each page if required, and possibly perform freezing - activity. The heap_blks_scanned column can be used + activity. The heap_blks_scanned column can be used to monitor the progress of the scan. vacuuming indexes - VACUUM is currently vacuuming the indexes. If a table has + VACUUM is currently vacuuming the indexes. 
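Given the columns above, the progress of a running vacuum is often expressed as a percentage of heap blocks scanned; an illustrative sketch:

    SELECT p.pid,
           p.datname,
           a.query,
           p.phase,
           round(100.0 * p.heap_blks_scanned
                 / nullif(p.heap_blks_total, 0), 1) AS pct_scanned,
           p.index_vacuum_count
    FROM pg_stat_progress_vacuum p
    JOIN pg_stat_activity a USING (pid);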
If a table has any indexes, this will happen at least once per vacuum, after the heap has been completely scanned. It may happen multiple times per vacuum if is insufficient to @@ -3397,10 +3397,10 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, vacuuming heap - VACUUM is currently vacuuming the heap. Vacuuming the heap + VACUUM is currently vacuuming the heap. Vacuuming the heap is distinct from scanning the heap, and occurs after each instance of - vacuuming indexes. If heap_blks_scanned is less than - heap_blks_total, the system will return to scanning + vacuuming indexes. If heap_blks_scanned is less than + heap_blks_total, the system will return to scanning the heap after this phase is completed; otherwise, it will begin cleaning up indexes after this phase is completed. @@ -3408,7 +3408,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, cleaning up indexes - VACUUM is currently cleaning up indexes. This occurs after + VACUUM is currently cleaning up indexes. This occurs after the heap has been completely scanned and all vacuuming of the indexes and the heap has been completed. @@ -3416,7 +3416,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, truncating heap - VACUUM is currently truncating the heap so as to return + VACUUM is currently truncating the heap so as to return empty pages at the end of the relation to the operating system. This occurs after cleaning up indexes. @@ -3424,10 +3424,10 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, performing final cleanup - VACUUM is performing final cleanup. During this phase, - VACUUM will vacuum the free space map, update statistics - in pg_class, and report statistics to the statistics - collector. When this phase is completed, VACUUM will end. + VACUUM is performing final cleanup. During this phase, + VACUUM will vacuum the free space map, update statistics + in pg_class, and report statistics to the statistics + collector. When this phase is completed, VACUUM will end. @@ -3467,7 +3467,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, SystemTap project for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic tracing utilities is theoretically possible by changing the definitions for - the macros in src/include/utils/probes.h. + the macros in src/include/utils/probes.h. @@ -3477,7 +3477,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, By default, probes are not available, so you will need to explicitly tell the configure script to make the probes available in PostgreSQL. To include DTrace support - specify to configure. See for further information. @@ -3490,7 +3490,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, as shown in ; shows the types used in the probes. More probes can certainly be - added to enhance PostgreSQL's observability. + added to enhance PostgreSQL's observability.
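For reference, the isolation level discussed in the following hunks is chosen per transaction; a minimal sketch (accounts is the example table used elsewhere in this chapter):

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM accounts;   -- the first query fixes the transaction's snapshot
    SELECT count(*) FROM accounts;   -- sees the same data, even if others committed meanwhile
    COMMIT;

    -- or change the session default
    SET default_transaction_isolation = 'serializable';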
@@ -3584,7 +3584,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, statement-status(const char *)Probe that fires anytime the server process updates its - pg_stat_activity.status. + pg_stat_activity.status. arg0 is the new status string. @@ -3978,7 +3978,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, The example below shows a DTrace script for analyzing transaction counts in the system, as an alternative to snapshotting - pg_stat_database before and after a performance test: + pg_stat_database before and after a performance test: #!/usr/sbin/dtrace -qs @@ -4050,15 +4050,15 @@ Total time (ns) 2312105013 - Add the probe definitions to src/backend/utils/probes.d + Add the probe definitions to src/backend/utils/probes.d - Include pg_trace.h if it is not already present in the + Include pg_trace.h if it is not already present in the module(s) containing the probe points, and insert - TRACE_POSTGRESQL probe macros at the desired locations + TRACE_POSTGRESQL probe macros at the desired locations in the source code @@ -4081,30 +4081,30 @@ Total time (ns) 2312105013 - Decide that the probe will be named transaction-start and + Decide that the probe will be named transaction-start and requires a parameter of type LocalTransactionId - Add the probe definition to src/backend/utils/probes.d: + Add the probe definition to src/backend/utils/probes.d: probe transaction__start(LocalTransactionId); Note the use of the double underline in the probe name. In a DTrace script using the probe, the double underline needs to be replaced with a - hyphen, so transaction-start is the name to document for + hyphen, so transaction-start is the name to document for users. - At compile time, transaction__start is converted to a macro - called TRACE_POSTGRESQL_TRANSACTION_START (notice the + At compile time, transaction__start is converted to a macro + called TRACE_POSTGRESQL_TRANSACTION_START (notice the underscores are single here), which is available by including - pg_trace.h. Add the macro call to the appropriate location + pg_trace.h. Add the macro call to the appropriate location in the source code. In this case, it looks like the following: @@ -4148,9 +4148,9 @@ TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId); On most platforms, if PostgreSQL is - built with , the arguments to a trace macro will be evaluated whenever control passes through the - macro, even if no tracing is being done. This is + macro, even if no tracing is being done. This is usually not worth worrying about if you are just reporting the values of a few local variables. But beware of putting expensive function calls into the arguments. If you need to do that, @@ -4162,7 +4162,7 @@ if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED()) TRACE_POSTGRESQL_TRANSACTION_START(some_function(...)); - Each trace macro has a corresponding ENABLED macro. + Each trace macro has a corresponding ENABLED macro. diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml index dda0170886..75cb39359f 100644 --- a/doc/src/sgml/mvcc.sgml +++ b/doc/src/sgml/mvcc.sgml @@ -279,7 +279,7 @@ The table also shows that PostgreSQL's Repeatable Read implementation does not allow phantom reads. Stricter behavior is permitted by the SQL standard: the four isolation levels only define which phenomena - must not happen, not which phenomena must happen. + must not happen, not which phenomena must happen. The behavior of the available isolation levels is detailed in the following subsections. 
@@ -317,7 +317,7 @@ Read Committed is the default isolation level in PostgreSQL. When a transaction uses this isolation level, a SELECT query - (without a FOR UPDATE/SHARE clause) sees only data + (without a FOR UPDATE/SHARE clause) sees only data committed before the query began; it never sees either uncommitted data or changes committed during query execution by concurrent transactions. In effect, a SELECT query sees @@ -345,7 +345,7 @@ updating the originally found row. If the first updater commits, the second updater will ignore the row if the first updater deleted it, otherwise it will attempt to apply its operation to the updated version of - the row. The search condition of the command (the WHERE clause) is + the row. The search condition of the command (the WHERE clause) is re-evaluated to see if the updated version of the row still matches the search condition. If so, the second updater proceeds with its operation using the updated version of the row. In the case of @@ -355,19 +355,19 @@ - INSERT with an ON CONFLICT DO UPDATE clause + INSERT with an ON CONFLICT DO UPDATE clause behaves similarly. In Read Committed mode, each row proposed for insertion will either insert or update. Unless there are unrelated errors, one of those two outcomes is guaranteed. If a conflict originates in another transaction whose effects are not yet visible to the INSERT , the UPDATE clause will affect that row, - even though possibly no version of that row is + even though possibly no version of that row is conventionally visible to the command. INSERT with an ON CONFLICT DO - NOTHING clause may have insertion not proceed for a row due to + NOTHING clause may have insertion not proceed for a row due to the outcome of another transaction whose effects are not visible to the INSERT snapshot. Again, this is only the case in Read Committed mode. @@ -416,10 +416,10 @@ COMMIT; The DELETE will have no effect even though there is a website.hits = 10 row before and after the UPDATE. This occurs because the - pre-update row value 9 is skipped, and when the + pre-update row value 9 is skipped, and when the UPDATE completes and DELETE - obtains a lock, the new row value is no longer 10 but - 11, which no longer matches the criteria. + obtains a lock, the new row value is no longer 10 but + 11, which no longer matches the criteria. @@ -427,7 +427,7 @@ COMMIT; that includes all transactions committed up to that instant, subsequent commands in the same transaction will see the effects of the committed concurrent transaction in any case. The point - at issue above is whether or not a single command + at issue above is whether or not a single command sees an absolutely consistent view of the database. @@ -472,9 +472,9 @@ COMMIT; This level is different from Read Committed in that a query in a repeatable read transaction sees a snapshot as of the start of the first non-transaction-control statement in the - transaction, not as of the start + transaction, not as of the start of the current statement within the transaction. Thus, successive - SELECT commands within a single + SELECT commands within a single transaction see the same data, i.e., they do not see changes made by other transactions that committed after their own transaction started. 
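A small sketch of the ON CONFLICT behavior described above (the table and column names are hypothetical):

    CREATE TABLE counters (name text PRIMARY KEY, hits int NOT NULL DEFAULT 0);

    -- In Read Committed mode each proposed row either inserts or updates,
    -- even if the conflicting row was committed by a concurrent transaction
    -- after this statement's snapshot was taken.
    INSERT INTO counters (name, hits)
    VALUES ('home', 1)
    ON CONFLICT (name) DO UPDATE
    SET hits = counters.hits + EXCLUDED.hits;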
@@ -587,7 +587,7 @@ ERROR: could not serialize access due to concurrent update As an example, - consider a table mytab, initially containing: + consider a table mytab, initially containing: class | value -------+------- @@ -600,14 +600,14 @@ ERROR: could not serialize access due to concurrent update SELECT SUM(value) FROM mytab WHERE class = 1; - and then inserts the result (30) as the value in a - new row with class = 2. Concurrently, serializable + and then inserts the result (30) as the value in a + new row with class = 2. Concurrently, serializable transaction B computes: SELECT SUM(value) FROM mytab WHERE class = 2; and obtains the result 300, which it inserts in a new row with - class = 1. Then both transactions try to commit. + class = 1. Then both transactions try to commit. If either transaction were running at the Repeatable Read isolation level, both would be allowed to commit; but since there is no serial order of execution consistent with the result, using Serializable transactions will allow one @@ -639,11 +639,11 @@ ERROR: could not serialize access due to read/write dependencies among transact To guarantee true serializability PostgreSQL - uses predicate locking, which means that it keeps locks + uses predicate locking, which means that it keeps locks which allow it to determine when a write would have had an impact on the result of a previous read from a concurrent transaction, had it run first. In PostgreSQL these locks do not - cause any blocking and therefore can not play any part in + cause any blocking and therefore can not play any part in causing a deadlock. They are used to identify and flag dependencies among concurrent Serializable transactions which in certain combinations can lead to serialization anomalies. In contrast, a Read Committed or @@ -659,20 +659,20 @@ ERROR: could not serialize access due to read/write dependencies among transact other database systems, are based on data actually accessed by a transaction. These will show up in the pg_locks - system view with a mode of SIReadLock. The + system view with a mode of SIReadLock. The particular locks acquired during execution of a query will depend on the plan used by the query, and multiple finer-grained locks (e.g., tuple locks) may be combined into fewer coarser-grained locks (e.g., page locks) during the course of the transaction to prevent exhaustion of the memory used to - track the locks. A READ ONLY transaction may be able to + track the locks. A READ ONLY transaction may be able to release its SIRead locks before completion, if it detects that no conflicts can still occur which could lead to a serialization anomaly. - In fact, READ ONLY transactions will often be able to + In fact, READ ONLY transactions will often be able to establish that fact at startup and avoid taking any predicate locks. - If you explicitly request a SERIALIZABLE READ ONLY DEFERRABLE + If you explicitly request a SERIALIZABLE READ ONLY DEFERRABLE transaction, it will block until it can establish this fact. (This is - the only case where Serializable transactions block but + the only case where Serializable transactions block but Repeatable Read transactions don't.) On the other hand, SIRead locks often need to be kept past transaction commit, until overlapping read write transactions complete. @@ -695,13 +695,13 @@ ERROR: could not serialize access due to read/write dependencies among transact anomalies. 
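The predicate locks mentioned here can be observed while a Serializable transaction is in progress; an illustrative query (not from the patch):

    SELECT locktype, relation::regclass, page, tuple, pid
    FROM pg_locks
    WHERE mode = 'SIReadLock';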
The monitoring of read/write dependencies has a cost, as does the restart of transactions which are terminated with a serialization failure, but balanced against the cost and blocking involved in use of - explicit locks and SELECT FOR UPDATE or SELECT FOR - SHARE, Serializable transactions are the best performance choice + explicit locks and SELECT FOR UPDATE or SELECT FOR + SHARE, Serializable transactions are the best performance choice for some environments. - While PostgreSQL's Serializable transaction isolation + While PostgreSQL's Serializable transaction isolation level only allows concurrent transactions to commit if it can prove there is a serial order of execution that would produce the same effect, it doesn't always prevent errors from being raised that would not occur in @@ -709,7 +709,7 @@ ERROR: could not serialize access due to read/write dependencies among transact constraint violations caused by conflicts with overlapping Serializable transactions even after explicitly checking that the key isn't present before attempting to insert it. This can be avoided by making sure - that all Serializable transactions that insert potentially + that all Serializable transactions that insert potentially conflicting keys explicitly check if they can do so first. For example, imagine an application that asks the user for a new key and then checks that it doesn't exist already by trying to select it first, or generates @@ -727,7 +727,7 @@ ERROR: could not serialize access due to read/write dependencies among transact - Declare transactions as READ ONLY when possible. + Declare transactions as READ ONLY when possible. @@ -754,8 +754,8 @@ ERROR: could not serialize access due to read/write dependencies among transact - Eliminate explicit locks, SELECT FOR UPDATE, and - SELECT FOR SHARE where no longer needed due to the + Eliminate explicit locks, SELECT FOR UPDATE, and + SELECT FOR SHARE where no longer needed due to the protections automatically provided by Serializable transactions. @@ -801,7 +801,7 @@ ERROR: could not serialize access due to read/write dependencies among transact most PostgreSQL commands automatically acquire locks of appropriate modes to ensure that referenced tables are not dropped or modified in incompatible ways while the - command executes. (For example, TRUNCATE cannot safely be + command executes. (For example, TRUNCATE cannot safely be executed concurrently with other operations on the same table, so it obtains an exclusive lock on the table to enforce that.) @@ -860,7 +860,7 @@ ERROR: could not serialize access due to read/write dependencies among transact The SELECT command acquires a lock of this mode on - referenced tables. In general, any query that only reads a table + referenced tables. In general, any query that only reads a table and does not modify it will acquire this lock mode. @@ -904,7 +904,7 @@ ERROR: could not serialize access due to read/write dependencies among transact acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any - command that modifies data in a table. + command that modifies data in a table. @@ -920,13 +920,13 @@ ERROR: could not serialize access due to read/write dependencies among transact EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode protects a table against - concurrent schema changes and VACUUM runs. + concurrent schema changes and VACUUM runs. 
Acquired by VACUUM (without ), - ANALYZE, CREATE INDEX CONCURRENTLY, - CREATE STATISTICS and + ANALYZE, CREATE INDEX CONCURRENTLY, + CREATE STATISTICS and ALTER TABLE VALIDATE and other ALTER TABLE variants (for full details see ). @@ -1016,12 +1016,12 @@ ERROR: could not serialize access due to read/write dependencies among transact - Acquired by the DROP TABLE, + Acquired by the DROP TABLE, TRUNCATE, REINDEX, CLUSTER, VACUUM FULL, and REFRESH MATERIALIZED VIEW (without ) - commands. Many forms of ALTER TABLE also acquire + commands. Many forms of ALTER TABLE also acquire a lock at this level. This is also the default lock mode for LOCK TABLE statements that do not specify a mode explicitly. @@ -1042,9 +1042,9 @@ ERROR: could not serialize access due to read/write dependencies among transact Once acquired, a lock is normally held till end of transaction. But if a lock is acquired after establishing a savepoint, the lock is released immediately if the savepoint is rolled back to. This is consistent with - the principle that ROLLBACK cancels all effects of the + the principle that ROLLBACK cancels all effects of the commands since the savepoint. The same holds for locks acquired within a - PL/pgSQL exception block: an error escape from the block + PL/pgSQL exception block: an error escape from the block releases locks acquired within it. @@ -1204,17 +1204,17 @@ ERROR: could not serialize access due to read/write dependencies among transact concurrent transaction that has run any of those commands on the same row, and will then lock and return the updated row (or no row, if the - row was deleted). Within a REPEATABLE READ or - SERIALIZABLE transaction, + row was deleted). Within a REPEATABLE READ or + SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see . - The FOR UPDATE lock mode - is also acquired by any DELETE on a row, and also by an - UPDATE that modifies the values on certain columns. Currently, - the set of columns considered for the UPDATE case are those that + The FOR UPDATE lock mode + is also acquired by any DELETE on a row, and also by an + UPDATE that modifies the values on certain columns. Currently, + the set of columns considered for the UPDATE case are those that have a unique index on them that can be used in a foreign key (so partial indexes and expressional indexes are not considered), but this may change in the future. @@ -1228,11 +1228,11 @@ ERROR: could not serialize access due to read/write dependencies among transact - Behaves similarly to FOR UPDATE, except that the lock + Behaves similarly to FOR UPDATE, except that the lock acquired is weaker: this lock will not block - SELECT FOR KEY SHARE commands that attempt to acquire + SELECT FOR KEY SHARE commands that attempt to acquire a lock on the same rows. This lock mode is also acquired by any - UPDATE that does not acquire a FOR UPDATE lock. + UPDATE that does not acquire a FOR UPDATE lock. @@ -1243,12 +1243,12 @@ ERROR: could not serialize access due to read/write dependencies among transact - Behaves similarly to FOR NO KEY UPDATE, except that it + Behaves similarly to FOR NO KEY UPDATE, except that it acquires a shared lock rather than exclusive lock on each retrieved row. 
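A sketch contrasting the two strongest row-level lock modes described above (accounts and acctnum are the example names used elsewhere in this chapter):

    -- Blocks concurrent UPDATE/DELETE and every other row-level lock on the row
    SELECT * FROM accounts WHERE acctnum = 22222 FOR UPDATE;

    -- Weaker: taken by ordinary UPDATEs that do not change key columns;
    -- does not block SELECT ... FOR KEY SHARE
    SELECT * FROM accounts WHERE acctnum = 22222 FOR NO KEY UPDATE;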
A shared lock blocks other transactions from performing UPDATE, DELETE, SELECT FOR UPDATE or - SELECT FOR NO KEY UPDATE on these rows, but it does not + SELECT FOR NO KEY UPDATE on these rows, but it does not prevent them from performing SELECT FOR SHARE or SELECT FOR KEY SHARE. @@ -1262,13 +1262,13 @@ ERROR: could not serialize access due to read/write dependencies among transact Behaves similarly to FOR SHARE, except that the - lock is weaker: SELECT FOR UPDATE is blocked, but not - SELECT FOR NO KEY UPDATE. A key-shared lock blocks + lock is weaker: SELECT FOR UPDATE is blocked, but not + SELECT FOR NO KEY UPDATE. A key-shared lock blocks other transactions from performing DELETE or any UPDATE that changes the key values, but not - other UPDATE, and neither does it prevent - SELECT FOR NO KEY UPDATE, SELECT FOR SHARE, - or SELECT FOR KEY SHARE. + other UPDATE, and neither does it prevent + SELECT FOR NO KEY UPDATE, SELECT FOR SHARE, + or SELECT FOR KEY SHARE. @@ -1357,7 +1357,7 @@ ERROR: could not serialize access due to read/write dependencies among transact The use of explicit locking can increase the likelihood of - deadlocks, wherein two (or more) transactions each + deadlocks, wherein two (or more) transactions each hold locks that the other wants. For example, if transaction 1 acquires an exclusive lock on table A and then tries to acquire an exclusive lock on table B, while transaction 2 has already @@ -1447,12 +1447,12 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222; PostgreSQL provides a means for creating locks that have application-defined meanings. These are - called advisory locks, because the system does not + called advisory locks, because the system does not enforce their use — it is up to the application to use them correctly. Advisory locks can be useful for locking strategies that are an awkward fit for the MVCC model. For example, a common use of advisory locks is to emulate pessimistic - locking strategies typical of so-called flat file data + locking strategies typical of so-called flat file data management systems. While a flag stored in a table could be used for the same purpose, advisory locks are faster, avoid table bloat, and are automatically @@ -1506,7 +1506,7 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222; In certain cases using advisory locking methods, especially in queries - involving explicit ordering and LIMIT clauses, care must be + involving explicit ordering and LIMIT clauses, care must be taken to control the locks acquired because of the order in which SQL expressions are evaluated. For example: @@ -1518,7 +1518,7 @@ SELECT pg_advisory_lock(q.id) FROM ) q; -- ok In the above queries, the second form is dangerous because the - LIMIT is not guaranteed to be applied before the locking + LIMIT is not guaranteed to be applied before the locking function is executed. This might cause some locks to be acquired that the application was not expecting, and hence would fail to release (until it ends the session). @@ -1590,7 +1590,7 @@ SELECT pg_advisory_lock(q.id) FROM for application programmers if the application software goes through a framework which automatically retries transactions which are rolled back with a serialization failure. It may be a good idea to set - default_transaction_isolation to serializable. + default_transaction_isolation to serializable. 
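A minimal sketch of session-level advisory locking as described above (the key value 12345 is arbitrary):

    -- Take an exclusive application-defined lock; blocks until available
    SELECT pg_advisory_lock(12345);

    -- ... perform the work the lock protects ...

    -- Release it explicitly; session-level advisory locks are not
    -- released automatically at transaction end
    SELECT pg_advisory_unlock(12345);

    -- Non-blocking variant: returns true only if the lock was obtained
    SELECT pg_try_advisory_lock(12345);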
It would also be wise to take some action to ensure that no other transaction isolation level is used, either inadvertently or to subvert integrity checks, through checks of the transaction isolation @@ -1660,7 +1660,7 @@ SELECT pg_advisory_lock(q.id) FROM includes some but not all post-transaction-start changes. In such cases a careful person might wish to lock all tables needed for the check, in order to get an indisputable picture of current reality. A - SHARE mode (or higher) lock guarantees that there are no + SHARE mode (or higher) lock guarantees that there are no uncommitted changes in the locked table, other than those of the current transaction. @@ -1675,8 +1675,8 @@ SELECT pg_advisory_lock(q.id) FROM transaction predates obtaining the lock, it might predate some now-committed changes in the table. A repeatable read transaction's snapshot is actually frozen at the start of its first query or data-modification command - (SELECT, INSERT, - UPDATE, or DELETE), so + (SELECT, INSERT, + UPDATE, or DELETE), so it is possible to obtain locks explicitly before the snapshot is frozen. diff --git a/doc/src/sgml/nls.sgml b/doc/src/sgml/nls.sgml index 1d331473af..f312b5bfb5 100644 --- a/doc/src/sgml/nls.sgml +++ b/doc/src/sgml/nls.sgml @@ -7,12 +7,12 @@ For the Translator - PostgreSQL + PostgreSQL programs (server and client) can issue their messages in your favorite language — if the messages have been translated. Creating and maintaining translated message sets needs the help of people who speak their own language well and want to contribute to - the PostgreSQL effort. You do not have to be a + the PostgreSQL effort. You do not have to be a programmer at all to do this. This section explains how to help. @@ -170,8 +170,8 @@ make init-po This will create a file progname.pot. (.pot to distinguish it from PO files that - are in production. The T stands for - template.) + are in production. The T stands for + template.) Copy this file to language.po and edit it. To make it known that the new language is available, @@ -234,7 +234,7 @@ make update-po - If the original is a printf format string, the translation + If the original is a printf format string, the translation also needs to be. The translation also needs to have the same format specifiers in the same order. Sometimes the natural rules of the language make this impossible or at least awkward. @@ -301,7 +301,7 @@ msgstr "Die Datei %2$s hat %1$u Zeichen." This section describes how to implement native language support in a program or library that is part of the - PostgreSQL distribution. + PostgreSQL distribution. Currently, it only applies to C programs. @@ -447,7 +447,7 @@ fprintf(stderr, gettext("panic level %d\n"), lvl); printf("Files were %s.\n", flag ? "copied" : "removed"); The word order within the sentence might be different in other - languages. Also, even if you remember to call gettext() on + languages. Also, even if you remember to call gettext() on each fragment, the fragments might not translate well separately. It's better to duplicate a little code so that each message to be translated is a coherent whole. Only numbers, file names, and @@ -481,7 +481,7 @@ printf("number of copied files: %d", n); If you really want to construct a properly pluralized message, there is support for this, but it's a bit awkward. 
When generating - a primary or detail error message in ereport(), you can + a primary or detail error message in ereport(), you can write something like this: errmsg_plural("copied %d file", @@ -496,17 +496,17 @@ errmsg_plural("copied %d file", are formatted per the format string as usual. (Normally, the pluralization control value will also be one of the values to be formatted, so it has to be written twice.) In English it only - matters whether n is 1 or not 1, but in other + matters whether n is 1 or not 1, but in other languages there can be many different plural forms. The translator sees the two English forms as a group and has the opportunity to supply multiple substitute strings, with the appropriate one being - selected based on the run-time value of n. + selected based on the run-time value of n. If you need to pluralize a message that isn't going directly to an - errmsg or errdetail report, you have to use - the underlying function ngettext. See the gettext + errmsg or errdetail report, you have to use + the underlying function ngettext. See the gettext documentation. diff --git a/doc/src/sgml/notation.sgml b/doc/src/sgml/notation.sgml index 2f350a329d..bd1e8f629a 100644 --- a/doc/src/sgml/notation.sgml +++ b/doc/src/sgml/notation.sgml @@ -7,17 +7,17 @@ The following conventions are used in the synopsis of a command: brackets ([ and ]) indicate optional parts. (In the synopsis of a Tcl command, question marks - (?) are used instead, as is usual in Tcl.) Braces + (?) are used instead, as is usual in Tcl.) Braces ({ and }) and vertical lines (|) indicate that you must choose one - alternative. Dots (...) mean that the preceding element + alternative. Dots (...) mean that the preceding element can be repeated. Where it enhances the clarity, SQL commands are preceded by the - prompt =>, and shell commands are preceded by the - prompt $. Normally, prompts are not shown, though. + prompt =>, and shell commands are preceded by the + prompt $. Normally, prompts are not shown, though. diff --git a/doc/src/sgml/oid2name.sgml b/doc/src/sgml/oid2name.sgml index 97b170a23f..4ab2cf1a85 100644 --- a/doc/src/sgml/oid2name.sgml +++ b/doc/src/sgml/oid2name.sgml @@ -27,7 +27,7 @@ Description - oid2name is a utility program that helps administrators to + oid2name is a utility program that helps administrators to examine the file structure used by PostgreSQL. To make use of it, you need to be familiar with the database file structure, which is described in . @@ -35,7 +35,7 @@ - The name oid2name is historical, and is actually rather + The name oid2name is historical, and is actually rather misleading, since most of the time when you use it, you will really be concerned with tables' filenode numbers (which are the file names visible in the database directories). Be sure you understand the @@ -60,8 +60,8 @@ - filenode - show info for table with filenode filenode + filenode + show info for table with filenode filenode @@ -70,8 +70,8 @@ - oid - show info for table with OID oid + oid + show info for table with OID oid @@ -93,13 +93,13 @@ - tablename_pattern - show info for table(s) matching tablename_pattern + tablename_pattern + show info for table(s) matching tablename_pattern - - + + Print the oid2name version and exit. 
@@ -115,8 +115,8 @@ - - + + Show help about oid2name command line @@ -133,27 +133,27 @@ - database + database database to connect to - host + host database server's host - port + port database server's port - username + username user name to connect as - password + password password (deprecated — putting this on the command line is a security hazard) @@ -163,27 +163,27 @@ To display specific tables, select which tables to show by - using - If you don't give any of , or , + but do give , it will list all tables in the database + named by . In this mode, the and + options control what gets listed. - If you don't give either, it will show a listing of database + OIDs. Alternatively you can give to get a tablespace listing. @@ -192,7 +192,7 @@ Notes - oid2name requires a running database server with + oid2name requires a running database server with non-corrupt system catalogs. It is therefore of only limited use for recovering from catastrophic database corruption situations. diff --git a/doc/src/sgml/pageinspect.sgml b/doc/src/sgml/pageinspect.sgml index e46f5ca6bc..23570af4bf 100644 --- a/doc/src/sgml/pageinspect.sgml +++ b/doc/src/sgml/pageinspect.sgml @@ -8,7 +8,7 @@ - The pageinspect module provides functions that allow you to + The pageinspect module provides functions that allow you to inspect the contents of database pages at a low level, which is useful for debugging purposes. All of these functions may be used only by superusers. @@ -28,7 +28,7 @@ get_raw_page reads the specified block of the named - relation and returns a copy as a bytea value. This allows a + relation and returns a copy as a bytea value. This allows a single time-consistent copy of the block to be obtained. fork should be 'main' for the main data fork, 'fsm' for the free space map, @@ -63,7 +63,7 @@ page_header shows fields that are common to all - PostgreSQL heap and index pages. + PostgreSQL heap and index pages. @@ -76,8 +76,8 @@ test=# SELECT * FROM page_header(get_raw_page('pg_class', 0)); 0/24A1B50 | 0 | 1 | 232 | 368 | 8192 | 8192 | 4 | 0 The returned columns correspond to the fields in the - PageHeaderData struct. - See src/include/storage/bufpage.h for details. + PageHeaderData struct. + See src/include/storage/bufpage.h for details. @@ -147,8 +147,8 @@ test=# SELECT page_checksum(get_raw_page('pg_class', 0), 0); test=# SELECT * FROM heap_page_items(get_raw_page('pg_class', 0)); - See src/include/storage/itemid.h and - src/include/access/htup_details.h for explanations of the fields + See src/include/storage/itemid.h and + src/include/access/htup_details.h for explanations of the fields returned. @@ -221,7 +221,7 @@ test=# SELECT * FROM heap_page_item_attrs(get_raw_page('pg_class', 0), 'pg_class next slot to be returned from the page, is also printed. - See src/backend/storage/freespace/README for more + See src/backend/storage/freespace/README for more information on the structure of an FSM page. @@ -315,21 +315,21 @@ test=# SELECT * FROM bt_page_items('pg_cast_oid_index', 1); 7 | (0,7) | 12 | f | f | 29 27 00 00 8 | (0,8) | 12 | f | f | 2a 27 00 00 - In a B-tree leaf page, ctid points to a heap tuple. - In an internal page, the block number part of ctid + In a B-tree leaf page, ctid points to a heap tuple. + In an internal page, the block number part of ctid points to another page in the index itself, while the offset part (the second number) is ignored and is usually 1. 
Note that the first item on any non-rightmost page (any page with - a non-zero value in the btpo_next field) is the - page's high key, meaning its data + a non-zero value in the btpo_next field) is the + page's high key, meaning its data serves as an upper bound on all items appearing on the page, while - its ctid field is meaningless. Also, on non-leaf + its ctid field is meaningless. Also, on non-leaf pages, the first real data item (the first item that is not a high key) is a minus infinity item, with no actual value - in its data field. Such an item does have a valid - downlink in its ctid field, however. + in its data field. Such an item does have a valid + downlink in its ctid field, however. @@ -345,7 +345,7 @@ test=# SELECT * FROM bt_page_items('pg_cast_oid_index', 1); It is also possible to pass a page to bt_page_items - as a bytea value. A page image obtained + as a bytea value. A page image obtained with get_raw_page should be passed as argument. So the last example could also be rewritten like this: @@ -470,8 +470,8 @@ test=# SELECT * FROM brin_page_items(get_raw_page('brinidx', 5), 139 | 8 | 2 | f | f | f | {177 .. 264} The returned columns correspond to the fields in the - BrinMemTuple and BrinValues structs. - See src/include/access/brin_tuple.h for details. + BrinMemTuple and BrinValues structs. + See src/include/access/brin_tuple.h for details. diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml index 1f5efd9e6d..6aac506942 100644 --- a/doc/src/sgml/parallel.sgml +++ b/doc/src/sgml/parallel.sgml @@ -8,7 +8,7 @@ - PostgreSQL can devise query plans which can leverage + PostgreSQL can devise query plans which can leverage multiple CPUs in order to answer queries faster. This feature is known as parallel query. Many queries cannot benefit from parallel query, either due to limitations of the current implementation or because there is no @@ -47,18 +47,18 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; In all cases, the Gather or Gather Merge node will have exactly one child plan, which is the portion of the plan that will be executed in - parallel. If the Gather or Gather Merge node is + parallel. If the Gather or Gather Merge node is at the very top of the plan tree, then the entire query will execute in parallel. If it is somewhere else in the plan tree, then only the portion of the plan below it will run in parallel. In the example above, the query accesses only one table, so there is only one plan node other than - the Gather node itself; since that plan node is a child of the - Gather node, it will run in parallel. + the Gather node itself; since that plan node is a child of the + Gather node, it will run in parallel. - Using EXPLAIN, you can see the number of - workers chosen by the planner. When the Gather node is reached + Using EXPLAIN, you can see the number of + workers chosen by the planner. When the Gather node is reached during query execution, the process which is implementing the user's session will request a number of background worker processes equal to the number @@ -72,7 +72,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; no workers at all. The optimal plan may depend on the number of workers that are available, so this can result in poor query performance. 
If this occurrence is frequent, consider increasing - max_worker_processes and max_parallel_workers + max_worker_processes and max_parallel_workers so that more workers can be run simultaneously or alternatively reducing max_parallel_workers_per_gather so that the planner requests fewer workers. @@ -96,10 +96,10 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; When the node at the top of the parallel portion of the plan is - Gather Merge rather than Gather, it indicates that + Gather Merge rather than Gather, it indicates that each process executing the parallel portion of the plan is producing tuples in sorted order, and that the leader is performing an - order-preserving merge. In contrast, Gather reads tuples + order-preserving merge. In contrast, Gather reads tuples from the workers in whatever order is convenient, destroying any sort order that may have existed. @@ -128,7 +128,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; must be set to a - value other than none. Parallel query requires dynamic + value other than none. Parallel query requires dynamic shared memory in order to pass data between cooperating processes. @@ -152,8 +152,8 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; The query writes any data or locks any database rows. If a query contains a data-modifying operation either at the top level or within a CTE, no parallel plans for that query will be generated. As an - exception, the commands CREATE TABLE, SELECT - INTO, and CREATE MATERIALIZED VIEW which create a new + exception, the commands CREATE TABLE, SELECT + INTO, and CREATE MATERIALIZED VIEW which create a new table and populate it can use a parallel plan. @@ -205,8 +205,8 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Even when parallel query plan is generated for a particular query, there are several circumstances under which it will be impossible to execute that plan in parallel at execution time. If this occurs, the leader - will execute the portion of the plan below the Gather - node entirely by itself, almost as if the Gather node were + will execute the portion of the plan below the Gather + node entirely by itself, almost as if the Gather node were not present. This will happen if any of the following conditions are met: @@ -264,7 +264,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; copy of the output result set, so the query would not run any faster than normal but would produce incorrect results. Instead, the parallel portion of the plan must be what is known internally to the query - optimizer as a partial plan; that is, it must be constructed + optimizer as a partial plan; that is, it must be constructed so that each process which executes the plan will generate only a subset of the output rows in such a way that each required output row is guaranteed to be generated by exactly one of the cooperating processes. @@ -281,14 +281,14 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; - In a parallel sequential scan, the table's blocks will + In a parallel sequential scan, the table's blocks will be divided among the cooperating processes. Blocks are handed out one at a time, so that access to the table remains sequential. - In a parallel bitmap heap scan, one process is chosen + In a parallel bitmap heap scan, one process is chosen as the leader. That process performs a scan of one or more indexes and builds a bitmap indicating which table blocks need to be visited. 
These blocks are then divided among the cooperating processes as in @@ -298,8 +298,8 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; - In a parallel index scan or parallel index-only - scan, the cooperating processes take turns reading data from the + In a parallel index scan or parallel index-only + scan, the cooperating processes take turns reading data from the index. Currently, parallel index scans are supported only for btree indexes. Each process will claim a single index block and will scan and return all tuples referenced by that block; other process can @@ -345,25 +345,25 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Parallel Aggregation - PostgreSQL supports parallel aggregation by aggregating in + PostgreSQL supports parallel aggregation by aggregating in two stages. First, each process participating in the parallel portion of the query performs an aggregation step, producing a partial result for each group of which that process is aware. This is reflected in the plan - as a Partial Aggregate node. Second, the partial results are - transferred to the leader via Gather or Gather - Merge. Finally, the leader re-aggregates the results across all + as a Partial Aggregate node. Second, the partial results are + transferred to the leader via Gather or Gather + Merge. Finally, the leader re-aggregates the results across all workers in order to produce the final result. This is reflected in the - plan as a Finalize Aggregate node. + plan as a Finalize Aggregate node. - Because the Finalize Aggregate node runs on the leader + Because the Finalize Aggregate node runs on the leader process, queries which produce a relatively large number of groups in comparison to the number of input rows will appear less favorable to the query planner. For example, in the worst-case scenario the number of - groups seen by the Finalize Aggregate node could be as many as + groups seen by the Finalize Aggregate node could be as many as the number of input rows which were seen by all worker processes in the - Partial Aggregate stage. For such cases, there is clearly + Partial Aggregate stage. For such cases, there is clearly going to be no performance benefit to using parallel aggregation. The query planner takes this into account during the planning process and is unlikely to choose parallel aggregate in this scenario. @@ -371,14 +371,14 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Parallel aggregation is not supported in all situations. Each aggregate - must be safe for parallelism and must + must be safe for parallelism and must have a combine function. If the aggregate has a transition state of type - internal, it must have serialization and deserialization + internal, it must have serialization and deserialization functions. See for more details. Parallel aggregation is not supported if any aggregate function call - contains DISTINCT or ORDER BY clause and is also + contains DISTINCT or ORDER BY clause and is also not supported for ordered set aggregates or when the query involves - GROUPING SETS. It can only be used when all joins involved in + GROUPING SETS. It can only be used when all joins involved in the query are also part of the parallel portion of the plan. @@ -417,13 +417,13 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; The planner classifies operations involved in a query as either - parallel safe, parallel restricted, - or parallel unsafe. 
A parallel safe operation is one which + parallel safe, parallel restricted, + or parallel unsafe. A parallel safe operation is one which does not conflict with the use of parallel query. A parallel restricted operation is one which cannot be performed in a parallel worker, but which can be performed in the leader while parallel query is in use. Therefore, - parallel restricted operations can never occur below a Gather - or Gather Merge node, but can occur elsewhere in a plan which + parallel restricted operations can never occur below a Gather + or Gather Merge node, but can occur elsewhere in a plan which contains such a node. A parallel unsafe operation is one which cannot be performed while parallel query is in use, not even in the leader. When a query contains anything which is parallel unsafe, parallel query @@ -450,13 +450,13 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Scans of foreign tables, unless the foreign data wrapper has - an IsForeignScanParallelSafe API which indicates otherwise. + an IsForeignScanParallelSafe API which indicates otherwise. - Access to an InitPlan or correlated SubPlan. + Access to an InitPlan or correlated SubPlan. @@ -475,23 +475,23 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; be parallel unsafe unless otherwise marked. When using or , markings can be set by specifying - PARALLEL SAFE, PARALLEL RESTRICTED, or - PARALLEL UNSAFE as appropriate. When using + PARALLEL SAFE, PARALLEL RESTRICTED, or + PARALLEL UNSAFE as appropriate. When using , the - PARALLEL option can be specified with SAFE, - RESTRICTED, or UNSAFE as the corresponding value. + PARALLEL option can be specified with SAFE, + RESTRICTED, or UNSAFE as the corresponding value. - Functions and aggregates must be marked PARALLEL UNSAFE if + Functions and aggregates must be marked PARALLEL UNSAFE if they write to the database, access sequences, change the transaction state even temporarily (e.g. a PL/pgSQL function which establishes an - EXCEPTION block to catch errors), or make persistent changes to + EXCEPTION block to catch errors), or make persistent changes to settings. Similarly, functions must be marked PARALLEL - RESTRICTED if they access temporary tables, client connection state, + RESTRICTED if they access temporary tables, client connection state, cursors, prepared statements, or miscellaneous backend-local state which the system cannot synchronize across workers. For example, - setseed and random are parallel restricted for + setseed and random are parallel restricted for this last reason. @@ -503,7 +503,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; mislabeled, since there is no way for the system to protect itself against arbitrary C code, but in most likely cases the result will be no worse than for any other function. If in doubt, it is probably best to label functions - as UNSAFE. + as UNSAFE. @@ -519,13 +519,13 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Note that the query planner does not consider deferring the evaluation of parallel-restricted functions or aggregates involved in the query in - order to obtain a superior plan. So, for example, if a WHERE + order to obtain a superior plan. So, for example, if a WHERE clause applied to a particular table is parallel restricted, the query planner will not consider performing a scan of that table in the parallel portion of a plan. 
In some cases, it would be possible (and perhaps even efficient) to include the scan of that table in the parallel portion of the query and defer the evaluation of the - WHERE clause so that it happens above the Gather + WHERE clause so that it happens above the Gather node. However, the planner does not do this. diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml index d3b47bc5a5..6a5182d85b 100644 --- a/doc/src/sgml/perform.sgml +++ b/doc/src/sgml/perform.sgml @@ -30,7 +30,7 @@ plan for each query it receives. Choosing the right plan to match the query structure and the properties of the data is absolutely critical for good performance, so the system includes - a complex planner that tries to choose good plans. + a complex planner that tries to choose good plans. You can use the command to see what query plan the planner creates for any query. Plan-reading is an art that requires some experience to master, @@ -39,17 +39,17 @@ Examples in this section are drawn from the regression test database - after doing a VACUUM ANALYZE, using 9.3 development sources. + after doing a VACUUM ANALYZE, using 9.3 development sources. You should be able to get similar results if you try the examples yourself, but your estimated costs and row counts might vary slightly - because ANALYZE's statistics are random samples rather + because ANALYZE's statistics are random samples rather than exact, and because costs are inherently somewhat platform-dependent. - The examples use EXPLAIN's default text output + The examples use EXPLAIN's default text output format, which is compact and convenient for humans to read. - If you want to feed EXPLAIN's output to a program for further + If you want to feed EXPLAIN's output to a program for further analysis, you should use one of its machine-readable output formats (XML, JSON, or YAML) instead. @@ -58,12 +58,12 @@ <command>EXPLAIN</command> Basics - The structure of a query plan is a tree of plan nodes. + The structure of a query plan is a tree of plan nodes. Nodes at the bottom level of the tree are scan nodes: they return raw rows from a table. There are different types of scan nodes for different table access methods: sequential scans, index scans, and bitmap index - scans. There are also non-table row sources, such as VALUES - clauses and set-returning functions in FROM, which have their + scans. There are also non-table row sources, such as VALUES + clauses and set-returning functions in FROM, which have their own scan node types. If the query requires joining, aggregation, sorting, or other operations on the raw rows, then there will be additional nodes @@ -93,7 +93,7 @@ EXPLAIN SELECT * FROM tenk1; - Since this query has no WHERE clause, it must scan all the + Since this query has no WHERE clause, it must scan all the rows of the table, so the planner has chosen to use a simple sequential scan plan. The numbers that are quoted in parentheses are (left to right): @@ -111,7 +111,7 @@ EXPLAIN SELECT * FROM tenk1; Estimated total cost. This is stated on the assumption that the plan node is run to completion, i.e., all available rows are retrieved. In practice a node's parent node might stop short of reading all - available rows (see the LIMIT example below). + available rows (see the LIMIT example below). @@ -135,7 +135,7 @@ EXPLAIN SELECT * FROM tenk1; cost parameters (see ). 
Traditional practice is to measure the costs in units of disk page fetches; that is, is conventionally - set to 1.0 and the other cost parameters are set relative + set to 1.0 and the other cost parameters are set relative to that. The examples in this section are run with the default cost parameters. @@ -152,11 +152,11 @@ EXPLAIN SELECT * FROM tenk1; - The rows value is a little tricky because it is + The rows value is a little tricky because it is not the number of rows processed or scanned by the plan node, but rather the number emitted by the node. This is often less than the number scanned, as a result of filtering by any - WHERE-clause conditions that are being applied at the node. + WHERE-clause conditions that are being applied at the node. Ideally the top-level rows estimate will approximate the number of rows actually returned, updated, or deleted by the query. @@ -184,12 +184,12 @@ SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1'; pages and 10000 rows. The estimated cost is computed as (disk pages read * ) + (rows scanned * ). By default, - seq_page_cost is 1.0 and cpu_tuple_cost is 0.01, + seq_page_cost is 1.0 and cpu_tuple_cost is 0.01, so the estimated cost is (358 * 1.0) + (10000 * 0.01) = 458. - Now let's modify the query to add a WHERE condition: + Now let's modify the query to add a WHERE condition: EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 7000; @@ -200,21 +200,21 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 7000; Filter: (unique1 < 7000) - Notice that the EXPLAIN output shows the WHERE - clause being applied as a filter condition attached to the Seq + Notice that the EXPLAIN output shows the WHERE + clause being applied as a filter condition attached to the Seq Scan plan node. This means that the plan node checks the condition for each row it scans, and outputs only the ones that pass the condition. The estimate of output rows has been reduced because of the - WHERE clause. + WHERE clause. However, the scan will still have to visit all 10000 rows, so the cost hasn't decreased; in fact it has gone up a bit (by 10000 * , to be exact) to reflect the extra CPU - time spent checking the WHERE condition. + time spent checking the WHERE condition. - The actual number of rows this query would select is 7000, but the rows + The actual number of rows this query would select is 7000, but the rows estimate is only approximate. If you try to duplicate this experiment, you will probably get a slightly different estimate; moreover, it can change after each ANALYZE command, because the @@ -245,12 +245,12 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100; scan. (The reason for using two plan levels is that the upper plan node sorts the row locations identified by the index into physical order before reading them, to minimize the cost of separate fetches. - The bitmap mentioned in the node names is the mechanism that + The bitmap mentioned in the node names is the mechanism that does the sorting.) - Now let's add another condition to the WHERE clause: + Now let's add another condition to the WHERE clause: EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND stringu1 = 'xxx'; @@ -266,15 +266,15 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND stringu1 = 'xxx'; The added condition stringu1 = 'xxx' reduces the output row count estimate, but not the cost because we still have to visit - the same set of rows. Notice that the stringu1 clause + the same set of rows. 
Notice that the stringu1 clause cannot be applied as an index condition, since this index is only on - the unique1 column. Instead it is applied as a filter on + the unique1 column. Instead it is applied as a filter on the rows retrieved by the index. Thus the cost has actually gone up slightly to reflect this extra checking. - In some cases the planner will prefer a simple index scan plan: + In some cases the planner will prefer a simple index scan plan: EXPLAIN SELECT * FROM tenk1 WHERE unique1 = 42; @@ -289,14 +289,14 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 = 42; makes them even more expensive to read, but there are so few that the extra cost of sorting the row locations is not worth it. You'll most often see this plan type for queries that fetch just a single row. It's - also often used for queries that have an ORDER BY condition + also often used for queries that have an ORDER BY condition that matches the index order, because then no extra sorting step is needed - to satisfy the ORDER BY. + to satisfy the ORDER BY. If there are separate indexes on several of the columns referenced - in WHERE, the planner might choose to use an AND or OR + in WHERE, the planner might choose to use an AND or OR combination of the indexes: @@ -320,7 +320,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000; - Here is an example showing the effects of LIMIT: + Here is an example showing the effects of LIMIT: EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2; @@ -335,7 +335,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2 - This is the same query as above, but we added a LIMIT so that + This is the same query as above, but we added a LIMIT so that not all the rows need be retrieved, and the planner changed its mind about what to do. Notice that the total cost and row count of the Index Scan node are shown as if it were run to completion. However, the Limit node @@ -370,23 +370,23 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2; In this plan, we have a nested-loop join node with two table scans as inputs, or children. The indentation of the node summary lines reflects - the plan tree structure. The join's first, or outer, child + the plan tree structure. The join's first, or outer, child is a bitmap scan similar to those we saw before. Its cost and row count - are the same as we'd get from SELECT ... WHERE unique1 < 10 + are the same as we'd get from SELECT ... WHERE unique1 < 10 because we are - applying the WHERE clause unique1 < 10 + applying the WHERE clause unique1 < 10 at that node. The t1.unique2 = t2.unique2 clause is not relevant yet, so it doesn't affect the row count of the outer scan. The nested-loop join node will run its second, - or inner child once for each row obtained from the outer child. + or inner child once for each row obtained from the outer child. Column values from the current outer row can be plugged into the inner - scan; here, the t1.unique2 value from the outer row is available, + scan; here, the t1.unique2 value from the outer row is available, so we get a plan and costs similar to what we saw above for a simple - SELECT ... WHERE t2.unique2 = constant case. + SELECT ... WHERE t2.unique2 = constant case. (The estimated cost is actually a bit lower than what was seen above, as a result of caching that's expected to occur during the repeated - index scans on t2.) The + index scans on t2.) 
The costs of the loop node are then set on the basis of the cost of the outer scan, plus one repetition of the inner scan for each outer row (10 * 7.87, here), plus a little CPU time for join processing. @@ -395,7 +395,7 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2; In this example the join's output row count is the same as the product of the two scans' row counts, but that's not true in all cases because - there can be additional WHERE clauses that mention both tables + there can be additional WHERE clauses that mention both tables and so can only be applied at the join point, not to either input scan. Here's an example: @@ -418,15 +418,15 @@ WHERE t1.unique1 < 10 AND t2.unique2 < 10 AND t1.hundred < t2.hundred; The condition t1.hundred < t2.hundred can't be - tested in the tenk2_unique2 index, so it's applied at the + tested in the tenk2_unique2 index, so it's applied at the join node. This reduces the estimated output row count of the join node, but does not change either input scan. - Notice that here the planner has chosen to materialize the inner + Notice that here the planner has chosen to materialize the inner relation of the join, by putting a Materialize plan node atop it. This - means that the t2 index scan will be done just once, even + means that the t2 index scan will be done just once, even though the nested-loop join node needs to read that data ten times, once for each row from the outer relation. The Materialize node saves the data in memory as it's read, and then returns the data from memory on each @@ -435,8 +435,8 @@ WHERE t1.unique1 < 10 AND t2.unique2 < 10 AND t1.hundred < t2.hundred; When dealing with outer joins, you might see join plan nodes with both - Join Filter and plain Filter conditions attached. - Join Filter conditions come from the outer join's ON clause, + Join Filter and plain Filter conditions attached. + Join Filter conditions come from the outer join's ON clause, so a row that fails the Join Filter condition could still get emitted as a null-extended row. But a plain Filter condition is applied after the outer-join rules and so acts to remove rows unconditionally. In an inner @@ -470,7 +470,7 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; table are entered into an in-memory hash table, after which the other table is scanned and the hash table is probed for matches to each row. Again note how the indentation reflects the plan structure: the bitmap - scan on tenk1 is the input to the Hash node, which constructs + scan on tenk1 is the input to the Hash node, which constructs the hash table. That's then returned to the Hash Join node, which reads rows from its outer child plan and searches the hash table for each one. @@ -497,9 +497,9 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; Merge join requires its input data to be sorted on the join keys. In this - plan the tenk1 data is sorted by using an index scan to visit + plan the tenk1 data is sorted by using an index scan to visit the rows in the correct order, but a sequential scan and sort is preferred - for onek, because there are many more rows to be visited in + for onek, because there are many more rows to be visited in that table. (Sequential-scan-and-sort frequently beats an index scan for sorting many rows, because of the nonsequential disk access required by the index scan.) @@ -512,7 +512,7 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; (This is a crude tool, but useful. See also .) 
For example, if we're unconvinced that sequential-scan-and-sort is the best way to - deal with table onek in the previous example, we could try + deal with table onek in the previous example, we could try SET enable_sort = off; @@ -530,10 +530,10 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; -> Index Scan using onek_unique2 on onek t2 (cost=0.28..224.79 rows=1000 width=244) - which shows that the planner thinks that sorting onek by + which shows that the planner thinks that sorting onek by index-scanning is about 12% more expensive than sequential-scan-and-sort. Of course, the next question is whether it's right about that. - We can investigate that using EXPLAIN ANALYZE, as discussed + We can investigate that using EXPLAIN ANALYZE, as discussed below. @@ -544,8 +544,8 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; It is possible to check the accuracy of the planner's estimates - by using EXPLAIN's ANALYZE option. With this - option, EXPLAIN actually executes the query, and then displays + by using EXPLAIN's ANALYZE option. With this + option, EXPLAIN actually executes the query, and then displays the true row counts and true run time accumulated within each plan node, along with the same estimates that a plain EXPLAIN shows. For example, we might get a result like this: @@ -569,7 +569,7 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2; Note that the actual time values are in milliseconds of - real time, whereas the cost estimates are expressed in + real time, whereas the cost estimates are expressed in arbitrary units; so they are unlikely to match up. The thing that's usually most important to look for is whether the estimated row counts are reasonably close to reality. In this example @@ -580,17 +580,17 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2; In some query plans, it is possible for a subplan node to be executed more than once. For example, the inner index scan will be executed once per outer row in the above nested-loop plan. In such cases, the - loops value reports the + loops value reports the total number of executions of the node, and the actual time and rows values shown are averages per-execution. This is done to make the numbers comparable with the way that the cost estimates are shown. Multiply by - the loops value to get the total time actually spent in + the loops value to get the total time actually spent in the node. In the above example, we spent a total of 0.220 milliseconds - executing the index scans on tenk2. + executing the index scans on tenk2. - In some cases EXPLAIN ANALYZE shows additional execution + In some cases EXPLAIN ANALYZE shows additional execution statistics beyond the plan node execution times and row counts. For example, Sort and Hash nodes provide extra information: @@ -642,13 +642,13 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE ten < 7; These counts can be particularly valuable for filter conditions applied at - join nodes. The Rows Removed line only appears when at least + join nodes. The Rows Removed line only appears when at least one scanned row, or potential join pair in the case of a join node, is rejected by the filter condition. - A case similar to filter conditions occurs with lossy + A case similar to filter conditions occurs with lossy index scans. 
For example, consider this search for polygons containing a specific point: @@ -685,14 +685,14 @@ EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @> polygon '(0.5,2.0)'; Here we can see that the index returned one candidate row, which was then rejected by a recheck of the index condition. This happens because a - GiST index is lossy for polygon containment tests: it actually + GiST index is lossy for polygon containment tests: it actually returns the rows with polygons that overlap the target, and then we have to do the exact containment test on those rows. - EXPLAIN has a BUFFERS option that can be used with - ANALYZE to get even more run time statistics: + EXPLAIN has a BUFFERS option that can be used with + ANALYZE to get even more run time statistics: EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000; @@ -714,7 +714,7 @@ EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1 WHERE unique1 < 100 AND unique Execution time: 0.423 ms - The numbers provided by BUFFERS help to identify which parts + The numbers provided by BUFFERS help to identify which parts of the query are the most I/O-intensive. @@ -722,7 +722,7 @@ EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1 WHERE unique1 < 100 AND unique Keep in mind that because EXPLAIN ANALYZE actually runs the query, any side-effects will happen as usual, even though whatever results the query might output are discarded in favor of - printing the EXPLAIN data. If you want to analyze a + printing the EXPLAIN data. If you want to analyze a data-modifying query without changing your tables, you can roll the command back afterwards, for example: @@ -746,8 +746,8 @@ ROLLBACK; - As seen in this example, when the query is an INSERT, - UPDATE, or DELETE command, the actual work of + As seen in this example, when the query is an INSERT, + UPDATE, or DELETE command, the actual work of applying the table changes is done by a top-level Insert, Update, or Delete plan node. The plan nodes underneath this node perform the work of locating the old rows and/or computing the new data. @@ -762,7 +762,7 @@ ROLLBACK; - When an UPDATE or DELETE command affects an + When an UPDATE or DELETE command affects an inheritance hierarchy, the output might look like this: @@ -789,7 +789,7 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; scanning subplans, one per table. For clarity, the Update node is annotated to show the specific target tables that will be updated, in the same order as the corresponding subplans. (These annotations are new as - of PostgreSQL 9.5; in prior versions the reader had to + of PostgreSQL 9.5; in prior versions the reader had to intuit the target tables by inspecting the subplans.) @@ -804,12 +804,12 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; ANALYZE includes executor start-up and shut-down time, as well as the time to run any triggers that are fired, but it does not include parsing, rewriting, or planning time. - Time spent executing BEFORE triggers, if any, is included in + Time spent executing BEFORE triggers, if any, is included in the time for the related Insert, Update, or Delete node; but time - spent executing AFTER triggers is not counted there because - AFTER triggers are fired after completion of the whole plan. + spent executing AFTER triggers is not counted there because + AFTER triggers are fired after completion of the whole plan. The total time spent in each trigger - (either BEFORE or AFTER) is also shown separately. + (either BEFORE or AFTER) is also shown separately. 
Note that deferred constraint triggers will not be executed until end of transaction and are thus not considered at all by EXPLAIN ANALYZE. @@ -827,13 +827,13 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; network transmission costs and I/O conversion costs are not included. Second, the measurement overhead added by EXPLAIN ANALYZE can be significant, especially on machines with slow - gettimeofday() operating-system calls. You can use the + gettimeofday() operating-system calls. You can use the tool to measure the overhead of timing on your system. - EXPLAIN results should not be extrapolated to situations + EXPLAIN results should not be extrapolated to situations much different from the one you are actually testing; for example, results on a toy-sized table cannot be assumed to apply to large tables. The planner's cost estimates are not linear and so it might choose @@ -843,14 +843,14 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; The planner realizes that it's going to take one disk page read to process the table in any case, so there's no value in expending additional page reads to look at an index. (We saw this happening in the - polygon_tbl example above.) + polygon_tbl example above.) There are cases in which the actual and estimated values won't match up well, but nothing is really wrong. One such case occurs when - plan node execution is stopped short by a LIMIT or similar - effect. For example, in the LIMIT query we used before, + plan node execution is stopped short by a LIMIT or similar + effect. For example, in the LIMIT query we used before, EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2; @@ -880,10 +880,10 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 and the next key value in the one input is greater than the last key value of the other input; in such a case there can be no more matches and so no need to scan the rest of the first input. This results in not reading all - of one child, with results like those mentioned for LIMIT. + of one child, with results like those mentioned for LIMIT. Also, if the outer (first) child contains rows with duplicate key values, the inner (second) child is backed up and rescanned for the portion of its - rows matching that key value. EXPLAIN ANALYZE counts these + rows matching that key value. EXPLAIN ANALYZE counts these repeated emissions of the same inner rows as if they were real additional rows. When there are many outer duplicates, the reported actual row count for the inner child plan node can be significantly larger than the number @@ -948,9 +948,9 @@ WHERE relname LIKE 'tenk1%'; For efficiency reasons, reltuples and relpages are not updated on-the-fly, and so they usually contain somewhat out-of-date values. - They are updated by VACUUM, ANALYZE, and a - few DDL commands such as CREATE INDEX. A VACUUM - or ANALYZE operation that does not scan the entire table + They are updated by VACUUM, ANALYZE, and a + few DDL commands such as CREATE INDEX. A VACUUM + or ANALYZE operation that does not scan the entire table (which is commonly the case) will incrementally update the reltuples count on the basis of the part of the table it did scan, resulting in an approximate value. @@ -966,16 +966,16 @@ WHERE relname LIKE 'tenk1%'; Most queries retrieve only a fraction of the rows in a table, due - to WHERE clauses that restrict the rows to be + to WHERE clauses that restrict the rows to be examined. 
The planner thus needs to make an estimate of the - selectivity of WHERE clauses, that is, + selectivity of WHERE clauses, that is, the fraction of rows that match each condition in the - WHERE clause. The information used for this task is + WHERE clause. The information used for this task is stored in the pg_statistic system catalog. Entries in pg_statistic - are updated by the ANALYZE and VACUUM - ANALYZE commands, and are always approximate even when freshly + are updated by the ANALYZE and VACUUM + ANALYZE commands, and are always approximate even when freshly updated. @@ -1020,17 +1020,17 @@ WHERE tablename = 'road'; Note that two rows are displayed for the same column, one corresponding to the complete inheritance hierarchy starting at the - road table (inherited=t), + road table (inherited=t), and another one including only the road table itself - (inherited=f). + (inherited=f). The amount of information stored in pg_statistic - by ANALYZE, in particular the maximum number of entries in the - most_common_vals and histogram_bounds + by ANALYZE, in particular the maximum number of entries in the + most_common_vals and histogram_bounds arrays for each column, can be set on a - column-by-column basis using the ALTER TABLE SET STATISTICS + column-by-column basis using the ALTER TABLE SET STATISTICS command, or globally by setting the configuration variable. The default limit is presently 100 entries. Raising the limit @@ -1072,7 +1072,7 @@ WHERE tablename = 'road'; an assumption that does not hold when column values are correlated. Regular statistics, because of their per-individual-column nature, cannot capture any knowledge about cross-column correlation. - However, PostgreSQL has the ability to compute + However, PostgreSQL has the ability to compute multivariate statistics, which can capture such information. @@ -1081,7 +1081,7 @@ WHERE tablename = 'road'; Because the number of possible column combinations is very large, it's impractical to compute multivariate statistics automatically. Instead, extended statistics objects, more often - called just statistics objects, can be created to instruct + called just statistics objects, can be created to instruct the server to obtain statistics across interesting sets of columns. @@ -1116,12 +1116,12 @@ WHERE tablename = 'road'; The simplest kind of extended statistics tracks functional - dependencies, a concept used in definitions of database normal forms. - We say that column b is functionally dependent on - column a if knowledge of the value of - a is sufficient to determine the value - of b, that is there are no two rows having the same value - of a but different values of b. + dependencies, a concept used in definitions of database normal forms. + We say that column b is functionally dependent on + column a if knowledge of the value of + a is sufficient to determine the value + of b, that is there are no two rows having the same value + of a but different values of b. In a fully normalized database, functional dependencies should exist only on primary keys and superkeys. However, in practice many data sets are not fully normalized for various reasons; intentional @@ -1142,15 +1142,15 @@ WHERE tablename = 'road'; - To inform the planner about functional dependencies, ANALYZE + To inform the planner about functional dependencies, ANALYZE can collect measurements of cross-column dependency. 
Assessing the degree of dependency between all sets of columns would be prohibitively expensive, so data collection is limited to those groups of columns appearing together in a statistics object defined with - the dependencies option. It is advisable to create - dependencies statistics only for column groups that are + the dependencies option. It is advisable to create + dependencies statistics only for column groups that are strongly correlated, to avoid unnecessary overhead in both - ANALYZE and later query planning. + ANALYZE and later query planning. @@ -1189,7 +1189,7 @@ SELECT stxname, stxkeys, stxdependencies simple equality conditions that compare columns to constant values. They are not used to improve estimates for equality conditions comparing two columns or comparing a column to an expression, nor for - range clauses, LIKE or any other type of condition. + range clauses, LIKE or any other type of condition. @@ -1200,7 +1200,7 @@ SELECT stxname, stxkeys, stxdependencies SELECT * FROM zipcodes WHERE city = 'San Francisco' AND zip = '94105'; - the planner will disregard the city clause as not + the planner will disregard the city clause as not changing the selectivity, which is correct. However, it will make the same assumption about @@ -1233,11 +1233,11 @@ SELECT * FROM zipcodes WHERE city = 'San Francisco' AND zip = '90210'; - To improve such estimates, ANALYZE can collect n-distinct + To improve such estimates, ANALYZE can collect n-distinct statistics for groups of columns. As before, it's impractical to do this for every possible column grouping, so data is collected only for those groups of columns appearing together in a statistics object - defined with the ndistinct option. Data will be collected + defined with the ndistinct option. Data will be collected for each possible combination of two or more columns from the set of listed columns. @@ -1267,17 +1267,17 @@ nd | {"1, 2": 33178, "1, 5": 33178, "2, 5": 27435, "1, 2, 5": 33178} - It's advisable to create ndistinct statistics objects only + It's advisable to create ndistinct statistics objects only on combinations of columns that are actually used for grouping, and for which misestimation of the number of groups is resulting in bad - plans. Otherwise, the ANALYZE cycles are just wasted. + plans. Otherwise, the ANALYZE cycles are just wasted. - Controlling the Planner with Explicit <literal>JOIN</> Clauses + Controlling the Planner with Explicit <literal>JOIN</literal> Clauses join @@ -1286,7 +1286,7 @@ nd | {"1, 2": 33178, "1, 5": 33178, "2, 5": 27435, "1, 2, 5": 33178} It is possible - to control the query planner to some extent by using the explicit JOIN + to control the query planner to some extent by using the explicit JOIN syntax. To see why this matters, we first need some background. @@ -1297,13 +1297,13 @@ SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; the planner is free to join the given tables in any order. For example, it could generate a query plan that joins A to B, using - the WHERE condition a.id = b.id, and then - joins C to this joined table, using the other WHERE + the WHERE condition a.id = b.id, and then + joins C to this joined table, using the other WHERE condition. Or it could join B to C and then join A to that result. Or it could join A to C and then join them with B — but that would be inefficient, since the full Cartesian product of A and C would have to be formed, there being no applicable condition in the - WHERE clause to allow optimization of the join. 
(All + WHERE clause to allow optimization of the join. (All joins in the PostgreSQL executor happen between two input tables, so it's necessary to build up the result in one or another of these fashions.) The important point is that @@ -1347,30 +1347,30 @@ SELECT * FROM a LEFT JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); SELECT * FROM a LEFT JOIN b ON (a.bid = b.id) LEFT JOIN c ON (a.cid = c.id); it is valid to join A to either B or C first. Currently, only - FULL JOIN completely constrains the join order. Most - practical cases involving LEFT JOIN or RIGHT JOIN + FULL JOIN completely constrains the join order. Most + practical cases involving LEFT JOIN or RIGHT JOIN can be rearranged to some extent. - Explicit inner join syntax (INNER JOIN, CROSS - JOIN, or unadorned JOIN) is semantically the same as - listing the input relations in FROM, so it does not + Explicit inner join syntax (INNER JOIN, CROSS + JOIN, or unadorned JOIN) is semantically the same as + listing the input relations in FROM, so it does not constrain the join order. - Even though most kinds of JOIN don't completely constrain + Even though most kinds of JOIN don't completely constrain the join order, it is possible to instruct the PostgreSQL query planner to treat all - JOIN clauses as constraining the join order anyway. + JOIN clauses as constraining the join order anyway. For example, these three queries are logically equivalent: SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id; SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); - But if we tell the planner to honor the JOIN order, + But if we tell the planner to honor the JOIN order, the second and third take less time to plan than the first. This effect is not worth worrying about for only three tables, but it can be a lifesaver with many tables. @@ -1378,19 +1378,19 @@ SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); To force the planner to follow the join order laid out by explicit - JOINs, + JOINs, set the run-time parameter to 1. (Other possible values are discussed below.) You do not need to constrain the join order completely in order to - cut search time, because it's OK to use JOIN operators - within items of a plain FROM list. For example, consider: + cut search time, because it's OK to use JOIN operators + within items of a plain FROM list. For example, consider: SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...; - With join_collapse_limit = 1, this + With join_collapse_limit = 1, this forces the planner to join A to B before joining them to other tables, but doesn't constrain its choices otherwise. In this example, the number of possible join orders is reduced by a factor of 5. @@ -1400,7 +1400,7 @@ SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...; Constraining the planner's search in this way is a useful technique both for reducing planning time and for directing the planner to a good query plan. If the planner chooses a bad join order by default, - you can force it to choose a better order via JOIN syntax + you can force it to choose a better order via JOIN syntax — assuming that you know of a better order, that is. Experimentation is recommended. 
@@ -1415,22 +1415,22 @@ FROM x, y, WHERE somethingelse; This situation might arise from use of a view that contains a join; - the view's SELECT rule will be inserted in place of the view + the view's SELECT rule will be inserted in place of the view reference, yielding a query much like the above. Normally, the planner will try to collapse the subquery into the parent, yielding: SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; This usually results in a better plan than planning the subquery - separately. (For example, the outer WHERE conditions might be such that + separately. (For example, the outer WHERE conditions might be such that joining X to A first eliminates many rows of A, thus avoiding the need to form the full logical output of the subquery.) But at the same time, we have increased the planning time; here, we have a five-way join problem replacing two separate three-way join problems. Because of the exponential growth of the number of possibilities, this makes a big difference. The planner tries to avoid getting stuck in huge join search - problems by not collapsing a subquery if more than from_collapse_limit - FROM items would result in the parent + problems by not collapsing a subquery if more than from_collapse_limit + FROM items would result in the parent query. You can trade off planning time against quality of plan by adjusting this run-time parameter up or down. @@ -1439,11 +1439,11 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; and are similarly named because they do almost the same thing: one controls - when the planner will flatten out subqueries, and the + when the planner will flatten out subqueries, and the other controls when it will flatten out explicit joins. Typically - you would either set join_collapse_limit equal to - from_collapse_limit (so that explicit joins and subqueries - act similarly) or set join_collapse_limit to 1 (if you want + you would either set join_collapse_limit equal to + from_collapse_limit (so that explicit joins and subqueries + act similarly) or set join_collapse_limit to 1 (if you want to control join order with explicit joins). But you might set them differently if you are trying to fine-tune the trade-off between planning time and run time. @@ -1468,7 +1468,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - When using multiple INSERTs, turn off autocommit and just do + When using multiple INSERTs, turn off autocommit and just do one commit at the end. (In plain SQL, this means issuing BEGIN at the start and COMMIT at the end. Some client libraries might @@ -1505,14 +1505,14 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; EXECUTE as many times as required. This avoids some of the overhead of repeatedly parsing and planning INSERT. Different interfaces provide this facility - in different ways; look for prepared statements in the interface + in different ways; look for prepared statements in the interface documentation. Note that loading a large number of rows using COPY is almost always faster than using - INSERT, even if PREPARE is used and + INSERT, even if PREPARE is used and multiple insertions are batched into a single transaction. @@ -1523,7 +1523,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; needs to be written, because in case of an error, the files containing the newly loaded data will be removed anyway. However, this consideration only applies when - is minimal as all commands + is minimal as all commands must write WAL otherwise. 
@@ -1557,7 +1557,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Just as with indexes, a foreign key constraint can be checked - in bulk more efficiently than row-by-row. So it might be + in bulk more efficiently than row-by-row. So it might be useful to drop foreign key constraints, load data, and re-create the constraints. Again, there is a trade-off between data load speed and loss of error checking while the constraint is missing. @@ -1570,7 +1570,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; the row's foreign key constraint). Loading many millions of rows can cause the trigger event queue to overflow available memory, leading to intolerable swapping or even outright failure of the command. Therefore - it may be necessary, not just desirable, to drop and re-apply + it may be necessary, not just desirable, to drop and re-apply foreign keys when loading large amounts of data. If temporarily removing the constraint isn't acceptable, the only other recourse may be to split up the load operation into smaller transactions. @@ -1584,8 +1584,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Temporarily increasing the configuration variable when loading large amounts of data can lead to improved performance. This will help to speed up CREATE - INDEX commands and ALTER TABLE ADD FOREIGN KEY commands. - It won't do much for COPY itself, so this advice is + INDEX commands and ALTER TABLE ADD FOREIGN KEY commands. + It won't do much for COPY itself, so this advice is only useful when you are using one or both of the above techniques. @@ -1617,8 +1617,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; new base backup after the load has completed than to process a large amount of incremental WAL data. To prevent incremental WAL logging while loading, disable archiving and streaming replication, by setting - to minimal, - to off, and + to minimal, + to off, and to zero. But note that changing these settings requires a server restart. @@ -1628,8 +1628,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; process the WAL data, doing this will actually make certain commands faster, because they are designed not to write WAL at all if wal_level - is minimal. (They can guarantee crash safety more cheaply - by doing an fsync at the end than by writing WAL.) + is minimal. (They can guarantee crash safety more cheaply + by doing an fsync at the end than by writing WAL.) This applies to the following commands: @@ -1683,21 +1683,21 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - Some Notes About <application>pg_dump</> + Some Notes About <application>pg_dump</application> - Dump scripts generated by pg_dump automatically apply + Dump scripts generated by pg_dump automatically apply several, but not all, of the above guidelines. To reload a - pg_dump dump as quickly as possible, you need to + pg_dump dump as quickly as possible, you need to do a few extra things manually. (Note that these points apply while - restoring a dump, not while creating it. + restoring a dump, not while creating it. The same points apply whether loading a text dump with - psql or using pg_restore to load - from a pg_dump archive file.) + psql or using pg_restore to load + from a pg_dump archive file.) - By default, pg_dump uses COPY, and when + By default, pg_dump uses COPY, and when it is generating a complete schema-and-data dump, it is careful to load data before creating indexes and foreign keys. 
So in this case several guidelines are handled automatically. What is left @@ -1713,10 +1713,10 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; If using WAL archiving or streaming replication, consider disabling - them during the restore. To do that, set archive_mode - to off, - wal_level to minimal, and - max_wal_senders to zero before loading the dump. + them during the restore. To do that, set archive_mode + to off, + wal_level to minimal, and + max_wal_senders to zero before loading the dump. Afterwards, set them back to the right values and take a fresh base backup. @@ -1724,49 +1724,49 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Experiment with the parallel dump and restore modes of both - pg_dump and pg_restore and find the + pg_dump and pg_restore and find the optimal number of concurrent jobs to use. Dumping and restoring in - parallel by means of the option should give you a significantly higher performance over the serial mode. Consider whether the whole dump should be restored as a single - transaction. To do that, pass the If multiple CPUs are available in the database server, consider using - pg_restore's option. This allows concurrent data loading and index creation. - Run ANALYZE afterwards. + Run ANALYZE afterwards. - A data-only dump will still use COPY, but it does not + A data-only dump will still use COPY, but it does not drop or recreate indexes, and it does not normally touch foreign keys. You can get the effect of disabling foreign keys by using - the option — but realize that that eliminates, rather than just postpones, foreign key validation, and so it is possible to insert bad data if you use it. @@ -1778,7 +1778,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; while loading the data, but don't bother increasing maintenance_work_mem; rather, you'd do that while manually recreating indexes and foreign keys afterwards. - And don't forget to ANALYZE when you're done; see + And don't forget to ANALYZE when you're done; see and for more information. @@ -1808,7 +1808,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Place the database cluster's data directory in a memory-backed - file system (i.e. RAM disk). This eliminates all + file system (i.e. RAM disk). This eliminates all database disk I/O, but limits data storage to the amount of available memory (and perhaps swap). @@ -1826,7 +1826,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Turn off ; there might be no need to force WAL writes to disk on every commit. This setting does risk transaction loss (though not data - corruption) in case of a crash of the database. + corruption) in case of a crash of the database. @@ -1842,7 +1842,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Increase and ; this reduces the frequency of checkpoints, but increases the storage requirements of - /pg_wal. + /pg_wal. diff --git a/doc/src/sgml/pgbuffercache.sgml b/doc/src/sgml/pgbuffercache.sgml index 4e53009ae0..18ac781d0d 100644 --- a/doc/src/sgml/pgbuffercache.sgml +++ b/doc/src/sgml/pgbuffercache.sgml @@ -37,7 +37,7 @@
- <structname>pg_buffercache</> Columns + <structname>pg_buffercache</structname> Columns @@ -54,7 +54,7 @@ bufferidinteger - ID, in the range 1..shared_buffers + ID, in the range 1..shared_buffers @@ -83,7 +83,7 @@ smallint Fork number within the relation; see - include/common/relpath.h + include/common/relpath.h @@ -120,22 +120,22 @@ There is one row for each buffer in the shared cache. Unused buffers are - shown with all fields null except bufferid. Shared system + shown with all fields null except bufferid. Shared system catalogs are shown as belonging to database zero. Because the cache is shared by all the databases, there will normally be pages from relations not belonging to the current database. This means - that there may not be matching join rows in pg_class for + that there may not be matching join rows in pg_class for some rows, or that there could even be incorrect joins. If you are - trying to join against pg_class, it's a good idea to - restrict the join to rows having reldatabase equal to + trying to join against pg_class, it's a good idea to + restrict the join to rows having reldatabase equal to the current database's OID or zero. - When the pg_buffercache view is accessed, internal buffer + When the pg_buffercache view is accessed, internal buffer manager locks are taken for long enough to copy all the buffer state data that the view will display. This ensures that the view produces a consistent set of results, while not diff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml index 34d8621958..80595e193b 100644 --- a/doc/src/sgml/pgcrypto.sgml +++ b/doc/src/sgml/pgcrypto.sgml @@ -13,8 +13,8 @@ - The pgcrypto module provides cryptographic functions for - PostgreSQL. + The pgcrypto module provides cryptographic functions for + PostgreSQL. @@ -33,19 +33,19 @@ digest(data bytea, type text) returns bytea - Computes a binary hash of the given data. - type is the algorithm to use. + Computes a binary hash of the given data. + type is the algorithm to use. Standard algorithms are md5, sha1, sha224, sha256, sha384 and sha512. - If pgcrypto was built with + If pgcrypto was built with OpenSSL, more algorithms are available, as detailed in . If you want the digest as a hexadecimal string, use - encode() on the result. For example: + encode() on the result. For example: CREATE OR REPLACE FUNCTION sha1(bytea) returns text AS $$ SELECT encode(digest($1, 'sha1'), 'hex') @@ -67,12 +67,12 @@ hmac(data bytea, key text, type text) returns bytea - Calculates hashed MAC for data with key key. - type is the same as in digest(). + Calculates hashed MAC for data with key key. + type is the same as in digest(). - This is similar to digest() but the hash can only be + This is similar to digest() but the hash can only be recalculated knowing the key. This prevents the scenario of someone altering data and also changing the hash to match. @@ -88,14 +88,14 @@ hmac(data bytea, key text, type text) returns bytea Password Hashing Functions - The functions crypt() and gen_salt() + The functions crypt() and gen_salt() are specifically designed for hashing passwords. - crypt() does the hashing and gen_salt() + crypt() does the hashing and gen_salt() prepares algorithm parameters for it. 
- The algorithms in crypt() differ from the usual + The algorithms in crypt() differ from the usual MD5 or SHA1 hashing algorithms in the following respects: @@ -108,7 +108,7 @@ hmac(data bytea, key text, type text) returns bytea - They use a random value, called the salt, so that users + They use a random value, called the salt, so that users having the same password will have different encrypted passwords. This is also an additional defense against reversing the algorithm. @@ -134,7 +134,7 @@ hmac(data bytea, key text, type text) returns bytea
- Supported Algorithms for <function>crypt()</> + Supported Algorithms for <function>crypt()</function> @@ -148,7 +148,7 @@ hmac(data bytea, key text, type text) returns bytea - bf + bf 72 yes 128 @@ -156,7 +156,7 @@ hmac(data bytea, key text, type text) returns bytea Blowfish-based, variant 2a - md5 + md5 unlimited no 48 @@ -164,7 +164,7 @@ hmac(data bytea, key text, type text) returns bytea MD5-based crypt - xdes + xdes 8 yes 24 @@ -172,7 +172,7 @@ hmac(data bytea, key text, type text) returns bytea Extended DES - des + des 8 no 12 @@ -184,7 +184,7 @@ hmac(data bytea, key text, type text) returns bytea
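For instance, the algorithm (and, where the table above allows one, the iteration count) is chosen through the salt produced by gen_salt(); a brief sketch:

SELECT crypt('my password', gen_salt('bf', 8));  -- Blowfish, iteration count 8
SELECT crypt('my password', gen_salt('md5'));    -- MD5-based crypt, no iteration count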
- <function>crypt()</> + <function>crypt()</function> crypt @@ -195,10 +195,10 @@ crypt(password text, salt text) returns text - Calculates a crypt(3)-style hash of password. + Calculates a crypt(3)-style hash of password. When storing a new password, you need to use - gen_salt() to generate a new salt value. - To check a password, pass the stored hash value as salt, + gen_salt() to generate a new salt value. + To check a password, pass the stored hash value as salt, and test whether the result matches the stored value. @@ -212,12 +212,12 @@ UPDATE ... SET pswhash = crypt('new password', gen_salt('md5')); SELECT (pswhash = crypt('entered password', pswhash)) AS pswmatch FROM ... ; - This returns true if the entered password is correct. + This returns true if the entered password is correct. - <function>gen_salt()</> + <function>gen_salt()</function> gen_salt @@ -228,30 +228,30 @@ gen_salt(type text [, iter_count integer ]) returns text - Generates a new random salt string for use in crypt(). - The salt string also tells crypt() which algorithm to use. + Generates a new random salt string for use in crypt(). + The salt string also tells crypt() which algorithm to use. - The type parameter specifies the hashing algorithm. + The type parameter specifies the hashing algorithm. The accepted types are: des, xdes, md5 and bf. - The iter_count parameter lets the user specify the iteration + The iter_count parameter lets the user specify the iteration count, for algorithms that have one. The higher the count, the more time it takes to hash the password and therefore the more time to break it. Although with too high a count the time to calculate a hash may be several years - — which is somewhat impractical. If the iter_count + — which is somewhat impractical. If the iter_count parameter is omitted, the default iteration count is used. - Allowed values for iter_count depend on the algorithm and + Allowed values for iter_count depend on the algorithm and are shown in . - Iteration Counts for <function>crypt()</> + Iteration Counts for <function>crypt()</function> @@ -263,13 +263,13 @@ gen_salt(type text [, iter_count integer ]) returns text - xdes + xdes 725 1 16777215 - bf + bf 6 4 31 @@ -310,63 +310,63 @@ gen_salt(type text [, iter_count integer ]) returns text Algorithm Hashes/sec - For [a-z] - For [A-Za-z0-9] - Duration relative to md5 hash + For [a-z] + For [A-Za-z0-9] + Duration relative to md5 hash - crypt-bf/8 + crypt-bf/8 1792 4 years 3927 years 100k - crypt-bf/7 + crypt-bf/7 3648 2 years 1929 years 50k - crypt-bf/6 + crypt-bf/6 7168 1 year 982 years 25k - crypt-bf/5 + crypt-bf/5 13504 188 days 521 years 12.5k - crypt-md5 + crypt-md5 171584 15 days 41 years 1k - crypt-des + crypt-des 23221568 157.5 minutes 108 days 7 - sha1 + sha1 37774272 90 minutes 68 days 4 - md5 (hash) + md5 (hash) 150085504 22.5 minutes 17 days @@ -388,18 +388,18 @@ gen_salt(type text [, iter_count integer ]) returns text - crypt-des and crypt-md5 algorithm numbers are - taken from John the Ripper v1.6.38 -test output. + crypt-des and crypt-md5 algorithm numbers are + taken from John the Ripper v1.6.38 -test output. - md5 hash numbers are from mdcrack 1.2. + md5 hash numbers are from mdcrack 1.2. - sha1 numbers are from lcrack-20031130-beta. + sha1 numbers are from lcrack-20031130-beta. @@ -407,10 +407,10 @@ gen_salt(type text [, iter_count integer ]) returns text crypt-bf numbers are taken using a simple program that loops over 1000 8-character passwords. 
That way I can show the speed with different numbers of iterations. For reference: john - -test shows 13506 loops/sec for crypt-bf/5. + -test shows 13506 loops/sec for crypt-bf/5. (The very small difference in results is in accordance with the fact that the - crypt-bf implementation in pgcrypto + crypt-bf implementation in pgcrypto is the same one used in John the Ripper.) @@ -436,7 +436,7 @@ gen_salt(type text [, iter_count integer ]) returns text - An encrypted PGP message consists of 2 parts, or packets: + An encrypted PGP message consists of 2 parts, or packets: @@ -459,7 +459,7 @@ gen_salt(type text [, iter_count integer ]) returns text The given password is hashed using a String2Key (S2K) algorithm. This is - rather similar to crypt() algorithms — purposefully + rather similar to crypt() algorithms — purposefully slow and with random salt — but it produces a full-length binary key. @@ -540,8 +540,8 @@ pgp_sym_encrypt(data text, psw text [, options text ]) returns bytea pgp_sym_encrypt_bytea(data bytea, psw text [, options text ]) returns bytea - Encrypt data with a symmetric PGP key psw. - The options parameter can contain option settings, + Encrypt data with a symmetric PGP key psw. + The options parameter can contain option settings, as described below. @@ -565,12 +565,12 @@ pgp_sym_decrypt_bytea(msg bytea, psw text [, options text ]) returns bytea Decrypt a symmetric-key-encrypted PGP message. - Decrypting bytea data with pgp_sym_decrypt is disallowed. + Decrypting bytea data with pgp_sym_decrypt is disallowed. This is to avoid outputting invalid character data. Decrypting - originally textual data with pgp_sym_decrypt_bytea is fine. + originally textual data with pgp_sym_decrypt_bytea is fine. - The options parameter can contain option settings, + The options parameter can contain option settings, as described below. @@ -591,11 +591,11 @@ pgp_pub_encrypt(data text, key bytea [, options text ]) returns bytea pgp_pub_encrypt_bytea(data bytea, key bytea [, options text ]) returns bytea - Encrypt data with a public PGP key key. + Encrypt data with a public PGP key key. Giving this function a secret key will produce an error. - The options parameter can contain option settings, + The options parameter can contain option settings, as described below. @@ -616,19 +616,19 @@ pgp_pub_decrypt(msg bytea, key bytea [, psw text [, options text ]]) returns tex pgp_pub_decrypt_bytea(msg bytea, key bytea [, psw text [, options text ]]) returns bytea - Decrypt a public-key-encrypted message. key must be the + Decrypt a public-key-encrypted message. key must be the secret key corresponding to the public key that was used to encrypt. If the secret key is password-protected, you must give the password in - psw. If there is no password, but you want to specify + psw. If there is no password, but you want to specify options, you need to give an empty password. - Decrypting bytea data with pgp_pub_decrypt is disallowed. + Decrypting bytea data with pgp_pub_decrypt is disallowed. This is to avoid outputting invalid character data. Decrypting - originally textual data with pgp_pub_decrypt_bytea is fine. + originally textual data with pgp_pub_decrypt_bytea is fine. - The options parameter can contain option settings, + The options parameter can contain option settings, as described below. @@ -644,7 +644,7 @@ pgp_pub_decrypt_bytea(msg bytea, key bytea [, psw text [, options text ]]) retur pgp_key_id(bytea) returns text - pgp_key_id extracts the key ID of a PGP public or secret key. 
+ pgp_key_id extracts the key ID of a PGP public or secret key. Or it gives the key ID that was used for encrypting the data, if given an encrypted message. @@ -654,7 +654,7 @@ pgp_key_id(bytea) returns text - SYMKEY + SYMKEY The message is encrypted with a symmetric key. @@ -662,12 +662,12 @@ pgp_key_id(bytea) returns text - ANYKEY + ANYKEY The message is public-key encrypted, but the key ID has been removed. That means you will need to try all your secret keys on it to see - which one decrypts it. pgcrypto itself does not produce + which one decrypts it. pgcrypto itself does not produce such messages. @@ -675,7 +675,7 @@ pgp_key_id(bytea) returns text Note that different keys may have the same ID. This is rare but a normal event. The client application should then try to decrypt with each one, - to see which fits — like handling ANYKEY. + to see which fits — like handling ANYKEY. @@ -700,8 +700,8 @@ dearmor(data text) returns bytea - If the keys and values arrays are specified, - an armor header is added to the armored format for each + If the keys and values arrays are specified, + an armor header is added to the armored format for each key/value pair. Both arrays must be single-dimensional, and they must be of the same length. The keys and values cannot contain any non-ASCII characters. @@ -719,8 +719,8 @@ dearmor(data text) returns bytea pgp_armor_headers(data text, key out text, value out text) returns setof record - pgp_armor_headers() extracts the armor headers from - data. The return value is a set of rows with two columns, + pgp_armor_headers() extracts the armor headers from + data. The return value is a set of rows with two columns, key and value. If the keys or values contain any non-ASCII characters, they are treated as UTF-8. @@ -924,7 +924,7 @@ gpg --gen-key - The preferred key type is DSA and Elgamal. + The preferred key type is DSA and Elgamal. For RSA encryption you must create either DSA or RSA sign-only key @@ -950,7 +950,7 @@ gpg -a --export-secret-keys KEYID > secret.key - You need to use dearmor() on these keys before giving them to + You need to use dearmor() on these keys before giving them to the PGP functions. Or if you can handle binary data, you can drop -a from the command. @@ -982,7 +982,7 @@ gpg -a --export-secret-keys KEYID > secret.key No support for several subkeys. This may seem like a problem, as this is common practice. On the other hand, you should not use your regular - GPG/PGP keys with pgcrypto, but create new ones, + GPG/PGP keys with pgcrypto, but create new ones, as the usage scenario is rather different. @@ -1056,15 +1056,15 @@ decrypt_iv(data bytea, key bytea, iv bytea, type text) returns bytea type string is: -algorithm - mode /pad: padding +algorithm - mode /pad: padding - where algorithm is one of: + where algorithm is one of: bf — Blowfish aes — AES (Rijndael-128) - and mode is one of: + and mode is one of: @@ -1078,7 +1078,7 @@ decrypt_iv(data bytea, key bytea, iv bytea, type text) returns bytea - and padding is one of: + and padding is one of: @@ -1100,8 +1100,8 @@ encrypt(data, 'fooz', 'bf-cbc/pad:pkcs') - In encrypt_iv and decrypt_iv, the - iv parameter is the initial value for the CBC mode; + In encrypt_iv and decrypt_iv, the + iv parameter is the initial value for the CBC mode; it is ignored for ECB. It is clipped or padded with zeroes if not exactly block size. It defaults to all zeroes in the functions without this parameter. 
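As a concrete sketch of the type-string syntax described above (the key and data are arbitrary literals), a round trip through the raw functions looks like this:

SELECT encrypt('secret data', 'fooz', 'aes-cbc/pad:pkcs');

SELECT convert_from(
         decrypt(
           encrypt(convert_to('secret data', 'UTF8'), 'fooz', 'aes-cbc/pad:pkcs'),
           'fooz', 'aes-cbc/pad:pkcs'),
         'UTF8');

The second query returns the original text, since decrypt() reverses encrypt() when the same key and type string are used.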
@@ -1119,7 +1119,7 @@ encrypt(data, 'fooz', 'bf-cbc/pad:pkcs') gen_random_bytes(count integer) returns bytea - Returns count cryptographically strong random bytes. + Returns count cryptographically strong random bytes. At most 1024 bytes can be extracted at a time. This is to avoid draining the randomness generator pool. @@ -1143,7 +1143,7 @@ gen_random_uuid() returns uuid Configuration - pgcrypto configures itself according to the findings of the + pgcrypto configures itself according to the findings of the main PostgreSQL configure script. The options that affect it are --with-zlib and --with-openssl. @@ -1253,9 +1253,9 @@ gen_random_uuid() returns uuid Security Limitations - All pgcrypto functions run inside the database server. + All pgcrypto functions run inside the database server. That means that all - the data and passwords move between pgcrypto and client + the data and passwords move between pgcrypto and client applications in clear text. Thus you must: @@ -1276,7 +1276,7 @@ gen_random_uuid() returns uuid The implementation does not resist side-channel attacks. For example, the time required for - a pgcrypto decryption function to complete varies among + a pgcrypto decryption function to complete varies among ciphertexts of a given size. @@ -1342,7 +1342,7 @@ gen_random_uuid() returns uuid - Jean-Luc Cooke Fortuna-based /dev/random driver for Linux. + Jean-Luc Cooke Fortuna-based /dev/random driver for Linux. diff --git a/doc/src/sgml/pgfreespacemap.sgml b/doc/src/sgml/pgfreespacemap.sgml index 43e154a2f3..0122d278e3 100644 --- a/doc/src/sgml/pgfreespacemap.sgml +++ b/doc/src/sgml/pgfreespacemap.sgml @@ -8,7 +8,7 @@ - The pg_freespacemap module provides a means for examining the + The pg_freespacemap module provides a means for examining the free space map (FSM). It provides a function called pg_freespace, or two overloaded functions, to be precise. The functions show the value recorded in the free space map for @@ -36,7 +36,7 @@ Returns the amount of free space on the page of the relation, specified - by blkno, according to the FSM. + by blkno, according to the FSM. @@ -50,7 +50,7 @@ Displays the amount of free space on each page of the relation, - according to the FSM. A set of (blkno bigint, avail int2) + according to the FSM. A set of (blkno bigint, avail int2) tuples is returned, one tuple for each page in the relation. @@ -59,7 +59,7 @@ The values stored in the free space map are not exact. They're rounded - to precision of 1/256th of BLCKSZ (32 bytes with default BLCKSZ), and + to precision of 1/256th of BLCKSZ (32 bytes with default BLCKSZ), and they're not kept fully up-to-date as tuples are inserted and updated. diff --git a/doc/src/sgml/pgprewarm.sgml b/doc/src/sgml/pgprewarm.sgml index c6b94a8b72..e0a6d0503f 100644 --- a/doc/src/sgml/pgprewarm.sgml +++ b/doc/src/sgml/pgprewarm.sgml @@ -11,11 +11,11 @@ The pg_prewarm module provides a convenient way to load relation data into either the operating system buffer cache or the PostgreSQL buffer cache. Prewarming - can be performed manually using the pg_prewarm function, - or can be performed automatically by including pg_prewarm in + can be performed manually using the pg_prewarm function, + or can be performed automatically by including pg_prewarm in . 
In the latter case, the system will run a background worker which periodically records the contents - of shared buffers in a file called autoprewarm.blocks and + of shared buffers in a file called autoprewarm.blocks and will, using 2 background workers, reload those same blocks after a restart. @@ -77,10 +77,10 @@ autoprewarm_dump_now() RETURNS int8 - Update autoprewarm.blocks immediately. This may be useful + Update autoprewarm.blocks immediately. This may be useful if the autoprewarm worker is not running but you anticipate running it after the next restart. The return value is the number of records written - to autoprewarm.blocks. + to autoprewarm.blocks. @@ -92,7 +92,7 @@ autoprewarm_dump_now() RETURNS int8 pg_prewarm.autoprewarm (boolean) - pg_prewarm.autoprewarm configuration parameter + pg_prewarm.autoprewarm configuration parameter @@ -109,12 +109,12 @@ autoprewarm_dump_now() RETURNS int8 pg_prewarm.autoprewarm_interval (int) - pg_prewarm.autoprewarm_interval configuration parameter + pg_prewarm.autoprewarm_interval configuration parameter - This is the interval between updates to autoprewarm.blocks. + This is the interval between updates to autoprewarm.blocks. The default is 300 seconds. If set to 0, the file will not be dumped at regular intervals, but only when the server is shut down. diff --git a/doc/src/sgml/pgrowlocks.sgml b/doc/src/sgml/pgrowlocks.sgml index 65d532e081..57dcf6beb2 100644 --- a/doc/src/sgml/pgrowlocks.sgml +++ b/doc/src/sgml/pgrowlocks.sgml @@ -37,7 +37,7 @@ pgrowlocks(text) returns setof record
- <function>pgrowlocks</> Output Columns + <function>pgrowlocks</function> Output Columns @@ -73,9 +73,9 @@ pgrowlocks(text) returns setof record lock_typetext[]Lock mode of lockers (more than one if multitransaction), - an array of Key Share, Share, - For No Key Update, No Key Update, - For Update, Update. + an array of Key Share, Share, + For No Key Update, No Key Update, + For Update, Update. @@ -89,7 +89,7 @@ pgrowlocks(text) returns setof record
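A usage sketch (the table name is hypothetical); each output row describes one currently locked row, with the columns listed above:

SELECT * FROM pgrowlocks('accounts');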
- pgrowlocks takes AccessShareLock for the + pgrowlocks takes AccessShareLock for the target table and reads each row one by one to collect the row locking information. This is not very speedy for a large table. Note that: diff --git a/doc/src/sgml/pgstandby.sgml b/doc/src/sgml/pgstandby.sgml index bf4edea9f1..7feba8cdd6 100644 --- a/doc/src/sgml/pgstandby.sgml +++ b/doc/src/sgml/pgstandby.sgml @@ -31,14 +31,14 @@ Description - pg_standby supports creation of a warm standby + pg_standby supports creation of a warm standby database server. It is designed to be a production-ready program, as well as a customizable template should you require specific modifications. - pg_standby is designed to be a waiting - restore_command, which is needed to turn a standard + pg_standby is designed to be a waiting + restore_command, which is needed to turn a standard archive recovery into a warm standby operation. Other configuration is required as well, all of which is described in the main server manual (see ). @@ -46,33 +46,33 @@ To configure a standby - server to use pg_standby, put this into its + server to use pg_standby, put this into its recovery.conf configuration file: -restore_command = 'pg_standby archiveDir %f %p %r' +restore_command = 'pg_standby archiveDir %f %p %r' - where archiveDir is the directory from which WAL segment + where archiveDir is the directory from which WAL segment files should be restored. - If restartwalfile is specified, normally by using the + If restartwalfile is specified, normally by using the %r macro, then all WAL files logically preceding this - file will be removed from archivelocation. This minimizes + file will be removed from archivelocation. This minimizes the number of files that need to be retained, while preserving crash-restart capability. Use of this parameter is appropriate if the - archivelocation is a transient staging area for this - particular standby server, but not when the - archivelocation is intended as a long-term WAL archive area. + archivelocation is a transient staging area for this + particular standby server, but not when the + archivelocation is intended as a long-term WAL archive area. pg_standby assumes that - archivelocation is a directory readable by the - server-owning user. If restartwalfile (or -k) + archivelocation is a directory readable by the + server-owning user. If restartwalfile (or -k) is specified, - the archivelocation directory must be writable too. + the archivelocation directory must be writable too. - There are two ways to fail over to a warm standby database server + There are two ways to fail over to a warm standby database server when the master server fails: @@ -85,7 +85,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' the standby server has fallen behind, but if there is a lot of unapplied WAL it can be a long time before the standby server becomes ready. To trigger a smart failover, create a trigger file containing - the word smart, or just create it and leave it empty. + the word smart, or just create it and leave it empty.
@@ -96,8 +96,8 @@ restore_command = 'pg_standby archiveDir %f %p %r' In fast failover, the server is brought up immediately. Any WAL files in the archive that have not yet been applied will be ignored, and all transactions in those files are lost. To trigger a fast failover, - create a trigger file and write the word fast into it. - pg_standby can also be configured to execute a fast + create a trigger file and write the word fast into it. + pg_standby can also be configured to execute a fast failover automatically if no new WAL file appears within a defined interval. @@ -120,7 +120,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' - Use cp or copy command to restore WAL files + Use cp or copy command to restore WAL files from archive. This is the only supported behavior so this option is useless. @@ -130,7 +130,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' - Print lots of debug logging output on stderr. + Print lots of debug logging output on stderr. @@ -147,8 +147,8 @@ restore_command = 'pg_standby archiveDir %f %p %r' restartwalfile is specified, since that specification method is more accurate in determining the correct archive cut-off point. - Use of this parameter is deprecated as of - PostgreSQL 8.3; it is safer and more efficient to + Use of this parameter is deprecated as of + PostgreSQL 8.3; it is safer and more efficient to specify a restartwalfile parameter. A too small setting could result in removal of files that are still needed for a restart of the standby server, while a too large setting wastes @@ -158,12 +158,12 @@ restore_command = 'pg_standby archiveDir %f %p %r' - maxretries + maxretries Set the maximum number of times to retry the copy command if it fails (default 3). After each failure, we wait for - sleeptime * num_retries + sleeptime * num_retries so that the wait time increases progressively. So by default, we will wait 5 secs, 10 secs, then 15 secs before reporting the failure back to the standby server. This will be @@ -174,7 +174,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' - sleeptime + sleeptime Set the number of seconds (up to 60, default 5) to sleep between @@ -186,21 +186,21 @@ restore_command = 'pg_standby archiveDir %f %p %r' - triggerfile + triggerfile Specify a trigger file whose presence should cause failover. It is recommended that you use a structured file name to avoid confusion as to which server is being triggered when multiple servers exist on the same system; for example - /tmp/pgsql.trigger.5432. + /tmp/pgsql.trigger.5432. - - + + Print the pg_standby version and exit. @@ -209,7 +209,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' - maxwaittime + maxwaittime Set the maximum number of seconds to wait for the next WAL file, @@ -222,8 +222,8 @@ restore_command = 'pg_standby archiveDir %f %p %r' - - + + Show help about pg_standby command line @@ -241,18 +241,18 @@ restore_command = 'pg_standby archiveDir %f %p %r' pg_standby is designed to work with - PostgreSQL 8.2 and later. + PostgreSQL 8.2 and later. - PostgreSQL 8.3 provides the %r macro, + PostgreSQL 8.3 provides the %r macro, which is designed to let pg_standby know the - last file it needs to keep. With PostgreSQL 8.2, the + last file it needs to keep. With PostgreSQL 8.2, the -k option must be used if archive cleanup is required. This option remains available in 8.3, but its use is deprecated. - PostgreSQL 8.4 provides the - recovery_end_command option. Without this option + PostgreSQL 8.4 provides the + recovery_end_command option. 
Without this option a leftover trigger file can be hazardous. @@ -276,13 +276,13 @@ restore_command = 'pg_standby -d -s 2 -t /tmp/pgsql.trigger.5442 .../archive %f recovery_end_command = 'rm -f /tmp/pgsql.trigger.5442' where the archive directory is physically located on the standby server, - so that the archive_command is accessing it across NFS, - but the files are local to the standby (enabling use of ln). + so that the archive_command is accessing it across NFS, + but the files are local to the standby (enabling use of ln). This will: - produce debugging output in standby.log + produce debugging output in standby.log @@ -293,7 +293,7 @@ recovery_end_command = 'rm -f /tmp/pgsql.trigger.5442' stop waiting only when a trigger file called - /tmp/pgsql.trigger.5442 appears, + /tmp/pgsql.trigger.5442 appears, and perform failover according to its content @@ -320,18 +320,18 @@ restore_command = 'pg_standby -d -s 5 -t C:\pgsql.trigger.5442 ...\archive %f %p recovery_end_command = 'del C:\pgsql.trigger.5442' Note that backslashes need to be doubled in the - archive_command, but not in the - restore_command or recovery_end_command. + archive_command, but not in the + restore_command or recovery_end_command. This will: - use the copy command to restore WAL files from archive + use the copy command to restore WAL files from archive - produce debugging output in standby.log + produce debugging output in standby.log @@ -342,7 +342,7 @@ recovery_end_command = 'del C:\pgsql.trigger.5442' stop waiting only when a trigger file called - C:\pgsql.trigger.5442 appears, + C:\pgsql.trigger.5442 appears, and perform failover according to its content @@ -360,16 +360,16 @@ recovery_end_command = 'del C:\pgsql.trigger.5442' - The copy command on Windows sets the final file size + The copy command on Windows sets the final file size before the file is completely copied, which would ordinarily confuse pg_standby. Therefore - pg_standby waits sleeptime - seconds once it sees the proper file size. GNUWin32's cp + pg_standby waits sleeptime + seconds once it sees the proper file size. GNUWin32's cp sets the file size only after the file copy is complete. - Since the Windows example uses copy at both ends, either + Since the Windows example uses copy at both ends, either or both servers might be accessing the archive directory across the network. diff --git a/doc/src/sgml/pgstatstatements.sgml b/doc/src/sgml/pgstatstatements.sgml index f9dd43e891..4b15a268cd 100644 --- a/doc/src/sgml/pgstatstatements.sgml +++ b/doc/src/sgml/pgstatstatements.sgml @@ -13,20 +13,20 @@ - The module must be loaded by adding pg_stat_statements to + The module must be loaded by adding pg_stat_statements to in - postgresql.conf, because it requires additional shared memory. + postgresql.conf, because it requires additional shared memory. This means that a server restart is needed to add or remove the module. When pg_stat_statements is loaded, it tracks statistics across all databases of the server. To access and manipulate - these statistics, the module provides a view, pg_stat_statements, - and the utility functions pg_stat_statements_reset and - pg_stat_statements. These are not available globally but + these statistics, the module provides a view, pg_stat_statements, + and the utility functions pg_stat_statements_reset and + pg_stat_statements. These are not available globally but can be enabled for a specific database with - CREATE EXTENSION pg_stat_statements. + CREATE EXTENSION pg_stat_statements. 
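A minimal setup-and-query sketch (the exact set of column names varies somewhat across server versions):

-- in postgresql.conf, then restart the server:
-- shared_preload_libraries = 'pg_stat_statements'

CREATE EXTENSION pg_stat_statements;

SELECT query, calls, total_time, rows
  FROM pg_stat_statements
 ORDER BY total_time DESC
 LIMIT 5;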
@@ -34,7 +34,7 @@ The statistics gathered by the module are made available via a - view named pg_stat_statements. This view + view named pg_stat_statements. This view contains one row for each distinct database ID, user ID and query ID (up to the maximum number of distinct statements that the module can track). The columns of the view are shown in @@ -42,7 +42,7 @@ - <structname>pg_stat_statements</> Columns + <structname>pg_stat_statements</structname> Columns @@ -234,9 +234,9 @@ - Plannable queries (that is, SELECT, INSERT, - UPDATE, and DELETE) are combined into a single - pg_stat_statements entry whenever they have identical query + Plannable queries (that is, SELECT, INSERT, + UPDATE, and DELETE) are combined into a single + pg_stat_statements entry whenever they have identical query structures according to an internal hash calculation. Typically, two queries will be considered the same for this purpose if they are semantically equivalent except for the values of literal constants @@ -247,16 +247,16 @@ When a constant's value has been ignored for purposes of matching the query to other queries, the constant is replaced by a parameter symbol, such - as $1, in the pg_stat_statements + as $1, in the pg_stat_statements display. The rest of the query text is that of the first query that had the - particular queryid hash value associated with the - pg_stat_statements entry. + particular queryid hash value associated with the + pg_stat_statements entry. In some cases, queries with visibly different texts might get merged into a - single pg_stat_statements entry. Normally this will happen + single pg_stat_statements entry. Normally this will happen only for semantically equivalent queries, but there is a small chance of hash collisions causing unrelated queries to be merged into one entry. (This cannot happen for queries belonging to different users or databases, @@ -264,41 +264,41 @@ - Since the queryid hash value is computed on the + Since the queryid hash value is computed on the post-parse-analysis representation of the queries, the opposite is also possible: queries with identical texts might appear as separate entries, if they have different meanings as a result of - factors such as different search_path settings. + factors such as different search_path settings. - Consumers of pg_stat_statements may wish to use - queryid (perhaps in combination with - dbid and userid) as a more stable + Consumers of pg_stat_statements may wish to use + queryid (perhaps in combination with + dbid and userid) as a more stable and reliable identifier for each entry than its query text. However, it is important to understand that there are only limited - guarantees around the stability of the queryid hash + guarantees around the stability of the queryid hash value. Since the identifier is derived from the post-parse-analysis tree, its value is a function of, among other things, the internal object identifiers appearing in this representation. This has some counterintuitive implications. For example, - pg_stat_statements will consider two apparently-identical + pg_stat_statements will consider two apparently-identical queries to be distinct, if they reference a table that was dropped and recreated between the executions of the two queries. The hashing process is also sensitive to differences in machine architecture and other facets of the platform. - Furthermore, it is not safe to assume that queryid - will be stable across major versions of PostgreSQL. 
+ Furthermore, it is not safe to assume that queryid + will be stable across major versions of PostgreSQL. - As a rule of thumb, queryid values can be assumed to be + As a rule of thumb, queryid values can be assumed to be stable and comparable only so long as the underlying server version and catalog metadata details stay exactly the same. Two servers participating in replication based on physical WAL replay can be expected - to have identical queryid values for the same query. + to have identical queryid values for the same query. However, logical replication schemes do not promise to keep replicas - identical in all relevant details, so queryid will + identical in all relevant details, so queryid will not be a useful identifier for accumulating costs across a set of logical replicas. If in doubt, direct testing is recommended. @@ -306,13 +306,13 @@ The parameter symbols used to replace constants in representative query texts start from the next number after the - highest $n parameter in the original query - text, or $1 if there was none. It's worth noting that in + highest $n parameter in the original query + text, or $1 if there was none. It's worth noting that in some cases there may be hidden parameter symbols that affect this - numbering. For example, PL/pgSQL uses hidden parameter + numbering. For example, PL/pgSQL uses hidden parameter symbols to insert values of function local variables into queries, so that - a PL/pgSQL statement like SELECT i + 1 INTO j - would have representative text like SELECT i + $2. + a PL/pgSQL statement like SELECT i + 1 INTO j + would have representative text like SELECT i + $2. @@ -320,11 +320,11 @@ not consume shared memory. Therefore, even very lengthy query texts can be stored successfully. However, if many long query texts are accumulated, the external file might grow unmanageably large. As a - recovery method if that happens, pg_stat_statements may + recovery method if that happens, pg_stat_statements may choose to discard the query texts, whereupon all existing entries in - the pg_stat_statements view will show - null query fields, though the statistics associated with - each queryid are preserved. If this happens, consider + the pg_stat_statements view will show + null query fields, though the statistics associated with + each queryid are preserved. If this happens, consider reducing pg_stat_statements.max to prevent recurrences. @@ -345,7 +345,7 @@ pg_stat_statements_reset discards all statistics - gathered so far by pg_stat_statements. + gathered so far by pg_stat_statements. By default, this function can only be executed by superusers. @@ -363,17 +363,17 @@ The pg_stat_statements view is defined in - terms of a function also named pg_stat_statements. + terms of a function also named pg_stat_statements. It is possible for clients to call the pg_stat_statements function directly, and by specifying showtext := false have query text be omitted (that is, the OUT argument that corresponds - to the view's query column will return nulls). This + to the view's query column will return nulls). This feature is intended to support external tools that might wish to avoid the overhead of repeatedly retrieving query texts of indeterminate length. Such tools can instead cache the first query text observed for each entry themselves, since that is - all pg_stat_statements itself does, and then retrieve + all pg_stat_statements itself does, and then retrieve query texts only as needed. 
Since the server stores query texts in a file, this approach may reduce physical I/O for repeated examination of the pg_stat_statements data. @@ -396,7 +396,7 @@ pg_stat_statements.max is the maximum number of statements tracked by the module (i.e., the maximum number of rows - in the pg_stat_statements view). If more distinct + in the pg_stat_statements view). If more distinct statements than that are observed, information about the least-executed statements is discarded. The default value is 5000. @@ -414,11 +414,11 @@ pg_stat_statements.track controls which statements are counted by the module. - Specify top to track top-level statements (those issued - directly by clients), all to also track nested statements - (such as statements invoked within functions), or none to + Specify top to track top-level statements (those issued + directly by clients), all to also track nested statements + (such as statements invoked within functions), or none to disable statement statistics collection. - The default value is top. + The default value is top. Only superusers can change this setting. @@ -433,9 +433,9 @@ pg_stat_statements.track_utility controls whether utility commands are tracked by the module. Utility commands are - all those other than SELECT, INSERT, - UPDATE and DELETE. - The default value is on. + all those other than SELECT, INSERT, + UPDATE and DELETE. + The default value is on. Only superusers can change this setting. @@ -450,10 +450,10 @@ pg_stat_statements.save specifies whether to save statement statistics across server shutdowns. - If it is off then statistics are not saved at + If it is off then statistics are not saved at shutdown nor reloaded at server start. - The default value is on. - This parameter can only be set in the postgresql.conf + The default value is on. + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -464,11 +464,11 @@ The module requires additional shared memory proportional to pg_stat_statements.max. Note that this memory is consumed whenever the module is loaded, even if - pg_stat_statements.track is set to none. + pg_stat_statements.track is set to none. - These parameters must be set in postgresql.conf. + These parameters must be set in postgresql.conf. Typical usage might be: diff --git a/doc/src/sgml/pgstattuple.sgml b/doc/src/sgml/pgstattuple.sgml index a7c67ae645..611df9d0bf 100644 --- a/doc/src/sgml/pgstattuple.sgml +++ b/doc/src/sgml/pgstattuple.sgml @@ -30,13 +30,13 @@ pgstattuple - pgstattuple(regclass) returns record + pgstattuple(regclass) returns record pgstattuple returns a relation's physical length, - percentage of dead tuples, and other info. This may help users + percentage of dead tuples, and other info. This may help users to determine whether vacuum is necessary or not. The argument is the target relation's name (optionally schema-qualified) or OID. For example: @@ -135,15 +135,15 @@ free_percent | 1.95 - pgstattuple judges a tuple is dead if - HeapTupleSatisfiesDirty returns false. + pgstattuple judges a tuple is dead if + HeapTupleSatisfiesDirty returns false. 
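For example, to judge whether a table would benefit from a more aggressive vacuum, one might look at its dead-tuple and free-space fractions (the table name is hypothetical; note that this scans the whole relation):

SELECT dead_tuple_percent, free_percent
  FROM pgstattuple('mytable');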
- pgstattuple(text) returns record + pgstattuple(text) returns record @@ -161,7 +161,7 @@ free_percent | 1.95 pgstatindex - pgstatindex(regclass) returns record + pgstatindex(regclass) returns record @@ -225,7 +225,7 @@ leaf_fragmentation | 0 internal_pages bigint - Number of internal (upper-level) pages + Number of internal (upper-level) pages @@ -264,14 +264,14 @@ leaf_fragmentation | 0 - The reported index_size will normally correspond to one more + The reported index_size will normally correspond to one more page than is accounted for by internal_pages + leaf_pages + empty_pages + deleted_pages, because it also includes the index's metapage. - As with pgstattuple, the results are accumulated + As with pgstattuple, the results are accumulated page-by-page, and should not be expected to represent an instantaneous snapshot of the whole index. @@ -280,7 +280,7 @@ leaf_fragmentation | 0 - pgstatindex(text) returns record + pgstatindex(text) returns record @@ -298,7 +298,7 @@ leaf_fragmentation | 0 pgstatginindex - pgstatginindex(regclass) returns record + pgstatginindex(regclass) returns record @@ -358,7 +358,7 @@ pending_tuples | 0 pgstathashindex - pgstathashindex(regclass) returns record + pgstathashindex(regclass) returns record @@ -453,7 +453,7 @@ free_percent | 61.8005949100872 pg_relpages - pg_relpages(regclass) returns bigint + pg_relpages(regclass) returns bigint @@ -466,7 +466,7 @@ free_percent | 61.8005949100872 - pg_relpages(text) returns bigint + pg_relpages(text) returns bigint @@ -484,7 +484,7 @@ free_percent | 61.8005949100872 pgstattuple_approx - pgstattuple_approx(regclass) returns record + pgstattuple_approx(regclass) returns record diff --git a/doc/src/sgml/pgtrgm.sgml b/doc/src/sgml/pgtrgm.sgml index 775a7b8be7..7903dc3d82 100644 --- a/doc/src/sgml/pgtrgm.sgml +++ b/doc/src/sgml/pgtrgm.sgml @@ -111,7 +111,7 @@ show_limit()show_limit real - Returns the current similarity threshold used by the % + Returns the current similarity threshold used by the % operator. This sets the minimum similarity between two words for them to be considered similar enough to be misspellings of each other, for example @@ -122,7 +122,7 @@ set_limit(real)set_limit real - Sets the current similarity threshold that is used by the % + Sets the current similarity threshold that is used by the % operator. The threshold must be between 0 and 1 (default is 0.3). Returns the same value passed in (deprecated). @@ -144,56 +144,56 @@ - text % text + text % text boolean - Returns true if its arguments have a similarity that is + Returns true if its arguments have a similarity that is greater than the current similarity threshold set by - pg_trgm.similarity_threshold. + pg_trgm.similarity_threshold. - text <% text + text <% text boolean - Returns true if its first argument has the similar word in + Returns true if its first argument has the similar word in the second argument and they have a similarity that is greater than the current word similarity threshold set by - pg_trgm.word_similarity_threshold parameter. + pg_trgm.word_similarity_threshold parameter. - text %> text + text %> text boolean - Commutator of the <% operator. + Commutator of the <% operator. - text <-> text + text <-> text real - Returns the distance between the arguments, that is - one minus the similarity() value. + Returns the distance between the arguments, that is + one minus the similarity() value. - text <<-> text + text <<-> text real - Returns the distance between the arguments, that is - one minus the word_similarity() value. 
+ Returns the distance between the arguments, that is + one minus the word_similarity() value. - text <->> text + text <->> text real - Commutator of the <<-> operator. + Commutator of the <<-> operator. @@ -207,31 +207,31 @@ - pg_trgm.similarity_threshold (real) + pg_trgm.similarity_threshold (real) - pg_trgm.similarity_threshold configuration parameter + pg_trgm.similarity_threshold configuration parameter - Sets the current similarity threshold that is used by the % + Sets the current similarity threshold that is used by the % operator. The threshold must be between 0 and 1 (default is 0.3). - pg_trgm.word_similarity_threshold (real) + pg_trgm.word_similarity_threshold (real) - pg_trgm.word_similarity_threshold configuration parameter + pg_trgm.word_similarity_threshold configuration parameter Sets the current word similarity threshold that is used by - <% and %> operators. The threshold + <% and %> operators. The threshold must be between 0 and 1 (default is 0.6). @@ -247,8 +247,8 @@ operator classes that allow you to create an index over a text column for the purpose of very fast similarity searches. These index types support the above-described similarity operators, and additionally support - trigram-based index searches for LIKE, ILIKE, - ~ and ~* queries. (These indexes do not + trigram-based index searches for LIKE, ILIKE, + ~ and ~* queries. (These indexes do not support equality nor simple comparison operators, so you may need a regular B-tree index too.) @@ -267,16 +267,16 @@ CREATE INDEX trgm_idx ON test_trgm USING GIN (t gin_trgm_ops); - At this point, you will have an index on the t column that + At this point, you will have an index on the t column that you can use for similarity searching. A typical query is -SELECT t, similarity(t, 'word') AS sml +SELECT t, similarity(t, 'word') AS sml FROM test_trgm - WHERE t % 'word' + WHERE t % 'word' ORDER BY sml DESC, t; This will return all values in the text column that are sufficiently - similar to word, sorted from best match to worst. The + similar to word, sorted from best match to worst. The index will be used to make this a fast operation even over very large data sets. @@ -284,7 +284,7 @@ SELECT t, similarity(t, 'word') AS sml A variant of the above query is -SELECT t, t <-> 'word' AS dist +SELECT t, t <-> 'word' AS dist FROM test_trgm ORDER BY dist LIMIT 10; @@ -294,16 +294,16 @@ SELECT t, t <-> 'word' AS dist - Also you can use an index on the t column for word + Also you can use an index on the t column for word similarity. For example: -SELECT t, word_similarity('word', t) AS sml +SELECT t, word_similarity('word', t) AS sml FROM test_trgm - WHERE 'word' <% t + WHERE 'word' <% t ORDER BY sml DESC, t; This will return all values in the text column that have a word - which sufficiently similar to word, sorted from best + which sufficiently similar to word, sorted from best match to worst. The index will be used to make this a fast operation even over very large data sets. 
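The thresholds that drive these operators can be inspected and adjusted per session; a brief sketch:

SHOW pg_trgm.similarity_threshold;            -- default 0.3, used by %
SET pg_trgm.word_similarity_threshold = 0.4;  -- make <% and %> match more loosely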
@@ -311,7 +311,7 @@ SELECT t, word_similarity('word', t) AS sml A variant of the above query is -SELECT t, 'word' <<-> t AS dist +SELECT t, 'word' <<-> t AS dist FROM test_trgm ORDER BY dist LIMIT 10; @@ -321,8 +321,8 @@ SELECT t, 'word' <<-> t AS dist - Beginning in PostgreSQL 9.1, these index types also support - index searches for LIKE and ILIKE, for example + Beginning in PostgreSQL 9.1, these index types also support + index searches for LIKE and ILIKE, for example SELECT * FROM test_trgm WHERE t LIKE '%foo%bar'; @@ -333,9 +333,9 @@ SELECT * FROM test_trgm WHERE t LIKE '%foo%bar'; - Beginning in PostgreSQL 9.3, these index types also support + Beginning in PostgreSQL 9.3, these index types also support index searches for regular-expression matches - (~ and ~* operators), for example + (~ and ~* operators), for example SELECT * FROM test_trgm WHERE t ~ '(foo|bar)'; @@ -347,7 +347,7 @@ SELECT * FROM test_trgm WHERE t ~ '(foo|bar)'; - For both LIKE and regular-expression searches, keep in mind + For both LIKE and regular-expression searches, keep in mind that a pattern with no extractable trigrams will degenerate to a full-index scan. @@ -377,9 +377,9 @@ CREATE TABLE words AS SELECT word FROM ts_stat('SELECT to_tsvector(''simple'', bodytext) FROM documents'); - where documents is a table that has a text field - bodytext that we wish to search. The reason for using - the simple configuration with the to_tsvector + where documents is a table that has a text field + bodytext that we wish to search. The reason for using + the simple configuration with the to_tsvector function, instead of using a language-specific configuration, is that we want a list of the original (unstemmed) words. @@ -399,7 +399,7 @@ CREATE INDEX words_idx ON words USING GIN (word gin_trgm_ops); - Since the words table has been generated as a separate, + Since the words table has been generated as a separate, static table, it will need to be periodically regenerated so that it remains reasonably up-to-date with the document collection. Keeping it exactly current is usually unnecessary. diff --git a/doc/src/sgml/pgvisibility.sgml b/doc/src/sgml/pgvisibility.sgml index d466a3bce8..75336946a6 100644 --- a/doc/src/sgml/pgvisibility.sgml +++ b/doc/src/sgml/pgvisibility.sgml @@ -8,7 +8,7 @@ - The pg_visibility module provides a means for examining the + The pg_visibility module provides a means for examining the visibility map (VM) and page-level visibility information of a table. It also provides functions to check the integrity of a visibility map and to force it to be rebuilt. @@ -28,13 +28,13 @@ These two bits will normally agree, but the page's all-visible bit can sometimes be set while the visibility map bit is clear after a crash recovery. The reported values can also disagree because of a change that - occurs after pg_visibility examines the visibility map and + occurs after pg_visibility examines the visibility map and before it examines the data page. Any event that causes data corruption can also cause these bits to disagree. - Functions that display information about PD_ALL_VISIBLE bits + Functions that display information about PD_ALL_VISIBLE bits are much more costly than those that only consult the visibility map, because they must read the relation's data blocks rather than only the (much smaller) visibility map. 
Functions that check the relation's @@ -61,7 +61,7 @@ Returns the all-visible and all-frozen bits in the visibility map for the given block of the given relation, plus the - PD_ALL_VISIBLE bit of that block. + PD_ALL_VISIBLE bit of that block. @@ -82,7 +82,7 @@ Returns the all-visible and all-frozen bits in the visibility map for - each block of the given relation, plus the PD_ALL_VISIBLE + each block of the given relation, plus the PD_ALL_VISIBLE bit of each block. @@ -130,7 +130,7 @@ Truncates the visibility map for the given relation. This function is useful if you believe that the visibility map for the relation is - corrupt and wish to force rebuilding it. The first VACUUM + corrupt and wish to force rebuilding it. The first VACUUM executed on the given relation after this function is executed will scan every page in the relation and rebuild the visibility map. (Until that is done, queries will treat the visibility map as containing all zeroes.) diff --git a/doc/src/sgml/planstats.sgml b/doc/src/sgml/planstats.sgml index 838fcda6d2..ee081308a9 100644 --- a/doc/src/sgml/planstats.sgml +++ b/doc/src/sgml/planstats.sgml @@ -28,13 +28,13 @@ - The examples shown below use tables in the PostgreSQL + The examples shown below use tables in the PostgreSQL regression test database. The outputs shown are taken from version 8.3. The behavior of earlier (or later) versions might vary. - Note also that since ANALYZE uses random sampling + Note also that since ANALYZE uses random sampling while producing statistics, the results will change slightly after - any new ANALYZE. + any new ANALYZE. @@ -61,8 +61,8 @@ SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1'; 358 | 10000 - These numbers are current as of the last VACUUM or - ANALYZE on the table. The planner then fetches the + These numbers are current as of the last VACUUM or + ANALYZE on the table. The planner then fetches the actual current number of pages in the table (this is a cheap operation, not requiring a table scan). If that is different from relpages then @@ -150,7 +150,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE stringu1 = 'CRAAAA'; and looks up the selectivity function for =, which is eqsel. For equality estimation the histogram is not useful; instead the list of most - common values (MCVs) is used to determine the + common values (MCVs) is used to determine the selectivity. Let's have a look at the MCVs, with some additional columns that will be useful later: @@ -165,7 +165,7 @@ most_common_freqs | {0.00333333,0.003,0.003,0.003,0.003,0.003,0.003,0.003,0.003, - Since CRAAAA appears in the list of MCVs, the selectivity is + Since CRAAAA appears in the list of MCVs, the selectivity is merely the corresponding entry in the list of most common frequencies (MCFs): @@ -225,18 +225,18 @@ rows = 10000 * 0.0014559 - The previous example with unique1 < 1000 was an + The previous example with unique1 < 1000 was an oversimplification of what scalarltsel really does; now that we have seen an example of the use of MCVs, we can fill in some more detail. The example was correct as far as it went, because since - unique1 is a unique column it has no MCVs (obviously, no + unique1 is a unique column it has no MCVs (obviously, no value is any more common than any other value). For a non-unique column, there will normally be both a histogram and an MCV list, and the histogram does not include the portion of the column - population represented by the MCVs. We do things this way because + population represented by the MCVs. 
We do things this way because it allows more precise estimation. In this situation scalarltsel directly applies the condition (e.g., - < 1000) to each value of the MCV list, and adds up the + < 1000) to each value of the MCV list, and adds up the frequencies of the MCVs for which the condition is true. This gives an exact estimate of the selectivity within the portion of the table that is MCVs. The histogram is then used in the same way as above @@ -253,7 +253,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE stringu1 < 'IAAAAA'; Filter: (stringu1 < 'IAAAAA'::name) - We already saw the MCV information for stringu1, + We already saw the MCV information for stringu1, and here is its histogram: @@ -266,7 +266,7 @@ WHERE tablename='tenk1' AND attname='stringu1'; Checking the MCV list, we find that the condition stringu1 < - 'IAAAAA' is satisfied by the first six entries and not the last four, + 'IAAAAA' is satisfied by the first six entries and not the last four, so the selectivity within the MCV part of the population is @@ -279,11 +279,11 @@ selectivity = sum(relevant mvfs) population represented by MCVs is 0.03033333, and therefore the fraction represented by the histogram is 0.96966667 (again, there are no nulls, else we'd have to exclude them here). We can see - that the value IAAAAA falls nearly at the end of the + that the value IAAAAA falls nearly at the end of the third histogram bucket. Using some rather cheesy assumptions about the frequency of different characters, the planner arrives at the estimate 0.298387 for the portion of the histogram population - that is less than IAAAAA. We then combine the estimates + that is less than IAAAAA. We then combine the estimates for the MCV and non-MCV populations: @@ -372,7 +372,7 @@ rows = 10000 * 0.005035 = 50 (rounding off) - The restriction for the join is t2.unique2 = t1.unique2. + The restriction for the join is t2.unique2 = t1.unique2. The operator is just our familiar =, however the selectivity function is obtained from the oprjoin column of @@ -424,12 +424,12 @@ rows = (outer_cardinality * inner_cardinality) * selectivity - Notice that we showed inner_cardinality as 10000, that is, - the unmodified size of tenk2. It might appear from - inspection of the EXPLAIN output that the estimate of + Notice that we showed inner_cardinality as 10000, that is, + the unmodified size of tenk2. It might appear from + inspection of the EXPLAIN output that the estimate of join rows comes from 50 * 1, that is, the number of outer rows times the estimated number of rows obtained by each inner index scan on - tenk2. But this is not the case: the join relation size + tenk2. But this is not the case: the join relation size is estimated before any particular join plan has been considered. If everything is working well then the two ways of estimating the join size will produce about the same answer, but due to round-off error and @@ -438,7 +438,7 @@ rows = (outer_cardinality * inner_cardinality) * selectivity For those interested in further details, estimation of the size of - a table (before any WHERE clauses) is done in + a table (before any WHERE clauses) is done in src/backend/optimizer/util/plancat.c. The generic logic for clause selectivities is in src/backend/optimizer/path/clausesel.c. 
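 All of the per-column statistics drawn on above can be inspected directly through the pg_stats view; a sketch (the column list is abbreviated, and any table name can be substituted for tenk1):

SELECT attname, null_frac, n_distinct,
       most_common_vals, most_common_freqs, histogram_bounds
  FROM pg_stats
 WHERE tablename = 'tenk1';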
The @@ -485,8 +485,8 @@ SELECT relpages, reltuples FROM pg_class WHERE relname = 't'; - The following example shows the result of estimating a WHERE - condition on the a column: + The following example shows the result of estimating a WHERE + condition on the a column: EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t WHERE a = 1; @@ -501,9 +501,9 @@ EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t WHERE a = 1; of this clause to be 1%. By comparing this estimate and the actual number of rows, we see that the estimate is very accurate (in fact exact, as the table is very small). Changing the - WHERE condition to use the b column, an + WHERE condition to use the b column, an identical plan is generated. But observe what happens if we apply the same - condition on both columns, combining them with AND: + condition on both columns, combining them with AND: EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t WHERE a = 1 AND b = 1; @@ -524,7 +524,7 @@ EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t WHERE a = 1 AND b = 1; This problem can be fixed by creating a statistics object that - directs ANALYZE to calculate functional-dependency + directs ANALYZE to calculate functional-dependency multivariate statistics on the two columns: diff --git a/doc/src/sgml/plhandler.sgml b/doc/src/sgml/plhandler.sgml index 2573e67743..95e7dc9fc0 100644 --- a/doc/src/sgml/plhandler.sgml +++ b/doc/src/sgml/plhandler.sgml @@ -35,7 +35,7 @@ The call handler is called in the same way as any other function: It receives a pointer to a - FunctionCallInfoData struct containing + FunctionCallInfoData struct containing argument values and information about the called function, and it is expected to return a Datum result (and possibly set the isnull field of the @@ -54,7 +54,7 @@ It's up to the call handler to fetch the entry of the function from the pg_proc system catalog and to analyze the argument - and return types of the called function. The AS clause from the + and return types of the called function. The AS clause from the CREATE FUNCTION command for the function will be found in the prosrc column of the pg_proc row. This is commonly source @@ -68,9 +68,9 @@ A call handler can avoid repeated lookups of information about the called function by using the flinfo->fn_extra field. This will - initially be NULL, but can be set by the call handler to point at + initially be NULL, but can be set by the call handler to point at information about the called function. On subsequent calls, if - flinfo->fn_extra is already non-NULL + flinfo->fn_extra is already non-NULL then it can be used and the information lookup step skipped. The call handler must make sure that flinfo->fn_extra is made to point at @@ -90,7 +90,7 @@ are passed in the usual way, but the FunctionCallInfoData's context field points at a - TriggerData structure, rather than being NULL + TriggerData structure, rather than being NULL as it is in a plain function call. A language handler should provide mechanisms for procedural-language functions to get at the trigger information. @@ -170,21 +170,21 @@ CREATE LANGUAGE plsample If a validator is provided by a procedural language, it must be declared as a function taking a single parameter of type - oid. The validator's result is ignored, so it is customarily - declared to return void. The validator will be called at - the end of a CREATE FUNCTION command that has created + oid. The validator's result is ignored, so it is customarily + declared to return void. 
The validator will be called at + the end of a CREATE FUNCTION command that has created or updated a function written in the procedural language. - The passed-in OID is the OID of the function's pg_proc + The passed-in OID is the OID of the function's pg_proc row. The validator must fetch this row in the usual way, and do whatever checking is appropriate. - First, call CheckFunctionValidatorAccess() to diagnose + First, call CheckFunctionValidatorAccess() to diagnose explicit calls to the validator that the user could not achieve through - CREATE FUNCTION. Typical checks then include verifying + CREATE FUNCTION. Typical checks then include verifying that the function's argument and result types are supported by the language, and that the function's body is syntactically correct in the language. If the validator finds the function to be okay, it should just return. If it finds an error, it should report that - via the normal ereport() error reporting mechanism. + via the normal ereport() error reporting mechanism. Throwing an error will force a transaction rollback and thus prevent the incorrect function definition from being committed. @@ -195,40 +195,40 @@ CREATE LANGUAGE plsample any expensive or context-sensitive checking should be skipped. If the language provides for code execution at compilation time, the validator must suppress checks that would induce such execution. In particular, - this parameter is turned off by pg_dump so that it can + this parameter is turned off by pg_dump so that it can load procedural language functions without worrying about side effects or dependencies of the function bodies on other database objects. (Because of this requirement, the call handler should avoid assuming that the validator has fully checked the function. The point of having a validator is not to let the call handler omit checks, but to notify the user immediately if there are obvious errors in a - CREATE FUNCTION command.) + CREATE FUNCTION command.) While the choice of exactly what to check is mostly left to the discretion of the validator function, note that the core - CREATE FUNCTION code only executes SET clauses - attached to a function when check_function_bodies is on. + CREATE FUNCTION code only executes SET clauses + attached to a function when check_function_bodies is on. Therefore, checks whose results might be affected by GUC parameters - definitely should be skipped when check_function_bodies is + definitely should be skipped when check_function_bodies is off, to avoid false failures when reloading a dump. If an inline handler is provided by a procedural language, it must be declared as a function taking a single parameter of type - internal. The inline handler's result is ignored, so it is - customarily declared to return void. The inline handler - will be called when a DO statement is executed specifying + internal. The inline handler's result is ignored, so it is + customarily declared to return void. The inline handler + will be called when a DO statement is executed specifying the procedural language. The parameter actually passed is a pointer - to an InlineCodeBlock struct, which contains information - about the DO statement's parameters, in particular the + to an InlineCodeBlock struct, which contains information + about the DO statement's parameters, in particular the text of the anonymous code block to be executed. The inline handler should execute this code and return. 
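 Putting the pieces together, the support functions and the language itself might be registered along these lines (a sketch; the plsample names and the shared library path are illustrative):

CREATE FUNCTION plsample_call_handler() RETURNS language_handler
    AS '$libdir/plsample' LANGUAGE C;
CREATE FUNCTION plsample_inline_handler(internal) RETURNS void
    AS '$libdir/plsample' LANGUAGE C;
CREATE FUNCTION plsample_validator(oid) RETURNS void
    AS '$libdir/plsample' LANGUAGE C;

CREATE LANGUAGE plsample
    HANDLER plsample_call_handler
    INLINE plsample_inline_handler
    VALIDATOR plsample_validator;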
It's recommended that you wrap all these function declarations, - as well as the CREATE LANGUAGE command itself, into - an extension so that a simple CREATE EXTENSION + as well as the CREATE LANGUAGE command itself, into + an extension so that a simple CREATE EXTENSION command is sufficient to install the language. See for information about writing extensions. @@ -237,7 +237,7 @@ CREATE LANGUAGE plsample The procedural languages included in the standard distribution are good references when trying to write your own language handler. - Look into the src/pl subdirectory of the source tree. + Look into the src/pl subdirectory of the source tree. The reference page also has some useful details. diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml index 37a3557d61..dfffa4077f 100644 --- a/doc/src/sgml/plperl.sgml +++ b/doc/src/sgml/plperl.sgml @@ -27,12 +27,12 @@ To install PL/Perl in a particular database, use - CREATE EXTENSION plperl. + CREATE EXTENSION plperl. - If a language is installed into template1, all subsequently + If a language is installed into template1, all subsequently created databases will have the language installed automatically. @@ -90,8 +90,8 @@ $$ LANGUAGE plperl; subroutines which you call via a coderef. For more information, see the entries for Variable "%s" will not stay shared and Variable "%s" is not available in the - perldiag man page, or - search the Internet for perl nested named subroutine. + perldiag man page, or + search the Internet for perl nested named subroutine. @@ -100,16 +100,16 @@ $$ LANGUAGE plperl; the function body to be written as a string constant. It is usually most convenient to use dollar quoting (see ) for the string constant. - If you choose to use escape string syntax E'', - you must double any single quote marks (') and backslashes - (\) used in the body of the function + If you choose to use escape string syntax E'', + you must double any single quote marks (') and backslashes + (\) used in the body of the function (see ). Arguments and results are handled as in any other Perl subroutine: arguments are passed in @_, and a result value - is returned with return or as the last expression + is returned with return or as the last expression evaluated in the function. @@ -134,12 +134,12 @@ $$ LANGUAGE plperl; - If an SQL null valuenull valuein PL/Perl is passed to a function, - the argument value will appear as undefined in Perl. The + If an SQL null valuenull valuein PL/Perl is passed to a function, + the argument value will appear as undefined in Perl. The above function definition will not behave very nicely with null inputs (in fact, it will act as though they are zeroes). We could - add STRICT to the function definition to make + add STRICT to the function definition to make PostgreSQL do something more reasonable: if a null value is passed, the function will not be called at all, but will just return a null result automatically. Alternatively, @@ -174,14 +174,14 @@ $$ LANGUAGE plperl; other cases the argument will need to be converted into a form that is more usable in Perl. For example, the decode_bytea function can be used to convert an argument of - type bytea into unescaped binary. + type bytea into unescaped binary. Similarly, values passed back to PostgreSQL must be in the external text representation format. For example, the encode_bytea function can be used to - escape binary data for a return value of type bytea. + escape binary data for a return value of type bytea. 
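 As a small sketch of these conversion helpers (the function name is illustrative):

CREATE FUNCTION perl_reverse_bytes(bytea) RETURNS bytea AS $$
    my $bytes = decode_bytea($_[0]);   # unescape the bytea argument into raw binary
    my $reversed = reverse $bytes;     # scalar assignment, so the byte string is reversed
    return encode_bytea($reversed);    # escape the raw binary for the bytea result
$$ LANGUAGE plperl;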
@@ -330,10 +330,10 @@ SELECT * FROM perl_set(); - If you wish to use the strict pragma with your code you - have a few options. For temporary global use you can SET + If you wish to use the strict pragma with your code you + have a few options. For temporary global use you can SET plperl.use_strict to true. - This will affect subsequent compilations of PL/Perl + This will affect subsequent compilations of PL/Perl functions, but not functions already compiled in the current session. For permanent global use you can set plperl.use_strict to true in the postgresql.conf file. @@ -348,7 +348,7 @@ use strict; - The feature pragma is also available to use if your Perl is version 5.10.0 or higher. + The feature pragma is also available to use if your Perl is version 5.10.0 or higher. @@ -380,7 +380,7 @@ use strict; - spi_exec_query(query [, max-rows]) + spi_exec_query(query [, max-rows]) spi_exec_query in PL/Perl @@ -524,13 +524,13 @@ SELECT * from lotsa_md5(500); - Normally, spi_fetchrow should be repeated until it + Normally, spi_fetchrow should be repeated until it returns undef, indicating that there are no more rows to read. The cursor returned by spi_query is automatically freed when - spi_fetchrow returns undef. + spi_fetchrow returns undef. If you do not wish to read all the rows, instead call - spi_cursor_close to free the cursor. + spi_cursor_close to free the cursor. Failure to do so will result in memory leaks. @@ -675,13 +675,13 @@ SELECT release_hosts_query(); Emit a log or error message. Possible levels are - DEBUG, LOG, INFO, - NOTICE, WARNING, and ERROR. - ERROR + DEBUG, LOG, INFO, + NOTICE, WARNING, and ERROR. + ERROR raises an error condition; if this is not trapped by the surrounding Perl code, the error propagates out to the calling query, causing the current transaction or subtransaction to be aborted. This - is effectively the same as the Perl die command. + is effectively the same as the Perl die command. The other levels only generate messages of different priority levels. Whether messages of a particular priority are reported to the client, @@ -706,8 +706,8 @@ SELECT release_hosts_query(); Return the given string suitably quoted to be used as a string literal in an SQL statement string. Embedded single-quotes and backslashes are properly doubled. - Note that quote_literal returns undef on undef input; if the argument - might be undef, quote_nullable is often more suitable. + Note that quote_literal returns undef on undef input; if the argument + might be undef, quote_nullable is often more suitable. @@ -849,7 +849,7 @@ SELECT release_hosts_query(); Returns a true value if the content of the given string looks like a number, according to Perl, returns false otherwise. Returns undef if the argument is undef. Leading and trailing space is - ignored. Inf and Infinity are regarded as numbers. + ignored. Inf and Infinity are regarded as numbers. @@ -865,8 +865,8 @@ SELECT release_hosts_query(); Returns a true value if the given argument may be treated as an - array reference, that is, if ref of the argument is ARRAY or - PostgreSQL::InServer::ARRAY. Returns false otherwise. + array reference, that is, if ref of the argument is ARRAY or + PostgreSQL::InServer::ARRAY. Returns false otherwise. @@ -941,11 +941,11 @@ $$ LANGUAGE plperl; PL/Perl functions will share the same value of %_SHARED if and only if they are executed by the same SQL role. 
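 A brief sketch of sharing data through %_SHARED (the function names are illustrative):

CREATE FUNCTION set_shared(key text, val text) RETURNS void AS $$
    my ($k, $v) = @_;
    $_SHARED{$k} = $v;        # visible to other PL/Perl functions run by the same SQL role
$$ LANGUAGE plperl;

CREATE FUNCTION get_shared(key text) RETURNS text AS $$
    return $_SHARED{$_[0]};
$$ LANGUAGE plperl;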
In an application wherein a single session executes code under multiple SQL roles (via - SECURITY DEFINER functions, use of SET ROLE, etc) + SECURITY DEFINER functions, use of SET ROLE, etc) you may need to take explicit steps to ensure that PL/Perl functions can share data via %_SHARED. To do that, make sure that functions that should communicate are owned by the same user, and mark - them SECURITY DEFINER. You must of course take care that + them SECURITY DEFINER. You must of course take care that such functions can't be used to do anything unintended. @@ -959,8 +959,8 @@ $$ LANGUAGE plperl; - Normally, PL/Perl is installed as a trusted programming - language named plperl. In this setup, certain Perl + Normally, PL/Perl is installed as a trusted programming + language named plperl. In this setup, certain Perl operations are disabled to preserve security. In general, the operations that are restricted are those that interact with the environment. This includes file handle operations, @@ -993,15 +993,15 @@ $$ LANGUAGE plperl; Sometimes it is desirable to write Perl functions that are not restricted. For example, one might want a Perl function that sends mail. To handle these cases, PL/Perl can also be installed as an - untrusted language (usually called - PL/PerlUPL/PerlU). + untrusted language (usually called + PL/PerlUPL/PerlU). In this case the full Perl language is available. When installing the language, the language name plperlu will select the untrusted PL/Perl variant. - The writer of a PL/PerlU function must take care that the function + The writer of a PL/PerlU function must take care that the function cannot be used to do anything unwanted, since it will be able to do anything that could be done by a user logged in as the database administrator. Note that the database system allows only database @@ -1010,25 +1010,25 @@ $$ LANGUAGE plperl; If the above function was created by a superuser using the language - plperlu, execution would succeed. + plperlu, execution would succeed. In the same way, anonymous code blocks written in Perl can use restricted operations if the language is specified as - plperlu rather than plperl, but the caller + plperlu rather than plperl, but the caller must be a superuser. - While PL/Perl functions run in a separate Perl - interpreter for each SQL role, all PL/PerlU functions + While PL/Perl functions run in a separate Perl + interpreter for each SQL role, all PL/PerlU functions executed in a given session run in a single Perl interpreter (which is - not any of the ones used for PL/Perl functions). - This allows PL/PerlU functions to share data freely, - but no communication can occur between PL/Perl and - PL/PerlU functions. + not any of the ones used for PL/Perl functions). + This allows PL/PerlU functions to share data freely, + but no communication can occur between PL/Perl and + PL/PerlU functions. @@ -1036,14 +1036,14 @@ $$ LANGUAGE plperl; Perl cannot support multiple interpreters within one process unless it was built with the appropriate flags, namely either - usemultiplicity or useithreads. - (usemultiplicity is preferred unless you actually need + usemultiplicity or useithreads. + (usemultiplicity is preferred unless you actually need to use threads. For more details, see the - perlembed man page.) - If PL/Perl is used with a copy of Perl that was not built + perlembed man page.) 
+ If PL/Perl is used with a copy of Perl that was not built this way, then it is only possible to have one Perl interpreter per session, and so any one session can only execute either - PL/PerlU functions, or PL/Perl functions + PL/PerlU functions, or PL/Perl functions that are all called by the same SQL role. @@ -1056,7 +1056,7 @@ $$ LANGUAGE plperl; PL/Perl can be used to write trigger functions. In a trigger function, the hash reference $_TD contains information about the - current trigger event. $_TD is a global variable, + current trigger event. $_TD is a global variable, which gets a separate local value for each invocation of the trigger. The fields of the $_TD hash reference are: @@ -1092,8 +1092,8 @@ $$ LANGUAGE plperl; $_TD->{event} - Trigger event: INSERT, UPDATE, - DELETE, TRUNCATE, or UNKNOWN + Trigger event: INSERT, UPDATE, + DELETE, TRUNCATE, or UNKNOWN @@ -1244,7 +1244,7 @@ CREATE TRIGGER test_valid_id_trig PL/Perl can be used to write event trigger functions. In an event trigger function, the hash reference $_TD contains information - about the current trigger event. $_TD is a global variable, + about the current trigger event. $_TD is a global variable, which gets a separate local value for each invocation of the trigger. The fields of the $_TD hash reference are: @@ -1295,7 +1295,7 @@ CREATE EVENT TRIGGER perl_a_snitch Configuration - This section lists configuration parameters that affect PL/Perl. + This section lists configuration parameters that affect PL/Perl. @@ -1304,14 +1304,14 @@ CREATE EVENT TRIGGER perl_a_snitch plperl.on_init (string) - plperl.on_init configuration parameter + plperl.on_init configuration parameter Specifies Perl code to be executed when a Perl interpreter is first - initialized, before it is specialized for use by plperl or - plperlu. + initialized, before it is specialized for use by plperl or + plperlu. The SPI functions are not available when this code is executed. If the code fails with an error it will abort the initialization of the interpreter and propagate out to the calling query, causing the @@ -1319,7 +1319,7 @@ CREATE EVENT TRIGGER perl_a_snitch The Perl code is limited to a single string. Longer code can be placed - into a module and loaded by the on_init string. + into a module and loaded by the on_init string. Examples: plperl.on_init = 'require "plperlinit.pl"' @@ -1327,8 +1327,8 @@ plperl.on_init = 'use lib "/my/app"; use MyApp::PgInit;' - Any modules loaded by plperl.on_init, either directly or - indirectly, will be available for use by plperl. This may + Any modules loaded by plperl.on_init, either directly or + indirectly, will be available for use by plperl. This may create a security risk. To see what modules have been loaded you can use: DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; @@ -1339,14 +1339,14 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; included in , in which case extra consideration should be given to the risk of destabilizing the postmaster. The principal reason for making use of this feature - is that Perl modules loaded by plperl.on_init need be + is that Perl modules loaded by plperl.on_init need be loaded only at postmaster start, and will be instantly available without loading overhead in individual database sessions. However, keep in mind that the overhead is avoided only for the first Perl interpreter used by a database session — either PL/PerlU, or PL/Perl for the first SQL role that calls a PL/Perl function. 
Any additional Perl interpreters created in a database session will have - to execute plperl.on_init afresh. Also, on Windows there + to execute plperl.on_init afresh. Also, on Windows there will be no savings whatsoever from preloading, since the Perl interpreter created in the postmaster process does not propagate to child processes. @@ -1361,27 +1361,27 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; plperl.on_plperl_init (string) - plperl.on_plperl_init configuration parameter + plperl.on_plperl_init configuration parameter plperl.on_plperlu_init (string) - plperl.on_plperlu_init configuration parameter + plperl.on_plperlu_init configuration parameter These parameters specify Perl code to be executed when a Perl - interpreter is specialized for plperl or - plperlu respectively. This will happen when a PL/Perl or + interpreter is specialized for plperl or + plperlu respectively. This will happen when a PL/Perl or PL/PerlU function is first executed in a database session, or when an additional interpreter has to be created because the other language is called or a PL/Perl function is called by a new SQL role. This - follows any initialization done by plperl.on_init. + follows any initialization done by plperl.on_init. The SPI functions are not available when this code is executed. - The Perl code in plperl.on_plperl_init is executed after - locking down the interpreter, and thus it can only perform + The Perl code in plperl.on_plperl_init is executed after + locking down the interpreter, and thus it can only perform trusted operations. @@ -1404,13 +1404,13 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; plperl.use_strict (boolean) - plperl.use_strict configuration parameter + plperl.use_strict configuration parameter When set true subsequent compilations of PL/Perl functions will have - the strict pragma enabled. This parameter does not affect + the strict pragma enabled. This parameter does not affect functions already compiled in the current session. @@ -1459,7 +1459,7 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; When a session ends normally, not due to a fatal error, any - END blocks that have been defined are executed. + END blocks that have been defined are executed. Currently no other actions are performed. Specifically, file handles are not automatically flushed and objects are not automatically destroyed. diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index d18b48c40c..7323c2f67d 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -13,7 +13,7 @@ PL/pgSQL is a loadable procedural language for the PostgreSQL database - system. The design goals of PL/pgSQL were to create + system. The design goals of PL/pgSQL were to create a loadable procedural language that @@ -59,7 +59,7 @@ - In PostgreSQL 9.0 and later, + In PostgreSQL 9.0 and later, PL/pgSQL is installed by default. However it is still a loadable module, so especially security-conscious administrators could choose to remove it. @@ -69,7 +69,7 @@ Advantages of Using <application>PL/pgSQL</application> - SQL is the language PostgreSQL + SQL is the language PostgreSQL and most other relational databases use as query language. It's portable and easy to learn. But every SQL statement must be executed individually by the database server. @@ -123,49 +123,49 @@ and they can return a result of any of these types. They can also accept or return any composite type (row type) specified by name. 
It is also possible to declare a PL/pgSQL - function as returning record, which means that the result + function as returning record, which means that the result is a row type whose columns are determined by specification in the calling query, as discussed in . - PL/pgSQL functions can be declared to accept a variable - number of arguments by using the VARIADIC marker. This + PL/pgSQL functions can be declared to accept a variable + number of arguments by using the VARIADIC marker. This works exactly the same way as for SQL functions, as discussed in . - PL/pgSQL functions can also be declared to accept + PL/pgSQL functions can also be declared to accept and return the polymorphic types anyelement, anyarray, anynonarray, - anyenum, and anyrange. The actual + anyenum, and anyrange. The actual data types handled by a polymorphic function can vary from call to call, as discussed in . An example is shown in . - PL/pgSQL functions can also be declared to return - a set (or table) of any data type that can be returned as + PL/pgSQL functions can also be declared to return + a set (or table) of any data type that can be returned as a single instance. Such a function generates its output by executing - RETURN NEXT for each desired element of the result - set, or by using RETURN QUERY to output the result of + RETURN NEXT for each desired element of the result + set, or by using RETURN QUERY to output the result of evaluating a query. - Finally, a PL/pgSQL function can be declared to return - void if it has no useful return value. + Finally, a PL/pgSQL function can be declared to return + void if it has no useful return value. - PL/pgSQL functions can also be declared with output + PL/pgSQL functions can also be declared with output parameters in place of an explicit specification of the return type. This does not add any fundamental capability to the language, but it is often convenient, especially for returning multiple values. - The RETURNS TABLE notation can also be used in place - of RETURNS SETOF. + The RETURNS TABLE notation can also be used in place + of RETURNS SETOF. @@ -185,11 +185,11 @@ Such a command would normally look like, say, CREATE FUNCTION somefunc(integer, text) RETURNS integer -AS 'function body text' +AS 'function body text' LANGUAGE plpgsql; The function body is simply a string literal so far as CREATE - FUNCTION is concerned. It is often helpful to use dollar quoting + FUNCTION is concerned. It is often helpful to use dollar quoting (see ) to write the function body, rather than the normal single quote syntax. Without dollar quoting, any single quotes or backslashes in the function body must be escaped by @@ -200,7 +200,7 @@ LANGUAGE plpgsql; PL/pgSQL is a block-structured language. The complete text of a function body must be a - block. A block is defined as: + block. A block is defined as: <<label>> @@ -223,16 +223,16 @@ END label ; A common mistake is to write a semicolon immediately after - BEGIN. This is incorrect and will result in a syntax error. + BEGIN. This is incorrect and will result in a syntax error. A label is only needed if you want to identify the block for use - in an EXIT statement, or to qualify the names of the + in an EXIT statement, or to qualify the names of the variables declared in the block. If a label is given after - END, it must match the label at the block's beginning. + END, it must match the label at the block's beginning. 
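 A compact sketch of block labels in use (the function and label names are arbitrary):

CREATE FUNCTION labeled_block_demo() RETURNS integer AS $$
<<outerblock>>
DECLARE
    n integer := 0;
BEGIN
    <<innerblock>>
    DECLARE
        n integer := 100;            -- masks the outer n within this subblock
    BEGIN
        RETURN n + outerblock.n;     -- the label qualifies the outer variable
    END innerblock;                  -- the label after END matches the block's beginning
END outerblock;
$$ LANGUAGE plpgsql;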
@@ -242,7 +242,7 @@ END label ; - Comments work the same way in PL/pgSQL code as in + Comments work the same way in PL/pgSQL code as in ordinary SQL. A double dash (--) starts a comment that extends to the end of the line. A /* starts a block comment that extends to the matching occurrence of @@ -251,7 +251,7 @@ END label ; Any statement in the statement section of a block - can be a subblock. Subblocks can be used for + can be a subblock. Subblocks can be used for logical grouping or to localize variables to a small group of statements. Variables declared in a subblock mask any similarly-named variables of outer blocks for the duration @@ -285,8 +285,8 @@ $$ LANGUAGE plpgsql; - There is actually a hidden outer block surrounding the body - of any PL/pgSQL function. This block provides the + There is actually a hidden outer block surrounding the body + of any PL/pgSQL function. This block provides the declarations of the function's parameters (if any), as well as some special variables such as FOUND (see ). The outer block is @@ -297,15 +297,15 @@ $$ LANGUAGE plpgsql; It is important not to confuse the use of - BEGIN/END for grouping statements in - PL/pgSQL with the similarly-named SQL commands + BEGIN/END for grouping statements in + PL/pgSQL with the similarly-named SQL commands for transaction - control. PL/pgSQL's BEGIN/END + control. PL/pgSQL's BEGIN/END are only for grouping; they do not start or end a transaction. Functions and trigger procedures are always executed within a transaction established by an outer query — they cannot start or commit that transaction, since there would be no context for them to execute in. - However, a block containing an EXCEPTION clause effectively + However, a block containing an EXCEPTION clause effectively forms a subtransaction that can be rolled back without affecting the outer transaction. For more about that see . @@ -318,15 +318,15 @@ $$ LANGUAGE plpgsql; All variables used in a block must be declared in the declarations section of the block. - (The only exceptions are that the loop variable of a FOR loop + (The only exceptions are that the loop variable of a FOR loop iterating over a range of integer values is automatically declared as an - integer variable, and likewise the loop variable of a FOR loop + integer variable, and likewise the loop variable of a FOR loop iterating over a cursor's result is automatically declared as a record variable.) - PL/pgSQL variables can have any SQL data type, such as + PL/pgSQL variables can have any SQL data type, such as integer, varchar, and char. @@ -348,21 +348,21 @@ arow RECORD; name CONSTANT type COLLATE collation_name NOT NULL { DEFAULT | := | = } expression ; - The DEFAULT clause, if given, specifies the initial value assigned - to the variable when the block is entered. If the DEFAULT clause + The DEFAULT clause, if given, specifies the initial value assigned + to the variable when the block is entered. If the DEFAULT clause is not given then the variable is initialized to the SQL null value. - The CONSTANT option prevents the variable from being + The CONSTANT option prevents the variable from being assigned to after initialization, so that its value will remain constant for the duration of the block. - The COLLATE option specifies a collation to use for the + The COLLATE option specifies a collation to use for the variable (see ). - If NOT NULL + If NOT NULL is specified, an assignment of a null value results in a run-time - error. All variables declared as NOT NULL + error. 
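 A few declarations illustrating these options (a sketch; the names and values are arbitrary):

DECLARE
    quantity    integer DEFAULT 32;
    url         varchar := 'http://mysite.com';
    user_id     CONSTANT integer := 10;
    description text NOT NULL := 'none';   -- NOT NULL, so a nonnull default is required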
All variables declared as NOT NULL must have a nonnull default value specified. - Equal (=) can be used instead of PL/SQL-compliant - :=. + Equal (=) can be used instead of PL/SQL-compliant + :=. @@ -428,9 +428,9 @@ $$ LANGUAGE plpgsql; These two examples are not perfectly equivalent. In the first case, - subtotal could be referenced as - sales_tax.subtotal, but in the second case it could not. - (Had we attached a label to the inner block, subtotal could + subtotal could be referenced as + sales_tax.subtotal, but in the second case it could not. + (Had we attached a label to the inner block, subtotal could be qualified with that label, instead.) @@ -474,7 +474,7 @@ END; $$ LANGUAGE plpgsql; - Notice that we omitted RETURNS real — we could have + Notice that we omitted RETURNS real — we could have included it, but it would be redundant. @@ -493,13 +493,13 @@ $$ LANGUAGE plpgsql; As discussed in , this effectively creates an anonymous record type for the function's - results. If a RETURNS clause is given, it must say - RETURNS record. + results. If a RETURNS clause is given, it must say + RETURNS record. Another way to declare a PL/pgSQL function - is with RETURNS TABLE, for example: + is with RETURNS TABLE, for example: CREATE FUNCTION extended_sales(p_itemno int) @@ -511,9 +511,9 @@ END; $$ LANGUAGE plpgsql; - This is exactly equivalent to declaring one or more OUT + This is exactly equivalent to declaring one or more OUT parameters and specifying RETURNS SETOF - sometype. + sometype. @@ -530,7 +530,7 @@ $$ LANGUAGE plpgsql; the function, so it can be used to hold the return value if desired, though that is not required. $0 can also be given an alias. For example, this function works on any data type - that has a + operator: + that has a + operator: CREATE FUNCTION add_three_values(v1 anyelement, v2 anyelement, v3 anyelement) @@ -564,14 +564,14 @@ $$ LANGUAGE plpgsql; - <literal>ALIAS</> + <literal>ALIAS</literal> -newname ALIAS FOR oldname; +newname ALIAS FOR oldname; - The ALIAS syntax is more general than is suggested in the + The ALIAS syntax is more general than is suggested in the previous section: you can declare an alias for any variable, not just function parameters. The main practical use for this is to assign a different name for variables with predetermined names, such as @@ -589,7 +589,7 @@ DECLARE - Since ALIAS creates two different ways to name the same + Since ALIAS creates two different ways to name the same object, unrestricted use can be confusing. It's best to use it only for the purpose of overriding predetermined names. @@ -608,7 +608,7 @@ DECLARE database values. For example, let's say you have a column named user_id in your users table. To declare a variable with the same data type as - users.user_id you write: + users.user_id you write: user_id users.user_id%TYPE; @@ -618,7 +618,7 @@ user_id users.user_id%TYPE; By using %TYPE you don't need to know the data type of the structure you are referencing, and most importantly, if the data type of the referenced item changes in the future (for - instance: you change the type of user_id + instance: you change the type of user_id from integer to real), you might not need to change your function definition. @@ -642,9 +642,9 @@ user_id users.user_id%TYPE; - A variable of a composite type is called a row - variable (or row-type variable). Such a variable - can hold a whole row of a SELECT or FOR + A variable of a composite type is called a row + variable (or row-type variable). 
Such a variable + can hold a whole row of a SELECT or FOR query result, so long as that query's column set matches the declared type of the variable. The individual fields of the row value @@ -658,7 +658,7 @@ user_id users.user_id%TYPE; table_name%ROWTYPE notation; or it can be declared by giving a composite type's name. (Since every table has an associated composite type of the same name, - it actually does not matter in PostgreSQL whether you + it actually does not matter in PostgreSQL whether you write %ROWTYPE or not. But the form with %ROWTYPE is more portable.) @@ -666,7 +666,7 @@ user_id users.user_id%TYPE; Parameters to a function can be composite types (complete table rows). In that case, the - corresponding identifier $n will be a row variable, and fields can + corresponding identifier $n will be a row variable, and fields can be selected from it, for example $1.user_id. @@ -675,12 +675,12 @@ user_id users.user_id%TYPE; row-type variable, not the OID or other system columns (because the row could be from a view). The fields of the row type inherit the table's field size or precision for data types such as - char(n). + char(n). - Here is an example of using composite types. table1 - and table2 are existing tables having at least the + Here is an example of using composite types. table1 + and table2 are existing tables having at least the mentioned fields: @@ -708,7 +708,7 @@ SELECT merge_fields(t.*) FROM table1 t WHERE ... ; Record variables are similar to row-type variables, but they have no predefined structure. They take on the actual row structure of the - row they are assigned during a SELECT or FOR command. The substructure + row they are assigned during a SELECT or FOR command. The substructure of a record variable can change each time it is assigned to. A consequence of this is that until a record variable is first assigned to, it has no substructure, and any attempt to access a @@ -716,13 +716,13 @@ SELECT merge_fields(t.*) FROM table1 t WHERE ... ; - Note that RECORD is not a true data type, only a placeholder. + Note that RECORD is not a true data type, only a placeholder. One should also realize that when a PL/pgSQL - function is declared to return type record, this is not quite the + function is declared to return type record, this is not quite the same concept as a record variable, even though such a function might use a record variable to hold its result. In both cases the actual row structure is unknown when the function is written, but for a function - returning record the actual structure is determined when the + returning record the actual structure is determined when the calling query is parsed, whereas a record variable can change its row structure on-the-fly. @@ -732,8 +732,8 @@ SELECT merge_fields(t.*) FROM table1 t WHERE ... ; Collation of <application>PL/pgSQL</application> Variables - collation - in PL/pgSQL + collation + in PL/pgSQL @@ -758,9 +758,9 @@ SELECT less_than(text_field_1, text_field_2) FROM table1; SELECT less_than(text_field_1, text_field_2 COLLATE "C") FROM table1; - The first use of less_than will use the common collation - of text_field_1 and text_field_2 for - the comparison, while the second use will use C collation. + The first use of less_than will use the common collation + of text_field_1 and text_field_2 for + the comparison, while the second use will use C collation. 
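 For reference, the less_than function used in these calls could be defined along the following lines (a sketch assuming text arguments):

CREATE FUNCTION less_than(a text, b text) RETURNS boolean AS $$
BEGIN
    -- the comparison uses the collation resolved from the function's arguments
    RETURN a < b;
END;
$$ LANGUAGE plpgsql;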
@@ -790,7 +790,7 @@ $$ LANGUAGE plpgsql; A local variable of a collatable data type can have a different collation - associated with it by including the COLLATE option in its + associated with it by including the COLLATE option in its declaration, for example @@ -803,7 +803,7 @@ DECLARE - Also, of course explicit COLLATE clauses can be written inside + Also, of course explicit COLLATE clauses can be written inside a function if it is desired to force a particular collation to be used in a particular operation. For example, @@ -838,7 +838,7 @@ IF expression THEN ... SELECT expression - to the main SQL engine. While forming the SELECT command, + to the main SQL engine. While forming the SELECT command, any occurrences of PL/pgSQL variable names are replaced by parameters, as discussed in detail in . @@ -846,17 +846,17 @@ SELECT expression be prepared just once and then reused for subsequent evaluations with different values of the variables. Thus, what really happens on first use of an expression is essentially a - PREPARE command. For example, if we have declared - two integer variables x and y, and we write + PREPARE command. For example, if we have declared + two integer variables x and y, and we write IF x < y THEN ... what happens behind the scenes is equivalent to -PREPARE statement_name(integer, integer) AS SELECT $1 < $2; +PREPARE statement_name(integer, integer) AS SELECT $1 < $2; - and then this prepared statement is EXECUTEd for each - execution of the IF statement, with the current values + and then this prepared statement is EXECUTEd for each + execution of the IF statement, with the current values of the PL/pgSQL variables supplied as parameter values. Normally these details are not important to a PL/pgSQL user, but @@ -888,20 +888,20 @@ PREPARE statement_name(integer, integer) AS SELECT $1 < $2; variable { := | = } expression; As explained previously, the expression in such a statement is evaluated - by means of an SQL SELECT command sent to the main + by means of an SQL SELECT command sent to the main database engine. The expression must yield a single value (possibly a row value, if the variable is a row or record variable). The target variable can be a simple variable (optionally qualified with a block name), a field of a row or record variable, or an element of an array - that is a simple variable or field. Equal (=) can be - used instead of PL/SQL-compliant :=. + that is a simple variable or field. Equal (=) can be + used instead of PL/SQL-compliant :=. If the expression's result data type doesn't match the variable's data type, the value will be coerced as though by an assignment cast (see ). If no assignment cast is known - for the pair of data types involved, the PL/pgSQL + for the pair of data types involved, the PL/pgSQL interpreter will attempt to convert the result value textually, that is by applying the result type's output function followed by the variable type's input function. Note that this could result in run-time errors @@ -923,7 +923,7 @@ my_record.user_id := 20; For any SQL command that does not return rows, for example - INSERT without a RETURNING clause, you can + INSERT without a RETURNING clause, you can execute the command within a PL/pgSQL function just by writing the command. 
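 For instance, within a function body one might simply write (mytbl and v_id are placeholders):

INSERT INTO mytbl (id, note) VALUES (v_id, 'created');    -- no result set, so no INTO is needed
UPDATE mytbl SET note = 'updated' WHERE id = v_id;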
@@ -944,7 +944,7 @@ my_record.user_id := 20; - Sometimes it is useful to evaluate an expression or SELECT + Sometimes it is useful to evaluate an expression or SELECT query but discard the result, for example when calling a function that has side-effects but no useful result value. To do this in PL/pgSQL, use the @@ -956,9 +956,9 @@ PERFORM query; This executes query and discards the result. Write the query the same - way you would write an SQL SELECT command, but replace the - initial keyword SELECT with PERFORM. - For WITH queries, use PERFORM and then + way you would write an SQL SELECT command, but replace the + initial keyword SELECT with PERFORM. + For WITH queries, use PERFORM and then place the query in parentheses. (In this case, the query can only return one row.) PL/pgSQL variables will be @@ -976,7 +976,7 @@ PERFORM query; present the only accepted way to do it is PERFORM. A SQL command that can return rows, such as SELECT, will be rejected as an error - unless it has an INTO clause as discussed in the + unless it has an INTO clause as discussed in the next section. @@ -1006,7 +1006,7 @@ PERFORM create_mv('cs_session_page_requests_mv', my_query); The result of a SQL command yielding a single row (possibly of multiple columns) can be assigned to a record variable, row-type variable, or list of scalar variables. This is done by writing the base SQL command and - adding an INTO clause. For example, + adding an INTO clause. For example, SELECT select_expressions INTO STRICT target FROM ...; @@ -1021,21 +1021,21 @@ DELETE ... RETURNING expressions INTO STRIC PL/pgSQL variables will be substituted into the rest of the query, and the plan is cached, just as described above for commands that do not return rows. - This works for SELECT, - INSERT/UPDATE/DELETE with - RETURNING, and utility commands that return row-set - results (such as EXPLAIN). - Except for the INTO clause, the SQL command is the same + This works for SELECT, + INSERT/UPDATE/DELETE with + RETURNING, and utility commands that return row-set + results (such as EXPLAIN). + Except for the INTO clause, the SQL command is the same as it would be written outside PL/pgSQL. - Note that this interpretation of SELECT with INTO - is quite different from PostgreSQL's regular - SELECT INTO command, wherein the INTO + Note that this interpretation of SELECT with INTO + is quite different from PostgreSQL's regular + SELECT INTO command, wherein the INTO target is a newly created table. If you want to create a table from a - SELECT result inside a + SELECT result inside a PL/pgSQL function, use the syntax CREATE TABLE ... AS SELECT. @@ -1050,21 +1050,21 @@ DELETE ... RETURNING expressions INTO STRIC - The INTO clause can appear almost anywhere in the SQL + The INTO clause can appear almost anywhere in the SQL command. Customarily it is written either just before or just after the list of select_expressions in a - SELECT command, or at the end of the command for other + SELECT command, or at the end of the command for other command types. It is recommended that you follow this convention in case the PL/pgSQL parser becomes stricter in future versions. - If STRICT is not specified in the INTO + If STRICT is not specified in the INTO clause, then target will be set to the first row returned by the query, or to nulls if the query returned no rows. - (Note that the first row is not - well-defined unless you've used ORDER BY.) Any result rows + (Note that the first row is not + well-defined unless you've used ORDER BY.) 
Any result rows after the first row are discarded. You can check the special FOUND variable (see ) to @@ -1079,7 +1079,7 @@ END IF; If the STRICT option is specified, the query must return exactly one row or a run-time error will be reported, either - NO_DATA_FOUND (no rows) or TOO_MANY_ROWS + NO_DATA_FOUND (no rows) or TOO_MANY_ROWS (more than one row). You can use an exception block if you wish to catch the error, for example: @@ -1093,28 +1093,28 @@ BEGIN RAISE EXCEPTION 'employee % not unique', myname; END; - Successful execution of a command with STRICT + Successful execution of a command with STRICT always sets FOUND to true. - For INSERT/UPDATE/DELETE with - RETURNING, PL/pgSQL reports + For INSERT/UPDATE/DELETE with + RETURNING, PL/pgSQL reports an error for more than one returned row, even when STRICT is not specified. This is because there - is no option such as ORDER BY with which to determine + is no option such as ORDER BY with which to determine which affected row should be returned. - If print_strict_params is enabled for the function, + If print_strict_params is enabled for the function, then when an error is thrown because the requirements - of STRICT are not met, the DETAIL part of + of STRICT are not met, the DETAIL part of the error message will include information about the parameters passed to the query. - You can change the print_strict_params + You can change the print_strict_params setting for all functions by setting - plpgsql.print_strict_params, though only subsequent + plpgsql.print_strict_params, though only subsequent function compilations will be affected. You can also enable it on a per-function basis by using a compiler option, for example: @@ -1140,7 +1140,7 @@ CONTEXT: PL/pgSQL function get_userid(text) line 6 at SQL statement - The STRICT option matches the behavior of + The STRICT option matches the behavior of Oracle PL/SQL's SELECT INTO and related statements. @@ -1174,12 +1174,12 @@ EXECUTE command-string INT command to be executed. The optional target is a record variable, a row variable, or a comma-separated list of simple variables and record/row fields, into which the results of - the command will be stored. The optional USING expressions + the command will be stored. The optional USING expressions supply values to be inserted into the command. - No substitution of PL/pgSQL variables is done on the + No substitution of PL/pgSQL variables is done on the computed command string. Any required variable values must be inserted in the command string as it is constructed; or you can use parameters as described below. @@ -1207,14 +1207,14 @@ EXECUTE command-string INT - If the STRICT option is given, an error is reported + If the STRICT option is given, an error is reported unless the query produces exactly one row. The command string can use parameter values, which are referenced - in the command as $1, $2, etc. - These symbols refer to values supplied in the USING + in the command as $1, $2, etc. + These symbols refer to values supplied in the USING clause. 
This method is often preferable to inserting data values into the command string as text: it avoids run-time overhead of converting the values to text and back, and it is much less prone @@ -1240,7 +1240,7 @@ EXECUTE 'SELECT count(*) FROM ' INTO c USING checked_user, checked_date; - A cleaner approach is to use format()'s %I + A cleaner approach is to use format()'s %I specification for table or column names (strings separated by a newline are concatenated): @@ -1250,32 +1250,32 @@ EXECUTE format('SELECT count(*) FROM %I ' USING checked_user, checked_date; Another restriction on parameter symbols is that they only work in - SELECT, INSERT, UPDATE, and - DELETE commands. In other statement + SELECT, INSERT, UPDATE, and + DELETE commands. In other statement types (generically called utility statements), you must insert values textually even if they are just data values. - An EXECUTE with a simple constant command string and some - USING parameters, as in the first example above, is + An EXECUTE with a simple constant command string and some + USING parameters, as in the first example above, is functionally equivalent to just writing the command directly in PL/pgSQL and allowing replacement of PL/pgSQL variables to happen automatically. - The important difference is that EXECUTE will re-plan + The important difference is that EXECUTE will re-plan the command on each execution, generating a plan that is specific to the current parameter values; whereas PL/pgSQL may otherwise create a generic plan and cache it for re-use. In situations where the best plan depends strongly on the parameter values, it can be helpful to use - EXECUTE to positively ensure that a generic plan is not + EXECUTE to positively ensure that a generic plan is not selected. SELECT INTO is not currently supported within - EXECUTE; instead, execute a plain SELECT - command and specify INTO as part of the EXECUTE + EXECUTE; instead, execute a plain SELECT + command and specify INTO as part of the EXECUTE itself. @@ -1287,7 +1287,7 @@ EXECUTE format('SELECT count(*) FROM %I ' statement supported by the PostgreSQL server. The server's EXECUTE statement cannot be used directly within - PL/pgSQL functions (and is not needed). + PL/pgSQL functions (and is not needed). @@ -1326,7 +1326,7 @@ EXECUTE format('SELECT count(*) FROM %I ' Dynamic values require careful handling since they might contain quote characters. - An example using format() (this assumes that you are + An example using format() (this assumes that you are dollar quoting the function body so quote marks need not be doubled): EXECUTE format('UPDATE tbl SET %I = $1 ' @@ -1351,7 +1351,7 @@ EXECUTE 'UPDATE tbl SET ' or table identifiers should be passed through quote_ident before insertion in a dynamic query. Expressions containing values that should be literal strings in the - constructed command should be passed through quote_literal. + constructed command should be passed through quote_literal. These functions take the appropriate steps to return the input text enclosed in double or single quotes respectively, with any embedded special characters properly escaped. @@ -1360,12 +1360,12 @@ EXECUTE 'UPDATE tbl SET ' Because quote_literal is labeled STRICT, it will always return null when called with a - null argument. In the above example, if newvalue or - keyvalue were null, the entire dynamic query string would + null argument. In the above example, if newvalue or + keyvalue were null, the entire dynamic query string would become null, leading to an error from EXECUTE. 
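 To make the failure mode concrete (reusing tbl, colname, newvalue and keyvalue from the surrounding example): if newvalue is null, the concatenation below collapses the whole command string to null, and EXECUTE raises an error.

EXECUTE 'UPDATE tbl SET '
        || quote_ident(colname)
        || ' = '
        || quote_literal(newvalue)    -- null here makes the entire string null
        || ' WHERE key = '
        || quote_literal(keyvalue);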
- You can avoid this problem by using the quote_nullable - function, which works the same as quote_literal except that - when called with a null argument it returns the string NULL. + You can avoid this problem by using the quote_nullable + function, which works the same as quote_literal except that + when called with a null argument it returns the string NULL. For example, EXECUTE 'UPDATE tbl SET ' @@ -1376,26 +1376,26 @@ EXECUTE 'UPDATE tbl SET ' || quote_nullable(keyvalue); If you are dealing with values that might be null, you should usually - use quote_nullable in place of quote_literal. + use quote_nullable in place of quote_literal. As always, care must be taken to ensure that null values in a query do - not deliver unintended results. For example the WHERE clause + not deliver unintended results. For example the WHERE clause 'WHERE key = ' || quote_nullable(keyvalue) - will never succeed if keyvalue is null, because the - result of using the equality operator = with a null operand + will never succeed if keyvalue is null, because the + result of using the equality operator = with a null operand is always null. If you wish null to work like an ordinary key value, you would need to rewrite the above as 'WHERE key IS NOT DISTINCT FROM ' || quote_nullable(keyvalue) - (At present, IS NOT DISTINCT FROM is handled much less - efficiently than =, so don't do this unless you must. + (At present, IS NOT DISTINCT FROM is handled much less + efficiently than =, so don't do this unless you must. See for - more information on nulls and IS DISTINCT.) + more information on nulls and IS DISTINCT.) @@ -1409,12 +1409,12 @@ EXECUTE 'UPDATE tbl SET ' || '$$ WHERE key = ' || quote_literal(keyvalue); - because it would break if the contents of newvalue - happened to contain $$. The same objection would + because it would break if the contents of newvalue + happened to contain $$. The same objection would apply to any other dollar-quoting delimiter you might pick. So, to safely quote text that is not known in advance, you - must use quote_literal, - quote_nullable, or quote_ident, as appropriate. + must use quote_literal, + quote_nullable, or quote_ident, as appropriate. @@ -1425,8 +1425,8 @@ EXECUTE 'UPDATE tbl SET ' EXECUTE format('UPDATE tbl SET %I = %L ' 'WHERE key = %L', colname, newvalue, keyvalue); - %I is equivalent to quote_ident, and - %L is equivalent to quote_nullable. + %I is equivalent to quote_ident, and + %L is equivalent to quote_nullable. The format function can be used in conjunction with the USING clause: @@ -1435,7 +1435,7 @@ EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname) This form is better because the variables are handled in their native data type format, rather than unconditionally converting them to - text and quoting them via %L. It is also more efficient. + text and quoting them via %L. It is also more efficient. @@ -1443,7 +1443,7 @@ EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname) A much larger example of a dynamic command and EXECUTE can be seen in , which builds and executes a - CREATE FUNCTION command to define a new function. + CREATE FUNCTION command to define a new function. @@ -1460,14 +1460,14 @@ GET CURRENT DIAGNOSTICS variable This command allows retrieval of system status indicators. - CURRENT is a noise word (but see also GET STACKED + CURRENT is a noise word (but see also GET STACKED DIAGNOSTICS in ). 
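     For instance, a minimal sketch (the table mytab and its flag column are
     hypothetical) that captures how many rows the preceding UPDATE processed:

DO $$
DECLARE
    rows_affected bigint;
BEGIN
    UPDATE mytab SET flag = true WHERE flag IS NOT true;
    GET DIAGNOSTICS rows_affected = ROW_COUNT;
    RAISE NOTICE 'updated % row(s)', rows_affected;
END;
$$;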
Each item is a key word identifying a status value to be assigned to the specified variable (which should be of the right data type to receive it). The currently available status items are shown in . Colon-equal - (:=) can be used instead of the SQL-standard = + (:=) can be used instead of the SQL-standard = token. An example: GET DIAGNOSTICS integer_var = ROW_COUNT; @@ -1487,13 +1487,13 @@ GET DIAGNOSTICS integer_var = ROW_COUNT; ROW_COUNT - bigint + bigint the number of rows processed by the most recent SQL command RESULT_OID - oid + oid the OID of the last row inserted by the most recent SQL command (only useful after an INSERT command into a table having @@ -1501,7 +1501,7 @@ GET DIAGNOSTICS integer_var = ROW_COUNT; PG_CONTEXT - text + text line(s) of text describing the current call stack (see ) @@ -1526,33 +1526,33 @@ GET DIAGNOSTICS integer_var = ROW_COUNT; - A PERFORM statement sets FOUND + A PERFORM statement sets FOUND true if it produces (and discards) one or more rows, false if no row is produced. - UPDATE, INSERT, and DELETE + UPDATE, INSERT, and DELETE statements set FOUND true if at least one row is affected, false if no row is affected. - A FETCH statement sets FOUND + A FETCH statement sets FOUND true if it returns a row, false if no row is returned. - A MOVE statement sets FOUND + A MOVE statement sets FOUND true if it successfully repositions the cursor, false otherwise. - A FOR or FOREACH statement sets + A FOR or FOREACH statement sets FOUND true if it iterates one or more times, else false. FOUND is set this way when the @@ -1625,7 +1625,7 @@ END; In Oracle's PL/SQL, empty statement lists are not allowed, and so - NULL statements are required for situations + NULL statements are required for situations such as this. PL/pgSQL allows you to just write nothing, instead. @@ -1639,9 +1639,9 @@ END; Control structures are probably the most useful (and - important) part of PL/pgSQL. With - PL/pgSQL's control structures, - you can manipulate PostgreSQL data in a very + important) part of PL/pgSQL. With + PL/pgSQL's control structures, + you can manipulate PostgreSQL data in a very flexible and powerful way. @@ -1655,7 +1655,7 @@ END; - <command>RETURN</> + <command>RETURN</command> RETURN expression; @@ -1665,7 +1665,7 @@ RETURN expression; RETURN with an expression terminates the function and returns the value of expression to the caller. This form - is used for PL/pgSQL functions that do + is used for PL/pgSQL functions that do not return a set. @@ -1716,7 +1716,7 @@ RETURN (1, 2, 'three'::text); -- must cast columns to correct types - <command>RETURN NEXT</> and <command>RETURN QUERY</command> + <command>RETURN NEXT</command> and <command>RETURN QUERY</command> RETURN NEXT in PL/pgSQL @@ -1733,8 +1733,8 @@ RETURN QUERY EXECUTE command-string < - When a PL/pgSQL function is declared to return - SETOF sometype, the procedure + When a PL/pgSQL function is declared to return + SETOF sometype, the procedure to follow is slightly different. In that case, the individual items to return are specified by a sequence of RETURN NEXT or RETURN QUERY commands, and @@ -1755,7 +1755,7 @@ RETURN QUERY EXECUTE command-string < QUERY do not actually return from the function — they simply append zero or more rows to the function's result set. Execution then continues with the next statement in the - PL/pgSQL function. As successive + PL/pgSQL function. As successive RETURN NEXT or RETURN QUERY commands are executed, the result set is built up. 
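     For instance, a minimal sketch of a set-returning function built this way
     (the table mytab and its id column are hypothetical; a fuller example
     follows below):

CREATE FUNCTION small_ids(maxid integer) RETURNS SETOF integer AS $$
DECLARE
    i integer;
BEGIN
    FOR i IN SELECT id FROM mytab WHERE id <= maxid LOOP
        RETURN NEXT i;    -- append one value to the result set
    END LOOP;
    RETURN;               -- the final RETURN ends the function
END;
$$ LANGUAGE plpgsql;

-- typical call:
-- SELECT * FROM small_ids(100);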
A final RETURN, which should have no @@ -1767,8 +1767,8 @@ RETURN QUERY EXECUTE command-string < RETURN QUERY has a variant RETURN QUERY EXECUTE, which specifies the query to be executed dynamically. Parameter expressions can - be inserted into the computed query string via USING, - in just the same way as in the EXECUTE command. + be inserted into the computed query string via USING, + in just the same way as in the EXECUTE command. @@ -1778,9 +1778,9 @@ RETURN QUERY EXECUTE command-string < variable(s) will be saved for eventual return as a row of the result. Note that you must declare the function as returning SETOF record when there are multiple output - parameters, or SETOF sometype + parameters, or SETOF sometype when there is just one output parameter of type - sometype, in order to create a set-returning + sometype, in order to create a set-returning function with output parameters. @@ -1848,11 +1848,11 @@ SELECT * FROM get_available_flightid(CURRENT_DATE); The current implementation of RETURN NEXT and RETURN QUERY stores the entire result set before returning from the function, as discussed above. That - means that if a PL/pgSQL function produces a + means that if a PL/pgSQL function produces a very large result set, performance might be poor: data will be written to disk to avoid memory exhaustion, but the function itself will not return until the entire result set has been - generated. A future version of PL/pgSQL might + generated. A future version of PL/pgSQL might allow users to define set-returning functions that do not have this limitation. Currently, the point at which data begins being written to disk is controlled by the @@ -1869,34 +1869,34 @@ SELECT * FROM get_available_flightid(CURRENT_DATE); Conditionals - IF and CASE statements let you execute + IF and CASE statements let you execute alternative commands based on certain conditions. - PL/pgSQL has three forms of IF: + PL/pgSQL has three forms of IF: - IF ... THEN ... END IF + IF ... THEN ... END IF - IF ... THEN ... ELSE ... END IF + IF ... THEN ... ELSE ... END IF - IF ... THEN ... ELSIF ... THEN ... ELSE ... END IF + IF ... THEN ... ELSIF ... THEN ... ELSE ... END IF - and two forms of CASE: + and two forms of CASE: - CASE ... WHEN ... THEN ... ELSE ... END CASE + CASE ... WHEN ... THEN ... ELSE ... END CASE - CASE WHEN ... THEN ... ELSE ... END CASE + CASE WHEN ... THEN ... ELSE ... END CASE - <literal>IF-THEN</> + <literal>IF-THEN</literal> IF boolean-expression THEN @@ -1923,7 +1923,7 @@ END IF; - <literal>IF-THEN-ELSE</> + <literal>IF-THEN-ELSE</literal> IF boolean-expression THEN @@ -1964,7 +1964,7 @@ END IF; - <literal>IF-THEN-ELSIF</> + <literal>IF-THEN-ELSIF</literal> IF boolean-expression THEN @@ -1983,15 +1983,15 @@ END IF; Sometimes there are more than just two alternatives. - IF-THEN-ELSIF provides a convenient + IF-THEN-ELSIF provides a convenient method of checking several alternatives in turn. - The IF conditions are tested successively + The IF conditions are tested successively until the first one that is true is found. Then the associated statement(s) are executed, after which control - passes to the next statement after END IF. - (Any subsequent IF conditions are not - tested.) If none of the IF conditions is true, - then the ELSE block (if any) is executed. + passes to the next statement after END IF. + (Any subsequent IF conditions are not + tested.) If none of the IF conditions is true, + then the ELSE block (if any) is executed. 
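     A minimal sketch of this form (score and grade are hypothetical variables):

IF score >= 90 THEN
    grade := 'A';
ELSIF score >= 80 THEN
    grade := 'B';
ELSIF score >= 70 THEN
    grade := 'C';
ELSE
    grade := 'F';
END IF;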
@@ -2012,8 +2012,8 @@ END IF; - The key word ELSIF can also be spelled - ELSEIF. + The key word ELSIF can also be spelled + ELSEIF. @@ -2033,14 +2033,14 @@ END IF; - However, this method requires writing a matching END IF - for each IF, so it is much more cumbersome than - using ELSIF when there are many alternatives. + However, this method requires writing a matching END IF + for each IF, so it is much more cumbersome than + using ELSIF when there are many alternatives. - Simple <literal>CASE</> + Simple <literal>CASE</literal> CASE search-expression @@ -2055,16 +2055,16 @@ END CASE; - The simple form of CASE provides conditional execution - based on equality of operands. The search-expression + The simple form of CASE provides conditional execution + based on equality of operands. The search-expression is evaluated (once) and successively compared to each - expression in the WHEN clauses. + expression in the WHEN clauses. If a match is found, then the corresponding statements are executed, and then control - passes to the next statement after END CASE. (Subsequent - WHEN expressions are not evaluated.) If no match is - found, the ELSE statements are - executed; but if ELSE is not present, then a + passes to the next statement after END CASE. (Subsequent + WHEN expressions are not evaluated.) If no match is + found, the ELSE statements are + executed; but if ELSE is not present, then a CASE_NOT_FOUND exception is raised. @@ -2083,7 +2083,7 @@ END CASE; - Searched <literal>CASE</> + Searched <literal>CASE</literal> CASE @@ -2098,16 +2098,16 @@ END CASE; - The searched form of CASE provides conditional execution - based on truth of Boolean expressions. Each WHEN clause's + The searched form of CASE provides conditional execution + based on truth of Boolean expressions. Each WHEN clause's boolean-expression is evaluated in turn, - until one is found that yields true. Then the + until one is found that yields true. Then the corresponding statements are executed, and - then control passes to the next statement after END CASE. - (Subsequent WHEN expressions are not evaluated.) - If no true result is found, the ELSE + then control passes to the next statement after END CASE. + (Subsequent WHEN expressions are not evaluated.) + If no true result is found, the ELSE statements are executed; - but if ELSE is not present, then a + but if ELSE is not present, then a CASE_NOT_FOUND exception is raised. @@ -2125,9 +2125,9 @@ END CASE; - This form of CASE is entirely equivalent to - IF-THEN-ELSIF, except for the rule that reaching - an omitted ELSE clause results in an error rather + This form of CASE is entirely equivalent to + IF-THEN-ELSIF, except for the rule that reaching + an omitted ELSE clause results in an error rather than doing nothing. @@ -2143,14 +2143,14 @@ END CASE; - With the LOOP, EXIT, - CONTINUE, WHILE, FOR, - and FOREACH statements, you can arrange for your - PL/pgSQL function to repeat a series of commands. + With the LOOP, EXIT, + CONTINUE, WHILE, FOR, + and FOREACH statements, you can arrange for your + PL/pgSQL function to repeat a series of commands. - <literal>LOOP</> + <literal>LOOP</literal> <<label>> @@ -2160,17 +2160,17 @@ END LOOP label ; - LOOP defines an unconditional loop that is repeated - indefinitely until terminated by an EXIT or + LOOP defines an unconditional loop that is repeated + indefinitely until terminated by an EXIT or RETURN statement. 
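     For example, a minimal sketch (i is a hypothetical counter variable
     declared earlier):

LOOP
    i := i + 1;
    EXIT WHEN i >= 10;    -- terminate once the counter reaches 10
END LOOP;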
The optional - label can be used by EXIT + label can be used by EXIT and CONTINUE statements within nested loops to specify which loop those statements refer to. - <literal>EXIT</> + <literal>EXIT</literal> EXIT @@ -2184,21 +2184,21 @@ EXIT label WHEN If no label is given, the innermost loop is terminated and the statement following END - LOOP is executed next. If label + LOOP is executed next. If label is given, it must be the label of the current or some outer level of nested loop or block. Then the named loop or block is terminated and control continues with the statement after the - loop's/block's corresponding END. + loop's/block's corresponding END. - If WHEN is specified, the loop exit occurs only if - boolean-expression is true. Otherwise, control passes - to the statement after EXIT. + If WHEN is specified, the loop exit occurs only if + boolean-expression is true. Otherwise, control passes + to the statement after EXIT. - EXIT can be used with all types of loops; it is + EXIT can be used with all types of loops; it is not limited to use with unconditional loops. @@ -2242,7 +2242,7 @@ END; - <literal>CONTINUE</> + <literal>CONTINUE</literal> CONTINUE @@ -2254,25 +2254,25 @@ CONTINUE label WHEN - If no label is given, the next iteration of + If no label is given, the next iteration of the innermost loop is begun. That is, all statements remaining in the loop body are skipped, and control returns to the loop control expression (if any) to determine whether another loop iteration is needed. - If label is present, it + If label is present, it specifies the label of the loop whose execution will be continued. - If WHEN is specified, the next iteration of the - loop is begun only if boolean-expression is + If WHEN is specified, the next iteration of the + loop is begun only if boolean-expression is true. Otherwise, control passes to the statement after - CONTINUE. + CONTINUE. - CONTINUE can be used with all types of loops; it + CONTINUE can be used with all types of loops; it is not limited to use with unconditional loops. @@ -2291,7 +2291,7 @@ END LOOP; - <literal>WHILE</> + <literal>WHILE</literal> WHILE @@ -2306,7 +2306,7 @@ END LOOP label ; - The WHILE statement repeats a + The WHILE statement repeats a sequence of statements so long as the boolean-expression evaluates to true. The expression is checked just before @@ -2328,7 +2328,7 @@ END LOOP; - <literal>FOR</> (Integer Variant) + <literal>FOR</literal> (Integer Variant) <<label>> @@ -2338,22 +2338,22 @@ END LOOP label ; - This form of FOR creates a loop that iterates over a range + This form of FOR creates a loop that iterates over a range of integer values. The variable name is automatically defined as type - integer and exists only inside the loop (any existing + integer and exists only inside the loop (any existing definition of the variable name is ignored within the loop). The two expressions giving the lower and upper bound of the range are evaluated once when entering - the loop. If the BY clause isn't specified the iteration - step is 1, otherwise it's the value specified in the BY + the loop. If the BY clause isn't specified the iteration + step is 1, otherwise it's the value specified in the BY clause, which again is evaluated once on loop entry. - If REVERSE is specified then the step value is + If REVERSE is specified then the step value is subtracted, rather than added, after each iteration. 
- Some examples of integer FOR loops: + Some examples of integer FOR loops: FOR i IN 1..10 LOOP -- i will take on the values 1,2,3,4,5,6,7,8,9,10 within the loop @@ -2371,13 +2371,13 @@ END LOOP; If the lower bound is greater than the upper bound (or less than, - in the REVERSE case), the loop body is not + in the REVERSE case), the loop body is not executed at all. No error is raised. If a label is attached to the - FOR loop then the integer loop variable can be + FOR loop then the integer loop variable can be referenced with a qualified name, using that label. @@ -2388,7 +2388,7 @@ END LOOP; Looping Through Query Results - Using a different type of FOR loop, you can iterate through + Using a different type of FOR loop, you can iterate through the results of a query and manipulate that data accordingly. The syntax is: @@ -2424,28 +2424,28 @@ END; $$ LANGUAGE plpgsql; - If the loop is terminated by an EXIT statement, the last + If the loop is terminated by an EXIT statement, the last assigned row value is still accessible after the loop. - The query used in this type of FOR + The query used in this type of FOR statement can be any SQL command that returns rows to the caller: - SELECT is the most common case, - but you can also use INSERT, UPDATE, or - DELETE with a RETURNING clause. Some utility - commands such as EXPLAIN will work too. + SELECT is the most common case, + but you can also use INSERT, UPDATE, or + DELETE with a RETURNING clause. Some utility + commands such as EXPLAIN will work too. - PL/pgSQL variables are substituted into the query text, + PL/pgSQL variables are substituted into the query text, and the query plan is cached for possible re-use, as discussed in detail in and . - The FOR-IN-EXECUTE statement is another way to iterate over + The FOR-IN-EXECUTE statement is another way to iterate over rows: <<label>> @@ -2455,11 +2455,11 @@ END LOOP label ; This is like the previous form, except that the source query is specified as a string expression, which is evaluated and replanned - on each entry to the FOR loop. This allows the programmer to + on each entry to the FOR loop. This allows the programmer to choose the speed of a preplanned query or the flexibility of a dynamic query, just as with a plain EXECUTE statement. As with EXECUTE, parameter values can be inserted - into the dynamic command via USING. + into the dynamic command via USING. @@ -2473,13 +2473,13 @@ END LOOP label ; Looping Through Arrays - The FOREACH loop is much like a FOR loop, + The FOREACH loop is much like a FOR loop, but instead of iterating through the rows returned by a SQL query, it iterates through the elements of an array value. - (In general, FOREACH is meant for looping through + (In general, FOREACH is meant for looping through components of a composite-valued expression; variants for looping through composites besides arrays may be added in future.) - The FOREACH statement to loop over an array is: + The FOREACH statement to loop over an array is: <<label>> @@ -2490,7 +2490,7 @@ END LOOP label ; - Without SLICE, or if SLICE 0 is specified, + Without SLICE, or if SLICE 0 is specified, the loop iterates through individual elements of the array produced by evaluating the expression. The target variable is assigned each @@ -2522,13 +2522,13 @@ $$ LANGUAGE plpgsql; - With a positive SLICE value, FOREACH + With a positive SLICE value, FOREACH iterates through slices of the array rather than single elements. 
- The SLICE value must be an integer constant not larger + The SLICE value must be an integer constant not larger than the number of dimensions of the array. The target variable must be an array, and it receives successive slices of the array value, where each slice - is of the number of dimensions specified by SLICE. + is of the number of dimensions specified by SLICE. Here is an example of iterating through one-dimensional slices: @@ -2562,12 +2562,12 @@ NOTICE: row = {10,11,12} - By default, any error occurring in a PL/pgSQL + By default, any error occurring in a PL/pgSQL function aborts execution of the function, and indeed of the surrounding transaction as well. You can trap errors and recover - from them by using a BEGIN block with an - EXCEPTION clause. The syntax is an extension of the - normal syntax for a BEGIN block: + from them by using a BEGIN block with an + EXCEPTION clause. The syntax is an extension of the + normal syntax for a BEGIN block: <<label>> @@ -2588,18 +2588,18 @@ END; If no error occurs, this form of block simply executes all the statements, and then control passes - to the next statement after END. But if an error + to the next statement after END. But if an error occurs within the statements, further processing of the statements is - abandoned, and control passes to the EXCEPTION list. + abandoned, and control passes to the EXCEPTION list. The list is searched for the first condition matching the error that occurred. If a match is found, the corresponding handler_statements are executed, and then control passes to the next statement after - END. If no match is found, the error propagates out - as though the EXCEPTION clause were not there at all: + END. If no match is found, the error propagates out + as though the EXCEPTION clause were not there at all: the error can be caught by an enclosing block with - EXCEPTION, or if there is none it aborts processing + EXCEPTION, or if there is none it aborts processing of the function. @@ -2607,12 +2607,12 @@ END; The condition names can be any of those shown in . A category name matches any error within its category. The special - condition name OTHERS matches every error type except - QUERY_CANCELED and ASSERT_FAILURE. + condition name OTHERS matches every error type except + QUERY_CANCELED and ASSERT_FAILURE. (It is possible, but often unwise, to trap those two error types by name.) Condition names are not case-sensitive. Also, an error condition can be specified - by SQLSTATE code; for example these are equivalent: + by SQLSTATE code; for example these are equivalent: WHEN division_by_zero THEN ... WHEN SQLSTATE '22012' THEN ... @@ -2622,13 +2622,13 @@ WHEN SQLSTATE '22012' THEN ... If a new error occurs within the selected handler_statements, it cannot be caught - by this EXCEPTION clause, but is propagated out. - A surrounding EXCEPTION clause could catch it. + by this EXCEPTION clause, but is propagated out. + A surrounding EXCEPTION clause could catch it. - When an error is caught by an EXCEPTION clause, - the local variables of the PL/pgSQL function + When an error is caught by an EXCEPTION clause, + the local variables of the PL/pgSQL function remain as they were when the error occurred, but all changes to persistent database state within the block are rolled back. As an example, consider this fragment: @@ -2646,32 +2646,32 @@ EXCEPTION END; - When control reaches the assignment to y, it will - fail with a division_by_zero error. This will be caught by - the EXCEPTION clause. 
The value returned in the - RETURN statement will be the incremented value of - x, but the effects of the UPDATE command will - have been rolled back. The INSERT command preceding the + When control reaches the assignment to y, it will + fail with a division_by_zero error. This will be caught by + the EXCEPTION clause. The value returned in the + RETURN statement will be the incremented value of + x, but the effects of the UPDATE command will + have been rolled back. The INSERT command preceding the block is not rolled back, however, so the end result is that the database - contains Tom Jones not Joe Jones. + contains Tom Jones not Joe Jones. - A block containing an EXCEPTION clause is significantly + A block containing an EXCEPTION clause is significantly more expensive to enter and exit than a block without one. Therefore, - don't use EXCEPTION without need. + don't use EXCEPTION without need. - Exceptions with <command>UPDATE</>/<command>INSERT</> + Exceptions with <command>UPDATE</command>/<command>INSERT</command> This example uses exception handling to perform either - UPDATE or INSERT, as appropriate. It is - recommended that applications use INSERT with - ON CONFLICT DO UPDATE rather than actually using + UPDATE or INSERT, as appropriate. It is + recommended that applications use INSERT with + ON CONFLICT DO UPDATE rather than actually using this pattern. This example serves primarily to illustrate use of PL/pgSQL control flow structures: @@ -2705,8 +2705,8 @@ SELECT merge_db(1, 'david'); SELECT merge_db(1, 'dennis'); - This coding assumes the unique_violation error is caused by - the INSERT, and not by, say, an INSERT in a + This coding assumes the unique_violation error is caused by + the INSERT, and not by, say, an INSERT in a trigger function on the table. It might also misbehave if there is more than one unique index on the table, since it will retry the operation regardless of which index caused the error. @@ -2722,7 +2722,7 @@ SELECT merge_db(1, 'dennis'); Exception handlers frequently need to identify the specific error that occurred. There are two ways to get information about the current - exception in PL/pgSQL: special variables and the + exception in PL/pgSQL: special variables and the GET STACKED DIAGNOSTICS command. @@ -2764,52 +2764,52 @@ GET STACKED DIAGNOSTICS variable { = | := } RETURNED_SQLSTATE - text + text the SQLSTATE error code of the exception COLUMN_NAME - text + text the name of the column related to exception CONSTRAINT_NAME - text + text the name of the constraint related to exception PG_DATATYPE_NAME - text + text the name of the data type related to exception MESSAGE_TEXT - text + text the text of the exception's primary message TABLE_NAME - text + text the name of the table related to exception SCHEMA_NAME - text + text the name of the schema related to exception PG_EXCEPTION_DETAIL - text + text the text of the exception's detail message, if any PG_EXCEPTION_HINT - text + text the text of the exception's hint message, if any PG_EXCEPTION_CONTEXT - text + text line(s) of text describing the call stack at the time of the exception (see ) @@ -2850,9 +2850,9 @@ END; in , retrieves information about current execution state (whereas the GET STACKED DIAGNOSTICS command discussed above reports information about - the execution state as of a previous error). Its PG_CONTEXT + the execution state as of a previous error). Its PG_CONTEXT status item is useful for identifying the current execution - location. 
PG_CONTEXT returns a text string with line(s) + location. PG_CONTEXT returns a text string with line(s) of text describing the call stack. The first line refers to the current function and currently executing GET DIAGNOSTICS command. The second and any subsequent lines refer to calling functions @@ -2907,11 +2907,11 @@ CONTEXT: PL/pgSQL function outer_func() line 3 at RETURN Rather than executing a whole query at once, it is possible to set - up a cursor that encapsulates the query, and then read + up a cursor that encapsulates the query, and then read the query result a few rows at a time. One reason for doing this is to avoid memory overrun when the result contains a large number of - rows. (However, PL/pgSQL users do not normally need - to worry about that, since FOR loops automatically use a cursor + rows. (However, PL/pgSQL users do not normally need + to worry about that, since FOR loops automatically use a cursor internally to avoid memory problems.) A more interesting usage is to return a reference to a cursor that a function has created, allowing the caller to read the rows. This provides an efficient way to return @@ -2922,19 +2922,19 @@ CONTEXT: PL/pgSQL function outer_func() line 3 at RETURN Declaring Cursor Variables - All access to cursors in PL/pgSQL goes through + All access to cursors in PL/pgSQL goes through cursor variables, which are always of the special data type - refcursor. One way to create a cursor variable - is just to declare it as a variable of type refcursor. + refcursor. One way to create a cursor variable + is just to declare it as a variable of type refcursor. Another way is to use the cursor declaration syntax, which in general is: name NO SCROLL CURSOR ( arguments ) FOR query; - (FOR can be replaced by IS for + (FOR can be replaced by IS for Oracle compatibility.) - If SCROLL is specified, the cursor will be capable of - scrolling backward; if NO SCROLL is specified, backward + If SCROLL is specified, the cursor will be capable of + scrolling backward; if NO SCROLL is specified, backward fetches will be rejected; if neither specification appears, it is query-dependent whether backward fetches will be allowed. arguments, if specified, is a @@ -2952,13 +2952,13 @@ DECLARE curs2 CURSOR FOR SELECT * FROM tenk1; curs3 CURSOR (key integer) FOR SELECT * FROM tenk1 WHERE unique1 = key; - All three of these variables have the data type refcursor, + All three of these variables have the data type refcursor, but the first can be used with any query, while the second has - a fully specified query already bound to it, and the last - has a parameterized query bound to it. (key will be + a fully specified query already bound to it, and the last + has a parameterized query bound to it. (key will be replaced by an integer parameter value when the cursor is opened.) - The variable curs1 - is said to be unbound since it is not bound to + The variable curs1 + is said to be unbound since it is not bound to any particular query. @@ -2968,16 +2968,16 @@ DECLARE Before a cursor can be used to retrieve rows, it must be - opened. (This is the equivalent action to the SQL - command DECLARE CURSOR.) PL/pgSQL has - three forms of the OPEN statement, two of which use unbound + opened. (This is the equivalent action to the SQL + command DECLARE CURSOR.) PL/pgSQL has + three forms of the OPEN statement, two of which use unbound cursor variables while the third uses a bound cursor variable. 
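     As a quick preview before the detailed descriptions below, a sketch of the
     three forms (curs1 and curs3 are the declarations from the example above;
     tabname is a hypothetical text variable):

OPEN curs1 FOR SELECT * FROM tenk1 WHERE unique1 = 7;              -- unbound variable, static query
OPEN curs1 FOR EXECUTE 'SELECT * FROM ' || quote_ident(tabname);   -- unbound variable, dynamic query
OPEN curs3(42);                                                    -- bound cursor, passing an argument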
Bound cursor variables can also be used without explicitly opening the cursor, - via the FOR statement described in + via the FOR statement described in . @@ -2993,18 +2993,18 @@ OPEN unbound_cursorvar NO refcursor variable). The query must be a + refcursor variable). The query must be a SELECT, or something else that returns rows - (such as EXPLAIN). The query + (such as EXPLAIN). The query is treated in the same way as other SQL commands in - PL/pgSQL: PL/pgSQL + PL/pgSQL: PL/pgSQL variable names are substituted, and the query plan is cached for - possible reuse. When a PL/pgSQL + possible reuse. When a PL/pgSQL variable is substituted into the cursor query, the value that is - substituted is the one it has at the time of the OPEN; + substituted is the one it has at the time of the OPEN; subsequent changes to the variable will not affect the cursor's behavior. - The SCROLL and NO SCROLL + The SCROLL and NO SCROLL options have the same meanings as for a bound cursor. @@ -3028,16 +3028,16 @@ OPEN unbound_cursorvar NO refcursor variable). The query is specified as a string + refcursor variable). The query is specified as a string expression, in the same way as in the EXECUTE command. As usual, this gives flexibility so the query plan can vary from one run to the next (see ), and it also means that variable substitution is not done on the command string. As with EXECUTE, parameter values can be inserted into the dynamic command via - format() and USING. - The SCROLL and - NO SCROLL options have the same meanings as for a bound + format() and USING. + The SCROLL and + NO SCROLL options have the same meanings as for a bound cursor. @@ -3047,8 +3047,8 @@ OPEN unbound_cursorvar NO In this example, the table name is inserted into the query via - format(). The comparison value for col1 - is inserted via a USING parameter, so it needs + format(). The comparison value for col1 + is inserted via a USING parameter, so it needs no quoting. @@ -3071,8 +3071,8 @@ OPEN bound_cursorvar ( The query plan for a bound cursor is always considered cacheable; there is no equivalent of EXECUTE in this case. - Notice that SCROLL and NO SCROLL cannot be - specified in OPEN, as the cursor's scrolling + Notice that SCROLL and NO SCROLL cannot be + specified in OPEN, as the cursor's scrolling behavior was already determined. @@ -3098,13 +3098,13 @@ OPEN curs3(key := 42); Because variable substitution is done on a bound cursor's query, there are really two ways to pass values into the cursor: either - with an explicit argument to OPEN, or implicitly by - referencing a PL/pgSQL variable in the query. + with an explicit argument to OPEN, or implicitly by + referencing a PL/pgSQL variable in the query. However, only variables declared before the bound cursor was declared will be substituted into it. In either case the value to - be passed is determined at the time of the OPEN. + be passed is determined at the time of the OPEN. For example, another way to get the same effect as the - curs3 example above is + curs3 example above is DECLARE key integer; @@ -3127,22 +3127,22 @@ BEGIN These manipulations need not occur in the same function that - opened the cursor to begin with. You can return a refcursor + opened the cursor to begin with. You can return a refcursor value out of a function and let the caller operate on the cursor. - (Internally, a refcursor value is simply the string name + (Internally, a refcursor value is simply the string name of a so-called portal containing the active query for the cursor. 
This name - can be passed around, assigned to other refcursor variables, + can be passed around, assigned to other refcursor variables, and so on, without disturbing the portal.) All portals are implicitly closed at transaction end. Therefore - a refcursor value is usable to reference an open cursor + a refcursor value is usable to reference an open cursor only until the end of the transaction. - <literal>FETCH</> + <literal>FETCH</literal> FETCH direction { FROM | IN } cursor INTO target; @@ -3163,23 +3163,23 @@ FETCH direction { FROM | IN } variants allowed in the SQL command except the ones that can fetch more than one row; namely, it can be - NEXT, - PRIOR, - FIRST, - LAST, - ABSOLUTE count, - RELATIVE count, - FORWARD, or - BACKWARD. + NEXT, + PRIOR, + FIRST, + LAST, + ABSOLUTE count, + RELATIVE count, + FORWARD, or + BACKWARD. Omitting direction is the same - as specifying NEXT. + as specifying NEXT. direction values that require moving backward are likely to fail unless the cursor was declared or opened - with the SCROLL option. + with the SCROLL option. - cursor must be the name of a refcursor + cursor must be the name of a refcursor variable that references an open cursor portal. @@ -3195,7 +3195,7 @@ FETCH RELATIVE -2 FROM curs4 INTO x; - <literal>MOVE</> + <literal>MOVE</literal> MOVE direction { FROM | IN } cursor; @@ -3214,20 +3214,20 @@ MOVE direction { FROM | IN } < The direction clause can be any of the variants allowed in the SQL command, namely - NEXT, - PRIOR, - FIRST, - LAST, - ABSOLUTE count, - RELATIVE count, - ALL, - FORWARD count | ALL , or - BACKWARD count | ALL . + NEXT, + PRIOR, + FIRST, + LAST, + ABSOLUTE count, + RELATIVE count, + ALL, + FORWARD count | ALL , or + BACKWARD count | ALL . Omitting direction is the same - as specifying NEXT. + as specifying NEXT. direction values that require moving backward are likely to fail unless the cursor was declared or opened - with the SCROLL option. + with the SCROLL option. @@ -3242,7 +3242,7 @@ MOVE FORWARD 2 FROM curs4; - <literal>UPDATE/DELETE WHERE CURRENT OF</> + <literal>UPDATE/DELETE WHERE CURRENT OF</literal> UPDATE table SET ... WHERE CURRENT OF cursor; @@ -3253,7 +3253,7 @@ DELETE FROM table WHERE CURRENT OF curso When a cursor is positioned on a table row, that row can be updated or deleted using the cursor to identify the row. There are restrictions on what the cursor's query can be (in particular, - no grouping) and it's best to use FOR UPDATE in the + no grouping) and it's best to use FOR UPDATE in the cursor. For more information see the reference page. @@ -3268,7 +3268,7 @@ UPDATE foo SET dataval = myval WHERE CURRENT OF curs1; - <literal>CLOSE</> + <literal>CLOSE</literal> CLOSE cursor; @@ -3292,7 +3292,7 @@ CLOSE curs1; Returning Cursors - PL/pgSQL functions can return cursors to the + PL/pgSQL functions can return cursors to the caller. This is useful to return multiple rows or columns, especially with very large result sets. To do this, the function opens the cursor and returns the cursor name to the caller (or simply @@ -3305,13 +3305,13 @@ CLOSE curs1; The portal name used for a cursor can be specified by the programmer or automatically generated. To specify a portal name, - simply assign a string to the refcursor variable before - opening it. The string value of the refcursor variable - will be used by OPEN as the name of the underlying portal. 
- However, if the refcursor variable is null, - OPEN automatically generates a name that does not + simply assign a string to the refcursor variable before + opening it. The string value of the refcursor variable + will be used by OPEN as the name of the underlying portal. + However, if the refcursor variable is null, + OPEN automatically generates a name that does not conflict with any existing portal, and assigns it to the - refcursor variable. + refcursor variable. @@ -3405,7 +3405,7 @@ COMMIT; Looping Through a Cursor's Result - There is a variant of the FOR statement that allows + There is a variant of the FOR statement that allows iterating through the rows returned by a cursor. The syntax is: @@ -3416,18 +3416,18 @@ END LOOP label ; The cursor variable must have been bound to some query when it was - declared, and it cannot be open already. The - FOR statement automatically opens the cursor, and it closes + declared, and it cannot be open already. The + FOR statement automatically opens the cursor, and it closes the cursor again when the loop exits. A list of actual argument value expressions must appear if and only if the cursor was declared to take arguments. These values will be substituted in the query, in just - the same way as during an OPEN (see OPEN (see ). The variable recordvar is automatically - defined as type record and exists only inside the loop (any + defined as type record and exists only inside the loop (any existing definition of the variable name is ignored within the loop). Each row returned by the cursor is successively assigned to this record variable and the loop body is executed. @@ -3458,8 +3458,8 @@ END LOOP label ; RAISE level 'format' , expression , ... USING option = expression , ... ; -RAISE level condition_name USING option = expression , ... ; -RAISE level SQLSTATE 'sqlstate' USING option = expression , ... ; +RAISE level condition_name USING option = expression , ... ; +RAISE level SQLSTATE 'sqlstate' USING option = expression , ... ; RAISE level USING option = expression , ... ; RAISE ; @@ -3491,13 +3491,13 @@ RAISE ; Inside the format string, % is replaced by the string representation of the next optional argument's value. Write %% to emit a literal %. - The number of arguments must match the number of % + The number of arguments must match the number of % placeholders in the format string, or an error is raised during the compilation of the function. - In this example, the value of v_job_id will replace the + In this example, the value of v_job_id will replace the % in the string: RAISE NOTICE 'Calling cs_create_job(%)', v_job_id; @@ -3506,7 +3506,7 @@ RAISE NOTICE 'Calling cs_create_job(%)', v_job_id; You can attach additional information to the error report by writing - USING followed by USING followed by option = expression items. Each expression can be any @@ -3518,8 +3518,8 @@ RAISE NOTICE 'Calling cs_create_job(%)', v_job_id; MESSAGE Sets the error message text. This option can't be used in the - form of RAISE that includes a format string - before USING. + form of RAISE that includes a format string + before USING. 
@@ -3577,13 +3577,13 @@ RAISE 'Duplicate user ID: %', user_id USING ERRCODE = '23505'; - There is a second RAISE syntax in which the main argument + There is a second RAISE syntax in which the main argument is the condition name or SQLSTATE to be reported, for example: RAISE division_by_zero; RAISE SQLSTATE '22012'; - In this syntax, USING can be used to supply a custom + In this syntax, USING can be used to supply a custom error message, detail, or hint. Another way to do the earlier example is @@ -3592,25 +3592,25 @@ RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id; - Still another variant is to write RAISE USING or RAISE - level USING and put - everything else into the USING list. + Still another variant is to write RAISE USING or RAISE + level USING and put + everything else into the USING list. - The last variant of RAISE has no parameters at all. - This form can only be used inside a BEGIN block's - EXCEPTION clause; + The last variant of RAISE has no parameters at all. + This form can only be used inside a BEGIN block's + EXCEPTION clause; it causes the error currently being handled to be re-thrown. - Before PostgreSQL 9.1, RAISE without + Before PostgreSQL 9.1, RAISE without parameters was interpreted as re-throwing the error from the block - containing the active exception handler. Thus an EXCEPTION + containing the active exception handler. Thus an EXCEPTION clause nested within that handler could not catch it, even if the - RAISE was within the nested EXCEPTION clause's + RAISE was within the nested EXCEPTION clause's block. This was deemed surprising as well as being incompatible with Oracle's PL/SQL. @@ -3619,7 +3619,7 @@ RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id; If no condition name nor SQLSTATE is specified in a RAISE EXCEPTION command, the default is to use - RAISE_EXCEPTION (P0001). If no message + RAISE_EXCEPTION (P0001). If no message text is specified, the default is to use the condition name or SQLSTATE as message text. @@ -3629,7 +3629,7 @@ RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id; When specifying an error code by SQLSTATE code, you are not limited to the predefined error codes, but can select any error code consisting of five digits and/or upper-case ASCII - letters, other than 00000. It is recommended that + letters, other than 00000. It is recommended that you avoid throwing error codes that end in three zeroes, because these are category codes and can only be trapped by trapping the whole category. @@ -3652,7 +3652,7 @@ RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id; - plpgsql.check_asserts configuration parameter + plpgsql.check_asserts configuration parameter @@ -3667,7 +3667,7 @@ ASSERT condition , condition is a Boolean expression that is expected to always evaluate to true; if it does, the ASSERT statement does nothing further. If the - result is false or null, then an ASSERT_FAILURE exception + result is false or null, then an ASSERT_FAILURE exception is raised. (If an error occurs while evaluating the condition, it is reported as a normal error.) @@ -3676,7 +3676,7 @@ ASSERT condition , If the optional message is provided, it is an expression whose result (if not null) replaces the - default error message text assertion failed, should + default error message text assertion failed, should the condition fail. The message expression is not evaluated in the normal case where the assertion succeeds. 
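     For example, a minimal sketch (the accounts table and its balance column
     are hypothetical):

DO $$
DECLARE
    bad_rows bigint;
BEGIN
    SELECT count(*) INTO bad_rows FROM accounts WHERE balance < 0;
    ASSERT bad_rows = 0,
           format('%s account(s) have a negative balance', bad_rows);
END;
$$;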
@@ -3684,15 +3684,15 @@ ASSERT condition , Testing of assertions can be enabled or disabled via the configuration - parameter plpgsql.check_asserts, which takes a Boolean - value; the default is on. If this parameter - is off then ASSERT statements do nothing. + parameter plpgsql.check_asserts, which takes a Boolean + value; the default is on. If this parameter + is off then ASSERT statements do nothing. Note that ASSERT is meant for detecting program bugs, not for reporting ordinary error conditions. Use - the RAISE statement, described above, for that. + the RAISE statement, described above, for that. @@ -3710,11 +3710,11 @@ ASSERT condition , PL/pgSQL can be used to define trigger procedures on data changes or database events. - A trigger procedure is created with the CREATE FUNCTION + A trigger procedure is created with the CREATE FUNCTION command, declaring it as a function with no arguments and a return type of - trigger (for data change triggers) or - event_trigger (for database event triggers). - Special local variables named PG_something are + trigger (for data change triggers) or + event_trigger (for database event triggers). + Special local variables named PG_something are automatically defined to describe the condition that triggered the call. @@ -3722,11 +3722,11 @@ ASSERT condition , Triggers on Data Changes - A data change trigger is declared as a - function with no arguments and a return type of trigger. + A data change trigger is declared as a + function with no arguments and a return type of trigger. Note that the function must be declared with no arguments even if it - expects to receive some arguments specified in CREATE TRIGGER - — such arguments are passed via TG_ARGV, as described + expects to receive some arguments specified in CREATE TRIGGER + — such arguments are passed via TG_ARGV, as described below. @@ -3741,7 +3741,7 @@ ASSERT condition , Data type RECORD; variable holding the new - database row for INSERT/UPDATE operations in row-level + database row for INSERT/UPDATE operations in row-level triggers. This variable is unassigned in statement-level triggers and for DELETE operations. @@ -3753,7 +3753,7 @@ ASSERT condition , Data type RECORD; variable holding the old - database row for UPDATE/DELETE operations in row-level + database row for UPDATE/DELETE operations in row-level triggers. This variable is unassigned in statement-level triggers and for INSERT operations. @@ -3798,7 +3798,7 @@ ASSERT condition , Data type text; a string of INSERT, UPDATE, - DELETE, or TRUNCATE + DELETE, or TRUNCATE telling for which operation the trigger was fired. @@ -3820,7 +3820,7 @@ ASSERT condition , Data type name; the name of the table that caused the trigger invocation. This is now deprecated, and could disappear in a future - release. Use TG_TABLE_NAME instead. + release. Use TG_TABLE_NAME instead. @@ -3862,7 +3862,7 @@ ASSERT condition , text; the arguments from the CREATE TRIGGER statement. The index counts from 0. Invalid - indexes (less than 0 or greater than or equal to tg_nargs) + indexes (less than 0 or greater than or equal to tg_nargs) result in a null value. @@ -3877,20 +3877,20 @@ ASSERT condition , - Row-level triggers fired BEFORE can return null to signal the + Row-level triggers fired BEFORE can return null to signal the trigger manager to skip the rest of the operation for this row (i.e., subsequent triggers are not fired, and the - INSERT/UPDATE/DELETE does not occur + INSERT/UPDATE/DELETE does not occur for this row). 
If a nonnull value is returned then the operation proceeds with that row value. Returning a row value different from the original value - of NEW alters the row that will be inserted or + of NEW alters the row that will be inserted or updated. Thus, if the trigger function wants the triggering action to succeed normally without altering the row value, NEW (or a value equal thereto) has to be returned. To alter the row to be stored, it is possible to - replace single values directly in NEW and return the - modified NEW, or to build a complete new record/row to + replace single values directly in NEW and return the + modified NEW, or to build a complete new record/row to return. In the case of a before-trigger on DELETE, the returned value has no direct effect, but it has to be nonnull to allow the trigger action to @@ -3901,28 +3901,28 @@ ASSERT condition , - INSTEAD OF triggers (which are always row-level triggers, + INSTEAD OF triggers (which are always row-level triggers, and may only be used on views) can return null to signal that they did not perform any updates, and that the rest of the operation for this row should be skipped (i.e., subsequent triggers are not fired, and the row is not counted in the rows-affected status for the surrounding - INSERT/UPDATE/DELETE). + INSERT/UPDATE/DELETE). Otherwise a nonnull value should be returned, to signal that the trigger performed the requested operation. For - INSERT and UPDATE operations, the return value - should be NEW, which the trigger function may modify to - support INSERT RETURNING and UPDATE RETURNING + INSERT and UPDATE operations, the return value + should be NEW, which the trigger function may modify to + support INSERT RETURNING and UPDATE RETURNING (this will also affect the row value passed to any subsequent triggers, - or passed to a special EXCLUDED alias reference within - an INSERT statement with an ON CONFLICT DO - UPDATE clause). For DELETE operations, the return - value should be OLD. + or passed to a special EXCLUDED alias reference within + an INSERT statement with an ON CONFLICT DO + UPDATE clause). For DELETE operations, the return + value should be OLD. The return value of a row-level trigger fired AFTER or a statement-level trigger - fired BEFORE or AFTER is + fired BEFORE or AFTER is always ignored; it might as well be null. However, any of these types of triggers might still abort the entire operation by raising an error. @@ -4267,9 +4267,9 @@ SELECT * FROM sales_summary_bytime; - AFTER triggers can also make use of transition - tables to inspect the entire set of rows changed by the triggering - statement. The CREATE TRIGGER command assigns names to one + AFTER triggers can also make use of transition + tables to inspect the entire set of rows changed by the triggering + statement. The CREATE TRIGGER command assigns names to one or both transition tables, and then the function can refer to those names as though they were read-only temporary tables. shows an example. @@ -4286,10 +4286,10 @@ SELECT * FROM sales_summary_bytime; table. This can be significantly faster than the row-trigger approach when the invoking statement has modified many rows. Notice that we must make a separate trigger declaration for each kind of event, since the - REFERENCING clauses must be different for each case. But + REFERENCING clauses must be different for each case. But this does not stop us from using a single trigger function if we choose. 
(In practice, it might be better to use three separate functions and - avoid the run-time tests on TG_OP.) + avoid the run-time tests on TG_OP.) @@ -4348,10 +4348,10 @@ CREATE TRIGGER emp_audit_del PL/pgSQL can be used to define - event triggers. - PostgreSQL requires that a procedure that + event triggers. + PostgreSQL requires that a procedure that is to be called as an event trigger must be declared as a function with - no arguments and a return type of event_trigger. + no arguments and a return type of event_trigger. @@ -4410,29 +4410,29 @@ CREATE EVENT TRIGGER snitch ON ddl_command_start EXECUTE PROCEDURE snitch(); - <application>PL/pgSQL</> Under the Hood + <application>PL/pgSQL</application> Under the Hood This section discusses some implementation details that are - frequently important for PL/pgSQL users to know. + frequently important for PL/pgSQL users to know. Variable Substitution - SQL statements and expressions within a PL/pgSQL function + SQL statements and expressions within a PL/pgSQL function can refer to variables and parameters of the function. Behind the scenes, - PL/pgSQL substitutes query parameters for such references. + PL/pgSQL substitutes query parameters for such references. Parameters will only be substituted in places where a parameter or column reference is syntactically allowed. As an extreme case, consider this example of poor programming style: INSERT INTO foo (foo) VALUES (foo); - The first occurrence of foo must syntactically be a table + The first occurrence of foo must syntactically be a table name, so it will not be substituted, even if the function has a variable - named foo. The second occurrence must be the name of a + named foo. The second occurrence must be the name of a column of the table, so it will not be substituted either. Only the third occurrence is a candidate to be a reference to the function's variable. @@ -4453,18 +4453,18 @@ INSERT INTO foo (foo) VALUES (foo); INSERT INTO dest (col) SELECT foo + bar FROM src; - Here, dest and src must be table names, and - col must be a column of dest, but foo - and bar might reasonably be either variables of the function - or columns of src. + Here, dest and src must be table names, and + col must be a column of dest, but foo + and bar might reasonably be either variables of the function + or columns of src. - By default, PL/pgSQL will report an error if a name + By default, PL/pgSQL will report an error if a name in a SQL statement could refer to either a variable or a table column. You can fix such a problem by renaming the variable or column, or by qualifying the ambiguous reference, or by telling - PL/pgSQL which interpretation to prefer. + PL/pgSQL which interpretation to prefer. @@ -4473,13 +4473,13 @@ INSERT INTO dest (col) SELECT foo + bar FROM src; different naming convention for PL/pgSQL variables than you use for column names. For example, if you consistently name function variables - v_something while none of your - column names start with v_, no conflicts will occur. + v_something while none of your + column names start with v_, no conflicts will occur. Alternatively you can qualify ambiguous references to make them clear. - In the above example, src.foo would be an unambiguous reference + In the above example, src.foo would be an unambiguous reference to the table column. To create an unambiguous reference to a variable, declare it in a labeled block and use the block's label (see ). 
For example, @@ -4491,37 +4491,37 @@ BEGIN foo := ...; INSERT INTO dest (col) SELECT block.foo + bar FROM src; - Here block.foo means the variable even if there is a column - foo in src. Function parameters, as well as - special variables such as FOUND, can be qualified by the + Here block.foo means the variable even if there is a column + foo in src. Function parameters, as well as + special variables such as FOUND, can be qualified by the function's name, because they are implicitly declared in an outer block labeled with the function's name. Sometimes it is impractical to fix all the ambiguous references in a - large body of PL/pgSQL code. In such cases you can - specify that PL/pgSQL should resolve ambiguous references - as the variable (which is compatible with PL/pgSQL's + large body of PL/pgSQL code. In such cases you can + specify that PL/pgSQL should resolve ambiguous references + as the variable (which is compatible with PL/pgSQL's behavior before PostgreSQL 9.0), or as the table column (which is compatible with some other systems such as Oracle). - plpgsql.variable_conflict configuration parameter + plpgsql.variable_conflict configuration parameter To change this behavior on a system-wide basis, set the configuration - parameter plpgsql.variable_conflict to one of - error, use_variable, or - use_column (where error is the factory default). + parameter plpgsql.variable_conflict to one of + error, use_variable, or + use_column (where error is the factory default). This parameter affects subsequent compilations - of statements in PL/pgSQL functions, but not statements + of statements in PL/pgSQL functions, but not statements already compiled in the current session. Because changing this setting - can cause unexpected changes in the behavior of PL/pgSQL + can cause unexpected changes in the behavior of PL/pgSQL functions, it can only be changed by a superuser. @@ -4535,7 +4535,7 @@ BEGIN #variable_conflict use_column These commands affect only the function they are written in, and override - the setting of plpgsql.variable_conflict. An example is + the setting of plpgsql.variable_conflict. An example is CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$ #variable_conflict use_variable @@ -4547,15 +4547,15 @@ CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$ END; $$ LANGUAGE plpgsql; - In the UPDATE command, curtime, comment, - and id will refer to the function's variable and parameters - whether or not users has columns of those names. Notice - that we had to qualify the reference to users.id in the - WHERE clause to make it refer to the table column. - But we did not have to qualify the reference to comment - as a target in the UPDATE list, because syntactically - that must be a column of users. We could write the same - function without depending on the variable_conflict setting + In the UPDATE command, curtime, comment, + and id will refer to the function's variable and parameters + whether or not users has columns of those names. Notice + that we had to qualify the reference to users.id in the + WHERE clause to make it refer to the table column. + But we did not have to qualify the reference to comment + as a target in the UPDATE list, because syntactically + that must be a column of users. 
We could write the same + function without depending on the variable_conflict setting in this way: CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$ @@ -4572,19 +4572,19 @@ $$ LANGUAGE plpgsql; Variable substitution does not happen in the command string given - to EXECUTE or one of its variants. If you need to + to EXECUTE or one of its variants. If you need to insert a varying value into such a command, do so as part of - constructing the string value, or use USING, as illustrated in + constructing the string value, or use USING, as illustrated in . - Variable substitution currently works only in SELECT, - INSERT, UPDATE, and DELETE commands, + Variable substitution currently works only in SELECT, + INSERT, UPDATE, and DELETE commands, because the main SQL engine allows query parameters only in these commands. To use a non-constant name or value in other statement types (generically called utility statements), you must construct - the utility statement as a string and EXECUTE it. + the utility statement as a string and EXECUTE it. @@ -4593,22 +4593,22 @@ $$ LANGUAGE plpgsql; Plan Caching - The PL/pgSQL interpreter parses the function's source + The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session). The instruction tree fully translates the - PL/pgSQL statement structure, but individual + PL/pgSQL statement structure, but individual SQL expressions and SQL commands used in the function are not translated immediately. - preparing a query - in PL/pgSQL + preparing a query + in PL/pgSQL As each expression and SQL command is first - executed in the function, the PL/pgSQL interpreter + executed in the function, the PL/pgSQL interpreter parses and analyzes the command to create a prepared statement, using the SPI manager's SPI_prepare function. @@ -4624,17 +4624,17 @@ $$ LANGUAGE plpgsql; - PL/pgSQL (or more precisely, the SPI manager) can + PL/pgSQL (or more precisely, the SPI manager) can furthermore attempt to cache the execution plan associated with any particular prepared statement. If a cached plan is not used, then a fresh execution plan is generated on each visit to the statement, - and the current parameter values (that is, PL/pgSQL + and the current parameter values (that is, PL/pgSQL variable values) can be used to optimize the selected plan. If the statement has no parameters, or is executed many times, the SPI manager - will consider creating a generic plan that is not dependent + will consider creating a generic plan that is not dependent on specific parameter values, and caching that for re-use. Typically this will happen only if the execution plan is not very sensitive to - the values of the PL/pgSQL variables referenced in it. + the values of the PL/pgSQL variables referenced in it. If it is, generating a plan each time is a net win. See for more information about the behavior of prepared statements. @@ -4670,7 +4670,7 @@ $$ LANGUAGE plpgsql; for each trigger function and table combination, not just for each function. This alleviates some of the problems with varying data types; for instance, a trigger function will be able to work - successfully with a column named key even if it happens + successfully with a column named key even if it happens to have different types in different tables. @@ -4720,8 +4720,8 @@ $$ LANGUAGE plpgsql; INSERT is analyzed, and then used in all invocations of logfunc1 during the lifetime of the session. 
Needless to say, this isn't what the programmer - wanted. A better idea is to use the now() or - current_timestamp function. + wanted. A better idea is to use the now() or + current_timestamp function. @@ -4737,7 +4737,7 @@ $$ LANGUAGE plpgsql; functions for the conversion. So, the computed time stamp is updated on each execution as the programmer expects. Even though this happens to work as expected, it's not terribly efficient, so - use of the now() function would still be a better idea. + use of the now() function would still be a better idea. @@ -4749,12 +4749,12 @@ $$ LANGUAGE plpgsql; One good way to develop in - PL/pgSQL is to use the text editor of your + PL/pgSQL is to use the text editor of your choice to create your functions, and in another window, use psql to load and test those functions. If you are doing it this way, it is a good idea to write the function using CREATE OR - REPLACE FUNCTION. That way you can just reload the file to update + REPLACE FUNCTION. That way you can just reload the file to update the function definition. For example: CREATE OR REPLACE FUNCTION testfunc(integer) RETURNS integer AS $$ @@ -4773,10 +4773,10 @@ $$ LANGUAGE plpgsql; - Another good way to develop in PL/pgSQL is with a + Another good way to develop in PL/pgSQL is with a GUI database access tool that facilitates development in a procedural language. One example of such a tool is - pgAdmin, although others exist. These tools often + pgAdmin, although others exist. These tools often provide convenient features such as escaping single quotes and making it easier to recreate and debug functions. @@ -4785,7 +4785,7 @@ $$ LANGUAGE plpgsql; Handling of Quotation Marks - The code of a PL/pgSQL function is specified in + The code of a PL/pgSQL function is specified in CREATE FUNCTION as a string literal. If you write the string literal in the ordinary way with surrounding single quotes, then any single quotes inside the function body @@ -4795,7 +4795,7 @@ $$ LANGUAGE plpgsql; the code can become downright incomprehensible, because you can easily find yourself needing half a dozen or more adjacent quote marks. It's recommended that you instead write the function body as a - dollar-quoted string literal (see dollar-quoted string literal (see ). In the dollar-quoting approach, you never double any quote marks, but instead take care to choose a different dollar-quoting delimiter for each level of @@ -4807,9 +4807,9 @@ CREATE OR REPLACE FUNCTION testfunc(integer) RETURNS integer AS $PROC$ $PROC$ LANGUAGE plpgsql; Within this, you might use quote marks for simple literal strings in - SQL commands and $$ to delimit fragments of SQL commands + SQL commands and $$ to delimit fragments of SQL commands that you are assembling as strings. If you need to quote text that - includes $$, you could use $Q$, and so on. + includes $$, you could use $Q$, and so on. @@ -4830,7 +4830,7 @@ CREATE FUNCTION foo() RETURNS integer AS ' ' LANGUAGE plpgsql; Anywhere within a single-quoted function body, quote marks - must appear in pairs. + must appear in pairs. @@ -4849,7 +4849,7 @@ SELECT * FROM users WHERE f_name=''foobar''; a_output := 'Blah'; SELECT * FROM users WHERE f_name='foobar'; - which is exactly what the PL/pgSQL parser would see + which is exactly what the PL/pgSQL parser would see in either case. 
@@ -4873,7 +4873,7 @@ a_output := a_output || '' AND name LIKE ''''foobar'''' AND xyz'' a_output := a_output || $$ AND name LIKE 'foobar' AND xyz$$ being careful that any dollar-quote delimiters around this are not - just $$. + just $$. @@ -4942,20 +4942,20 @@ a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$ To aid the user in finding instances of simple but common problems before - they cause harm, PL/pgSQL provides additional - checks. When enabled, depending on the configuration, they - can be used to emit either a WARNING or an ERROR + they cause harm, PL/pgSQL provides additional + checks. When enabled, depending on the configuration, they + can be used to emit either a WARNING or an ERROR during the compilation of a function. A function which has received - a WARNING can be executed without producing further messages, + a WARNING can be executed without producing further messages, so you are advised to test in a separate development environment. These additional checks are enabled through the configuration variables - plpgsql.extra_warnings for warnings and - plpgsql.extra_errors for errors. Both can be set either to - a comma-separated list of checks, "none" or "all". - The default is "none". Currently the list of available checks + plpgsql.extra_warnings for warnings and + plpgsql.extra_errors for errors. Both can be set either to + a comma-separated list of checks, "none" or "all". + The default is "none". Currently the list of available checks includes only one: @@ -4968,8 +4968,8 @@ a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$ - The following example shows the effect of plpgsql.extra_warnings - set to shadowed_variables: + The following example shows the effect of plpgsql.extra_warnings + set to shadowed_variables: SET plpgsql.extra_warnings TO 'shadowed_variables'; @@ -5006,10 +5006,10 @@ CREATE FUNCTION This section explains differences between - PostgreSQL's PL/pgSQL + PostgreSQL's PL/pgSQL language and Oracle's PL/SQL language, to help developers who port applications from - Oracle to PostgreSQL. + Oracle to PostgreSQL. @@ -5017,7 +5017,7 @@ CREATE FUNCTION aspects. It is a block-structured, imperative language, and all variables have to be declared. Assignments, loops, conditionals are similar. The main differences you should keep in mind when - porting from PL/SQL to + porting from PL/SQL to PL/pgSQL are: @@ -5025,21 +5025,21 @@ CREATE FUNCTION If a name used in a SQL command could be either a column name of a table or a reference to a variable of the function, - PL/SQL treats it as a column name. This corresponds - to PL/pgSQL's - plpgsql.variable_conflict = use_column + PL/SQL treats it as a column name. This corresponds + to PL/pgSQL's + plpgsql.variable_conflict = use_column behavior, which is not the default, as explained in . It's often best to avoid such ambiguities in the first place, but if you have to port a large amount of code that depends on - this behavior, setting variable_conflict may be the + this behavior, setting variable_conflict may be the best solution. - In PostgreSQL the function body must be written as + In PostgreSQL the function body must be written as a string literal. Therefore you need to use dollar quoting or escape single quotes in the function body. (See .) @@ -5049,10 +5049,10 @@ CREATE FUNCTION Data type names often need translation. 
For example, in Oracle string - values are commonly declared as being of type varchar2, which + values are commonly declared as being of type varchar2, which is a non-SQL-standard type. In PostgreSQL, - use type varchar or text instead. Similarly, replace - type number with numeric, or use some other numeric + use type varchar or text instead. Similarly, replace + type number with numeric, or use some other numeric data type if there's a more appropriate one. @@ -5074,9 +5074,9 @@ CREATE FUNCTION - Integer FOR loops with REVERSE work - differently: PL/SQL counts down from the second - number to the first, while PL/pgSQL counts down + Integer FOR loops with REVERSE work + differently: PL/SQL counts down from the second + number to the first, while PL/pgSQL counts down from the first number to the second, requiring the loop bounds to be swapped when porting. This incompatibility is unfortunate but is unlikely to be changed. (See - FOR loops over queries (other than cursors) also work + FOR loops over queries (other than cursors) also work differently: the target variable(s) must have been declared, - whereas PL/SQL always declares them implicitly. + whereas PL/SQL always declares them implicitly. An advantage of this is that the variable values are still accessible after the loop exits. @@ -5109,14 +5109,14 @@ CREATE FUNCTION shows how to port a simple - function from PL/SQL to PL/pgSQL. + function from PL/SQL to PL/pgSQL. - Porting a Simple Function from <application>PL/SQL</> to <application>PL/pgSQL</> + Porting a Simple Function from <application>PL/SQL</application> to <application>PL/pgSQL</application> - Here is an Oracle PL/SQL function: + Here is an Oracle PL/SQL function: CREATE OR REPLACE FUNCTION cs_fmt_browser_version(v_name varchar2, v_version varchar2) @@ -5134,14 +5134,14 @@ show errors; Let's go through this function and see the differences compared to - PL/pgSQL: + PL/pgSQL: - The type name varchar2 has to be changed to varchar - or text. In the examples in this section, we'll - use varchar, but text is often a better choice if + The type name varchar2 has to be changed to varchar + or text. In the examples in this section, we'll + use varchar, but text is often a better choice if you do not need specific string length limits. @@ -5152,17 +5152,17 @@ show errors; prototype (not the function body) becomes RETURNS in PostgreSQL. - Also, IS becomes AS, and you need to - add a LANGUAGE clause because PL/pgSQL + Also, IS becomes AS, and you need to + add a LANGUAGE clause because PL/pgSQL is not the only possible function language. - In PostgreSQL, the function body is considered + In PostgreSQL, the function body is considered to be a string literal, so you need to use quote marks or dollar - quotes around it. This substitutes for the terminating / + quotes around it. This substitutes for the terminating / in the Oracle approach. @@ -5170,7 +5170,7 @@ show errors; The show errors command does not exist in - PostgreSQL, and is not needed since errors are + PostgreSQL, and is not needed since errors are reported automatically. 
@@ -5179,7 +5179,7 @@ show errors; This is how this function would look when ported to - PostgreSQL: + PostgreSQL: CREATE OR REPLACE FUNCTION cs_fmt_browser_version(v_name varchar, @@ -5203,7 +5203,7 @@ $$ LANGUAGE plpgsql; - Porting a Function that Creates Another Function from <application>PL/SQL</> to <application>PL/pgSQL</> + Porting a Function that Creates Another Function from <application>PL/SQL</application> to <application>PL/pgSQL</application> The following procedure grabs rows from a @@ -5242,7 +5242,7 @@ show errors; - Here is how this function would end up in PostgreSQL: + Here is how this function would end up in PostgreSQL: CREATE OR REPLACE FUNCTION cs_update_referrer_type_proc() RETURNS void AS $func$ DECLARE @@ -5277,24 +5277,24 @@ END; $func$ LANGUAGE plpgsql; Notice how the body of the function is built separately and passed - through quote_literal to double any quote marks in it. This + through quote_literal to double any quote marks in it. This technique is needed because we cannot safely use dollar quoting for defining the new function: we do not know for sure what strings will - be interpolated from the referrer_key.key_string field. - (We are assuming here that referrer_key.kind can be - trusted to always be host, domain, or - url, but referrer_key.key_string might be + be interpolated from the referrer_key.key_string field. + (We are assuming here that referrer_key.kind can be + trusted to always be host, domain, or + url, but referrer_key.key_string might be anything, in particular it might contain dollar signs.) This function is actually an improvement on the Oracle original, because it will - not generate broken code when referrer_key.key_string or - referrer_key.referrer_type contain quote marks. + not generate broken code when referrer_key.key_string or + referrer_key.referrer_type contain quote marks. shows how to port a function - with OUT parameters and string manipulation. - PostgreSQL does not have a built-in + with OUT parameters and string manipulation. + PostgreSQL does not have a built-in instr function, but you can create one using a combination of other functions. In there is a @@ -5305,8 +5305,8 @@ $func$ LANGUAGE plpgsql; Porting a Procedure With String Manipulation and - <literal>OUT</> Parameters from <application>PL/SQL</> to - <application>PL/pgSQL</> + OUT Parameters from PL/SQL to + PL/pgSQL The following Oracle PL/SQL procedure is used @@ -5357,7 +5357,7 @@ show errors; - Here is a possible translation into PL/pgSQL: + Here is a possible translation into PL/pgSQL: CREATE OR REPLACE FUNCTION cs_parse_url( v_url IN VARCHAR, @@ -5411,7 +5411,7 @@ SELECT * FROM cs_parse_url('http://foobar.com/query.cgi?baz'); - Porting a Procedure from <application>PL/SQL</> to <application>PL/pgSQL</> + Porting a Procedure from <application>PL/SQL</application> to <application>PL/pgSQL</application> The Oracle version: @@ -5447,20 +5447,20 @@ show errors - Procedures like this can easily be converted into PostgreSQL + Procedures like this can easily be converted into PostgreSQL functions returning void. This procedure in particular is interesting because it can teach us some things: - There is no PRAGMA statement in PostgreSQL. + There is no PRAGMA statement in PostgreSQL. - If you do a LOCK TABLE in PL/pgSQL, + If you do a LOCK TABLE in PL/pgSQL, the lock will not be released until the calling transaction is finished. @@ -5468,9 +5468,9 @@ show errors - You cannot issue COMMIT in a + You cannot issue COMMIT in a PL/pgSQL function. 
The function is - running within some outer transaction and so COMMIT + running within some outer transaction and so COMMIT would imply terminating the function's execution. However, in this particular case it is not necessary anyway, because the lock obtained by the LOCK TABLE will be released when @@ -5481,7 +5481,7 @@ show errors - This is how we could port this procedure to PL/pgSQL: + This is how we could port this procedure to PL/pgSQL: CREATE OR REPLACE FUNCTION cs_create_job(v_job_id integer) RETURNS void AS $$ @@ -5512,15 +5512,15 @@ $$ LANGUAGE plpgsql; - The syntax of RAISE is considerably different from - Oracle's statement, although the basic case RAISE + The syntax of RAISE is considerably different from + Oracle's statement, although the basic case RAISE exception_name works similarly. - The exception names supported by PL/pgSQL are + The exception names supported by PL/pgSQL are different from Oracle's. The set of built-in exception names is much larger (see ). There is not currently a way to declare user-defined exception names, @@ -5530,7 +5530,7 @@ $$ LANGUAGE plpgsql; The main functional difference between this procedure and the - Oracle equivalent is that the exclusive lock on the cs_jobs + Oracle equivalent is that the exclusive lock on the cs_jobs table will be held until the calling transaction completes. Also, if the caller later aborts (for example due to an error), the effects of this procedure will be rolled back. @@ -5543,7 +5543,7 @@ $$ LANGUAGE plpgsql; This section explains a few other things to watch for when porting - Oracle PL/SQL functions to + Oracle PL/SQL functions to PostgreSQL. @@ -5551,9 +5551,9 @@ $$ LANGUAGE plpgsql; Implicit Rollback after Exceptions - In PL/pgSQL, when an exception is caught by an - EXCEPTION clause, all database changes since the block's - BEGIN are automatically rolled back. That is, the behavior + In PL/pgSQL, when an exception is caught by an + EXCEPTION clause, all database changes since the block's + BEGIN are automatically rolled back. That is, the behavior is equivalent to what you'd get in Oracle with: @@ -5571,10 +5571,10 @@ END; If you are translating an Oracle procedure that uses - SAVEPOINT and ROLLBACK TO in this style, - your task is easy: just omit the SAVEPOINT and - ROLLBACK TO. If you have a procedure that uses - SAVEPOINT and ROLLBACK TO in a different way + SAVEPOINT and ROLLBACK TO in this style, + your task is easy: just omit the SAVEPOINT and + ROLLBACK TO. If you have a procedure that uses + SAVEPOINT and ROLLBACK TO in a different way then some actual thought will be required. @@ -5583,9 +5583,9 @@ END; <command>EXECUTE</command> - The PL/pgSQL version of + The PL/pgSQL version of EXECUTE works similarly to the - PL/SQL version, but you have to remember to use + PL/SQL version, but you have to remember to use quote_literal and quote_ident as described in . Constructs of the @@ -5598,8 +5598,8 @@ END; Optimizing <application>PL/pgSQL</application> Functions - PostgreSQL gives you two function creation - modifiers to optimize execution: volatility (whether + PostgreSQL gives you two function creation + modifiers to optimize execution: volatility (whether the function always returns the same result when given the same arguments) and strictness (whether the function returns null if any argument is null). 
Consult the - instr function + instr function diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml index 777a7ef780..043225fc47 100644 --- a/doc/src/sgml/plpython.sgml +++ b/doc/src/sgml/plpython.sgml @@ -3,8 +3,8 @@ PL/Python - Python Procedural Language - PL/Python - Python + PL/Python + Python The PL/Python procedural language allows @@ -14,22 +14,22 @@ To install PL/Python in a particular database, use - CREATE EXTENSION plpythonu (but + CREATE EXTENSION plpythonu (but see also ). - If a language is installed into template1, all subsequently + If a language is installed into template1, all subsequently created databases will have the language installed automatically. - PL/Python is only available as an untrusted language, meaning + PL/Python is only available as an untrusted language, meaning it does not offer any way of restricting what users can do in it and - is therefore named plpythonu. A trusted - variant plpython might become available in the future + is therefore named plpythonu. A trusted + variant plpython might become available in the future if a secure execution mechanism is developed in Python. The writer of a function in untrusted PL/Python must take care that the function cannot be used to do anything unwanted, since it will be @@ -383,8 +383,8 @@ $$ LANGUAGE plpythonu; For all other PostgreSQL return types, the return value is converted to a string using the Python built-in str, and the result is passed to the input function of the PostgreSQL data type. - (If the Python value is a float, it is converted using - the repr built-in instead of str, to + (If the Python value is a float, it is converted using + the repr built-in instead of str, to avoid loss of precision.) @@ -756,8 +756,8 @@ SELECT * FROM multiout_simple_setof(3); data between function calls. This variable is private static data. The global dictionary GD is public data, available to all Python functions within a session. Use with - care.global data - in PL/Python + care.global data + in PL/Python @@ -800,38 +800,38 @@ $$ LANGUAGE plpythonu; TD contains trigger-related values: - TD["event"] + TD["event"] contains the event as a string: - INSERT, UPDATE, - DELETE, or TRUNCATE. + INSERT, UPDATE, + DELETE, or TRUNCATE. - TD["when"] + TD["when"] - contains one of BEFORE, AFTER, or - INSTEAD OF. + contains one of BEFORE, AFTER, or + INSTEAD OF. - TD["level"] + TD["level"] - contains ROW or STATEMENT. + contains ROW or STATEMENT. - TD["new"] - TD["old"] + TD["new"] + TD["old"] For a row-level trigger, one or both of these fields contain @@ -841,7 +841,7 @@ $$ LANGUAGE plpythonu; - TD["name"] + TD["name"] contains the trigger name. @@ -850,7 +850,7 @@ $$ LANGUAGE plpythonu; - TD["table_name"] + TD["table_name"] contains the name of the table on which the trigger occurred. @@ -859,7 +859,7 @@ $$ LANGUAGE plpythonu; - TD["table_schema"] + TD["table_schema"] contains the schema of the table on which the trigger occurred. @@ -868,7 +868,7 @@ $$ LANGUAGE plpythonu; - TD["relid"] + TD["relid"] contains the OID of the table on which the trigger occurred. @@ -877,12 +877,12 @@ $$ LANGUAGE plpythonu; - TD["args"] + TD["args"] - If the CREATE TRIGGER command - included arguments, they are available in TD["args"][0] to - TD["args"][n-1]. + If the CREATE TRIGGER command + included arguments, they are available in TD["args"][0] to + TD["args"][n-1]. 
@@ -890,14 +890,14 @@ $$ LANGUAGE plpythonu; - If TD["when"] is BEFORE or - INSTEAD OF and - TD["level"] is ROW, you can + If TD["when"] is BEFORE or + INSTEAD OF and + TD["level"] is ROW, you can return None or "OK" from the Python function to indicate the row is unmodified, - "SKIP" to abort the event, or if TD["event"] - is INSERT or UPDATE you can return - "MODIFY" to indicate you've modified the new row. + "SKIP" to abort the event, or if TD["event"] + is INSERT or UPDATE you can return + "MODIFY" to indicate you've modified the new row. Otherwise the return value is ignored. @@ -1023,7 +1023,7 @@ foo = rv[i]["my_column"] plpy.execute(plan [, arguments [, max-rows]]) - preparing a queryin PL/Python + preparing a queryin PL/Python plpy.prepare prepares the execution plan for a query. It is called with a query string and a list of parameter types, if you have parameter references in the query. For example: @@ -1371,22 +1371,22 @@ $$ LANGUAGE plpythonu; The plpy module also provides the functions - plpy.debug(msg, **kwargs) - plpy.log(msg, **kwargs) - plpy.info(msg, **kwargs) - plpy.notice(msg, **kwargs) - plpy.warning(msg, **kwargs) - plpy.error(msg, **kwargs) - plpy.fatal(msg, **kwargs) + plpy.debug(msg, **kwargs) + plpy.log(msg, **kwargs) + plpy.info(msg, **kwargs) + plpy.notice(msg, **kwargs) + plpy.warning(msg, **kwargs) + plpy.error(msg, **kwargs) + plpy.fatal(msg, **kwargs) - elogin PL/Python + elogin PL/Python plpy.error and plpy.fatal actually raise a Python exception which, if uncaught, propagates out to the calling query, causing the current transaction or subtransaction to - be aborted. raise plpy.Error(msg) and - raise plpy.Fatal(msg) are - equivalent to calling plpy.error(msg) and - plpy.fatal(msg), respectively but + be aborted. raise plpy.Error(msg) and + raise plpy.Fatal(msg) are + equivalent to calling plpy.error(msg) and + plpy.fatal(msg), respectively but the raise form does not allow passing keyword arguments. The other functions only generate messages of different priority levels. Whether messages of a particular priority are reported to the client, @@ -1397,7 +1397,7 @@ $$ LANGUAGE plpythonu; - The msg argument is given as a positional argument. For + The msg argument is given as a positional argument. For backward compatibility, more than one positional argument can be given. In that case, the string representation of the tuple of positional arguments becomes the message reported to the client. @@ -1438,9 +1438,9 @@ PL/Python function "raise_custom_exception" Another set of utility functions are - plpy.quote_literal(string), - plpy.quote_nullable(string), and - plpy.quote_ident(string). They + plpy.quote_literal(string), + plpy.quote_nullable(string), and + plpy.quote_ident(string). They are equivalent to the built-in quoting functions described in . They are useful when constructing ad-hoc queries. A PL/Python equivalent of dynamic SQL from elog(). PL/Tcl + SPI and to raise messages via elog(). PL/Tcl provides no way to access internals of the database server or to gain OS-level access under the permissions of the PostgreSQL server process, as a C @@ -50,23 +50,23 @@ Sometimes it is desirable to write Tcl functions that are not restricted to safe Tcl. For example, one might want a Tcl function that sends - email. To handle these cases, there is a variant of PL/Tcl called PL/TclU + email. To handle these cases, there is a variant of PL/Tcl called PL/TclU (for untrusted Tcl). This is exactly the same language except that a full - Tcl interpreter is used. 
If PL/TclU is used, it must be + Tcl interpreter is used. If PL/TclU is used, it must be installed as an untrusted procedural language so that only - database superusers can create functions in it. The writer of a PL/TclU + database superusers can create functions in it. The writer of a PL/TclU function must take care that the function cannot be used to do anything unwanted, since it will be able to do anything that could be done by a user logged in as the database administrator. - The shared object code for the PL/Tcl and - PL/TclU call handlers is automatically built and + The shared object code for the PL/Tcl and + PL/TclU call handlers is automatically built and installed in the PostgreSQL library directory if Tcl support is specified in the configuration step of - the installation procedure. To install PL/Tcl - and/or PL/TclU in a particular database, use the - CREATE EXTENSION command, for example + the installation procedure. To install PL/Tcl + and/or PL/TclU in a particular database, use the + CREATE EXTENSION command, for example CREATE EXTENSION pltcl or CREATE EXTENSION pltclu. @@ -78,7 +78,7 @@ PL/Tcl Functions and Arguments - To create a function in the PL/Tcl language, use + To create a function in the PL/Tcl language, use the standard syntax: @@ -87,8 +87,8 @@ CREATE FUNCTION funcname (argument-types $$ LANGUAGE pltcl; - PL/TclU is the same, except that the language has to be specified as - pltclu. + PL/TclU is the same, except that the language has to be specified as + pltclu. @@ -111,7 +111,7 @@ CREATE FUNCTION tcl_max(integer, integer) RETURNS integer AS $$ $$ LANGUAGE pltcl STRICT; - Note the clause STRICT, which saves us from + Note the clause STRICT, which saves us from having to think about null input values: if a null value is passed, the function will not be called at all, but will just return a null result automatically. @@ -122,7 +122,7 @@ $$ LANGUAGE pltcl STRICT; if the actual value of an argument is null, the corresponding $n variable will be set to an empty string. To detect whether a particular argument is null, use the function - argisnull. For example, suppose that we wanted tcl_max + argisnull. For example, suppose that we wanted tcl_max with one null and one nonnull argument to return the nonnull argument, rather than null: @@ -188,7 +188,7 @@ $$ LANGUAGE pltcl; The result list can be made from an array representation of the - desired tuple with the array get Tcl command. For example: + desired tuple with the array get Tcl command. For example: CREATE FUNCTION raise_pay(employee, delta int) RETURNS employee AS $$ @@ -233,8 +233,8 @@ $$ LANGUAGE pltcl; The argument values supplied to a PL/Tcl function's code are simply the input arguments converted to text form (just as if they had been - displayed by a SELECT statement). Conversely, the - return and return_next commands will accept + displayed by a SELECT statement). Conversely, the + return and return_next commands will accept any string that is acceptable input format for the function's declared result type, or for the specified column of a composite result type. @@ -262,14 +262,14 @@ $$ LANGUAGE pltcl; role in a separate Tcl interpreter for that role. This prevents accidental or malicious interference by one user with the behavior of another user's PL/Tcl functions. Each such interpreter will have its own - values for any global Tcl variables. Thus, two PL/Tcl + values for any global Tcl variables. 
Thus, two PL/Tcl functions will share the same global variables if and only if they are executed by the same SQL role. In an application wherein a single session executes code under multiple SQL roles (via SECURITY - DEFINER functions, use of SET ROLE, etc) you may need to + DEFINER functions, use of SET ROLE, etc) you may need to take explicit steps to ensure that PL/Tcl functions can share data. To do that, make sure that functions that should communicate are owned by - the same user, and mark them SECURITY DEFINER. You must of + the same user, and mark them SECURITY DEFINER. You must of course take care that such functions can't be used to do anything unintended. @@ -286,19 +286,19 @@ $$ LANGUAGE pltcl; To help protect PL/Tcl functions from unintentionally interfering with each other, a global - array is made available to each function via the upvar + array is made available to each function via the upvar command. The global name of this variable is the function's internal - name, and the local name is GD. It is recommended that - GD be used + name, and the local name is GD. It is recommended that + GD be used for persistent private data of a function. Use regular Tcl global variables only for values that you specifically intend to be shared among - multiple functions. (Note that the GD arrays are only + multiple functions. (Note that the GD arrays are only global within a particular interpreter, so they do not bypass the security restrictions mentioned above.) - An example of using GD appears in the + An example of using GD appears in the spi_execp example below. @@ -320,28 +320,28 @@ $$ LANGUAGE pltcl; causes an error to be raised. Otherwise, the return value of spi_exec is the number of rows processed (selected, inserted, updated, or deleted) by the command, or zero if the command is a utility - statement. In addition, if the command is a SELECT statement, the + statement. In addition, if the command is a SELECT statement, the values of the selected columns are placed in Tcl variables as described below. - The optional -count value tells + The optional -count value tells spi_exec the maximum number of rows to process in the command. The effect of this is comparable to - setting up a query as a cursor and then saying FETCH n. + setting up a query as a cursor and then saying FETCH n. - If the command is a SELECT statement, the values of the + If the command is a SELECT statement, the values of the result columns are placed into Tcl variables named after the columns. - If the -array option is given, the column values are + If the -array option is given, the column values are instead stored into elements of the named associative array, with the column names used as array indexes. In addition, the current row number within the result (counting from zero) is stored into the array - element named .tupno, unless that name is + element named .tupno, unless that name is in use as a column name in the result. - If the command is a SELECT statement and no loop-body + If the command is a SELECT statement and no loop-body script is given, then only the first row of results are stored into Tcl variables or array elements; remaining rows, if any, are ignored. No storing occurs if the query returns no rows. (This case can be @@ -350,14 +350,14 @@ $$ LANGUAGE pltcl; spi_exec "SELECT count(*) AS cnt FROM pg_proc" - will set the Tcl variable $cnt to the number of rows in - the pg_proc system catalog. + will set the Tcl variable $cnt to the number of rows in + the pg_proc system catalog. 
- If the optional loop-body argument is given, it is + If the optional loop-body argument is given, it is a piece of Tcl script that is executed once for each row in the - query result. (loop-body is ignored if the given - command is not a SELECT.) + query result. (loop-body is ignored if the given + command is not a SELECT.) The values of the current row's columns are stored into Tcl variables or array elements before each iteration. For example: @@ -366,14 +366,14 @@ spi_exec -array C "SELECT * FROM pg_class" { elog DEBUG "have table $C(relname)" } - will print a log message for every row of pg_class. This + will print a log message for every row of pg_class. This feature works similarly to other Tcl looping constructs; in - particular continue and break work in the + particular continue and break work in the usual way inside the loop body. If a column of a query result is null, the target - variable for it is unset rather than being set. + variable for it is unset rather than being set. @@ -384,8 +384,8 @@ spi_exec -array C "SELECT * FROM pg_class" { Prepares and saves a query plan for later execution. The saved plan will be retained for the life of the current - session.preparing a query - in PL/Tcl + session.preparing a query + in PL/Tcl The query can use parameters, that is, placeholders for @@ -405,29 +405,29 @@ spi_exec -array C "SELECT * FROM pg_class" { - spi_execp -count n -array name -nulls string queryid value-list loop-body + spi_execp -count n -array name -nulls string queryid value-list loop-body - Executes a query previously prepared with spi_prepare. + Executes a query previously prepared with spi_prepare. queryid is the ID returned by - spi_prepare. If the query references parameters, + spi_prepare. If the query references parameters, a value-list must be supplied. This is a Tcl list of actual values for the parameters. The list must be the same length as the parameter type list previously given to - spi_prepare. Omit value-list + spi_prepare. Omit value-list if the query has no parameters. - The optional value for -nulls is a string of spaces and - 'n' characters telling spi_execp + The optional value for -nulls is a string of spaces and + 'n' characters telling spi_execp which of the parameters are null values. If given, it must have exactly the same length as the value-list. If it is not given, all the parameter values are nonnull. Except for the way in which the query and its parameters are specified, - spi_execp works just like spi_exec. - The -count, -array, and + spi_execp works just like spi_exec. + The -count, -array, and loop-body options are the same, and so is the result value. @@ -448,9 +448,9 @@ $$ LANGUAGE pltcl; We need backslashes inside the query string given to - spi_prepare to ensure that the - $n markers will be passed - through to spi_prepare as-is, and not replaced by Tcl + spi_prepare to ensure that the + $n markers will be passed + through to spi_prepare as-is, and not replaced by Tcl variable substitution. @@ -459,7 +459,7 @@ $$ LANGUAGE pltcl; - spi_lastoid + spi_lastoid spi_lastoid in PL/Tcl @@ -468,8 +468,8 @@ $$ LANGUAGE pltcl; Returns the OID of the row inserted by the last - spi_exec or spi_execp, if the - command was a single-row INSERT and the modified + spi_exec or spi_execp, if the + command was a single-row INSERT and the modified table contained OIDs. (If not, you get zero.) 
@@ -490,7 +490,7 @@ $$ LANGUAGE pltcl; - quote string + quote string Doubles all occurrences of single quote and backslash characters @@ -504,7 +504,7 @@ $$ LANGUAGE pltcl; "SELECT '$val' AS ret" - where the Tcl variable val actually contains + where the Tcl variable val actually contains doesn't. This would result in the final command string: @@ -536,7 +536,7 @@ SELECT 'doesn''t' AS ret - elog level msg + elog level msg elog in PL/Tcl @@ -545,14 +545,14 @@ SELECT 'doesn''t' AS ret Emits a log or error message. Possible levels are - DEBUG, LOG, INFO, - NOTICE, WARNING, ERROR, and - FATAL. ERROR + DEBUG, LOG, INFO, + NOTICE, WARNING, ERROR, and + FATAL. ERROR raises an error condition; if this is not trapped by the surrounding Tcl code, the error propagates out to the calling query, causing the current transaction or subtransaction to be aborted. This - is effectively the same as the Tcl error command. - FATAL aborts the transaction and causes the current + is effectively the same as the Tcl error command. + FATAL aborts the transaction and causes the current session to shut down. (There is probably no good reason to use this error level in PL/Tcl functions, but it's provided for completeness.) The other levels only generate messages of different @@ -585,7 +585,7 @@ SELECT 'doesn''t' AS ret Trigger procedures can be written in PL/Tcl. PostgreSQL requires that a procedure that is to be called as a trigger must be declared as a function with no arguments - and a return type of trigger. + and a return type of trigger. The information from the trigger manager is passed to the procedure body @@ -637,8 +637,8 @@ SELECT 'doesn''t' AS ret A Tcl list of the table column names, prefixed with an empty list - element. So looking up a column name in the list with Tcl's - lsearch command returns the element's number starting + element. So looking up a column name in the list with Tcl's + lsearch command returns the element's number starting with 1 for the first column, the same way the columns are customarily numbered in PostgreSQL. (Empty list elements also appear in the positions of columns that have been @@ -652,8 +652,8 @@ SELECT 'doesn''t' AS ret $TG_when - The string BEFORE, AFTER, or - INSTEAD OF, depending on the type of trigger event. + The string BEFORE, AFTER, or + INSTEAD OF, depending on the type of trigger event. @@ -662,7 +662,7 @@ SELECT 'doesn''t' AS ret $TG_level - The string ROW or STATEMENT depending on the + The string ROW or STATEMENT depending on the type of trigger event. @@ -672,8 +672,8 @@ SELECT 'doesn''t' AS ret $TG_op - The string INSERT, UPDATE, - DELETE, or TRUNCATE depending on the type of + The string INSERT, UPDATE, + DELETE, or TRUNCATE depending on the type of trigger event. @@ -684,8 +684,8 @@ SELECT 'doesn''t' AS ret An associative array containing the values of the new table - row for INSERT or UPDATE actions, or - empty for DELETE. The array is indexed by column + row for INSERT or UPDATE actions, or + empty for DELETE. The array is indexed by column name. Columns that are null will not appear in the array. This is not set for statement-level triggers. @@ -697,8 +697,8 @@ SELECT 'doesn''t' AS ret An associative array containing the values of the old table - row for UPDATE or DELETE actions, or - empty for INSERT. The array is indexed by column + row for UPDATE or DELETE actions, or + empty for INSERT. The array is indexed by column name. Columns that are null will not appear in the array. This is not set for statement-level triggers. 
@@ -721,32 +721,32 @@ SELECT 'doesn''t' AS ret The return value from a trigger procedure can be one of the strings - OK or SKIP, or a list of column name/value pairs. - If the return value is OK, - the operation (INSERT/UPDATE/DELETE) + OK or SKIP, or a list of column name/value pairs. + If the return value is OK, + the operation (INSERT/UPDATE/DELETE) that fired the trigger will proceed - normally. SKIP tells the trigger manager to silently suppress + normally. SKIP tells the trigger manager to silently suppress the operation for this row. If a list is returned, it tells PL/Tcl to return a modified row to the trigger manager; the contents of the modified row are specified by the column names and values in the list. Any columns not mentioned in the list are set to null. Returning a modified row is only meaningful - for row-level BEFORE INSERT or UPDATE + for row-level BEFORE INSERT or UPDATE triggers, for which the modified row will be inserted instead of the one - given in $NEW; or for row-level INSTEAD OF - INSERT or UPDATE triggers where the returned row - is used as the source data for INSERT RETURNING or - UPDATE RETURNING clauses. - In row-level BEFORE DELETE or INSTEAD - OF DELETE triggers, returning a modified row has the same - effect as returning OK, that is the operation proceeds. + given in $NEW; or for row-level INSTEAD OF + INSERT or UPDATE triggers where the returned row + is used as the source data for INSERT RETURNING or + UPDATE RETURNING clauses. + In row-level BEFORE DELETE or INSTEAD + OF DELETE triggers, returning a modified row has the same + effect as returning OK, that is the operation proceeds. The trigger return value is ignored for all other types of triggers. The result list can be made from an array representation of the - modified tuple with the array get Tcl command. + modified tuple with the array get Tcl command. @@ -797,7 +797,7 @@ CREATE TRIGGER trig_mytab_modcount BEFORE INSERT OR UPDATE ON mytab Event trigger procedures can be written in PL/Tcl. PostgreSQL requires that a procedure that is to be called as an event trigger must be declared as a function with no - arguments and a return type of event_trigger. + arguments and a return type of event_trigger. The information from the trigger manager is passed to the procedure body @@ -885,17 +885,17 @@ CREATE EVENT TRIGGER tcl_a_snitch ON ddl_command_start EXECUTE PROCEDURE tclsnit word is POSTGRES, the second word is the PostgreSQL version number, and additional words are field name/value pairs providing detailed information about the error. - Fields SQLSTATE, condition, - and message are always supplied + Fields SQLSTATE, condition, + and message are always supplied (the first two represent the error code and condition name as shown in ). Fields that may be present include - detail, hint, context, - schema, table, column, - datatype, constraint, - statement, cursor_position, - filename, lineno, and - funcname. + detail, hint, context, + schema, table, column, + datatype, constraint, + statement, cursor_position, + filename, lineno, and + funcname. @@ -1006,7 +1006,7 @@ $$ LANGUAGE pltcl; This section lists configuration parameters that - affect PL/Tcl. + affect PL/Tcl. @@ -1015,7 +1015,7 @@ $$ LANGUAGE pltcl; pltcl.start_proc (string) - pltcl.start_proc configuration parameter + pltcl.start_proc configuration parameter @@ -1031,8 +1031,8 @@ $$ LANGUAGE pltcl; - The referenced function must be written in the pltcl - language, and must not be marked SECURITY DEFINER. 
+ The referenced function must be written in the pltcl + language, and must not be marked SECURITY DEFINER. (These restrictions ensure that it runs in the interpreter it's supposed to initialize.) The current user must have permission to call it, too. @@ -1060,14 +1060,14 @@ $$ LANGUAGE pltcl; pltclu.start_proc (string) - pltclu.start_proc configuration parameter + pltclu.start_proc configuration parameter This parameter is exactly like pltcl.start_proc, except that it applies to PL/TclU. The referenced function must - be written in the pltclu language. + be written in the pltclu language. @@ -1084,7 +1084,7 @@ $$ LANGUAGE pltcl; differ. Tcl, however, requires all procedure names to be distinct. PL/Tcl deals with this by making the internal Tcl procedure names contain the object - ID of the function from the system table pg_proc as part of their name. Thus, + ID of the function from the system table pg_proc as part of their name. Thus, PostgreSQL functions with the same name and different argument types will be different Tcl procedures, too. This is not normally a concern for a PL/Tcl programmer, but it might be visible diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml index d83fc9e52b..265effbe48 100644 --- a/doc/src/sgml/postgres-fdw.sgml +++ b/doc/src/sgml/postgres-fdw.sgml @@ -8,7 +8,7 @@ - The postgres_fdw module provides the foreign-data wrapper + The postgres_fdw module provides the foreign-data wrapper postgres_fdw, which can be used to access data stored in external PostgreSQL servers. @@ -16,17 +16,17 @@ The functionality provided by this module overlaps substantially with the functionality of the older module. - But postgres_fdw provides more transparent and + But postgres_fdw provides more transparent and standards-compliant syntax for accessing remote tables, and can give better performance in many cases. - To prepare for remote access using postgres_fdw: + To prepare for remote access using postgres_fdw: - Install the postgres_fdw extension using postgres_fdw extension using . @@ -61,17 +61,17 @@ - Now you need only SELECT from a foreign table to access + Now you need only SELECT from a foreign table to access the data stored in its underlying remote table. You can also modify - the remote table using INSERT, UPDATE, or - DELETE. (Of course, the remote user you have specified + the remote table using INSERT, UPDATE, or + DELETE. (Of course, the remote user you have specified in your user mapping must have privileges to do these things.) - Note that postgres_fdw currently lacks support for + Note that postgres_fdw currently lacks support for INSERT statements with an ON CONFLICT DO - UPDATE clause. However, the ON CONFLICT DO NOTHING + UPDATE clause. However, the ON CONFLICT DO NOTHING clause is supported, provided a unique index inference specification is omitted. @@ -79,10 +79,10 @@ It is generally recommended that the columns of a foreign table be declared with exactly the same data types, and collations if applicable, as the - referenced columns of the remote table. Although postgres_fdw + referenced columns of the remote table. Although postgres_fdw is currently rather forgiving about performing data type conversions at need, surprising semantic anomalies may arise when types or collations do - not match, due to the remote server interpreting WHERE clauses + not match, due to the remote server interpreting WHERE clauses slightly differently from the local server. 
@@ -99,8 +99,8 @@ Connection Options - A foreign server using the postgres_fdw foreign data wrapper - can have the same options that libpq accepts in + A foreign server using the postgres_fdw foreign data wrapper + can have the same options that libpq accepts in connection strings, as described in , except that these options are not allowed: @@ -113,14 +113,14 @@ - client_encoding (this is automatically set from the local + client_encoding (this is automatically set from the local server encoding) - fallback_application_name (always set to - postgres_fdw) + fallback_application_name (always set to + postgres_fdw) @@ -186,14 +186,14 @@ Cost Estimation Options - postgres_fdw retrieves remote data by executing queries + postgres_fdw retrieves remote data by executing queries against remote servers, so ideally the estimated cost of scanning a foreign table should be whatever it costs to be done on the remote server, plus some overhead for communication. The most reliable way to get such an estimate is to ask the remote server and then add something for overhead — but for simple queries, it may not be worth the cost of an additional remote query to get a cost estimate. - So postgres_fdw provides the following options to control + So postgres_fdw provides the following options to control how cost estimation is done: @@ -204,7 +204,7 @@ This option, which can be specified for a foreign table or a foreign - server, controls whether postgres_fdw issues remote + server, controls whether postgres_fdw issues remote EXPLAIN commands to obtain cost estimates. A setting for a foreign table overrides any setting for its server, but only for that table. @@ -245,11 +245,11 @@ When use_remote_estimate is true, - postgres_fdw obtains row count and cost estimates from the + postgres_fdw obtains row count and cost estimates from the remote server and then adds fdw_startup_cost and fdw_tuple_cost to the cost estimates. When use_remote_estimate is false, - postgres_fdw performs local row count and cost estimation + postgres_fdw performs local row count and cost estimation and then adds fdw_startup_cost and fdw_tuple_cost to the cost estimates. This local estimation is unlikely to be very accurate unless local copies of the @@ -268,12 +268,12 @@ Remote Execution Options - By default, only WHERE clauses using built-in operators and + By default, only WHERE clauses using built-in operators and functions will be considered for execution on the remote server. Clauses involving non-built-in functions are checked locally after rows are fetched. If such functions are available on the remote server and can be relied on to produce the same results as they do locally, performance can - be improved by sending such WHERE clauses for remote + be improved by sending such WHERE clauses for remote execution. This behavior can be controlled using the following option: @@ -284,7 +284,7 @@ This option is a comma-separated list of names - of PostgreSQL extensions that are installed, in + of PostgreSQL extensions that are installed, in compatible versions, on both the local and remote servers. Functions and operators that are immutable and belong to a listed extension will be considered shippable to the remote server. @@ -293,7 +293,7 @@ When using the extensions option, it is the - user's responsibility that the listed extensions exist and behave + user's responsibility that the listed extensions exist and behave identically on both the local and remote servers. Otherwise, remote queries may fail or behave unexpectedly. 
@@ -304,11 +304,11 @@ fetch_size - This option specifies the number of rows postgres_fdw + This option specifies the number of rows postgres_fdw should get in each fetch operation. It can be specified for a foreign table or a foreign server. The option specified on a table overrides an option specified for the server. - The default is 100. + The default is 100. @@ -321,7 +321,7 @@ Updatability Options - By default all foreign tables using postgres_fdw are assumed + By default all foreign tables using postgres_fdw are assumed to be updatable. This may be overridden using the following option: @@ -331,20 +331,20 @@ updatable - This option controls whether postgres_fdw allows foreign - tables to be modified using INSERT, UPDATE and - DELETE commands. It can be specified for a foreign table + This option controls whether postgres_fdw allows foreign + tables to be modified using INSERT, UPDATE and + DELETE commands. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. - The default is true. + The default is true. Of course, if the remote table is not in fact updatable, an error would occur anyway. Use of this option primarily allows the error to be thrown locally without querying the remote server. Note however - that the information_schema views will report a - postgres_fdw foreign table to be updatable (or not) + that the information_schema views will report a + postgres_fdw foreign table to be updatable (or not) according to the setting of this option, without any check of the remote server. @@ -358,7 +358,7 @@ Importing Options - postgres_fdw is able to import foreign table definitions + postgres_fdw is able to import foreign table definitions using . This command creates foreign table definitions on the local server that match tables or views present on the remote server. If the remote tables to be imported @@ -368,7 +368,7 @@ Importing behavior can be customized with the following options - (given in the IMPORT FOREIGN SCHEMA command): + (given in the IMPORT FOREIGN SCHEMA command): @@ -376,9 +376,9 @@ import_collate - This option controls whether column COLLATE options + This option controls whether column COLLATE options are included in the definitions of foreign tables imported - from a foreign server. The default is true. You might + from a foreign server. The default is true. You might need to turn this off if the remote server has a different set of collation names than the local server does, which is likely to be the case if it's running on a different operating system. @@ -389,13 +389,13 @@ import_default - This option controls whether column DEFAULT expressions + This option controls whether column DEFAULT expressions are included in the definitions of foreign tables imported - from a foreign server. The default is false. If you + from a foreign server. The default is false. If you enable this option, be wary of defaults that might get computed differently on the local server than they would be on the remote - server; nextval() is a common source of problems. - The IMPORT will fail altogether if an imported default + server; nextval() is a common source of problems. + The IMPORT will fail altogether if an imported default expression uses a function or operator that does not exist locally. @@ -404,25 +404,25 @@ import_not_null - This option controls whether column NOT NULL + This option controls whether column NOT NULL constraints are included in the definitions of foreign tables imported - from a foreign server. 
The default is true. + from a foreign server. The default is true. - Note that constraints other than NOT NULL will never be - imported from the remote tables. Although PostgreSQL - does support CHECK constraints on foreign tables, there is no + Note that constraints other than NOT NULL will never be + imported from the remote tables. Although PostgreSQL + does support CHECK constraints on foreign tables, there is no provision for importing them automatically, because of the risk that a constraint expression could evaluate differently on the local and remote - servers. Any such inconsistency in the behavior of a CHECK + servers. Any such inconsistency in the behavior of a CHECK constraint could lead to hard-to-detect errors in query optimization. - So if you wish to import CHECK constraints, you must do so + So if you wish to import CHECK constraints, you must do so manually, and you should verify the semantics of each one carefully. - For more detail about the treatment of CHECK constraints on + For more detail about the treatment of CHECK constraints on foreign tables, see . @@ -464,18 +464,18 @@ - The remote transaction uses SERIALIZABLE - isolation level when the local transaction has SERIALIZABLE - isolation level; otherwise it uses REPEATABLE READ + The remote transaction uses SERIALIZABLE + isolation level when the local transaction has SERIALIZABLE + isolation level; otherwise it uses REPEATABLE READ isolation level. This choice ensures that if a query performs multiple table scans on the remote server, it will get snapshot-consistent results for all the scans. A consequence is that successive queries within a single transaction will see the same data from the remote server, even if concurrent updates are occurring on the remote server due to other activities. That behavior would be expected anyway if the local - transaction uses SERIALIZABLE or REPEATABLE READ + transaction uses SERIALIZABLE or REPEATABLE READ isolation level, but it might be surprising for a READ - COMMITTED local transaction. A future + COMMITTED local transaction. A future PostgreSQL release might modify these rules. @@ -484,42 +484,42 @@ Remote Query Optimization - postgres_fdw attempts to optimize remote queries to reduce + postgres_fdw attempts to optimize remote queries to reduce the amount of data transferred from foreign servers. This is done by - sending query WHERE clauses to the remote server for + sending query WHERE clauses to the remote server for execution, and by not retrieving table columns that are not needed for the current query. To reduce the risk of misexecution of queries, - WHERE clauses are not sent to the remote server unless they use + WHERE clauses are not sent to the remote server unless they use only data types, operators, and functions that are built-in or belong to an - extension that's listed in the foreign server's extensions + extension that's listed in the foreign server's extensions option. Operators and functions in such clauses must - be IMMUTABLE as well. - For an UPDATE or DELETE query, - postgres_fdw attempts to optimize the query execution by + be IMMUTABLE as well. + For an UPDATE or DELETE query, + postgres_fdw attempts to optimize the query execution by sending the whole query to the remote server if there are no query - WHERE clauses that cannot be sent to the remote server, - no local joins for the query, no row-level local BEFORE or - AFTER triggers on the target table, and no - CHECK OPTION constraints from parent views. 
- In UPDATE, + WHERE clauses that cannot be sent to the remote server, + no local joins for the query, no row-level local BEFORE or + AFTER triggers on the target table, and no + CHECK OPTION constraints from parent views. + In UPDATE, expressions to assign to target columns must use only built-in data types, - IMMUTABLE operators, or IMMUTABLE functions, + IMMUTABLE operators, or IMMUTABLE functions, to reduce the risk of misexecution of the query. - When postgres_fdw encounters a join between foreign tables on + When postgres_fdw encounters a join between foreign tables on the same foreign server, it sends the entire join to the foreign server, unless for some reason it believes that it will be more efficient to fetch rows from each table individually, or unless the table references involved - are subject to different user mappings. While sending the JOIN + are subject to different user mappings. While sending the JOIN clauses, it takes the same precautions as mentioned above for the - WHERE clauses. + WHERE clauses. The query that is actually sent to the remote server for execution can - be examined using EXPLAIN VERBOSE. + be examined using EXPLAIN VERBOSE. @@ -527,55 +527,55 @@ Remote Query Execution Environment - In the remote sessions opened by postgres_fdw, + In the remote sessions opened by postgres_fdw, the parameter is set to - just pg_catalog, so that only built-in objects are visible + just pg_catalog, so that only built-in objects are visible without schema qualification. This is not an issue for queries - generated by postgres_fdw itself, because it always + generated by postgres_fdw itself, because it always supplies such qualification. However, this can pose a hazard for functions that are executed on the remote server via triggers or rules on remote tables. For example, if a remote table is actually a view, any functions used in that view will be executed with the restricted search path. It is recommended to schema-qualify all names in such - functions, or else attach SET search_path options + functions, or else attach SET search_path options (see ) to such functions to establish their expected search path environment. - postgres_fdw likewise establishes remote session settings + postgres_fdw likewise establishes remote session settings for various parameters: - is set to UTC + is set to UTC - is set to ISO + is set to ISO - is set to postgres + is set to postgres - is set to 3 for remote - servers 9.0 and newer and is set to 2 for older versions + is set to 3 for remote + servers 9.0 and newer and is set to 2 for older versions - These are less likely to be problematic than search_path, but - can be handled with function SET options if the need arises. + These are less likely to be problematic than search_path, but + can be handled with function SET options if the need arises. - It is not recommended that you override this behavior by + It is not recommended that you override this behavior by changing the session-level settings of these parameters; that is likely - to cause postgres_fdw to malfunction. + to cause postgres_fdw to malfunction. @@ -583,19 +583,19 @@ Cross-Version Compatibility - postgres_fdw can be used with remote servers dating back - to PostgreSQL 8.3. Read-only capability is available - back to 8.1. A limitation however is that postgres_fdw + postgres_fdw can be used with remote servers dating back + to PostgreSQL 8.3. Read-only capability is available + back to 8.1. 
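   As a brief sketch of two points made above (all object names are
   hypothetical): the query actually shipped to the remote server can be
   inspected with EXPLAIN VERBOSE, and a function reached on the remote
   server via a trigger or rule can have its expected search path attached:

EXPLAIN VERBOSE SELECT id, val FROM foreign_items WHERE id = 42;
ALTER FUNCTION audit_trigger_fn() SET search_path = myschema, pg_catalog;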
A limitation however is that postgres_fdw generally assumes that immutable built-in functions and operators are safe to send to the remote server for execution, if they appear in a - WHERE clause for a foreign table. Thus, a built-in + WHERE clause for a foreign table. Thus, a built-in function that was added since the remote server's release might be sent - to it for execution, resulting in function does not exist or + to it for execution, resulting in function does not exist or a similar error. This type of failure can be worked around by rewriting the query, for example by embedding the foreign table - reference in a sub-SELECT with OFFSET 0 as an + reference in a sub-SELECT with OFFSET 0 as an optimization fence, and placing the problematic function or operator - outside the sub-SELECT. + outside the sub-SELECT. @@ -604,7 +604,7 @@ Here is an example of creating a foreign table with - postgres_fdw. First install the extension: + postgres_fdw. First install the extension: @@ -613,7 +613,7 @@ CREATE EXTENSION postgres_fdw; Then create a foreign server using . - In this example we wish to connect to a PostgreSQL server + In this example we wish to connect to a PostgreSQL server on host 192.83.123.89 listening on port 5432. The database to which the connection is made is named foreign_db on the remote server: @@ -640,9 +640,9 @@ CREATE USER MAPPING FOR local_user Now it is possible to create a foreign table with . In this example we - wish to access the table named some_schema.some_table + wish to access the table named some_schema.some_table on the remote server. The local name for it will - be foreign_table: + be foreign_table: CREATE FOREIGN TABLE foreign_table ( @@ -654,8 +654,8 @@ CREATE FOREIGN TABLE foreign_table ( It's essential that the data types and other properties of the columns - declared in CREATE FOREIGN TABLE match the actual remote table. - Column names must match as well, unless you attach column_name + declared in CREATE FOREIGN TABLE match the actual remote table. + Column names must match as well, unless you attach column_name options to the individual columns to show how they are named in the remote table. In many cases, use of is diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml index 8a3bfc9b0d..f8a6c48a57 100644 --- a/doc/src/sgml/postgres.sgml +++ b/doc/src/sgml/postgres.sgml @@ -85,11 +85,11 @@ Readers of this part should know how to connect to a - PostgreSQL database and issue + PostgreSQL database and issue SQL commands. Readers that are unfamiliar with these issues are encouraged to read first. SQL commands are typically entered - using the PostgreSQL interactive terminal + using the PostgreSQL interactive terminal psql, but other programs that have similar functionality can be used as well. @@ -116,10 +116,10 @@ This part covers topics that are of interest to a - PostgreSQL database administrator. This includes + PostgreSQL database administrator. This includes installation of the software, set up and configuration of the server, management of users and databases, and maintenance tasks. - Anyone who runs a PostgreSQL server, even for + Anyone who runs a PostgreSQL server, even for personal use, but especially in production, should be familiar with the topics covered in this part. @@ -139,7 +139,7 @@ up their own server can begin their exploration with this part. The rest of this part is about tuning and management; that material assumes that the reader is familiar with the general use of - the PostgreSQL database system. 
Readers are + the PostgreSQL database system. Readers are encouraged to look at and for additional information. @@ -171,7 +171,7 @@ This part describes the client programming interfaces distributed - with PostgreSQL. Each of these chapters can be + with PostgreSQL. Each of these chapters can be read independently. Note that there are many other programming interfaces for client programs that are distributed separately and contain their own documentation ( @@ -197,7 +197,7 @@ This part is about extending the server functionality with user-defined functions, data types, triggers, etc. These are advanced topics which should probably be approached only after all - the other user documentation about PostgreSQL has + the other user documentation about PostgreSQL has been understood. Later chapters in this part describe the server-side programming languages available in the PostgreSQL distribution as well as @@ -234,7 +234,7 @@ This part contains assorted information that might be of use to - PostgreSQL developers. + PostgreSQL developers. diff --git a/doc/src/sgml/problems.sgml b/doc/src/sgml/problems.sgml index 6bf74bb399..edceec3381 100644 --- a/doc/src/sgml/problems.sgml +++ b/doc/src/sgml/problems.sgml @@ -145,7 +145,7 @@ - If your application uses some other client interface, such as PHP, then + If your application uses some other client interface, such as PHP, then please try to isolate the offending queries. We will probably not set up a web server to reproduce your problem. In any case remember to provide the exact input files; do not guess that the problem happens for @@ -167,10 +167,10 @@ If you are reporting an error message, please obtain the most verbose - form of the message. In psql, say \set - VERBOSITY verbose beforehand. If you are extracting the message + form of the message. In psql, say \set + VERBOSITY verbose beforehand. If you are extracting the message from the server log, set the run-time parameter - to verbose so that all + to verbose so that all details are logged. @@ -236,9 +236,9 @@ If your version is older than &version; we will almost certainly tell you to upgrade. There are many bug fixes and improvements in each new release, so it is quite possible that a bug you have - encountered in an older release of PostgreSQL + encountered in an older release of PostgreSQL has already been fixed. We can only provide limited support for - sites using older releases of PostgreSQL; if you + sites using older releases of PostgreSQL; if you require more than we can provide, consider acquiring a commercial support contract. @@ -283,8 +283,8 @@ are specifically talking about the backend process, mention that, do not just say PostgreSQL crashes. A crash of a single backend process is quite different from crash of the parent - postgres process; please don't say the server - crashed when you mean a single backend process went down, nor vice versa. + postgres process; please don't say the server + crashed when you mean a single backend process went down, nor vice versa. Also, client programs such as the interactive frontend psql are completely separate from the backend. Please try to be specific about whether the problem is on the client or server side. @@ -356,10 +356,10 @@ subscribed to a list to be allowed to post on it. (You need not be subscribed to use the bug-report web form, however.) If you would like to send mail but do not want to receive list traffic, - you can subscribe and set your subscription option to nomail. 
+ you can subscribe and set your subscription option to nomail. For more information send mail to majordomo@postgresql.org - with the single word help in the body of the message. + with the single word help in the body of the message. diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 526e8011de..15108baf71 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -30,12 +30,12 @@ In order to serve multiple clients efficiently, the server launches - a new backend process for each client. + a new backend process for each client. In the current implementation, a new child process is created immediately after an incoming connection is detected. This is transparent to the protocol, however. For purposes of the - protocol, the terms backend and server are - interchangeable; likewise frontend and client + protocol, the terms backend and server are + interchangeable; likewise frontend and client are interchangeable. @@ -56,7 +56,7 @@ During normal operation, the frontend sends queries and other commands to the backend, and the backend sends back query results - and other responses. There are a few cases (such as NOTIFY) + and other responses. There are a few cases (such as NOTIFY) wherein the backend will send unsolicited messages, but for the most part this portion of a session is driven by frontend requests. @@ -71,9 +71,9 @@ Within normal operation, SQL commands can be executed through either of - two sub-protocols. In the simple query protocol, the frontend + two sub-protocols. In the simple query protocol, the frontend just sends a textual query string, which is parsed and immediately - executed by the backend. In the extended query protocol, + executed by the backend. In the extended query protocol, processing of queries is separated into multiple steps: parsing, binding of parameter values, and execution. This offers flexibility and performance benefits, at the cost of extra complexity. @@ -81,7 +81,7 @@ Normal operation has additional sub-protocols for special operations - such as COPY. + such as COPY. @@ -123,24 +123,24 @@ In the extended-query protocol, execution of SQL commands is divided into multiple steps. The state retained between steps is represented - by two types of objects: prepared statements and - portals. A prepared statement represents the result of + by two types of objects: prepared statements and + portals. A prepared statement represents the result of parsing and semantic analysis of a textual query string. A prepared statement is not in itself ready to execute, because it might - lack specific values for parameters. A portal represents + lack specific values for parameters. A portal represents a ready-to-execute or already-partially-executed statement, with any - missing parameter values filled in. (For SELECT statements, + missing parameter values filled in. (For SELECT statements, a portal is equivalent to an open cursor, but we choose to use a different - term since cursors don't handle non-SELECT statements.) + term since cursors don't handle non-SELECT statements.) - The overall execution cycle consists of a parse step, + The overall execution cycle consists of a parse step, which creates a prepared statement from a textual query string; a - bind step, which creates a portal given a prepared + bind step, which creates a portal given a prepared statement and values for any needed parameters; and an - execute step that runs a portal's query. 
In the case of - a query that returns rows (SELECT, SHOW, etc), + execute step that runs a portal's query. In the case of + a query that returns rows (SELECT, SHOW, etc), the execute step can be told to fetch only a limited number of rows, so that multiple execute steps might be needed to complete the operation. @@ -151,7 +151,7 @@ (but note that these exist only within a session, and are never shared across sessions). Existing prepared statements and portals are referenced by names assigned when they were created. In addition, - an unnamed prepared statement and portal exist. Although these + an unnamed prepared statement and portal exist. Although these behave largely the same as named objects, operations on them are optimized for the case of executing a query only once and then discarding it, whereas operations on named objects are optimized on the expectation @@ -164,10 +164,10 @@ Data of a particular data type might be transmitted in any of several - different formats. As of PostgreSQL 7.4 - the only supported formats are text and binary, + different formats. As of PostgreSQL 7.4 + the only supported formats are text and binary, but the protocol makes provision for future extensions. The desired - format for any value is specified by a format code. + format for any value is specified by a format code. Clients can specify a format code for each transmitted parameter value and for each column of a query result. Text has format code zero, binary has format code one, and all other format codes are reserved @@ -300,8 +300,8 @@ password, the server responds with an AuthenticationOk, otherwise it responds with an ErrorResponse. The actual PasswordMessage can be computed in SQL as concat('md5', - md5(concat(md5(concat(password, username)), random-salt))). - (Keep in mind the md5() function returns its + md5(concat(md5(concat(password, username)), random-salt))). + (Keep in mind the md5() function returns its result as a hex string.) @@ -624,11 +624,11 @@ - The response to a SELECT query (or other queries that - return row sets, such as EXPLAIN or SHOW) + The response to a SELECT query (or other queries that + return row sets, such as EXPLAIN or SHOW) normally consists of RowDescription, zero or more DataRow messages, and then CommandComplete. - COPY to or from the frontend invokes special protocol + COPY to or from the frontend invokes special protocol as described in . All other query types normally produce only a CommandComplete message. @@ -657,8 +657,8 @@ In simple Query mode, the format of retrieved values is always text, - except when the given command is a FETCH from a cursor - declared with the BINARY option. In that case, the + except when the given command is a FETCH from a cursor + declared with the BINARY option. In that case, the retrieved values are in binary format. The format codes given in the RowDescription message tell which format is being used. @@ -689,10 +689,10 @@ INSERT INTO mytable VALUES(1); SELECT 1/0; INSERT INTO mytable VALUES(2); - then the divide-by-zero failure in the SELECT will force - rollback of the first INSERT. Furthermore, because + then the divide-by-zero failure in the SELECT will force + rollback of the first INSERT. Furthermore, because execution of the message is abandoned at the first error, the second - INSERT is never attempted at all. + INSERT is never attempted at all. @@ -704,17 +704,17 @@ COMMIT; INSERT INTO mytable VALUES(2); SELECT 1/0; - then the first INSERT is committed by the - explicit COMMIT command. 
The second INSERT - and the SELECT are still treated as a single transaction, + then the first INSERT is committed by the + explicit COMMIT command. The second INSERT + and the SELECT are still treated as a single transaction, so that the divide-by-zero failure will roll back the - second INSERT, but not the first one. + second INSERT, but not the first one. This behavior is implemented by running the statements in a multi-statement Query message in an implicit transaction - block unless there is some explicit transaction block for them to + block unless there is some explicit transaction block for them to run in. The main difference between an implicit transaction block and a regular one is that an implicit block is closed automatically at the end of the Query message, either by an implicit commit if there was no @@ -725,27 +725,27 @@ SELECT 1/0; If the session is already in a transaction block, as a result of - a BEGIN in some previous message, then the Query message + a BEGIN in some previous message, then the Query message simply continues that transaction block, whether the message contains one statement or several. However, if the Query message contains - a COMMIT or ROLLBACK closing the existing + a COMMIT or ROLLBACK closing the existing transaction block, then any following statements are executed in an implicit transaction block. - Conversely, if a BEGIN appears in a multi-statement Query + Conversely, if a BEGIN appears in a multi-statement Query message, then it starts a regular transaction block that will only be - terminated by an explicit COMMIT or ROLLBACK, + terminated by an explicit COMMIT or ROLLBACK, whether that appears in this Query message or a later one. - If the BEGIN follows some statements that were executed as + If the BEGIN follows some statements that were executed as an implicit transaction block, those statements are not immediately committed; in effect, they are retroactively included into the new regular transaction block. - A COMMIT or ROLLBACK appearing in an implicit + A COMMIT or ROLLBACK appearing in an implicit transaction block is executed as normal, closing the implicit block; - however, a warning will be issued since a COMMIT - or ROLLBACK without a previous BEGIN might + however, a warning will be issued since a COMMIT + or ROLLBACK without a previous BEGIN might represent a mistake. If more statements follow, a new implicit transaction block will be started for them. @@ -766,8 +766,8 @@ SELECT 1/0; ROLLBACK; in a single Query message, the session will be left inside a failed - regular transaction block, since the ROLLBACK is not - reached after the divide-by-zero error. Another ROLLBACK + regular transaction block, since the ROLLBACK is not + reached after the divide-by-zero error. Another ROLLBACK will be needed to restore the session to a usable state. @@ -789,7 +789,7 @@ INSERT INTO mytable VALUES(2); SELCT 1/0; then none of the statements would get run, resulting in the visible - difference that the first INSERT is not committed. + difference that the first INSERT is not committed. Errors detected at semantic analysis or later, such as a misspelled table or column name, do not have this effect. @@ -824,17 +824,17 @@ SELCT 1/0; A parameter data type can be left unspecified by setting it to zero, or by making the array of parameter type OIDs shorter than the - number of parameter symbols ($n) + number of parameter symbols ($n) used in the query string. 
Another special case is that a parameter's - type can be specified as void (that is, the OID of the - void pseudo-type). This is meant to allow parameter symbols + type can be specified as void (that is, the OID of the + void pseudo-type). This is meant to allow parameter symbols to be used for function parameters that are actually OUT parameters. - Ordinarily there is no context in which a void parameter + Ordinarily there is no context in which a void parameter could be used, but if such a parameter symbol appears in a function's parameter list, it is effectively ignored. For example, a function - call such as foo($1,$2,$3,$4) could match a function with - two IN and two OUT arguments, if $3 and $4 - are specified as having type void. + call such as foo($1,$2,$3,$4) could match a function with + two IN and two OUT arguments, if $3 and $4 + are specified as having type void. @@ -858,7 +858,7 @@ SELCT 1/0; statements must be explicitly closed before they can be redefined by another Parse message, but this is not required for the unnamed statement. Named prepared statements can also be created and accessed at the SQL - command level, using PREPARE and EXECUTE. + command level, using PREPARE and EXECUTE. @@ -869,7 +869,7 @@ SELCT 1/0; the values to use for any parameter placeholders present in the prepared statement. The supplied parameter set must match those needed by the prepared statement. - (If you declared any void parameters in the Parse message, + (If you declared any void parameters in the Parse message, pass NULL values for them in the Bind message.) Bind also specifies the format to use for any data returned by the query; the format can be specified overall, or per-column. @@ -880,7 +880,7 @@ SELCT 1/0; The choice between text and binary output is determined by the format codes given in Bind, regardless of the SQL command involved. The - BINARY attribute in cursor declarations is irrelevant when + BINARY attribute in cursor declarations is irrelevant when using extended query protocol. @@ -904,14 +904,14 @@ SELCT 1/0; portals must be explicitly closed before they can be redefined by another Bind message, but this is not required for the unnamed portal. Named portals can also be created and accessed at the SQL - command level, using DECLARE CURSOR and FETCH. + command level, using DECLARE CURSOR and FETCH. Once a portal exists, it can be executed using an Execute message. The Execute message specifies the portal name (empty string denotes the unnamed portal) and - a maximum result-row count (zero meaning fetch all rows). + a maximum result-row count (zero meaning fetch all rows). The result-row count is only meaningful for portals containing commands that return row sets; in other cases the command is always executed to completion, and the row count is ignored. @@ -938,7 +938,7 @@ SELCT 1/0; At completion of each series of extended-query messages, the frontend should issue a Sync message. This parameterless message causes the backend to close the current transaction if it's not inside a - BEGIN/COMMIT transaction block (close + BEGIN/COMMIT transaction block (close meaning to commit if no error, or roll back if error). Then a ReadyForQuery response is issued. The purpose of Sync is to provide a resynchronization point for error recovery. When an error is detected @@ -946,13 +946,13 @@ SELCT 1/0; ErrorResponse, then reads and discards messages until a Sync is reached, then issues ReadyForQuery and returns to normal message processing. 
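   Relating the portal concept above to ordinary SQL (cursor and table names
   are illustrative): a named portal behaves much like an explicitly declared
   cursor, and an Execute with a row limit resembles a partial FETCH that can
   later be resumed:

BEGIN;
DECLARE item_cur CURSOR FOR SELECT * FROM items;
FETCH 10 FROM item_cur;   -- comparable to Execute with a row-count limit
FETCH 10 FROM item_cur;   -- resumes where the previous fetch stopped
COMMIT;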
(But note that no skipping occurs if an error is detected - while processing Sync — this ensures that there is one + while processing Sync — this ensures that there is one and only one ReadyForQuery sent for each Sync.) - Sync does not cause a transaction block opened with BEGIN + Sync does not cause a transaction block opened with BEGIN to be closed. It is possible to detect this situation since the ReadyForQuery message includes transaction status information. @@ -1039,7 +1039,7 @@ SELCT 1/0; The Function Call sub-protocol is a legacy feature that is probably best avoided in new code. Similar results can be accomplished by setting up - a prepared statement that does SELECT function($1, ...). + a prepared statement that does SELECT function($1, ...). The Function Call cycle can then be replaced with Bind/Execute. @@ -1107,7 +1107,7 @@ SELCT 1/0; COPY Operations - The COPY command allows high-speed bulk data transfer + The COPY command allows high-speed bulk data transfer to or from the server. Copy-in and copy-out operations each switch the connection into a distinct sub-protocol, which lasts until the operation is completed. @@ -1115,16 +1115,16 @@ SELCT 1/0; Copy-in mode (data transfer to the server) is initiated when the - backend executes a COPY FROM STDIN SQL statement. The backend + backend executes a COPY FROM STDIN SQL statement. The backend sends a CopyInResponse message to the frontend. The frontend should then send zero or more CopyData messages, forming a stream of input data. (The message boundaries are not required to have anything to do with row boundaries, although that is often a reasonable choice.) The frontend can terminate the copy-in mode by sending either a CopyDone message (allowing successful termination) or a CopyFail message (which - will cause the COPY SQL statement to fail with an + will cause the COPY SQL statement to fail with an error). The backend then reverts to the command-processing mode it was - in before the COPY started, which will be either simple or + in before the COPY started, which will be either simple or extended query protocol. It will next send either CommandComplete (if successful) or ErrorResponse (if not). @@ -1132,10 +1132,10 @@ SELCT 1/0; In the event of a backend-detected error during copy-in mode (including receipt of a CopyFail message), the backend will issue an ErrorResponse - message. If the COPY command was issued via an extended-query + message. If the COPY command was issued via an extended-query message, the backend will now discard frontend messages until a Sync message is received, then it will issue ReadyForQuery and return to normal - processing. If the COPY command was issued in a simple + processing. If the COPY command was issued in a simple Query message, the rest of that message is discarded and ReadyForQuery is issued. In either case, any subsequent CopyData, CopyDone, or CopyFail messages issued by the frontend will simply be dropped. @@ -1147,16 +1147,16 @@ SELCT 1/0; that will abort the copy-in state as described above. (The exception for Flush and Sync is for the convenience of client libraries that always send Flush or Sync after an Execute message, without checking whether - the command to be executed is a COPY FROM STDIN.) + the command to be executed is a COPY FROM STDIN.) Copy-out mode (data transfer from the server) is initiated when the - backend executes a COPY TO STDOUT SQL statement. The backend + backend executes a COPY TO STDOUT SQL statement. 
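   For reference, these are the SQL statements that switch a session into the
   copy-in and copy-out sub-protocols described above (the table name is
   hypothetical):

COPY measurements FROM STDIN (FORMAT csv);   -- enters copy-in mode
COPY measurements TO STDOUT (FORMAT csv);    -- enters copy-out mode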
The backend sends a CopyOutResponse message to the frontend, followed by zero or more CopyData messages (always one per row), followed by CopyDone. The backend then reverts to the command-processing mode it was - in before the COPY started, and sends CommandComplete. + in before the COPY started, and sends CommandComplete. The frontend cannot abort the transfer (except by closing the connection or issuing a Cancel request), but it can discard unwanted CopyData and CopyDone messages. @@ -1179,7 +1179,7 @@ SELCT 1/0; There is another Copy-related mode called copy-both, which allows - high-speed bulk data transfer to and from the server. + high-speed bulk data transfer to and from the server. Copy-both mode is initiated when a backend in walsender mode executes a START_REPLICATION statement. The backend sends a CopyBothResponse message to the frontend. Both @@ -1204,7 +1204,7 @@ SELCT 1/0; The CopyInResponse, CopyOutResponse and CopyBothResponse messages include fields that inform the frontend of the number of columns per row and the format codes being used for each column. (As of - the present implementation, all columns in a given COPY + the present implementation, all columns in a given COPY operation will use the same format, but the message design does not assume this.) @@ -1226,7 +1226,7 @@ SELCT 1/0; It is possible for NoticeResponse messages to be generated due to outside activity; for example, if the database administrator commands - a fast database shutdown, the backend will send a NoticeResponse + a fast database shutdown, the backend will send a NoticeResponse indicating this fact before closing the connection. Accordingly, frontends should always be prepared to accept and display NoticeResponse messages, even when the connection is nominally idle. @@ -1236,7 +1236,7 @@ SELCT 1/0; ParameterStatus messages will be generated whenever the active value changes for any of the parameters the backend believes the frontend should know about. Most commonly this occurs in response - to a SET SQL command executed by the frontend, and + to a SET SQL command executed by the frontend, and this case is effectively synchronous — but it is also possible for parameter status changes to occur because the administrator changed a configuration file and then sent the @@ -1249,27 +1249,27 @@ SELCT 1/0; At present there is a hard-wired set of parameters for which ParameterStatus will be generated: they are - server_version, - server_encoding, - client_encoding, - application_name, - is_superuser, - session_authorization, - DateStyle, - IntervalStyle, - TimeZone, - integer_datetimes, and - standard_conforming_strings. - (server_encoding, TimeZone, and - integer_datetimes were not reported by releases before 8.0; - standard_conforming_strings was not reported by releases + server_version, + server_encoding, + client_encoding, + application_name, + is_superuser, + session_authorization, + DateStyle, + IntervalStyle, + TimeZone, + integer_datetimes, and + standard_conforming_strings. + (server_encoding, TimeZone, and + integer_datetimes were not reported by releases before 8.0; + standard_conforming_strings was not reported by releases before 8.1; - IntervalStyle was not reported by releases before 8.4; - application_name was not reported by releases before 9.0.) + IntervalStyle was not reported by releases before 8.4; + application_name was not reported by releases before 9.0.) 
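   For example, changing one of the reported parameters listed above from the
   client side causes a corresponding ParameterStatus message to be sent:

SET application_name = 'bug-repro';   -- backend reports the new value asynchronously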
Note that - server_version, - server_encoding and - integer_datetimes + server_version, + server_encoding and + integer_datetimes are pseudo-parameters that cannot change after startup. This set might change in the future, or even become configurable. Accordingly, a frontend should simply ignore ParameterStatus for @@ -1394,7 +1394,7 @@ SELCT 1/0; frontend disconnects while a non-SELECT query is being processed, the backend will probably finish the query before noticing the disconnection. If the query is outside any - transaction block (BEGIN ... COMMIT + transaction block (BEGIN ... COMMIT sequence) then its results might be committed before the disconnection is recognized. @@ -1404,7 +1404,7 @@ SELCT 1/0; <acronym>SSL</acronym> Session Encryption - If PostgreSQL was built with + If PostgreSQL was built with SSL support, frontend/backend communications can be encrypted using SSL. This provides communication security in environments where attackers might be @@ -1417,17 +1417,17 @@ SELCT 1/0; To initiate an SSL-encrypted connection, the frontend initially sends an SSLRequest message rather than a StartupMessage. The server then responds with a single byte - containing S or N, indicating that it is + containing S or N, indicating that it is willing or unwilling to perform SSL, respectively. The frontend might close the connection at this point if it is dissatisfied with the response. To continue after - S, perform an SSL startup handshake + S, perform an SSL startup handshake (not described here, part of the SSL specification) with the server. If this is successful, continue with sending the usual StartupMessage. In this case the StartupMessage and all subsequent data will be SSL-encrypted. To continue after - N, send the usual StartupMessage and proceed without + N, send the usual StartupMessage and proceed without encryption. @@ -1435,7 +1435,7 @@ SELCT 1/0; The frontend should also be prepared to handle an ErrorMessage response to SSLRequest from the server. This would only occur if the server predates the addition of SSL support - to PostgreSQL. (Such servers are now very ancient, + to PostgreSQL. (Such servers are now very ancient, and likely do not exist in the wild anymore.) In this case the connection must be closed, but the frontend might choose to open a fresh connection @@ -1460,8 +1460,8 @@ SELCT 1/0; SASL Authentication -SASL is a framework for authentication in connection-oriented -protocols. At the moment, PostgreSQL implements only one SASL +SASL is a framework for authentication in connection-oriented +protocols. At the moment, PostgreSQL implements only one SASL authentication mechanism, SCRAM-SHA-256, but more might be added in the future. The below steps illustrate how SASL authentication is performed in general, while the next subsection gives more details on SCRAM-SHA-256. @@ -1518,24 +1518,24 @@ ErrorMessage. SCRAM-SHA-256 authentication - SCRAM-SHA-256 (called just SCRAM from now on) is + SCRAM-SHA-256 (called just SCRAM from now on) is the only implemented SASL mechanism, at the moment. It is described in detail in RFC 7677 and RFC 5802. When SCRAM-SHA-256 is used in PostgreSQL, the server will ignore the user name -that the client sends in the client-first-message. The user name +that the client sends in the client-first-message. The user name that was already sent in the startup message is used instead. 
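   On the server side, the SCRAM exchange described above presupposes that the
   role's password was stored as a SCRAM-SHA-256 verifier; a minimal sketch
   (the role name and password are placeholders):

SET password_encryption = 'scram-sha-256';
CREATE ROLE app_user LOGIN PASSWORD 'replace-me';

   The matching pg_hba.conf entries must then specify the scram-sha-256
   authentication method.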
-PostgreSQL supports multiple character encodings, while SCRAM +PostgreSQL supports multiple character encodings, while SCRAM dictates UTF-8 to be used for the user name, so it might be impossible to represent the PostgreSQL user name in UTF-8. The SCRAM specification dictates that the password is also in UTF-8, and is -processed with the SASLprep algorithm. -PostgreSQL, however, does not require UTF-8 to be used for +processed with the SASLprep algorithm. +PostgreSQL, however, does not require UTF-8 to be used for the password. When a user's password is set, it is processed with SASLprep as if it was in UTF-8, regardless of the actual encoding used. However, if it is not a legal UTF-8 byte sequence, or it contains UTF-8 byte sequences @@ -1547,7 +1547,7 @@ the password is in. -Channel binding has not been implemented yet. +Channel binding has not been implemented yet. @@ -1561,27 +1561,27 @@ the password is in. The client responds by sending a SASLInitialResponse message, which - indicates the chosen mechanism, SCRAM-SHA-256. In the Initial + indicates the chosen mechanism, SCRAM-SHA-256. In the Initial Client response field, the message contains the SCRAM - client-first-message. + client-first-message. Server sends an AuthenticationSASLContinue message, with a SCRAM - server-first message as the content. + server-first message as the content. Client sends a SASLResponse message, with SCRAM - client-final-message as the content. + client-final-message as the content. Server sends an AuthenticationSASLFinal message, with the SCRAM - server-final-message, followed immediately by + server-final-message, followed immediately by an AuthenticationOk message. @@ -1594,14 +1594,14 @@ the password is in. To initiate streaming replication, the frontend sends the -replication parameter in the startup message. A Boolean value -of true tells the backend to go into walsender mode, wherein a +replication parameter in the startup message. A Boolean value +of true tells the backend to go into walsender mode, wherein a small set of replication commands can be issued instead of SQL statements. Only the simple query protocol can be used in walsender mode. Replication commands are logged in the server log when is enabled. -Passing database as the value instructs walsender to connect to -the database specified in the dbname parameter, which will allow +Passing database as the value instructs walsender to connect to +the database specified in the dbname parameter, which will allow the connection to be used for logical replication from that database. @@ -1697,7 +1697,7 @@ The commands accepted in walsender mode are: - name + name The name of a run-time parameter. Available parameters are documented @@ -1728,7 +1728,7 @@ The commands accepted in walsender mode are: - File name of the timeline history file, e.g., 00000002.history. + File name of the timeline history file, e.g., 00000002.history. @@ -1750,7 +1750,7 @@ The commands accepted in walsender mode are: - CREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [ RESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT | NOEXPORT_SNAPSHOT | USE_SNAPSHOT ] } + CREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [ RESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT | NOEXPORT_SNAPSHOT | USE_SNAPSHOT ] } CREATE_REPLICATION_SLOT @@ -1761,7 +1761,7 @@ The commands accepted in walsender mode are: - slot_name + slot_name The name of the slot to create. 
Must be a valid replication slot @@ -1771,7 +1771,7 @@ The commands accepted in walsender mode are: - output_plugin + output_plugin The name of the output plugin used for logical decoding @@ -1781,7 +1781,7 @@ The commands accepted in walsender mode are: - TEMPORARY + TEMPORARY Specify that this replication slot is a temporary one. Temporary @@ -1792,30 +1792,30 @@ The commands accepted in walsender mode are: - RESERVE_WAL + RESERVE_WAL - Specify that this physical replication slot reserves WAL - immediately. Otherwise, WAL is only reserved upon + Specify that this physical replication slot reserves WAL + immediately. Otherwise, WAL is only reserved upon connection from a streaming replication client. - EXPORT_SNAPSHOT - NOEXPORT_SNAPSHOT - USE_SNAPSHOT + EXPORT_SNAPSHOT + NOEXPORT_SNAPSHOT + USE_SNAPSHOT Decides what to do with the snapshot created during logical slot - initialization. EXPORT_SNAPSHOT, which is the default, + initialization. EXPORT_SNAPSHOT, which is the default, will export the snapshot for use in other sessions. This option can't - be used inside a transaction. USE_SNAPSHOT will use the + be used inside a transaction. USE_SNAPSHOT will use the snapshot for the current transaction executing the command. This option must be used in a transaction, and CREATE_REPLICATION_SLOT must be the first command - run in that transaction. Finally, NOEXPORT_SNAPSHOT will + run in that transaction. Finally, NOEXPORT_SNAPSHOT will just use the snapshot for logical decoding as normal but won't do anything else with it. @@ -1875,15 +1875,15 @@ The commands accepted in walsender mode are: - START_REPLICATION [ SLOT slot_name ] [ PHYSICAL ] XXX/XXX [ TIMELINE tli ] + START_REPLICATION [ SLOT slot_name ] [ PHYSICAL ] XXX/XXX [ TIMELINE tli ] START_REPLICATION Instructs server to start streaming WAL, starting at - WAL location XXX/XXX. + WAL location XXX/XXX. If TIMELINE option is specified, - streaming starts on timeline tli; + streaming starts on timeline tli; otherwise, the server's current timeline is selected. The server can reply with an error, for example if the requested section of WAL has already been recycled. On success, server responds with a CopyBothResponse @@ -1892,9 +1892,9 @@ The commands accepted in walsender mode are: If a slot's name is provided - via slot_name, it will be updated + via slot_name, it will be updated as replication progresses so that the server knows which WAL segments, - and if hot_standby_feedback is on which transactions, + and if hot_standby_feedback is on which transactions, are still needed by the standby. @@ -2228,11 +2228,11 @@ The commands accepted in walsender mode are: - START_REPLICATION SLOT slot_name LOGICAL XXX/XXX [ ( option_name [ option_value ] [, ...] ) ] + START_REPLICATION SLOT slot_name LOGICAL XXX/XXX [ ( option_name [ option_value ] [, ...] ) ] Instructs server to start streaming WAL for logical replication, starting - at WAL location XXX/XXX. The server can + at WAL location XXX/XXX. The server can reply with an error, for example if the requested section of WAL has already been recycled. On success, server responds with a CopyBothResponse message, and then starts to stream WAL to the frontend. @@ -2250,7 +2250,7 @@ The commands accepted in walsender mode are: - SLOT slot_name + SLOT slot_name The name of the slot to stream changes from. This parameter is required, @@ -2261,7 +2261,7 @@ The commands accepted in walsender mode are: - XXX/XXX + XXX/XXX The WAL location to begin streaming at. 
@@ -2269,7 +2269,7 @@ The commands accepted in walsender mode are: - option_name + option_name The name of an option passed to the slot's logical decoding plugin. @@ -2277,7 +2277,7 @@ The commands accepted in walsender mode are: - option_value + option_value Optional value, in the form of a string constant, associated with the @@ -2291,7 +2291,7 @@ The commands accepted in walsender mode are: - DROP_REPLICATION_SLOT slot_name WAIT + DROP_REPLICATION_SLOT slot_name WAIT DROP_REPLICATION_SLOT @@ -2302,7 +2302,7 @@ The commands accepted in walsender mode are: - slot_name + slot_name The name of the slot to drop. @@ -2348,7 +2348,7 @@ The commands accepted in walsender mode are: - PROGRESS + PROGRESS Request information required to generate a progress report. This will @@ -2365,7 +2365,7 @@ The commands accepted in walsender mode are: - FAST + FAST Request a fast checkpoint. @@ -2399,7 +2399,7 @@ The commands accepted in walsender mode are: - MAX_RATE rate + MAX_RATE rate Limit (throttle) the maximum amount of data transferred from server @@ -2420,7 +2420,7 @@ The commands accepted in walsender mode are: pg_tblspc in a file named tablespace_map. The tablespace map file includes each symbolic link name as it exists in the directory - pg_tblspc/ and the full path of that symbolic link. + pg_tblspc/ and the full path of that symbolic link. @@ -2473,9 +2473,9 @@ The commands accepted in walsender mode are: After the second regular result set, one or more CopyResponse results will be sent, one for the main data directory and one for each additional tablespace other - than pg_default and pg_global. The data in + than pg_default and pg_global. The data in the CopyResponse results will be a tar format (following the - ustar interchange format specified in the POSIX 1003.1-2008 + ustar interchange format specified in the POSIX 1003.1-2008 standard) dump of the tablespace contents, except that the two trailing blocks of zeroes specified in the standard are omitted. After the tar data is complete, a final ordinary result set will be sent, @@ -2486,29 +2486,29 @@ The commands accepted in walsender mode are: The tar archive for the data directory and each tablespace will contain all files in the directories, regardless of whether they are - PostgreSQL files or other files added to the same + PostgreSQL files or other files added to the same directory. The only excluded files are: - postmaster.pid + postmaster.pid - postmaster.opts + postmaster.opts Various temporary files and directories created during the operation of the PostgreSQL server, such as any file or directory beginning - with pgsql_tmp. + with pgsql_tmp. - pg_wal, including subdirectories. If the backup is run + pg_wal, including subdirectories. If the backup is run with WAL files included, a synthesized version of pg_wal will be included, but it will only contain the files necessary for the backup to work, not the rest of the contents. @@ -2516,10 +2516,10 @@ The commands accepted in walsender mode are: - pg_dynshmem, pg_notify, - pg_replslot, pg_serial, - pg_snapshots, pg_stat_tmp, and - pg_subtrans are copied as empty directories (even if + pg_dynshmem, pg_notify, + pg_replslot, pg_serial, + pg_snapshots, pg_stat_tmp, and + pg_subtrans are copied as empty directories (even if they are symbolic links). 
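   Putting the walsender commands above together, a replication connection
   (opened with replication=true, or replication=database for logical
   decoding) might issue commands along these lines; the slot name, starting
   WAL location, and transfer rate are illustrative, and each command is shown
   independently rather than as a script:

CREATE_REPLICATION_SLOT node_a_slot TEMPORARY PHYSICAL RESERVE_WAL;
START_REPLICATION SLOT node_a_slot PHYSICAL 0/15D68C50;
DROP_REPLICATION_SLOT node_a_slot WAIT;
BASE_BACKUP PROGRESS FAST MAX_RATE 32768;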
@@ -2549,7 +2549,7 @@ The commands accepted in walsender mode are: This section describes the logical replication protocol, which is the message flow started by the START_REPLICATION - SLOT slot_name + SLOT slot_name LOGICAL replication command. @@ -3419,7 +3419,7 @@ Bind (F) The number of parameter format codes that follow - (denoted C below). + (denoted C below). This can be zero to indicate that there are no parameters or that the parameters all use the default format (text); or one, in which case the specified format code is applied @@ -3430,7 +3430,7 @@ Bind (F) - Int16[C] + Int16[C] @@ -3488,7 +3488,7 @@ Bind (F) The number of result-column format codes that follow - (denoted R below). + (denoted R below). This can be zero to indicate that there are no result columns or that the result columns should all use the default format (text); @@ -3500,7 +3500,7 @@ Bind (F) - Int16[R] + Int16[R] @@ -3575,7 +3575,7 @@ CancelRequest (F) The cancel request code. The value is chosen to contain - 1234 in the most significant 16 bits, and 5678 in the + 1234 in the most significant 16 bits, and 5678 in the least significant 16 bits. (To avoid confusion, this code must not be the same as any protocol version number.) @@ -3642,8 +3642,8 @@ Close (F) - 'S' to close a prepared statement; or - 'P' to close a portal. + 'S' to close a prepared statement; or + 'P' to close a portal. @@ -3977,13 +3977,13 @@ CopyInResponse (B) The number of columns in the data to be copied - (denoted N below). + (denoted N below). - Int16[N] + Int16[N] @@ -4050,13 +4050,13 @@ CopyOutResponse (B) The number of columns in the data to be copied - (denoted N below). + (denoted N below). - Int16[N] + Int16[N] @@ -4123,13 +4123,13 @@ CopyBothResponse (B) The number of columns in the data to be copied - (denoted N below). + (denoted N below). - Int16[N] + Int16[N] @@ -4252,8 +4252,8 @@ Describe (F) - 'S' to describe a prepared statement; or - 'P' to describe a portal. + 'S' to describe a prepared statement; or + 'P' to describe a portal. @@ -4424,7 +4424,7 @@ Execute (F) Maximum number of rows to return, if portal contains a query that returns rows (ignored otherwise). Zero - denotes no limit. + denotes no limit. @@ -4514,7 +4514,7 @@ FunctionCall (F) The number of argument format codes that follow - (denoted C below). + (denoted C below). This can be zero to indicate that there are no arguments or that the arguments all use the default format (text); or one, in which case the specified format code is applied @@ -4525,7 +4525,7 @@ FunctionCall (F) - Int16[C] + Int16[C] @@ -4855,7 +4855,7 @@ NotificationResponse (B) - The payload string passed from the notifying process. + The payload string passed from the notifying process. @@ -5261,9 +5261,9 @@ ReadyForQuery (B) Current backend transaction status indicator. - Possible values are 'I' if idle (not in - a transaction block); 'T' if in a transaction - block; or 'E' if in a failed transaction + Possible values are 'I' if idle (not in + a transaction block); 'T' if in a transaction + block; or 'E' if in a failed transaction block (queries will be rejected until block is ended). @@ -5364,7 +5364,7 @@ RowDescription (B) - The data type size (see pg_type.typlen). + The data type size (see pg_type.typlen). Note that negative values denote variable-width types. @@ -5375,7 +5375,7 @@ RowDescription (B) - The type modifier (see pg_attribute.atttypmod). + The type modifier (see pg_attribute.atttypmod). The meaning of the modifier is type-specific. 
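   The per-column type size and type modifier carried in RowDescription
   correspond to the catalog columns mentioned above and can be inspected
   directly; a sketch for a hypothetical table:

SELECT a.attname, t.typlen, a.atttypmod
  FROM pg_attribute a
  JOIN pg_type t ON t.oid = a.atttypid
 WHERE a.attrelid = 'mytable'::regclass AND a.attnum > 0;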
@@ -5539,7 +5539,7 @@ SSLRequest (F) The SSL request code. The value is chosen to contain - 1234 in the most significant 16 bits, and 5679 in the + 1234 in the most significant 16 bits, and 5679 in the least significant 16 bits. (To avoid confusion, this code must not be the same as any protocol version number.) @@ -5588,7 +5588,7 @@ StartupMessage (F) parameter name and value strings. A zero byte is required as a terminator after the last name/value pair. Parameters can appear in any - order. user is required, others are optional. + order. user is required, others are optional. Each parameter is specified as: @@ -5602,7 +5602,7 @@ StartupMessage (F) - user + user @@ -5613,7 +5613,7 @@ StartupMessage (F) - database + database @@ -5623,7 +5623,7 @@ StartupMessage (F) - options + options @@ -5631,23 +5631,23 @@ StartupMessage (F) deprecated in favor of setting individual run-time parameters.) Spaces within this string are considered to separate arguments, unless escaped with - a backslash (\); write \\ to + a backslash (\); write \\ to represent a literal backslash. - replication + replication Used to connect in streaming replication mode, where a small set of replication commands can be issued instead of SQL statements. Value can be - true, false, or - database, and the default is - false. See + true, false, or + database, and the default is + false. See for details. @@ -5768,15 +5768,15 @@ message. -S +S Severity: the field contents are - ERROR, FATAL, or - PANIC (in an error message), or - WARNING, NOTICE, DEBUG, - INFO, or LOG (in a notice message), + ERROR, FATAL, or + PANIC (in an error message), or + WARNING, NOTICE, DEBUG, + INFO, or LOG (in a notice message), or a localized translation of one of these. Always present. @@ -5784,18 +5784,18 @@ message. -V +V Severity: the field contents are - ERROR, FATAL, or - PANIC (in an error message), or - WARNING, NOTICE, DEBUG, - INFO, or LOG (in a notice message). - This is identical to the S field except + ERROR, FATAL, or + PANIC (in an error message), or + WARNING, NOTICE, DEBUG, + INFO, or LOG (in a notice message). + This is identical to the S field except that the contents are never localized. This is present only in - messages generated by PostgreSQL versions 9.6 + messages generated by PostgreSQL versions 9.6 and later. @@ -5803,7 +5803,7 @@ message. -C +C @@ -5815,7 +5815,7 @@ message. -M +M @@ -5828,7 +5828,7 @@ message. -D +D @@ -5840,7 +5840,7 @@ message. -H +H @@ -5854,7 +5854,7 @@ message. -P +P @@ -5868,21 +5868,21 @@ message. -p +p - Internal position: this is defined the same as the P + Internal position: this is defined the same as the P field, but it is used when the cursor position refers to an internally generated command rather than the one submitted by the client. - The q field will always appear when this field appears. + The q field will always appear when this field appears. -q +q @@ -5894,7 +5894,7 @@ message. -W +W @@ -5908,7 +5908,7 @@ message. -s +s @@ -5920,7 +5920,7 @@ message. -t +t @@ -5933,7 +5933,7 @@ message. -c +c @@ -5946,7 +5946,7 @@ message. -d +d @@ -5959,7 +5959,7 @@ message. -n +n @@ -5974,7 +5974,7 @@ message. -F +F @@ -5986,7 +5986,7 @@ message. -L +L @@ -5998,7 +5998,7 @@ message. -R +R @@ -6738,8 +6738,8 @@ developers trying to update existing client libraries to protocol 3.0. The initial startup packet uses a flexible list-of-strings format instead of a fixed format. Notice that session default values for run-time parameters can now be specified directly in the startup packet. 
(Actually, -you could do that before using the options field, but given the -limited width of options and the lack of any way to quote +you could do that before using the options field, but given the +limited width of options and the lack of any way to quote whitespace in the values, it wasn't a very safe technique.) @@ -6750,7 +6750,7 @@ PasswordMessage now has a type byte. -ErrorResponse and NoticeResponse ('E' and 'N') +ErrorResponse and NoticeResponse ('E' and 'N') messages now contain multiple fields, from which the client code can assemble an error message of the desired level of verbosity. Note that individual fields will typically not end with a newline, whereas the single @@ -6758,7 +6758,7 @@ string sent in the older protocol always did. -The ReadyForQuery ('Z') message includes a transaction status +The ReadyForQuery ('Z') message includes a transaction status indicator. @@ -6771,7 +6771,7 @@ directly tied to the server's internal representation. -There is a new extended query sub-protocol, which adds the frontend +There is a new extended query sub-protocol, which adds the frontend message types Parse, Bind, Execute, Describe, Close, Flush, and Sync, and the backend message types ParseComplete, BindComplete, PortalSuspended, ParameterDescription, NoData, and CloseComplete. Existing clients do not @@ -6782,7 +6782,7 @@ might allow improvements in performance or functionality. COPY data is now encapsulated into CopyData and CopyDone messages. There is a well-defined way to recover from errors during COPY. The special -\. last line is not needed anymore, and is not sent +\. last line is not needed anymore, and is not sent during COPY OUT. (It is still recognized as a terminator during COPY IN, but its use is deprecated and will eventually be removed.) Binary COPY is supported. @@ -6800,31 +6800,31 @@ server data representations. -The backend sends ParameterStatus ('S') messages during connection +The backend sends ParameterStatus ('S') messages during connection startup for all parameters it considers interesting to the client library. Subsequently, a ParameterStatus message is sent whenever the active value changes for any of these parameters. -The RowDescription ('T') message carries new table OID and column +The RowDescription ('T') message carries new table OID and column number fields for each column of the described row. It also shows the format code for each column. -The CursorResponse ('P') message is no longer generated by +The CursorResponse ('P') message is no longer generated by the backend. -The NotificationResponse ('A') message has an additional string -field, which can carry a payload string passed +The NotificationResponse ('A') message has an additional string +field, which can carry a payload string passed from the NOTIFY event sender. -The EmptyQueryResponse ('I') message used to include an empty +The EmptyQueryResponse ('I') message used to include an empty string parameter; this has been removed. diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml index 11a4bf4e41..c2c1aaa208 100644 --- a/doc/src/sgml/queries.sgml +++ b/doc/src/sgml/queries.sgml @@ -31,7 +31,7 @@ WITH with_queries SELECT select_list FROM table_expression sort_specification The following sections describe the details of the select list, the - table expression, and the sort specification. WITH + table expression, and the sort specification. WITH queries are treated last since they are an advanced feature. @@ -51,13 +51,13 @@ SELECT * FROM table1; expression happens to provide. 
A select list can also select a subset of the available columns or make calculations using the columns. For example, if - table1 has columns named a, - b, and c (and perhaps others) you can make + table1 has columns named a, + b, and c (and perhaps others) you can make the following query: SELECT a, b + c FROM table1; - (assuming that b and c are of a numerical + (assuming that b and c are of a numerical data type). See for more details. @@ -89,19 +89,19 @@ SELECT random(); A table expression computes a table. The - table expression contains a FROM clause that is - optionally followed by WHERE, GROUP BY, and - HAVING clauses. Trivial table expressions simply refer + table expression contains a FROM clause that is + optionally followed by WHERE, GROUP BY, and + HAVING clauses. Trivial table expressions simply refer to a table on disk, a so-called base table, but more complex expressions can be used to modify or combine base tables in various ways. - The optional WHERE, GROUP BY, and - HAVING clauses in the table expression specify a + The optional WHERE, GROUP BY, and + HAVING clauses in the table expression specify a pipeline of successive transformations performed on the table - derived in the FROM clause. All these transformations + derived in the FROM clause. All these transformations produce a virtual table that provides the rows that are passed to the select list to compute the output rows of the query. @@ -118,14 +118,14 @@ FROM table_reference , table_r A table reference can be a table name (possibly schema-qualified), - or a derived table such as a subquery, a JOIN construct, or + or a derived table such as a subquery, a JOIN construct, or complex combinations of these. If more than one table reference is - listed in the FROM clause, the tables are cross-joined + listed in the FROM clause, the tables are cross-joined (that is, the Cartesian product of their rows is formed; see below). - The result of the FROM list is an intermediate virtual + The result of the FROM list is an intermediate virtual table that can then be subject to - transformations by the WHERE, GROUP BY, - and HAVING clauses and is finally the result of the + transformations by the WHERE, GROUP BY, + and HAVING clauses and is finally the result of the overall table expression. @@ -137,14 +137,14 @@ FROM table_reference , table_r When a table reference names a table that is the parent of a table inheritance hierarchy, the table reference produces rows of not only that table but all of its descendant tables, unless the - key word ONLY precedes the table name. However, the + key word ONLY precedes the table name. However, the reference produces only the columns that appear in the named table — any columns added in subtables are ignored. - Instead of writing ONLY before the table name, you can write - * after the table name to explicitly specify that descendant + Instead of writing ONLY before the table name, you can write + * after the table name to explicitly specify that descendant tables are included. There is no real reason to use this syntax any more, because searching descendant tables is now always the default behavior. However, it is supported for compatibility with older releases. @@ -168,8 +168,8 @@ FROM table_reference , table_r Joins of all types can be chained together, or nested: either or both T1 and T2 can be joined tables. Parentheses - can be used around JOIN clauses to control the join - order. In the absence of parentheses, JOIN clauses + can be used around JOIN clauses to control the join + order. 
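   A brief sketch of the ONLY and trailing * forms mentioned above, using a
   hypothetical inheritance parent named cities:

SELECT name FROM ONLY cities;   -- rows of cities itself, excluding child tables
SELECT name FROM cities*;       -- descendant tables included (the default behavior)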
In the absence of parentheses, JOIN clauses nest left-to-right. @@ -215,7 +215,7 @@ FROM table_reference , table_r This latter equivalence does not hold exactly when more than two - tables appear, because JOIN binds more tightly than + tables appear, because JOIN binds more tightly than comma. For example FROM T1 CROSS JOIN T2 INNER JOIN T3 @@ -262,8 +262,8 @@ FROM table_reference , table_r The join condition is specified in the - ON or USING clause, or implicitly by - the word NATURAL. The join condition determines + ON or USING clause, or implicitly by + the word NATURAL. The join condition determines which rows from the two source tables are considered to match, as explained in detail below. @@ -273,7 +273,7 @@ FROM table_reference , table_r - INNER JOIN + INNER JOIN @@ -284,7 +284,7 @@ FROM table_reference , table_r - LEFT OUTER JOIN + LEFT OUTER JOIN join left @@ -307,7 +307,7 @@ FROM table_reference , table_r - RIGHT OUTER JOIN + RIGHT OUTER JOIN join right @@ -330,7 +330,7 @@ FROM table_reference , table_r - FULL OUTER JOIN + FULL OUTER JOIN @@ -347,35 +347,35 @@ FROM table_reference , table_r - The ON clause is the most general kind of join + The ON clause is the most general kind of join condition: it takes a Boolean value expression of the same - kind as is used in a WHERE clause. A pair of rows - from T1 and T2 match if the - ON expression evaluates to true. + kind as is used in a WHERE clause. A pair of rows + from T1 and T2 match if the + ON expression evaluates to true. - The USING clause is a shorthand that allows you to take + The USING clause is a shorthand that allows you to take advantage of the specific situation where both sides of the join use the same name for the joining column(s). It takes a comma-separated list of the shared column names and forms a join condition that includes an equality comparison - for each one. For example, joining T1 - and T2 with USING (a, b) produces - the join condition ON T1.a - = T2.a AND T1.b - = T2.b. + for each one. For example, joining T1 + and T2 with USING (a, b) produces + the join condition ON T1.a + = T2.a AND T1.b + = T2.b. - Furthermore, the output of JOIN USING suppresses + Furthermore, the output of JOIN USING suppresses redundant columns: there is no need to print both of the matched columns, since they must have equal values. While JOIN - ON produces all columns from T1 followed by all - columns from T2, JOIN USING produces one + ON produces all columns from T1 followed by all + columns from T2, JOIN USING produces one output column for each of the listed column pairs (in the listed - order), followed by any remaining columns from T1, - followed by any remaining columns from T2. + order), followed by any remaining columns from T1, + followed by any remaining columns from T2. @@ -386,10 +386,10 @@ FROM table_reference , table_r natural join - Finally, NATURAL is a shorthand form of - USING: it forms a USING list + Finally, NATURAL is a shorthand form of + USING: it forms a USING list consisting of all column names that appear in both - input tables. As with USING, these columns appear + input tables. As with USING, these columns appear only once in the output table. If there are no common column names, NATURAL JOIN behaves like JOIN ... ON TRUE, producing a cross-product join. @@ -399,7 +399,7 @@ FROM table_reference , table_r USING is reasonably safe from column changes in the joined relations since only the listed columns - are combined. NATURAL is considerably more risky since + are combined. 
NATURAL is considerably more risky since any schema changes to either relation that cause a new matching column name to be present will cause the join to combine that new column as well. @@ -428,7 +428,7 @@ FROM table_reference , table_r then we get the following results for the various joins: -=> SELECT * FROM t1 CROSS JOIN t2; +=> SELECT * FROM t1 CROSS JOIN t2; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -442,28 +442,28 @@ FROM table_reference , table_r 3 | c | 5 | zzz (9 rows) -=> SELECT * FROM t1 INNER JOIN t2 ON t1.num = t2.num; +=> SELECT * FROM t1 INNER JOIN t2 ON t1.num = t2.num; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx 3 | c | 3 | yyy (2 rows) -=> SELECT * FROM t1 INNER JOIN t2 USING (num); +=> SELECT * FROM t1 INNER JOIN t2 USING (num); num | name | value -----+------+------- 1 | a | xxx 3 | c | yyy (2 rows) -=> SELECT * FROM t1 NATURAL INNER JOIN t2; +=> SELECT * FROM t1 NATURAL INNER JOIN t2; num | name | value -----+------+------- 1 | a | xxx 3 | c | yyy (2 rows) -=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num; +=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -471,7 +471,7 @@ FROM table_reference , table_r 3 | c | 3 | yyy (3 rows) -=> SELECT * FROM t1 LEFT JOIN t2 USING (num); +=> SELECT * FROM t1 LEFT JOIN t2 USING (num); num | name | value -----+------+------- 1 | a | xxx @@ -479,7 +479,7 @@ FROM table_reference , table_r 3 | c | yyy (3 rows) -=> SELECT * FROM t1 RIGHT JOIN t2 ON t1.num = t2.num; +=> SELECT * FROM t1 RIGHT JOIN t2 ON t1.num = t2.num; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -487,7 +487,7 @@ FROM table_reference , table_r | | 5 | zzz (3 rows) -=> SELECT * FROM t1 FULL JOIN t2 ON t1.num = t2.num; +=> SELECT * FROM t1 FULL JOIN t2 ON t1.num = t2.num; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -499,12 +499,12 @@ FROM table_reference , table_r - The join condition specified with ON can also contain + The join condition specified with ON can also contain conditions that do not relate directly to the join. This can prove useful for some queries but needs to be thought out carefully. For example: -=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num AND t2.value = 'xxx'; +=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num AND t2.value = 'xxx'; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -512,19 +512,19 @@ FROM table_reference , table_r 3 | c | | (3 rows) - Notice that placing the restriction in the WHERE clause + Notice that placing the restriction in the WHERE clause produces a different result: -=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num WHERE t2.value = 'xxx'; +=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num WHERE t2.value = 'xxx'; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx (1 row) - This is because a restriction placed in the ON - clause is processed before the join, while - a restriction placed in the WHERE clause is processed - after the join. + This is because a restriction placed in the ON + clause is processed before the join, while + a restriction placed in the WHERE clause is processed + after the join. That does not matter with inner joins, but it matters a lot with outer joins. @@ -595,7 +595,7 @@ SELECT * FROM people AS mother JOIN people AS child ON mother.id = child.mother_ Parentheses are used to resolve ambiguities. 
In the following example, the first statement assigns the alias b to the second - instance of my_table, but the second statement assigns the + instance of my_table, but the second statement assigns the alias to the result of the join: SELECT * FROM my_table AS a CROSS JOIN my_table AS b ... @@ -615,9 +615,9 @@ FROM table_reference AS - When an alias is applied to the output of a JOIN + When an alias is applied to the output of a JOIN clause, the alias hides the original - name(s) within the JOIN. For example: + name(s) within the JOIN. For example: SELECT a.* FROM my_table AS a JOIN your_table AS b ON ... @@ -625,8 +625,8 @@ SELECT a.* FROM my_table AS a JOIN your_table AS b ON ... SELECT a.* FROM (my_table AS a JOIN your_table AS b ON ...) AS c - is not valid; the table alias a is not visible - outside the alias c. + is not valid; the table alias a is not visible + outside the alias c. @@ -655,13 +655,13 @@ FROM (SELECT * FROM table1) AS alias_name - A subquery can also be a VALUES list: + A subquery can also be a VALUES list: FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow')) AS names(first, last) Again, a table alias is required. Assigning alias names to the columns - of the VALUES list is optional, but is good practice. + of the VALUES list is optional, but is good practice. For more information see . @@ -669,25 +669,25 @@ FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow')) Table Functions - table function + table function - function - in the FROM clause + function + in the FROM clause Table functions are functions that produce a set of rows, made up of either base data types (scalar types) or composite data types (table rows). They are used like a table, view, or subquery in - the FROM clause of a query. Columns returned by table - functions can be included in SELECT, - JOIN, or WHERE clauses in the same manner + the FROM clause of a query. Columns returned by table + functions can be included in SELECT, + JOIN, or WHERE clauses in the same manner as columns of a table, view, or subquery. - Table functions may also be combined using the ROWS FROM + Table functions may also be combined using the ROWS FROM syntax, with the results returned in parallel columns; the number of result rows in this case is that of the largest function result, with smaller results padded with null values to match. @@ -704,7 +704,7 @@ ROWS FROM( function_call , ... function result columns. This column numbers the rows of the function result set, starting from 1. (This is a generalization of the SQL-standard syntax for UNNEST ... WITH ORDINALITY.) - By default, the ordinal column is called ordinality, but + By default, the ordinal column is called ordinality, but a different column name can be assigned to it using an AS clause. @@ -723,7 +723,7 @@ UNNEST( array_expression , ... If no table_alias is specified, the function - name is used as the table name; in the case of a ROWS FROM() + name is used as the table name; in the case of a ROWS FROM() construct, the first function's name is used. @@ -762,7 +762,7 @@ SELECT * FROM vw_getfoo; In some cases it is useful to define table functions that can return different column sets depending on how they are invoked. To support this, the table function can be declared as returning - the pseudo-type record. When such a function is used in + the pseudo-type record. When such a function is used in a query, the expected row structure must be specified in the query itself, so that the system can know how to parse and plan the query. 
This syntax looks like: @@ -775,16 +775,16 @@ ROWS FROM( ... function_call AS (column_ - When not using the ROWS FROM() syntax, + When not using the ROWS FROM() syntax, the column_definition list replaces the column - alias list that could otherwise be attached to the FROM + alias list that could otherwise be attached to the FROM item; the names in the column definitions serve as column aliases. - When using the ROWS FROM() syntax, + When using the ROWS FROM() syntax, a column_definition list can be attached to each member function separately; or if there is only one member function - and no WITH ORDINALITY clause, + and no WITH ORDINALITY clause, a column_definition list can be written in - place of a column alias list following ROWS FROM(). + place of a column alias list following ROWS FROM(). @@ -798,49 +798,49 @@ SELECT * The function (part of the module) executes a remote query. It is declared to return - record since it might be used for any kind of query. + record since it might be used for any kind of query. The actual column set must be specified in the calling query so - that the parser knows, for example, what * should + that the parser knows, for example, what * should expand to. - <literal>LATERAL</> Subqueries + <literal>LATERAL</literal> Subqueries - LATERAL - in the FROM clause + LATERAL + in the FROM clause - Subqueries appearing in FROM can be - preceded by the key word LATERAL. This allows them to - reference columns provided by preceding FROM items. + Subqueries appearing in FROM can be + preceded by the key word LATERAL. This allows them to + reference columns provided by preceding FROM items. (Without LATERAL, each subquery is evaluated independently and so cannot cross-reference any other - FROM item.) + FROM item.) - Table functions appearing in FROM can also be - preceded by the key word LATERAL, but for functions the + Table functions appearing in FROM can also be + preceded by the key word LATERAL, but for functions the key word is optional; the function's arguments can contain references - to columns provided by preceding FROM items in any case. + to columns provided by preceding FROM items in any case. A LATERAL item can appear at top level in the - FROM list, or within a JOIN tree. In the latter + FROM list, or within a JOIN tree. In the latter case it can also refer to any items that are on the left-hand side of a - JOIN that it is on the right-hand side of. + JOIN that it is on the right-hand side of. - When a FROM item contains LATERAL + When a FROM item contains LATERAL cross-references, evaluation proceeds as follows: for each row of the - FROM item providing the cross-referenced column(s), or - set of rows of multiple FROM items providing the + FROM item providing the cross-referenced column(s), or + set of rows of multiple FROM items providing the columns, the LATERAL item is evaluated using that row or row set's values of the columns. The resulting row(s) are joined as usual with the rows they were computed from. This is @@ -860,7 +860,7 @@ SELECT * FROM foo, bar WHERE bar.id = foo.bar_id; LATERAL is primarily useful when the cross-referenced column is necessary for computing the row(s) to be joined. A common application is providing an argument value for a set-returning function. 
- For example, supposing that vertices(polygon) returns the + For example, supposing that vertices(polygon) returns the set of vertices of a polygon, we could identify close-together vertices of polygons stored in a table with: @@ -878,15 +878,15 @@ FROM polygons p1 CROSS JOIN LATERAL vertices(p1.poly) v1, WHERE (v1 <-> v2) < 10 AND p1.id != p2.id; or in several other equivalent formulations. (As already mentioned, - the LATERAL key word is unnecessary in this example, but + the LATERAL key word is unnecessary in this example, but we use it for clarity.) - It is often particularly handy to LEFT JOIN to a + It is often particularly handy to LEFT JOIN to a LATERAL subquery, so that source rows will appear in the result even if the LATERAL subquery produces no - rows for them. For example, if get_product_names() returns + rows for them. For example, if get_product_names() returns the names of products made by a manufacturer, but some manufacturers in our table currently produce no products, we could find out which ones those are like this: @@ -918,20 +918,20 @@ WHERE search_condition - After the processing of the FROM clause is done, each + After the processing of the FROM clause is done, each row of the derived virtual table is checked against the search condition. If the result of the condition is true, the row is kept in the output table, otherwise (i.e., if the result is false or null) it is discarded. The search condition typically references at least one column of the table generated in the - FROM clause; this is not required, but otherwise the - WHERE clause will be fairly useless. + FROM clause; this is not required, but otherwise the + WHERE clause will be fairly useless. The join condition of an inner join can be written either in - the WHERE clause or in the JOIN clause. + the WHERE clause or in the JOIN clause. For example, these table expressions are equivalent: FROM a, b WHERE a.id = b.id AND b.val > 5 @@ -945,13 +945,13 @@ FROM a INNER JOIN b ON (a.id = b.id) WHERE b.val > 5 FROM a NATURAL JOIN b WHERE b.val > 5 Which one of these you use is mainly a matter of style. The - JOIN syntax in the FROM clause is + JOIN syntax in the FROM clause is probably not as portable to other SQL database management systems, even though it is in the SQL standard. For outer joins there is no choice: they must be done in - the FROM clause. The ON or USING - clause of an outer join is not equivalent to a - WHERE condition, because it results in the addition + the FROM clause. The ON or USING + clause of an outer join is not equivalent to a + WHERE condition, because it results in the addition of rows (for unmatched input rows) as well as the removal of rows in the final result. @@ -973,14 +973,14 @@ SELECT ... FROM fdt WHERE c1 BETWEEN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10) SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1) fdt is the table derived in the - FROM clause. Rows that do not meet the search - condition of the WHERE clause are eliminated from + FROM clause. Rows that do not meet the search + condition of the WHERE clause are eliminated from fdt. Notice the use of scalar subqueries as value expressions. Just like any other query, the subqueries can employ complex table expressions. Notice also how fdt is referenced in the subqueries. - Qualifying c1 as fdt.c1 is only necessary - if c1 is also the name of a column in the derived + Qualifying c1 as fdt.c1 is only necessary + if c1 is also the name of a column in the derived input table of the subquery. 
But qualifying the column name adds clarity even when it is not needed. This example shows how the column naming scope of an outer query extends into its inner queries. @@ -1000,9 +1000,9 @@ SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1) - After passing the WHERE filter, the derived input - table might be subject to grouping, using the GROUP BY - clause, and elimination of group rows using the HAVING + After passing the WHERE filter, the derived input + table might be subject to grouping, using the GROUP BY + clause, and elimination of group rows using the HAVING clause. @@ -1023,7 +1023,7 @@ SELECT select_list eliminate redundancy in the output and/or compute aggregates that apply to these groups. For instance: -=> SELECT * FROM test1; +=> SELECT * FROM test1; x | y ---+--- a | 3 @@ -1032,7 +1032,7 @@ SELECT select_list a | 1 (4 rows) -=> SELECT x FROM test1 GROUP BY x; +=> SELECT x FROM test1 GROUP BY x; x --- a @@ -1045,17 +1045,17 @@ SELECT select_list In the second query, we could not have written SELECT * FROM test1 GROUP BY x, because there is no single value - for the column y that could be associated with each + for the column y that could be associated with each group. The grouped-by columns can be referenced in the select list since they have a single value in each group. In general, if a table is grouped, columns that are not - listed in GROUP BY cannot be referenced except in aggregate + listed in GROUP BY cannot be referenced except in aggregate expressions. An example with aggregate expressions is: -=> SELECT x, sum(y) FROM test1 GROUP BY x; +=> SELECT x, sum(y) FROM test1 GROUP BY x; x | sum ---+----- a | 4 @@ -1073,7 +1073,7 @@ SELECT select_list Grouping without aggregate expressions effectively calculates the set of distinct values in a column. This can also be achieved - using the DISTINCT clause (see DISTINCT clause (see ). @@ -1088,10 +1088,10 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales In this example, the columns product_id, p.name, and p.price must be - in the GROUP BY clause since they are referenced in + in the GROUP BY clause since they are referenced in the query select list (but see below). The column - s.units does not have to be in the GROUP - BY list since it is only used in an aggregate expression + s.units does not have to be in the GROUP + BY list since it is only used in an aggregate expression (sum(...)), which represents the sales of a product. For each product, the query returns a summary row about all sales of the product. @@ -1110,9 +1110,9 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales - In strict SQL, GROUP BY can only group by columns of + In strict SQL, GROUP BY can only group by columns of the source table but PostgreSQL extends - this to also allow GROUP BY to group by columns in the + this to also allow GROUP BY to group by columns in the select list. Grouping by value expressions instead of simple column names is also allowed. @@ -1125,12 +1125,12 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales If a table has been grouped using GROUP BY, but only certain groups are of interest, the HAVING clause can be used, much like a - WHERE clause, to eliminate groups from the result. + WHERE clause, to eliminate groups from the result. The syntax is: SELECT select_list FROM ... WHERE ... GROUP BY ... 
HAVING boolean_expression - Expressions in the HAVING clause can refer both to + Expressions in the HAVING clause can refer both to grouped expressions and to ungrouped expressions (which necessarily involve an aggregate function). @@ -1138,14 +1138,14 @@ SELECT select_list FROM ... WHERE ... Example: -=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING sum(y) > 3; +=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING sum(y) > 3; x | sum ---+----- a | 4 b | 5 (2 rows) -=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING x < 'c'; +=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING x < 'c'; x | sum ---+----- a | 4 @@ -1163,26 +1163,26 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit GROUP BY product_id, p.name, p.price, p.cost HAVING sum(p.price * s.units) > 5000; - In the example above, the WHERE clause is selecting + In the example above, the WHERE clause is selecting rows by a column that is not grouped (the expression is only true for - sales during the last four weeks), while the HAVING + sales during the last four weeks), while the HAVING clause restricts the output to groups with total gross sales over 5000. Note that the aggregate expressions do not necessarily need to be the same in all parts of the query. - If a query contains aggregate function calls, but no GROUP BY + If a query contains aggregate function calls, but no GROUP BY clause, grouping still occurs: the result is a single group row (or perhaps no rows at all, if the single row is then eliminated by - HAVING). - The same is true if it contains a HAVING clause, even - without any aggregate function calls or GROUP BY clause. + HAVING). + The same is true if it contains a HAVING clause, even + without any aggregate function calls or GROUP BY clause. - <literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</> + <literal>GROUPING SETS</literal>, <literal>CUBE</literal>, and <literal>ROLLUP</literal> GROUPING SETS @@ -1196,13 +1196,13 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit More complex grouping operations than those described above are possible - using the concept of grouping sets. The data selected by - the FROM and WHERE clauses is grouped separately + using the concept of grouping sets. The data selected by + the FROM and WHERE clauses is grouped separately by each specified grouping set, aggregates computed for each group just as - for simple GROUP BY clauses, and then the results returned. + for simple GROUP BY clauses, and then the results returned. For example: -=> SELECT * FROM items_sold; +=> SELECT * FROM items_sold; brand | size | sales -------+------+------- Foo | L | 10 @@ -1211,7 +1211,7 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit Bar | L | 5 (4 rows) -=> SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ()); +=> SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ()); brand | size | sum -------+------+----- Foo | | 30 @@ -1224,12 +1224,12 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit - Each sublist of GROUPING SETS may specify zero or more columns + Each sublist of GROUPING SETS may specify zero or more columns or expressions and is interpreted the same way as though it were directly - in the GROUP BY clause. An empty grouping set means that all + in the GROUP BY clause. 
An empty grouping set means that all rows are aggregated down to a single group (which is output even if no input rows were present), as described above for the case of aggregate - functions with no GROUP BY clause. + functions with no GROUP BY clause. @@ -1243,16 +1243,16 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit A shorthand notation is provided for specifying two common types of grouping set. A clause of the form -ROLLUP ( e1, e2, e3, ... ) +ROLLUP ( e1, e2, e3, ... ) represents the given list of expressions and all prefixes of the list including the empty list; thus it is equivalent to GROUPING SETS ( - ( e1, e2, e3, ... ), + ( e1, e2, e3, ... ), ... - ( e1, e2 ), - ( e1 ), + ( e1, e2 ), + ( e1 ), ( ) ) @@ -1263,7 +1263,7 @@ GROUPING SETS ( A clause of the form -CUBE ( e1, e2, ... ) +CUBE ( e1, e2, ... ) represents the given list and all of its possible subsets (i.e. the power set). Thus @@ -1286,7 +1286,7 @@ GROUPING SETS ( - The individual elements of a CUBE or ROLLUP + The individual elements of a CUBE or ROLLUP clause may be either individual expressions, or sublists of elements in parentheses. In the latter case, the sublists are treated as single units for the purposes of generating the individual grouping sets. @@ -1319,15 +1319,15 @@ GROUPING SETS ( - The CUBE and ROLLUP constructs can be used either - directly in the GROUP BY clause, or nested inside a - GROUPING SETS clause. If one GROUPING SETS clause + The CUBE and ROLLUP constructs can be used either + directly in the GROUP BY clause, or nested inside a + GROUPING SETS clause. If one GROUPING SETS clause is nested inside another, the effect is the same as if all the elements of the inner clause had been written directly in the outer clause. - If multiple grouping items are specified in a single GROUP BY + If multiple grouping items are specified in a single GROUP BY clause, then the final list of grouping sets is the cross product of the individual items. For example: @@ -1346,12 +1346,12 @@ GROUP BY GROUPING SETS ( - The construct (a, b) is normally recognized in expressions as + The construct (a, b) is normally recognized in expressions as a row constructor. - Within the GROUP BY clause, this does not apply at the top - levels of expressions, and (a, b) is parsed as a list of - expressions as described above. If for some reason you need - a row constructor in a grouping expression, use ROW(a, b). + Within the GROUP BY clause, this does not apply at the top + levels of expressions, and (a, b) is parsed as a list of + expressions as described above. If for some reason you need + a row constructor in a grouping expression, use ROW(a, b). @@ -1361,7 +1361,7 @@ GROUP BY GROUPING SETS ( window function - order of execution + order of execution @@ -1369,32 +1369,32 @@ GROUP BY GROUPING SETS ( , and ), these functions are evaluated - after any grouping, aggregation, and HAVING filtering is + after any grouping, aggregation, and HAVING filtering is performed. That is, if the query uses any aggregates, GROUP - BY, or HAVING, then the rows seen by the window functions + BY, or HAVING, then the rows seen by the window functions are the group rows instead of the original table rows from - FROM/WHERE. + FROM/WHERE. When multiple window functions are used, all the window functions having - syntactically equivalent PARTITION BY and ORDER BY + syntactically equivalent PARTITION BY and ORDER BY clauses in their window definitions are guaranteed to be evaluated in a single pass over the data. 
Therefore they will see the same sort ordering, - even if the ORDER BY does not uniquely determine an ordering. + even if the ORDER BY does not uniquely determine an ordering. However, no guarantees are made about the evaluation of functions having - different PARTITION BY or ORDER BY specifications. + different PARTITION BY or ORDER BY specifications. (In such cases a sort step is typically required between the passes of window function evaluations, and the sort is not guaranteed to preserve - ordering of rows that its ORDER BY sees as equivalent.) + ordering of rows that its ORDER BY sees as equivalent.) Currently, window functions always require presorted data, and so the query output will be ordered according to one or another of the window - functions' PARTITION BY/ORDER BY clauses. + functions' PARTITION BY/ORDER BY clauses. It is not recommended to rely on this, however. Use an explicit - top-level ORDER BY clause if you want to be sure the + top-level ORDER BY clause if you want to be sure the results are sorted in a particular way. @@ -1435,13 +1435,13 @@ GROUP BY GROUPING SETS ( SELECT a, b, c FROM ... - The columns names a, b, and c + The columns names a, b, and c are either the actual names of the columns of tables referenced - in the FROM clause, or the aliases given to them as + in the FROM clause, or the aliases given to them as explained in . The name space available in the select list is the same as in the - WHERE clause, unless grouping is used, in which case - it is the same as in the HAVING clause. + WHERE clause, unless grouping is used, in which case + it is the same as in the HAVING clause. @@ -1456,7 +1456,7 @@ SELECT tbl1.a, tbl2.a, tbl1.b FROM ... SELECT tbl1.*, tbl2.a FROM ... See for more about - the table_name.* notation. + the table_name.* notation. @@ -1465,7 +1465,7 @@ SELECT tbl1.*, tbl2.a FROM ... value expression is evaluated once for each result row, with the row's values substituted for any column references. But the expressions in the select list do not have to reference any - columns in the table expression of the FROM clause; + columns in the table expression of the FROM clause; they can be constant arithmetic expressions, for instance. @@ -1480,7 +1480,7 @@ SELECT tbl1.*, tbl2.a FROM ... The entries in the select list can be assigned names for subsequent - processing, such as for use in an ORDER BY clause + processing, such as for use in an ORDER BY clause or for display by the client application. For example: SELECT a AS value, b + c AS sum FROM ... @@ -1488,7 +1488,7 @@ SELECT a AS value, b + c AS sum FROM ... - If no output column name is specified using AS, + If no output column name is specified using AS, the system assigns a default column name. For simple column references, this is the name of the referenced column. For function calls, this is the name of the function. For complex expressions, @@ -1496,12 +1496,12 @@ SELECT a AS value, b + c AS sum FROM ... - The AS keyword is optional, but only if the new column + The AS keyword is optional, but only if the new column name does not match any PostgreSQL keyword (see ). To avoid an accidental match to a keyword, you can double-quote the column name. For example, - VALUE is a keyword, so this does not work: + VALUE is a keyword, so this does not work: SELECT a value, b + c AS sum FROM ... @@ -1517,7 +1517,7 @@ SELECT a "value", b + c AS sum FROM ... The naming of output columns here is different from that done in - the FROM clause (see FROM clause (see ). 
It is possible to rename the same column twice, but the name assigned in the select list is the one that will be passed on. @@ -1544,13 +1544,13 @@ SELECT a "value", b + c AS sum FROM ... SELECT DISTINCT select_list ... - (Instead of DISTINCT the key word ALL + (Instead of DISTINCT the key word ALL can be used to specify the default behavior of retaining all rows.) - null value - in DISTINCT + null value + in DISTINCT @@ -1571,16 +1571,16 @@ SELECT DISTINCT ON (expression , first row of a set is unpredictable unless the query is sorted on enough columns to guarantee a unique ordering - of the rows arriving at the DISTINCT filter. - (DISTINCT ON processing occurs after ORDER - BY sorting.) + of the rows arriving at the DISTINCT filter. + (DISTINCT ON processing occurs after ORDER + BY sorting.) - The DISTINCT ON clause is not part of the SQL standard + The DISTINCT ON clause is not part of the SQL standard and is sometimes considered bad style because of the potentially indeterminate nature of its results. With judicious use of - GROUP BY and subqueries in FROM, this + GROUP BY and subqueries in FROM, this construct can be avoided, but it is often the most convenient alternative. @@ -1635,27 +1635,27 @@ SELECT DISTINCT ON (expression , - UNION effectively appends the result of + UNION effectively appends the result of query2 to the result of query1 (although there is no guarantee that this is the order in which the rows are actually returned). Furthermore, it eliminates duplicate rows from its result, in the same - way as DISTINCT, unless UNION ALL is used. + way as DISTINCT, unless UNION ALL is used. - INTERSECT returns all rows that are both in the result + INTERSECT returns all rows that are both in the result of query1 and in the result of query2. Duplicate rows are eliminated - unless INTERSECT ALL is used. + unless INTERSECT ALL is used. - EXCEPT returns all rows that are in the result of + EXCEPT returns all rows that are in the result of query1 but not in the result of query2. (This is sometimes called the - difference between two queries.) Again, duplicates - are eliminated unless EXCEPT ALL is used. + difference between two queries.) Again, duplicates + are eliminated unless EXCEPT ALL is used. @@ -1690,7 +1690,7 @@ SELECT DISTINCT ON (expression , - The ORDER BY clause specifies the sort order: + The ORDER BY clause specifies the sort order: SELECT select_list FROM table_expression @@ -1705,17 +1705,17 @@ SELECT a, b FROM table1 ORDER BY a + b, c; When more than one expression is specified, the later values are used to sort rows that are equal according to the earlier values. Each expression can be followed by an optional - ASC or DESC keyword to set the sort direction to - ascending or descending. ASC order is the default. + ASC or DESC keyword to set the sort direction to + ascending or descending. ASC order is the default. Ascending order puts smaller values first, where smaller is defined in terms of the < operator. Similarly, descending order is determined with the > operator. - Actually, PostgreSQL uses the default B-tree - operator class for the expression's data type to determine the sort - ordering for ASC and DESC. Conventionally, + Actually, PostgreSQL uses the default B-tree + operator class for the expression's data type to determine the sort + ordering for ASC and DESC. 
Conventionally, data types will be set up so that the < and > operators correspond to this sort ordering, but a user-defined data type's designer could choose to do something @@ -1725,22 +1725,22 @@ SELECT a, b FROM table1 ORDER BY a + b, c; - The NULLS FIRST and NULLS LAST options can be + The NULLS FIRST and NULLS LAST options can be used to determine whether nulls appear before or after non-null values in the sort ordering. By default, null values sort as if larger than any - non-null value; that is, NULLS FIRST is the default for - DESC order, and NULLS LAST otherwise. + non-null value; that is, NULLS FIRST is the default for + DESC order, and NULLS LAST otherwise. Note that the ordering options are considered independently for each - sort column. For example ORDER BY x, y DESC means - ORDER BY x ASC, y DESC, which is not the same as - ORDER BY x DESC, y DESC. + sort column. For example ORDER BY x, y DESC means + ORDER BY x ASC, y DESC, which is not the same as + ORDER BY x DESC, y DESC. - A sort_expression can also be the column label or number + A sort_expression can also be the column label or number of an output column, as in: SELECT a + b AS sum, c FROM table1 ORDER BY sum; @@ -1748,21 +1748,21 @@ SELECT a, max(b) FROM table1 GROUP BY a ORDER BY 1; both of which sort by the first output column. Note that an output column name has to stand alone, that is, it cannot be used in an expression - — for example, this is not correct: + — for example, this is not correct: SELECT a + b AS sum, c FROM table1 ORDER BY sum + c; -- wrong This restriction is made to reduce ambiguity. There is still - ambiguity if an ORDER BY item is a simple name that + ambiguity if an ORDER BY item is a simple name that could match either an output column name or a column from the table expression. The output column is used in such cases. This would - only cause confusion if you use AS to rename an output + only cause confusion if you use AS to rename an output column to match some other table column's name. - ORDER BY can be applied to the result of a - UNION, INTERSECT, or EXCEPT + ORDER BY can be applied to the result of a + UNION, INTERSECT, or EXCEPT combination, but in this case it is only permitted to sort by output column names or numbers, not by expressions. @@ -1781,7 +1781,7 @@ SELECT a + b AS sum, c FROM table1 ORDER BY sum + c; -- wrong - LIMIT and OFFSET allow you to retrieve just + LIMIT and OFFSET allow you to retrieve just a portion of the rows that are generated by the rest of the query: SELECT select_list @@ -1794,49 +1794,49 @@ SELECT select_list If a limit count is given, no more than that many rows will be returned (but possibly fewer, if the query itself yields fewer rows). - LIMIT ALL is the same as omitting the LIMIT - clause, as is LIMIT with a NULL argument. + LIMIT ALL is the same as omitting the LIMIT + clause, as is LIMIT with a NULL argument. - OFFSET says to skip that many rows before beginning to - return rows. OFFSET 0 is the same as omitting the - OFFSET clause, as is OFFSET with a NULL argument. + OFFSET says to skip that many rows before beginning to + return rows. OFFSET 0 is the same as omitting the + OFFSET clause, as is OFFSET with a NULL argument. - If both OFFSET - and LIMIT appear, then OFFSET rows are - skipped before starting to count the LIMIT rows that + If both OFFSET + and LIMIT appear, then OFFSET rows are + skipped before starting to count the LIMIT rows that are returned. 
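 For example, the following sketch (assuming a hypothetical table
 big_table with a unique id column) skips the
 first 10 rows of the sorted result and returns the next 10, that is,
 rows 11 through 20:

SELECT * FROM big_table ORDER BY id LIMIT 10 OFFSET 10;
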
- When using LIMIT, it is important to use an - ORDER BY clause that constrains the result rows into a + When using LIMIT, it is important to use an + ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The - ordering is unknown, unless you specified ORDER BY. + ordering is unknown, unless you specified ORDER BY. - The query optimizer takes LIMIT into account when + The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give - for LIMIT and OFFSET. Thus, using - different LIMIT/OFFSET values to select + for LIMIT and OFFSET. Thus, using + different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable - result ordering with ORDER BY. This is not a bug; it + result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless - ORDER BY is used to constrain the order. + ORDER BY is used to constrain the order. - The rows skipped by an OFFSET clause still have to be - computed inside the server; therefore a large OFFSET + The rows skipped by an OFFSET clause still have to be + computed inside the server; therefore a large OFFSET might be inefficient. @@ -1850,7 +1850,7 @@ SELECT select_list - VALUES provides a way to generate a constant table + VALUES provides a way to generate a constant table that can be used in a query without having to actually create and populate a table on-disk. The syntax is @@ -1860,7 +1860,7 @@ VALUES ( expression [, ...] ) [, .. The lists must all have the same number of elements (i.e., the number of columns in the table), and corresponding entries in each list must have compatible data types. The actual data type assigned to each column - of the result is determined using the same rules as for UNION + of the result is determined using the same rules as for UNION (see ). @@ -1881,8 +1881,8 @@ SELECT 3, 'three'; By default, PostgreSQL assigns the names - column1, column2, etc. to the columns of a - VALUES table. The column names are not specified by the + column1, column2, etc. to the columns of a + VALUES table. The column names are not specified by the SQL standard and different database systems do it differently, so it's usually better to override the default names with a table alias list, like this: @@ -1898,16 +1898,16 @@ SELECT 3, 'three'; - Syntactically, VALUES followed by expression lists is + Syntactically, VALUES followed by expression lists is treated as equivalent to: SELECT select_list FROM table_expression - and can appear anywhere a SELECT can. For example, you can - use it as part of a UNION, or attach a - sort_specification (ORDER BY, - LIMIT, and/or OFFSET) to it. VALUES - is most commonly used as the data source in an INSERT command, + and can appear anywhere a SELECT can. For example, you can + use it as part of a UNION, or attach a + sort_specification (ORDER BY, + LIMIT, and/or OFFSET) to it. VALUES + is most commonly used as the data source in an INSERT command, and next most commonly as a subquery. 
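 For instance, a minimal sketch (assuming a hypothetical table
 measurements with columns city and
 reading) that uses VALUES as the data source
 of an INSERT:

INSERT INTO measurements (city, reading)
VALUES ('Hayward', 37),
       ('San Francisco', 46);
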
@@ -1932,22 +1932,22 @@ SELECT select_list FROM table_expression - WITH provides a way to write auxiliary statements for use in a + WITH provides a way to write auxiliary statements for use in a larger query. These statements, which are often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query. Each auxiliary statement - in a WITH clause can be a SELECT, - INSERT, UPDATE, or DELETE; and the - WITH clause itself is attached to a primary statement that can - also be a SELECT, INSERT, UPDATE, or - DELETE. + in a WITH clause can be a SELECT, + INSERT, UPDATE, or DELETE; and the + WITH clause itself is attached to a primary statement that can + also be a SELECT, INSERT, UPDATE, or + DELETE. - <command>SELECT</> in <literal>WITH</> + <command>SELECT</command> in <literal>WITH</literal> - The basic value of SELECT in WITH is to + The basic value of SELECT in WITH is to break down complicated queries into simpler parts. An example is: @@ -1970,21 +1970,21 @@ GROUP BY region, product; which displays per-product sales totals in only the top sales regions. - The WITH clause defines two auxiliary statements named - regional_sales and top_regions, - where the output of regional_sales is used in - top_regions and the output of top_regions - is used in the primary SELECT query. - This example could have been written without WITH, + The WITH clause defines two auxiliary statements named + regional_sales and top_regions, + where the output of regional_sales is used in + top_regions and the output of top_regions + is used in the primary SELECT query. + This example could have been written without WITH, but we'd have needed two levels of nested sub-SELECTs. It's a bit easier to follow this way. - The optional RECURSIVE modifier changes WITH + The optional RECURSIVE modifier changes WITH from a mere syntactic convenience into a feature that accomplishes things not otherwise possible in standard SQL. Using - RECURSIVE, a WITH query can refer to its own + RECURSIVE, a WITH query can refer to its own output. A very simple example is this query to sum the integers from 1 through 100: @@ -1997,10 +1997,10 @@ WITH RECURSIVE t(n) AS ( SELECT sum(n) FROM t; - The general form of a recursive WITH query is always a - non-recursive term, then UNION (or - UNION ALL), then a - recursive term, where only the recursive term can contain + The general form of a recursive WITH query is always a + non-recursive term, then UNION (or + UNION ALL), then a + recursive term, where only the recursive term can contain a reference to the query's own output. Such a query is executed as follows: @@ -2010,10 +2010,10 @@ SELECT sum(n) FROM t; - Evaluate the non-recursive term. For UNION (but not - UNION ALL), discard duplicate rows. Include all remaining + Evaluate the non-recursive term. For UNION (but not + UNION ALL), discard duplicate rows. Include all remaining rows in the result of the recursive query, and also place them in a - temporary working table. + temporary working table. @@ -2026,10 +2026,10 @@ SELECT sum(n) FROM t; Evaluate the recursive term, substituting the current contents of the working table for the recursive self-reference. - For UNION (but not UNION ALL), discard + For UNION (but not UNION ALL), discard duplicate rows and rows that duplicate any previous result row. Include all remaining rows in the result of the recursive query, and - also place them in a temporary intermediate table. + also place them in a temporary intermediate table. 
@@ -2046,7 +2046,7 @@ SELECT sum(n) FROM t; Strictly speaking, this process is iteration not recursion, but - RECURSIVE is the terminology chosen by the SQL standards + RECURSIVE is the terminology chosen by the SQL standards committee. @@ -2054,7 +2054,7 @@ SELECT sum(n) FROM t; In the example above, the working table has just a single row in each step, and it takes on the values from 1 through 100 in successive steps. In - the 100th step, there is no output because of the WHERE + the 100th step, there is no output because of the WHERE clause, and so the query terminates. @@ -2082,14 +2082,14 @@ GROUP BY sub_part When working with recursive queries it is important to be sure that the recursive part of the query will eventually return no tuples, or else the query will loop indefinitely. Sometimes, using - UNION instead of UNION ALL can accomplish this + UNION instead of UNION ALL can accomplish this by discarding rows that duplicate previous output rows. However, often a cycle does not involve output rows that are completely duplicate: it may be necessary to check just one or a few fields to see if the same point has been reached before. The standard method for handling such situations is to compute an array of the already-visited values. For example, consider - the following query that searches a table graph using a - link field: + the following query that searches a table graph using a + link field: WITH RECURSIVE search_graph(id, link, data, depth) AS ( @@ -2103,12 +2103,12 @@ WITH RECURSIVE search_graph(id, link, data, depth) AS ( SELECT * FROM search_graph; - This query will loop if the link relationships contain - cycles. Because we require a depth output, just changing - UNION ALL to UNION would not eliminate the looping. + This query will loop if the link relationships contain + cycles. Because we require a depth output, just changing + UNION ALL to UNION would not eliminate the looping. Instead we need to recognize whether we have reached the same row again while following a particular path of links. We add two columns - path and cycle to the loop-prone query: + path and cycle to the loop-prone query: WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS ( @@ -2127,13 +2127,13 @@ SELECT * FROM search_graph; Aside from preventing cycles, the array value is often useful in its own - right as representing the path taken to reach any particular row. + right as representing the path taken to reach any particular row. In the general case where more than one field needs to be checked to recognize a cycle, use an array of rows. For example, if we needed to - compare fields f1 and f2: + compare fields f1 and f2: WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS ( @@ -2154,7 +2154,7 @@ SELECT * FROM search_graph; - Omit the ROW() syntax in the common case where only one field + Omit the ROW() syntax in the common case where only one field needs to be checked to recognize a cycle. This allows a simple array rather than a composite-type array to be used, gaining efficiency. @@ -2164,16 +2164,16 @@ SELECT * FROM search_graph; The recursive query evaluation algorithm produces its output in breadth-first search order. You can display the results in depth-first - search order by making the outer query ORDER BY a - path column constructed in this way. + search order by making the outer query ORDER BY a + path column constructed in this way. 
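 For the search_graph example above, this amounts to keeping
 the WITH clause as written and only changing the outer query,
 for instance:

SELECT * FROM search_graph ORDER BY path;
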
A helpful trick for testing queries - when you are not certain if they might loop is to place a LIMIT + when you are not certain if they might loop is to place a LIMIT in the parent query. For example, this query would loop forever without - the LIMIT: + the LIMIT: WITH RECURSIVE t(n) AS ( @@ -2185,26 +2185,26 @@ SELECT n FROM t LIMIT 100; This works because PostgreSQL's implementation - evaluates only as many rows of a WITH query as are actually + evaluates only as many rows of a WITH query as are actually fetched by the parent query. Using this trick in production is not recommended, because other systems might work differently. Also, it usually won't work if you make the outer query sort the recursive query's results or join them to some other table, because in such cases the - outer query will usually try to fetch all of the WITH query's + outer query will usually try to fetch all of the WITH query's output anyway. - A useful property of WITH queries is that they are evaluated + A useful property of WITH queries is that they are evaluated only once per execution of the parent query, even if they are referred to - more than once by the parent query or sibling WITH queries. + more than once by the parent query or sibling WITH queries. Thus, expensive calculations that are needed in multiple places can be - placed within a WITH query to avoid redundant work. Another + placed within a WITH query to avoid redundant work. Another possible application is to prevent unwanted multiple evaluations of functions with side-effects. However, the other side of this coin is that the optimizer is less able to - push restrictions from the parent query down into a WITH query - than an ordinary subquery. The WITH query will generally be + push restrictions from the parent query down into a WITH query + than an ordinary subquery. The WITH query will generally be evaluated as written, without suppression of rows that the parent query might discard afterwards. (But, as mentioned above, evaluation might stop early if the reference(s) to the query demand only a limited number of @@ -2212,20 +2212,20 @@ SELECT n FROM t LIMIT 100; - The examples above only show WITH being used with - SELECT, but it can be attached in the same way to - INSERT, UPDATE, or DELETE. + The examples above only show WITH being used with + SELECT, but it can be attached in the same way to + INSERT, UPDATE, or DELETE. In each case it effectively provides temporary table(s) that can be referred to in the main command. - Data-Modifying Statements in <literal>WITH</> + Data-Modifying Statements in <literal>WITH</literal> - You can use data-modifying statements (INSERT, - UPDATE, or DELETE) in WITH. This + You can use data-modifying statements (INSERT, + UPDATE, or DELETE) in WITH. This allows you to perform several different operations in the same query. An example is: @@ -2241,32 +2241,32 @@ INSERT INTO products_log SELECT * FROM moved_rows; - This query effectively moves rows from products to - products_log. The DELETE in WITH - deletes the specified rows from products, returning their - contents by means of its RETURNING clause; and then the + This query effectively moves rows from products to + products_log. The DELETE in WITH + deletes the specified rows from products, returning their + contents by means of its RETURNING clause; and then the primary query reads that output and inserts it into - products_log. + products_log. 
- A fine point of the above example is that the WITH clause is - attached to the INSERT, not the sub-SELECT within - the INSERT. This is necessary because data-modifying - statements are only allowed in WITH clauses that are attached - to the top-level statement. However, normal WITH visibility - rules apply, so it is possible to refer to the WITH - statement's output from the sub-SELECT. + A fine point of the above example is that the WITH clause is + attached to the INSERT, not the sub-SELECT within + the INSERT. This is necessary because data-modifying + statements are only allowed in WITH clauses that are attached + to the top-level statement. However, normal WITH visibility + rules apply, so it is possible to refer to the WITH + statement's output from the sub-SELECT. - Data-modifying statements in WITH usually have - RETURNING clauses (see ), + Data-modifying statements in WITH usually have + RETURNING clauses (see ), as shown in the example above. - It is the output of the RETURNING clause, not the + It is the output of the RETURNING clause, not the target table of the data-modifying statement, that forms the temporary table that can be referred to by the rest of the query. If a - data-modifying statement in WITH lacks a RETURNING + data-modifying statement in WITH lacks a RETURNING clause, then it forms no temporary table and cannot be referred to in the rest of the query. Such a statement will be executed nonetheless. A not-particularly-useful example is: @@ -2278,15 +2278,15 @@ WITH t AS ( DELETE FROM bar; - This example would remove all rows from tables foo and - bar. The number of affected rows reported to the client - would only include rows removed from bar. + This example would remove all rows from tables foo and + bar. The number of affected rows reported to the client + would only include rows removed from bar. Recursive self-references in data-modifying statements are not allowed. In some cases it is possible to work around this limitation by - referring to the output of a recursive WITH, for example: + referring to the output of a recursive WITH, for example: WITH RECURSIVE included_parts(sub_part, part) AS ( @@ -2304,24 +2304,24 @@ DELETE FROM parts - Data-modifying statements in WITH are executed exactly once, + Data-modifying statements in WITH are executed exactly once, and always to completion, independently of whether the primary query reads all (or indeed any) of their output. Notice that this is different - from the rule for SELECT in WITH: as stated in the - previous section, execution of a SELECT is carried only as far + from the rule for SELECT in WITH: as stated in the + previous section, execution of a SELECT is carried only as far as the primary query demands its output. - The sub-statements in WITH are executed concurrently with + The sub-statements in WITH are executed concurrently with each other and with the main query. Therefore, when using data-modifying - statements in WITH, the order in which the specified updates + statements in WITH, the order in which the specified updates actually happen is unpredictable. All the statements are executed with - the same snapshot (see ), so they - cannot see one another's effects on the target tables. This + the same snapshot (see ), so they + cannot see one another's effects on the target tables. 
This alleviates the effects of the unpredictability of the actual order of row - updates, and means that RETURNING data is the only way to - communicate changes between different WITH sub-statements and + updates, and means that RETURNING data is the only way to + communicate changes between different WITH sub-statements and the main query. An example of this is that in @@ -2332,8 +2332,8 @@ WITH t AS ( SELECT * FROM products; - the outer SELECT would return the original prices before the - action of the UPDATE, while in + the outer SELECT would return the original prices before the + action of the UPDATE, while in WITH t AS ( @@ -2343,7 +2343,7 @@ WITH t AS ( SELECT * FROM t; - the outer SELECT would return the updated data. + the outer SELECT would return the updated data. @@ -2353,15 +2353,15 @@ SELECT * FROM t; applies to deleting a row that was already updated in the same statement: only the update is performed. Therefore you should generally avoid trying to modify a single row twice in a single statement. In particular avoid - writing WITH sub-statements that could affect the same rows + writing WITH sub-statements that could affect the same rows changed by the main statement or a sibling sub-statement. The effects of such a statement will not be predictable. At present, any table used as the target of a data-modifying statement in - WITH must not have a conditional rule, nor an ALSO - rule, nor an INSTEAD rule that expands to multiple statements. + WITH must not have a conditional rule, nor an ALSO + rule, nor an INSTEAD rule that expands to multiple statements. diff --git a/doc/src/sgml/query.sgml b/doc/src/sgml/query.sgml index 98434925df..fc60febcbd 100644 --- a/doc/src/sgml/query.sgml +++ b/doc/src/sgml/query.sgml @@ -29,7 +29,7 @@ in the directory src/tutorial/. (Binary distributions of PostgreSQL might not compile these files.) To use those - files, first change to that directory and run make: + files, first change to that directory and run make: $ cd ..../src/tutorial @@ -50,7 +50,7 @@ The \i command reads in commands from the - specified file. psql's -s option puts you in + specified file. psql's -s option puts you in single step mode which pauses before sending each statement to the server. The commands used in this section are in the file basics.sql. @@ -155,8 +155,8 @@ CREATE TABLE weather ( PostgreSQL supports the standard SQL types int, smallint, real, double - precision, char(N), - varchar(N), date, + precision, char(N), + varchar(N), date, time, timestamp, and interval, as well as other types of general utility and a rich set of geometric types. @@ -211,7 +211,7 @@ INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27'); Note that all data types use rather obvious input formats. Constants that are not simple numeric values usually must be - surrounded by single quotes ('), as in the example. + surrounded by single quotes ('), as in the example. The date type is actually quite flexible in what it accepts, but for this tutorial we will stick to the unambiguous @@ -336,8 +336,8 @@ SELECT city, (temp_hi+temp_lo)/2 AS temp_avg, date FROM weather; - A query can be qualified by adding a WHERE - clause that specifies which rows are wanted. The WHERE + A query can be qualified by adding a WHERE + clause that specifies which rows are wanted. The WHERE clause contains a Boolean (truth value) expression, and only rows for which the Boolean expression is true are returned. 
The usual Boolean operators (AND, @@ -446,9 +446,9 @@ SELECT DISTINCT city of the same or different tables at one time is called a join query. As an example, say you wish to list all the weather records together with the location of the - associated city. To do that, we need to compare the city - column of each row of the weather table with the - name column of all rows in the cities + associated city. To do that, we need to compare the city + column of each row of the weather table with the + name column of all rows in the cities table, and select the pairs of rows where these values match. @@ -483,7 +483,7 @@ SELECT * There is no result row for the city of Hayward. This is because there is no matching entry in the cities table for Hayward, so the join - ignores the unmatched rows in the weather table. We will see + ignores the unmatched rows in the weather table. We will see shortly how this can be fixed. @@ -520,7 +520,7 @@ SELECT city, temp_lo, temp_hi, prcp, date, location Since the columns all had different names, the parser automatically found which table they belong to. If there were duplicate column names in the two tables you'd need to - qualify the column names to show which one you + qualify the column names to show which one you meant, as in: @@ -599,7 +599,7 @@ SELECT * self join. As an example, suppose we wish to find all the weather records that are in the temperature range of other weather records. So we need to compare the - temp_lo and temp_hi columns of + temp_lo and temp_hi columns of each weather row to the temp_lo and temp_hi columns of all other @@ -620,8 +620,8 @@ SELECT W1.city, W1.temp_lo AS low, W1.temp_hi AS high, (2 rows) - Here we have relabeled the weather table as W1 and - W2 to be able to distinguish the left and right side + Here we have relabeled the weather table as W1 and + W2 to be able to distinguish the left and right side of the join. You can also use these kinds of aliases in other queries to save some typing, e.g.: @@ -644,7 +644,7 @@ SELECT * Like most other relational database products, PostgreSQL supports - aggregate functions. + aggregate functions. An aggregate function computes a single result from multiple input rows. For example, there are aggregates to compute the count, sum, @@ -747,7 +747,7 @@ SELECT city, max(temp_lo) which gives us the same results for only the cities that have all - temp_lo values below 40. Finally, if we only care about + temp_lo values below 40. Finally, if we only care about cities whose names begin with S, we might do: @@ -871,7 +871,7 @@ DELETE FROM tablename; Without a qualification, DELETE will - remove all rows from the given table, leaving it + remove all rows from the given table, leaving it empty. The system will not request confirmation before doing this! diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml index 9557c16a4d..b585fd3d2a 100644 --- a/doc/src/sgml/rangetypes.sgml +++ b/doc/src/sgml/rangetypes.sgml @@ -9,7 +9,7 @@ Range types are data types representing a range of values of some - element type (called the range's subtype). + element type (called the range's subtype). For instance, ranges of timestamp might be used to represent the ranges of time that a meeting room is reserved. In this case the data type @@ -148,12 +148,12 @@ SELECT isempty(numrange(1, 5)); - Also, some element types have a notion of infinity, but that + Also, some element types have a notion of infinity, but that is just another value so far as the range type mechanisms are concerned. 
- For example, in timestamp ranges, [today,] means the same - thing as [today,). But [today,infinity] means - something different from [today,infinity) — the latter - excludes the special timestamp value infinity. + For example, in timestamp ranges, [today,] means the same + thing as [today,). But [today,infinity] means + something different from [today,infinity) — the latter + excludes the special timestamp value infinity. @@ -284,25 +284,25 @@ SELECT numrange(NULL, 2.2); no valid values between them. This contrasts with continuous ranges, where it's always (or almost always) possible to identify other element values between two given values. For example, a range over the - numeric type is continuous, as is a range over timestamp. - (Even though timestamp has limited precision, and so could + numeric type is continuous, as is a range over timestamp. + (Even though timestamp has limited precision, and so could theoretically be treated as discrete, it's better to consider it continuous since the step size is normally not of interest.) Another way to think about a discrete range type is that there is a clear - idea of a next or previous value for each element value. + idea of a next or previous value for each element value. Knowing that, it is possible to convert between inclusive and exclusive representations of a range's bounds, by choosing the next or previous element value instead of the one originally given. - For example, in an integer range type [4,8] and - (3,9) denote the same set of values; but this would not be so + For example, in an integer range type [4,8] and + (3,9) denote the same set of values; but this would not be so for a range over numeric. - A discrete range type should have a canonicalization + A discrete range type should have a canonicalization function that is aware of the desired step size for the element type. The canonicalization function is charged with converting equivalent values of the range type to have identical representations, in particular @@ -352,8 +352,8 @@ SELECT '[1.234, 5.678]'::floatrange; If the subtype is considered to have discrete rather than continuous - values, the CREATE TYPE command should specify a - canonical function. + values, the CREATE TYPE command should specify a + canonical function. The canonicalization function takes an input range value, and must return an equivalent range value that may have different bounds and formatting. The canonical output for two ranges that represent the same set of values, @@ -364,7 +364,7 @@ SELECT '[1.234, 5.678]'::floatrange; formatting. In addition to adjusting the inclusive/exclusive bounds format, a canonicalization function might round off boundary values, in case the desired step size is larger than what the subtype is capable of - storing. For instance, a range type over timestamp could be + storing. For instance, a range type over timestamp could be defined to have a step size of an hour, in which case the canonicalization function would need to round off bounds that weren't a multiple of an hour, or perhaps throw an error instead. @@ -372,25 +372,25 @@ SELECT '[1.234, 5.678]'::floatrange; In addition, any range type that is meant to be used with GiST or SP-GiST - indexes should define a subtype difference, or subtype_diff, - function. (The index will still work without subtype_diff, + indexes should define a subtype difference, or subtype_diff, + function. 
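A minimal sketch of such a definition, along the lines of the floatrange type referenced above, where the built-in float8mi function (the one underlying the float8 minus operator) supplies the difference:

CREATE TYPE floatrange AS RANGE (
    subtype = float8,
    subtype_diff = float8mi   -- used by GiST/SP-GiST to estimate how far apart ranges are
);

SELECT '[1.234, 5.678]'::floatrange;   -- the type is immediately usable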
(The index will still work without subtype_diff, but it is likely to be considerably less efficient than if a difference function is provided.) The subtype difference function takes two input values of the subtype, and returns their difference - (i.e., X minus Y) represented as - a float8 value. In our example above, the - function float8mi that underlies the regular float8 + (i.e., X minus Y) represented as + a float8 value. In our example above, the + function float8mi that underlies the regular float8 minus operator can be used; but for any other subtype, some type conversion would be necessary. Some creative thought about how to represent differences as numbers might be needed, too. To the greatest - extent possible, the subtype_diff function should agree with + extent possible, the subtype_diff function should agree with the sort ordering implied by the selected operator class and collation; that is, its result should be positive whenever its first argument is greater than its second according to the sort ordering. - A less-oversimplified example of a subtype_diff function is: + A less-oversimplified example of a subtype_diff function is: @@ -426,15 +426,15 @@ SELECT '[11:10, 23:00]'::timerange; CREATE INDEX reservation_idx ON reservation USING GIST (during); A GiST or SP-GiST index can accelerate queries involving these range operators: - =, - &&, - <@, - @>, - <<, - >>, - -|-, - &<, and - &> + =, + &&, + <@, + @>, + <<, + >>, + -|-, + &<, and + &> (see for more information). @@ -442,7 +442,7 @@ CREATE INDEX reservation_idx ON reservation USING GIST (during); In addition, B-tree and hash indexes can be created for table columns of range types. For these index types, basically the only useful range operation is equality. There is a B-tree sort ordering defined for range - values, with corresponding < and > operators, + values, with corresponding < and > operators, but the ordering is rather arbitrary and not usually useful in the real world. Range types' B-tree and hash support is primarily meant to allow sorting and hashing internally in queries, rather than creation of @@ -491,7 +491,7 @@ with existing key (during)=(["2010-01-01 11:30:00","2010-01-01 15:00:00")). - You can use the btree_gist + You can use the btree_gist extension to define exclusion constraints on plain scalar data types, which can then be combined with range exclusions for maximum flexibility. For example, after btree_gist is installed, the following diff --git a/doc/src/sgml/recovery-config.sgml b/doc/src/sgml/recovery-config.sgml index 0a5d086248..4e1aa74c1f 100644 --- a/doc/src/sgml/recovery-config.sgml +++ b/doc/src/sgml/recovery-config.sgml @@ -11,23 +11,23 @@ This chapter describes the settings available in the - recovery.confrecovery.conf + recovery.confrecovery.conf file. They apply only for the duration of the recovery. They must be reset for any subsequent recovery you wish to perform. They cannot be changed once recovery has begun. - Settings in recovery.conf are specified in the format - name = 'value'. One parameter is specified per line. + Settings in recovery.conf are specified in the format + name = 'value'. One parameter is specified per line. Hash marks (#) designate the rest of the line as a comment. To embed a single quote in a parameter - value, write two quotes (''). + value, write two quotes (''). - A sample file, share/recovery.conf.sample, - is provided in the installation's share/ directory. + A sample file, share/recovery.conf.sample, + is provided in the installation's share/ directory. 
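For the btree_gist exclusion constraints mentioned at the end of the range-type material above, a plausible sketch is the following; the table and column names are chosen purely for illustration:

CREATE EXTENSION btree_gist;

CREATE TABLE room_reservation (
    room    text,
    during  tsrange,
    EXCLUDE USING GIST (room WITH =, during WITH &&)  -- same room and overlapping time span
);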
@@ -38,7 +38,7 @@ restore_command (string) - restore_command recovery parameter + restore_command recovery parameter @@ -46,25 +46,25 @@ The local shell command to execute to retrieve an archived segment of the WAL file series. This parameter is required for archive recovery, but optional for streaming replication. - Any %f in the string is + Any %f in the string is replaced by the name of the file to retrieve from the archive, - and any %p is replaced by the copy destination path name + and any %p is replaced by the copy destination path name on the server. (The path name is relative to the current working directory, i.e., the cluster's data directory.) - Any %r is replaced by the name of the file containing the + Any %r is replaced by the name of the file containing the last valid restart point. That is the earliest file that must be kept to allow a restore to be restartable, so this information can be used to truncate the archive to just the minimum required to support - restarting from the current restore. %r is typically only + restarting from the current restore. %r is typically only used by warm-standby configurations (see ). - Write %% to embed an actual % character. + Write %% to embed an actual % character. It is important for the command to return a zero exit status - only if it succeeds. The command will be asked for file + only if it succeeds. The command will be asked for file names that are not present in the archive; it must return nonzero when so asked. Examples: @@ -82,33 +82,33 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows archive_cleanup_command (string) - archive_cleanup_command recovery parameter + archive_cleanup_command recovery parameter This optional parameter specifies a shell command that will be executed at every restartpoint. The purpose of - archive_cleanup_command is to provide a mechanism for + archive_cleanup_command is to provide a mechanism for cleaning up old archived WAL files that are no longer needed by the standby server. - Any %r is replaced by the name of the file containing the + Any %r is replaced by the name of the file containing the last valid restart point. - That is the earliest file that must be kept to allow a - restore to be restartable, and so all files earlier than %r + That is the earliest file that must be kept to allow a + restore to be restartable, and so all files earlier than %r may be safely removed. This information can be used to truncate the archive to just the minimum required to support restart from the current restore. The module - is often used in archive_cleanup_command for + is often used in archive_cleanup_command for single-standby configurations, for example: archive_cleanup_command = 'pg_archivecleanup /mnt/server/archivedir %r' Note however that if multiple standby servers are restoring from the same archive directory, you will need to ensure that you do not delete WAL files until they are no longer needed by any of the servers. - archive_cleanup_command would typically be used in a + archive_cleanup_command would typically be used in a warm-standby configuration (see ). - Write %% to embed an actual % character in the + Write %% to embed an actual % character in the command. @@ -123,16 +123,16 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_end_command (string) - recovery_end_command recovery parameter + recovery_end_command recovery parameter This parameter specifies a shell command that will be executed once only at the end of recovery. 
This parameter is optional. The purpose of the - recovery_end_command is to provide a mechanism for cleanup + recovery_end_command is to provide a mechanism for cleanup following replication or recovery. - Any %r is replaced by the name of the file containing the + Any %r is replaced by the name of the file containing the last valid restart point, like in . @@ -156,9 +156,9 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows By default, recovery will recover to the end of the WAL log. The following parameters can be used to specify an earlier stopping point. - At most one of recovery_target, - recovery_target_lsn, recovery_target_name, - recovery_target_time, or recovery_target_xid + At most one of recovery_target, + recovery_target_lsn, recovery_target_name, + recovery_target_time, or recovery_target_xid can be used; if more than one of these is specified in the configuration file, the last entry will be used. @@ -167,7 +167,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target = 'immediate' - recovery_target recovery parameter + recovery_target recovery parameter @@ -178,7 +178,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows ended. - Technically, this is a string parameter, but 'immediate' + Technically, this is a string parameter, but 'immediate' is currently the only allowed value. @@ -187,13 +187,13 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target_name (string) - recovery_target_name recovery parameter + recovery_target_name recovery parameter This parameter specifies the named restore point (created with - pg_create_restore_point()) to which recovery will proceed. + pg_create_restore_point()) to which recovery will proceed. @@ -201,7 +201,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target_time (timestamp) - recovery_target_time recovery parameter + recovery_target_time recovery parameter @@ -217,7 +217,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target_xid (string) - recovery_target_xid recovery parameter + recovery_target_xid recovery parameter @@ -237,7 +237,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target_lsn (pg_lsn) - recovery_target_lsn recovery parameter + recovery_target_lsn recovery parameter @@ -246,7 +246,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows to which recovery will proceed. The precise stopping point is also influenced by . This parameter is parsed using the system data type - pg_lsn. + pg_lsn. @@ -262,7 +262,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows xreflabel="recovery_target_inclusive"> recovery_target_inclusive (boolean) - recovery_target_inclusive recovery parameter + recovery_target_inclusive recovery parameter @@ -274,7 +274,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows or is specified. This setting controls whether transactions having exactly the target commit time or ID, respectively, will - be included in the recovery. Default is true. + be included in the recovery. Default is true. @@ -283,14 +283,14 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows xreflabel="recovery_target_timeline"> recovery_target_timeline (string) - recovery_target_timeline recovery parameter + recovery_target_timeline recovery parameter Specifies recovering into a particular timeline. 
The default is to recover along the same timeline that was current when the - base backup was taken. Setting this to latest recovers + base backup was taken. Setting this to latest recovers to the latest timeline found in the archive, which is useful in a standby server. Other than that you only need to set this parameter in complex re-recovery situations, where you need to return to @@ -304,24 +304,24 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows xreflabel="recovery_target_action"> recovery_target_action (enum) - recovery_target_action recovery parameter + recovery_target_action recovery parameter Specifies what action the server should take once the recovery target is - reached. The default is pause, which means recovery will - be paused. promote means the recovery process will finish + reached. The default is pause, which means recovery will + be paused. promote means the recovery process will finish and the server will start to accept connections. - Finally shutdown will stop the server after reaching the + Finally shutdown will stop the server after reaching the recovery target. - The intended use of the pause setting is to allow queries + The intended use of the pause setting is to allow queries to be executed against the database to check if this recovery target is the most desirable point for recovery. The paused state can be resumed by - using pg_wal_replay_resume() (see + using pg_wal_replay_resume() (see ), which then causes recovery to end. If this recovery target is not the desired stopping point, then shut down the server, change the @@ -329,22 +329,22 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows continue recovery. - The shutdown setting is useful to have the instance ready + The shutdown setting is useful to have the instance ready at the exact replay point desired. The instance will still be able to replay more WAL records (and in fact will have to replay WAL records since the last checkpoint next time it is started). - Note that because recovery.conf will not be renamed when - recovery_target_action is set to shutdown, + Note that because recovery.conf will not be renamed when + recovery_target_action is set to shutdown, any subsequent start will end with immediate shutdown unless the - configuration is changed or the recovery.conf file is + configuration is changed or the recovery.conf file is removed manually. This setting has no effect if no recovery target is set. If is not enabled, a setting of - pause will act the same as shutdown. + pause will act the same as shutdown. @@ -360,25 +360,25 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows standby_mode (boolean) - standby_mode recovery parameter + standby_mode recovery parameter - Specifies whether to start the PostgreSQL server as - a standby. If this parameter is on, the server will + Specifies whether to start the PostgreSQL server as + a standby. If this parameter is on, the server will not stop recovery when the end of archived WAL is reached, but will keep trying to continue recovery by fetching new WAL segments - using restore_command + using restore_command and/or by connecting to the primary server as specified by the - primary_conninfo setting. + primary_conninfo setting. 
primary_conninfo (string) - primary_conninfo recovery parameter + primary_conninfo recovery parameter @@ -401,20 +401,20 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows A password needs to be provided too, if the primary demands password authentication. It can be provided in the primary_conninfo string, or in a separate - ~/.pgpass file on the standby server (use - replication as the database name). + ~/.pgpass file on the standby server (use + replication as the database name). Do not specify a database name in the primary_conninfo string. - This setting has no effect if standby_mode is off. + This setting has no effect if standby_mode is off. primary_slot_name (string) - primary_slot_name recovery parameter + primary_slot_name recovery parameter @@ -423,7 +423,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows connecting to the primary via streaming replication to control resource removal on the upstream node (see ). - This setting has no effect if primary_conninfo is not + This setting has no effect if primary_conninfo is not set. @@ -431,15 +431,15 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows trigger_file (string) - trigger_file recovery parameter + trigger_file recovery parameter Specifies a trigger file whose presence ends recovery in the standby. Even if this value is not set, you can still promote - the standby using pg_ctl promote. - This setting has no effect if standby_mode is off. + the standby using pg_ctl promote. + This setting has no effect if standby_mode is off. @@ -447,7 +447,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_min_apply_delay (integer) - recovery_min_apply_delay recovery parameter + recovery_min_apply_delay recovery parameter @@ -488,7 +488,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows This parameter is intended for use with streaming replication deployments; however, if the parameter is specified it will be honored in all cases. - hot_standby_feedback will be delayed by use of this feature + hot_standby_feedback will be delayed by use of this feature which could lead to bloat on the master; use both together with care. diff --git a/doc/src/sgml/ref/abort.sgml b/doc/src/sgml/ref/abort.sgml index ed9332c395..285d0d4ac6 100644 --- a/doc/src/sgml/ref/abort.sgml +++ b/doc/src/sgml/ref/abort.sgml @@ -63,7 +63,7 @@ ABORT [ WORK | TRANSACTION ] - Issuing ABORT outside of a transaction block + Issuing ABORT outside of a transaction block emits a warning and otherwise has no effect. diff --git a/doc/src/sgml/ref/alter_aggregate.sgml b/doc/src/sgml/ref/alter_aggregate.sgml index 7b7616ca01..43f0a1609b 100644 --- a/doc/src/sgml/ref/alter_aggregate.sgml +++ b/doc/src/sgml/ref/alter_aggregate.sgml @@ -43,7 +43,7 @@ ALTER AGGREGATE name ( aggregate_signatu - You must own the aggregate function to use ALTER AGGREGATE. + You must own the aggregate function to use ALTER AGGREGATE. To change the schema of an aggregate function, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new @@ -73,8 +73,8 @@ ALTER AGGREGATE name ( aggregate_signatu - The mode of an argument: IN or VARIADIC. - If omitted, the default is IN. + The mode of an argument: IN or VARIADIC. + If omitted, the default is IN. @@ -97,10 +97,10 @@ ALTER AGGREGATE name ( aggregate_signatu An input data type on which the aggregate function operates. 
- To reference a zero-argument aggregate function, write * + To reference a zero-argument aggregate function, write * in place of the list of argument specifications. To reference an ordered-set aggregate function, write - ORDER BY between the direct and aggregated argument + ORDER BY between the direct and aggregated argument specifications. @@ -140,13 +140,13 @@ ALTER AGGREGATE name ( aggregate_signatu The recommended syntax for referencing an ordered-set aggregate - is to write ORDER BY between the direct and aggregated + is to write ORDER BY between the direct and aggregated argument specifications, in the same style as in . However, it will also work to - omit ORDER BY and just run the direct and aggregated + omit ORDER BY and just run the direct and aggregated argument specifications into a single list. In this abbreviated form, - if VARIADIC "any" was used in both the direct and - aggregated argument lists, write VARIADIC "any" only once. + if VARIADIC "any" was used in both the direct and + aggregated argument lists, write VARIADIC "any" only once. diff --git a/doc/src/sgml/ref/alter_collation.sgml b/doc/src/sgml/ref/alter_collation.sgml index 30e8c756a1..9d77ee5c2c 100644 --- a/doc/src/sgml/ref/alter_collation.sgml +++ b/doc/src/sgml/ref/alter_collation.sgml @@ -38,7 +38,7 @@ ALTER COLLATION name SET SCHEMA new_sche - You must own the collation to use ALTER COLLATION. + You must own the collation to use ALTER COLLATION. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the collation's schema. (These restrictions enforce that altering the diff --git a/doc/src/sgml/ref/alter_conversion.sgml b/doc/src/sgml/ref/alter_conversion.sgml index 3514720d03..83fcbbd5a5 100644 --- a/doc/src/sgml/ref/alter_conversion.sgml +++ b/doc/src/sgml/ref/alter_conversion.sgml @@ -36,7 +36,7 @@ ALTER CONVERSION name SET SCHEMA new_sch - You must own the conversion to use ALTER CONVERSION. + You must own the conversion to use ALTER CONVERSION. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the conversion's schema. (These restrictions enforce that altering the diff --git a/doc/src/sgml/ref/alter_database.sgml b/doc/src/sgml/ref/alter_database.sgml index 59639d3729..35e4123cad 100644 --- a/doc/src/sgml/ref/alter_database.sgml +++ b/doc/src/sgml/ref/alter_database.sgml @@ -89,7 +89,7 @@ ALTER DATABASE name RESET ALL database. Whenever a new session is subsequently started in that database, the specified value becomes the session default value. The database-specific default overrides whatever setting is present - in postgresql.conf or has been received from the + in postgresql.conf or has been received from the postgres command line. Only the database owner or a superuser can change the session defaults for a database. Certain variables cannot be set this way, or can only be @@ -183,7 +183,7 @@ ALTER DATABASE name RESET ALL database-specific setting is removed, so the system-wide default setting will be inherited in new sessions. Use RESET ALL to clear all database-specific settings. - SET FROM CURRENT saves the session's current value of + SET FROM CURRENT saves the session's current value of the parameter as the database-specific value. 
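To make the per-database session defaults just described concrete, a brief sketch follows; the database, schema, and parameter choices are hypothetical:

-- Every new session in database "sales" starts with this search_path.
ALTER DATABASE sales SET search_path = sales_schema, public;

-- Save the current session's setting as the database-specific value.
ALTER DATABASE sales SET work_mem FROM CURRENT;

-- Remove the override; new sessions fall back to the system-wide default.
ALTER DATABASE sales RESET search_path;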
diff --git a/doc/src/sgml/ref/alter_default_privileges.sgml b/doc/src/sgml/ref/alter_default_privileges.sgml index 09eabda68a..6c34f2446a 100644 --- a/doc/src/sgml/ref/alter_default_privileges.sgml +++ b/doc/src/sgml/ref/alter_default_privileges.sgml @@ -88,7 +88,7 @@ REVOKE [ GRANT OPTION FOR ] Description - ALTER DEFAULT PRIVILEGES allows you to set the privileges + ALTER DEFAULT PRIVILEGES allows you to set the privileges that will be applied to objects created in the future. (It does not affect privileges assigned to already-existing objects.) Currently, only the privileges for schemas, tables (including views and foreign @@ -109,9 +109,9 @@ REVOKE [ GRANT OPTION FOR ] As explained under , the default privileges for any object type normally grant all grantable permissions to the object owner, and may grant some privileges to - PUBLIC as well. However, this behavior can be changed by + PUBLIC as well. However, this behavior can be changed by altering the global default privileges with - ALTER DEFAULT PRIVILEGES. + ALTER DEFAULT PRIVILEGES. @@ -123,7 +123,7 @@ REVOKE [ GRANT OPTION FOR ] The name of an existing role of which the current role is a member. - If FOR ROLE is omitted, the current role is assumed. + If FOR ROLE is omitted, the current role is assumed. @@ -134,9 +134,9 @@ REVOKE [ GRANT OPTION FOR ] The name of an existing schema. If specified, the default privileges are altered for objects later created in that schema. - If IN SCHEMA is omitted, the global default privileges + If IN SCHEMA is omitted, the global default privileges are altered. - IN SCHEMA is not allowed when using ON SCHEMAS + IN SCHEMA is not allowed when using ON SCHEMAS as schemas can't be nested. @@ -148,7 +148,7 @@ REVOKE [ GRANT OPTION FOR ] The name of an existing role to grant or revoke privileges for. This parameter, and all the other parameters in - abbreviated_grant_or_revoke, + abbreviated_grant_or_revoke, act as described under or , @@ -175,7 +175,7 @@ REVOKE [ GRANT OPTION FOR ] If you wish to drop a role for which the default privileges have been altered, it is necessary to reverse the changes in its default privileges - or use DROP OWNED BY to get rid of the default privileges entry + or use DROP OWNED BY to get rid of the default privileges entry for the role. @@ -186,7 +186,7 @@ REVOKE [ GRANT OPTION FOR ] Grant SELECT privilege to everyone for all tables (and views) you subsequently create in schema myschema, and allow - role webuser to INSERT into them too: + role webuser to INSERT into them too: ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO PUBLIC; @@ -206,7 +206,7 @@ ALTER DEFAULT PRIVILEGES IN SCHEMA myschema REVOKE INSERT ON TABLES FROM webuser Remove the public EXECUTE permission that is normally granted on functions, - for all functions subsequently created by role admin: + for all functions subsequently created by role admin: ALTER DEFAULT PRIVILEGES FOR ROLE admin REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC; diff --git a/doc/src/sgml/ref/alter_domain.sgml b/doc/src/sgml/ref/alter_domain.sgml index 827a1c7d20..96a7db95ec 100644 --- a/doc/src/sgml/ref/alter_domain.sgml +++ b/doc/src/sgml/ref/alter_domain.sgml @@ -69,7 +69,7 @@ ALTER DOMAIN name These forms change whether a domain is marked to allow NULL - values or to reject NULL values. You can only SET NOT NULL + values or to reject NULL values. You can only SET NOT NULL when the columns using the domain contain no null values. @@ -88,7 +88,7 @@ ALTER DOMAIN name valid using ALTER DOMAIN ... VALIDATE CONSTRAINT. 
Newly inserted or updated rows are always checked against all constraints, even those marked NOT VALID. - NOT VALID is only accepted for CHECK constraints. + NOT VALID is only accepted for CHECK constraints. @@ -118,7 +118,7 @@ ALTER DOMAIN name This form validates a constraint previously added as - NOT VALID, that is, verify that all data in columns using the + NOT VALID, that is, verify that all data in columns using the domain satisfy the specified constraint. @@ -154,7 +154,7 @@ ALTER DOMAIN name - You must own the domain to use ALTER DOMAIN. + You must own the domain to use ALTER DOMAIN. To change the schema of a domain, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new @@ -273,8 +273,8 @@ ALTER DOMAIN name Notes - Currently, ALTER DOMAIN ADD CONSTRAINT, ALTER - DOMAIN VALIDATE CONSTRAINT, and ALTER DOMAIN SET NOT NULL + Currently, ALTER DOMAIN ADD CONSTRAINT, ALTER + DOMAIN VALIDATE CONSTRAINT, and ALTER DOMAIN SET NOT NULL will fail if the validated named domain or any derived domain is used within a composite-type column of any table in the database. They should eventually be improved to be @@ -330,10 +330,10 @@ ALTER DOMAIN zipcode SET SCHEMA customers; ALTER DOMAIN conforms to the SQL - standard, except for the OWNER, RENAME, SET SCHEMA, and - VALIDATE CONSTRAINT variants, which are - PostgreSQL extensions. The NOT VALID - clause of the ADD CONSTRAINT variant is also a + standard, except for the OWNER, RENAME, SET SCHEMA, and + VALIDATE CONSTRAINT variants, which are + PostgreSQL extensions. The NOT VALID + clause of the ADD CONSTRAINT variant is also a PostgreSQL extension. diff --git a/doc/src/sgml/ref/alter_extension.sgml b/doc/src/sgml/ref/alter_extension.sgml index ae84e98e49..c6c831fa30 100644 --- a/doc/src/sgml/ref/alter_extension.sgml +++ b/doc/src/sgml/ref/alter_extension.sgml @@ -89,7 +89,7 @@ ALTER EXTENSION name DROP This form moves the extension's objects into another schema. The - extension has to be relocatable for this command to + extension has to be relocatable for this command to succeed. @@ -125,7 +125,7 @@ ALTER EXTENSION name DROP You must own the extension to use ALTER EXTENSION. - The ADD/DROP forms require ownership of the + The ADD/DROP forms require ownership of the added/dropped object as well. @@ -150,7 +150,7 @@ ALTER EXTENSION name DROP The desired new version of the extension. This can be written as either an identifier or a string literal. If not specified, - ALTER EXTENSION UPDATE attempts to update to whatever is + ALTER EXTENSION UPDATE attempts to update to whatever is shown as the default version in the extension's control file. @@ -205,14 +205,14 @@ ALTER EXTENSION name DROP The mode of a function or aggregate - argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. + argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that ALTER EXTENSION does not actually pay - any attention to OUT arguments, since only the input + any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. + So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. @@ -246,7 +246,7 @@ ALTER EXTENSION name DROP The data type(s) of the operator's arguments (optionally - schema-qualified). Write NONE for the missing argument + schema-qualified). Write NONE for the missing argument of a prefix or postfix operator. 
@@ -314,7 +314,7 @@ ALTER EXTENSION hstore ADD FUNCTION populate_record(anyelement, hstore); Compatibility - ALTER EXTENSION is a PostgreSQL + ALTER EXTENSION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml index 9c5b84fe64..1c0a26de6b 100644 --- a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml @@ -93,11 +93,11 @@ ALTER FOREIGN DATA WRAPPER name REN Note that it is possible that pre-existing options of the foreign-data wrapper, or of dependent servers, user mappings, or foreign tables, are - invalid according to the new validator. PostgreSQL does + invalid according to the new validator. PostgreSQL does not check for this. It is up to the user to make sure that these options are correct before using the modified foreign-data wrapper. However, any options specified in this ALTER FOREIGN DATA - WRAPPER command will be checked using the new validator. + WRAPPER command will be checked using the new validator. @@ -117,8 +117,8 @@ ALTER FOREIGN DATA WRAPPER name REN Change options for the foreign-data - wrapper. ADD, SET, and DROP - specify the action to be performed. ADD is assumed + wrapper. ADD, SET, and DROP + specify the action to be performed. ADD is assumed if no operation is explicitly specified. Option names must be unique; names and values are also validated using the foreign data wrapper's validator function, if any. @@ -150,16 +150,16 @@ ALTER FOREIGN DATA WRAPPER name REN Examples - Change a foreign-data wrapper dbi, add - option foo, drop bar: + Change a foreign-data wrapper dbi, add + option foo, drop bar: ALTER FOREIGN DATA WRAPPER dbi OPTIONS (ADD foo '1', DROP 'bar'); - Change the foreign-data wrapper dbi validator - to bob.myvalidator: + Change the foreign-data wrapper dbi validator + to bob.myvalidator: ALTER FOREIGN DATA WRAPPER dbi VALIDATOR bob.myvalidator; @@ -171,7 +171,7 @@ ALTER FOREIGN DATA WRAPPER dbi VALIDATOR bob.myvalidator; ALTER FOREIGN DATA WRAPPER conforms to ISO/IEC 9075-9 (SQL/MED), except that the HANDLER, - VALIDATOR, OWNER TO, and RENAME + VALIDATOR, OWNER TO, and RENAME clauses are extensions. diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml index cb4e7044fb..44d981a5bd 100644 --- a/doc/src/sgml/ref/alter_foreign_table.sgml +++ b/doc/src/sgml/ref/alter_foreign_table.sgml @@ -85,7 +85,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form drops a column from a foreign table. - You will need to say CASCADE if + You will need to say CASCADE if anything outside the table depends on the column; for example, views. If IF EXISTS is specified and the column @@ -101,7 +101,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form changes the type of a column of a foreign table. Again, this has no effect on any underlying storage: this action simply - changes the type that PostgreSQL believes the column to + changes the type that PostgreSQL believes the column to have. @@ -113,7 +113,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name These forms set or remove the default value for a column. Default values only apply in subsequent INSERT - or UPDATE commands; they do not cause rows already in the + or UPDATE commands; they do not cause rows already in the table to change. @@ -174,7 +174,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form adds a new constraint to a foreign table, using the same syntax as . - Currently only CHECK constraints are supported. 
+ Currently only CHECK constraints are supported. @@ -183,7 +183,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name.) - If the constraint is marked NOT VALID, then it isn't + If the constraint is marked NOT VALID, then it isn't assumed to hold, but is only recorded for possible future use. @@ -235,9 +235,9 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - Note that this is not equivalent to ADD COLUMN oid oid; + Note that this is not equivalent to ADD COLUMN oid oid; that would add a normal column that happened to be named - oid, not a system column. + oid, not a system column. @@ -292,8 +292,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name Change options for the foreign table or one of its columns. - ADD, SET, and DROP - specify the action to be performed. ADD is assumed + ADD, SET, and DROP + specify the action to be performed. ADD is assumed if no operation is explicitly specified. Duplicate option names are not allowed (although it's OK for a table option and a column option to have the same name). Option names and values are also validated using the @@ -325,7 +325,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - All the actions except RENAME and SET SCHEMA + All the actions except RENAME and SET SCHEMA can be combined into a list of multiple alterations to apply in parallel. For example, it is possible to add several columns and/or alter the type of several @@ -333,13 +333,13 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - If the command is written as ALTER FOREIGN TABLE IF EXISTS ... + If the command is written as ALTER FOREIGN TABLE IF EXISTS ... and the foreign table does not exist, no error is thrown. A notice is issued in this case. - You must own the table to use ALTER FOREIGN TABLE. + You must own the table to use ALTER FOREIGN TABLE. To change the schema of a foreign table, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new @@ -362,10 +362,10 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name The name (possibly schema-qualified) of an existing foreign table to - alter. If ONLY is specified before the table name, only - that table is altered. If ONLY is not specified, the table + alter. If ONLY is specified before the table name, only + that table is altered. If ONLY is not specified, the table and all its descendant tables (if any) are altered. Optionally, - * can be specified after the table name to explicitly + * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -518,9 +518,9 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name Consistency with the foreign server is not checked when a column is added or removed with ADD COLUMN or - DROP COLUMN, a NOT NULL - or CHECK constraint is added, or a column type is changed - with SET DATA TYPE. It is the user's responsibility to ensure + DROP COLUMN, a NOT NULL + or CHECK constraint is added, or a column type is changed + with SET DATA TYPE. It is the user's responsibility to ensure that the table definition matches the remote side. @@ -552,16 +552,16 @@ ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'v Compatibility - The forms ADD, DROP, + The forms ADD, DROP, and SET DATA TYPE conform with the SQL standard. The other forms are PostgreSQL extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single - ALTER FOREIGN TABLE command is an extension. + ALTER FOREIGN TABLE command is an extension. 
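As a sketch of combining several alterations in a single statement, as permitted above (the foreign table and column names are made up):

ALTER FOREIGN TABLE IF EXISTS remote_films
    ADD COLUMN rating text,              -- new column, no remote check is performed
    ALTER COLUMN released SET NOT NULL;  -- constraint recorded locally only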
- ALTER FOREIGN TABLE DROP COLUMN can be used to drop the only + ALTER FOREIGN TABLE DROP COLUMN can be used to drop the only column of a foreign table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column foreign tables. diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml index 8d9fec6005..cdecf631b1 100644 --- a/doc/src/sgml/ref/alter_function.sgml +++ b/doc/src/sgml/ref/alter_function.sgml @@ -56,8 +56,8 @@ ALTER FUNCTION name [ ( [ [ - The mode of an argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. + The mode of an argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that ALTER FUNCTION does not actually pay - any attention to OUT arguments, since only the input + any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. + So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. @@ -260,8 +260,8 @@ ALTER FUNCTION name [ ( [ [ group_name RENAME TO The first two variants add users to a group or remove them from a group. - (Any role can play the part of either a user or a - group for this purpose.) These variants are effectively + (Any role can play the part of either a user or a + group for this purpose.) These variants are effectively equivalent to granting or revoking membership in the role named as the - group; so the preferred way to do this is to use + group; so the preferred way to do this is to use or . @@ -79,7 +79,7 @@ ALTER GROUP group_name RENAME TO Users (roles) that are to be added to or removed from the group. - The users must already exist; ALTER GROUP does not + The users must already exist; ALTER GROUP does not create or drop users. diff --git a/doc/src/sgml/ref/alter_index.sgml b/doc/src/sgml/ref/alter_index.sgml index 4c777f8675..30e399e62c 100644 --- a/doc/src/sgml/ref/alter_index.sgml +++ b/doc/src/sgml/ref/alter_index.sgml @@ -106,7 +106,7 @@ ALTER INDEX ALL IN TABLESPACE name This form resets one or more index-method-specific storage parameters to - their defaults. As with SET, a REINDEX + their defaults. As with SET, a REINDEX might be needed to update the index entirely. @@ -226,12 +226,12 @@ ALTER INDEX ALL IN TABLESPACE name These operations are also possible using . - ALTER INDEX is in fact just an alias for the forms - of ALTER TABLE that apply to indexes. + ALTER INDEX is in fact just an alias for the forms + of ALTER TABLE that apply to indexes. - There was formerly an ALTER INDEX OWNER variant, but + There was formerly an ALTER INDEX OWNER variant, but this is now ignored (with a warning). An index cannot have an owner different from its table's owner. Changing the table's owner automatically changes the index as well. @@ -280,7 +280,7 @@ ALTER INDEX coord_idx ALTER COLUMN 3 SET STATISTICS 1000; Compatibility - ALTER INDEX is a PostgreSQL + ALTER INDEX is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/alter_materialized_view.sgml b/doc/src/sgml/ref/alter_materialized_view.sgml index a1cced1581..eaea819744 100644 --- a/doc/src/sgml/ref/alter_materialized_view.sgml +++ b/doc/src/sgml/ref/alter_materialized_view.sgml @@ -58,8 +58,8 @@ ALTER MATERIALIZED VIEW ALL IN TABLESPACE name You must own the materialized view to use ALTER MATERIALIZED - VIEW. To change a materialized view's schema, you must also have - CREATE privilege on the new schema. + VIEW. 
To change a materialized view's schema, you must also have + CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the materialized view's schema. (These restrictions enforce that altering diff --git a/doc/src/sgml/ref/alter_opclass.sgml b/doc/src/sgml/ref/alter_opclass.sgml index 58de603aa4..834f3e4231 100644 --- a/doc/src/sgml/ref/alter_opclass.sgml +++ b/doc/src/sgml/ref/alter_opclass.sgml @@ -41,7 +41,7 @@ ALTER OPERATOR CLASS name USING When operators and support functions are added to a family with ALTER OPERATOR FAMILY, they are not part of any - specific operator class within the family, but are just loose + specific operator class within the family, but are just loose within the family. This indicates that these operators and functions are compatible with the family's semantics, but are not required for correct functioning of any specific index. (Operators and functions @@ -74,7 +74,7 @@ ALTER OPERATOR FAMILY name USING op_type - In an OPERATOR clause, - the operand data type(s) of the operator, or NONE to + In an OPERATOR clause, + the operand data type(s) of the operator, or NONE to signify a left-unary or right-unary operator. Unlike the comparable - syntax in CREATE OPERATOR CLASS, the operand data types + syntax in CREATE OPERATOR CLASS, the operand data types must always be specified. - In an ADD FUNCTION clause, the operand data type(s) the + In an ADD FUNCTION clause, the operand data type(s) the function is intended to support, if different from the input data type(s) of the function. For B-tree comparison functions and hash functions it is not necessary to specify name USING - If neither FOR SEARCH nor FOR ORDER BY is - specified, FOR SEARCH is the default. + If neither FOR SEARCH nor FOR ORDER BY is + specified, FOR SEARCH is the default. @@ -240,7 +240,7 @@ ALTER OPERATOR FAMILY name USING Notes - Notice that the DROP syntax only specifies the slot + Notice that the DROP syntax only specifies the slot in the operator family, by strategy or support number and input data type(s). The name of the operator or function occupying the slot is not - mentioned. Also, for DROP FUNCTION the type(s) to specify + mentioned. Also, for DROP FUNCTION the type(s) to specify are the input data type(s) the function is intended to support; for GiST, SP-GiST and GIN indexes this might have nothing to do with the actual input argument types of the function. @@ -274,9 +274,9 @@ ALTER OPERATOR FAMILY name USING The following example command adds cross-data-type operators and support functions to an operator family that already contains B-tree - operator classes for data types int4 and int2. + operator classes for data types int4 and int2. diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml index 7392e6f3de..801404e0cf 100644 --- a/doc/src/sgml/ref/alter_publication.sgml +++ b/doc/src/sgml/ref/alter_publication.sgml @@ -87,10 +87,10 @@ ALTER PUBLICATION name RENAME TO table_name - Name of an existing table. If ONLY is specified before the - table name, only that table is affected. If ONLY is not + Name of an existing table. If ONLY is specified before the + table name, only that table is affected. If ONLY is not specified, the table and all its descendant tables (if any) are - affected. Optionally, * can be specified after the table + affected. 
Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -147,7 +147,7 @@ ALTER PUBLICATION mypublication ADD TABLE users, departments; Compatibility - ALTER PUBLICATION is a PostgreSQL + ALTER PUBLICATION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml index 607b25962f..e30ca10454 100644 --- a/doc/src/sgml/ref/alter_role.sgml +++ b/doc/src/sgml/ref/alter_role.sgml @@ -69,7 +69,7 @@ ALTER ROLE { role_specification | A for that.) Attributes not mentioned in the command retain their previous settings. Database superusers can change any of these settings for any role. - Roles having CREATEROLE privilege can change any of these + Roles having CREATEROLE privilege can change any of these settings, but only for non-superuser and non-replication roles. Ordinary roles can only change their own password. @@ -77,13 +77,13 @@ ALTER ROLE { role_specification | A The second variant changes the name of the role. Database superusers can rename any role. - Roles having CREATEROLE privilege can rename non-superuser + Roles having CREATEROLE privilege can rename non-superuser roles. The current session user cannot be renamed. (Connect as a different user if you need to do that.) - Because MD5-encrypted passwords use the role name as + Because MD5-encrypted passwords use the role name as cryptographic salt, renaming a role clears its password if the - password is MD5-encrypted. + password is MD5-encrypted. @@ -100,7 +100,7 @@ ALTER ROLE { role_specification | A Whenever the role subsequently starts a new session, the specified value becomes the session default, overriding whatever setting is present in - postgresql.conf or has been received from the postgres + postgresql.conf or has been received from the postgres command line. This only happens at login time; executing or does not cause new @@ -112,7 +112,7 @@ ALTER ROLE { role_specification | A Superusers can change anyone's session defaults. Roles having - CREATEROLE privilege can change defaults for non-superuser + CREATEROLE privilege can change defaults for non-superuser roles. Ordinary roles can only set defaults for themselves. Certain configuration variables cannot be set this way, or can only be set if a superuser issues the command. Only superusers can change a setting @@ -155,8 +155,8 @@ ALTER ROLE { role_specification | A SUPERUSER NOSUPERUSER - CREATEDB - NOCREATEDB + CREATEDB + NOCREATEDB CREATEROLE NOCREATEROLE INHERIT @@ -168,7 +168,7 @@ ALTER ROLE { role_specification | A BYPASSRLS NOBYPASSRLS CONNECTION LIMIT connlimit - [ ENCRYPTED ] PASSWORD password + [ ENCRYPTED ] PASSWORD password VALID UNTIL 'timestamp' @@ -209,7 +209,7 @@ ALTER ROLE { role_specification | A role-specific variable setting is removed, so the role will inherit the system-wide default setting in new sessions. Use RESET ALL to clear all role-specific settings. - SET FROM CURRENT saves the session's current value of + SET FROM CURRENT saves the session's current value of the parameter as the role-specific value. If IN DATABASE is specified, the configuration parameter is set or removed for the given role and database only. 
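A short sketch of the per-role defaults described above; the role, database, and parameter choices are illustrative:

-- Applies whenever role "reporting" starts a new session.
ALTER ROLE reporting SET statement_timeout = '5min';

-- Applies only when "reporting" connects to database "devel".
ALTER ROLE reporting IN DATABASE devel SET client_min_messages = DEBUG1;

-- Clear all role-specific settings again.
ALTER ROLE reporting RESET ALL;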
@@ -288,7 +288,7 @@ ALTER ROLE davide WITH PASSWORD NULL; Change a password expiration date, specifying that the password should expire at midday on 4th May 2015 using - the time zone which is one hour ahead of UTC: + the time zone which is one hour ahead of UTC: ALTER ROLE chris VALID UNTIL 'May 4 12:00:00 2015 +1'; diff --git a/doc/src/sgml/ref/alter_schema.sgml b/doc/src/sgml/ref/alter_schema.sgml index dbc5c2d45f..2ca406b914 100644 --- a/doc/src/sgml/ref/alter_schema.sgml +++ b/doc/src/sgml/ref/alter_schema.sgml @@ -34,7 +34,7 @@ ALTER SCHEMA name OWNER TO { new_owner - You must own the schema to use ALTER SCHEMA. + You must own the schema to use ALTER SCHEMA. To rename a schema you must also have the CREATE privilege for the database. To alter the owner, you must also be a direct or diff --git a/doc/src/sgml/ref/alter_sequence.sgml b/doc/src/sgml/ref/alter_sequence.sgml index c505935fcc..9b8ad36522 100644 --- a/doc/src/sgml/ref/alter_sequence.sgml +++ b/doc/src/sgml/ref/alter_sequence.sgml @@ -47,8 +47,8 @@ ALTER SEQUENCE [ IF EXISTS ] name S - You must own the sequence to use ALTER SEQUENCE. - To change a sequence's schema, you must also have CREATE + You must own the sequence to use ALTER SEQUENCE. + To change a sequence's schema, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on @@ -159,8 +159,8 @@ ALTER SEQUENCE [ IF EXISTS ] name S The optional clause START WITH start changes the recorded start value of the sequence. This has no effect on the - current sequence value; it simply sets the value - that future ALTER SEQUENCE RESTART commands will use. + current sequence value; it simply sets the value + that future ALTER SEQUENCE RESTART commands will use. @@ -172,13 +172,13 @@ ALTER SEQUENCE [ IF EXISTS ] name S The optional clause RESTART [ WITH restart ] changes the current value of the sequence. This is similar to calling the - setval function with is_called = - false: the specified value will be returned by the - next call of nextval. - Writing RESTART with no restart value is equivalent to supplying - the start value that was recorded by CREATE SEQUENCE - or last set by ALTER SEQUENCE START WITH. + setval function with is_called = + false: the specified value will be returned by the + next call of nextval. + Writing RESTART with no restart value is equivalent to supplying + the start value that was recorded by CREATE SEQUENCE + or last set by ALTER SEQUENCE START WITH. @@ -186,7 +186,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S a RESTART operation on a sequence is transactional and blocks concurrent transactions from obtaining numbers from the same sequence. If that's not the desired mode of - operation, setval should be used. + operation, setval should be used. @@ -250,7 +250,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S table must have the same owner and be in the same schema as the sequence. Specifying OWNED BY NONE removes any existing - association, making the sequence free-standing. + association, making the sequence free-standing. @@ -291,7 +291,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S ALTER SEQUENCE will not immediately affect - nextval results in backends, + nextval results in backends, other than the current one, that have preallocated (cached) sequence values. They will use up all cached values prior to noticing the changed sequence generation parameters. 
The current backend will be affected @@ -299,7 +299,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S - ALTER SEQUENCE does not affect the currval + ALTER SEQUENCE does not affect the currval status for the sequence. (Before PostgreSQL 8.3, it sometimes did.) @@ -332,8 +332,8 @@ ALTER SEQUENCE serial RESTART WITH 105; ALTER SEQUENCE conforms to the SQL - standard, except for the AS, START WITH, - OWNED BY, OWNER TO, RENAME TO, and + standard, except for the AS, START WITH, + OWNED BY, OWNER TO, RENAME TO, and SET SCHEMA clauses, which are PostgreSQL extensions. diff --git a/doc/src/sgml/ref/alter_server.sgml b/doc/src/sgml/ref/alter_server.sgml index 7f5def30a4..05e11f5ef2 100644 --- a/doc/src/sgml/ref/alter_server.sgml +++ b/doc/src/sgml/ref/alter_server.sgml @@ -42,7 +42,7 @@ ALTER SERVER name RENAME TO USAGE privilege on the server's foreign-data + have USAGE privilege on the server's foreign-data wrapper. (Note that superusers satisfy all these criteria automatically.) @@ -75,8 +75,8 @@ ALTER SERVER name RENAME TO Change options for the - server. ADD, SET, and DROP - specify the action to be performed. ADD is assumed + server. ADD, SET, and DROP + specify the action to be performed. ADD is assumed if no operation is explicitly specified. Option names must be unique; names and values are also validated using the server's foreign-data wrapper library. @@ -108,15 +108,15 @@ ALTER SERVER name RENAME TO Examples - Alter server foo, add connection options: + Alter server foo, add connection options: ALTER SERVER foo OPTIONS (host 'foo', dbname 'foodb'); - Alter server foo, change version, - change host option: + Alter server foo, change version, + change host option: ALTER SERVER foo VERSION '8.4' OPTIONS (SET host 'baz'); diff --git a/doc/src/sgml/ref/alter_statistics.sgml b/doc/src/sgml/ref/alter_statistics.sgml index db4f2f0d52..87acb879b0 100644 --- a/doc/src/sgml/ref/alter_statistics.sgml +++ b/doc/src/sgml/ref/alter_statistics.sgml @@ -39,9 +39,9 @@ ALTER STATISTICS name SET SCHEMA - You must own the statistics object to use ALTER STATISTICS. + You must own the statistics object to use ALTER STATISTICS. To change a statistics object's schema, you must also - have CREATE privilege on the new schema. + have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the statistics object's schema. (These restrictions enforce that altering diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml index 44c0b35069..b76a21f654 100644 --- a/doc/src/sgml/ref/alter_subscription.sgml +++ b/doc/src/sgml/ref/alter_subscription.sgml @@ -42,7 +42,7 @@ ALTER SUBSCRIPTION name RENAME TO < - You must own the subscription to use ALTER SUBSCRIPTION. + You must own the subscription to use ALTER SUBSCRIPTION. To alter the owner, you must also be a direct or indirect member of the new owning role. The new owner has to be a superuser. (Currently, all subscription owners must be superusers, so the owner checks @@ -211,7 +211,7 @@ ALTER SUBSCRIPTION mysub DISABLE; Compatibility - ALTER SUBSCRIPTION is a PostgreSQL + ALTER SUBSCRIPTION is a PostgreSQL extension. 
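As a brief illustration of the sequence notes a little further up, contrasting the transactional RESTART form with the setval alternative (the sequence name reuses the example above):

-- Transactional: concurrent callers of nextval('serial') block until this commits.
ALTER SEQUENCE serial RESTART WITH 105;

-- Non-transactional alternative: the next nextval('serial') returns 105 immediately.
SELECT setval('serial', 105, false);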
diff --git a/doc/src/sgml/ref/alter_system.sgml b/doc/src/sgml/ref/alter_system.sgml index e3a4af4041..b8ef117b7d 100644 --- a/doc/src/sgml/ref/alter_system.sgml +++ b/doc/src/sgml/ref/alter_system.sgml @@ -50,8 +50,8 @@ ALTER SYSTEM RESET ALL the next server configuration reload, or after the next server restart in the case of parameters that can only be changed at server start. A server configuration reload can be commanded by calling the SQL - function pg_reload_conf(), running pg_ctl reload, - or sending a SIGHUP signal to the main server process. + function pg_reload_conf(), running pg_ctl reload, + or sending a SIGHUP signal to the main server process. @@ -95,8 +95,8 @@ ALTER SYSTEM RESET ALL This command can't be used to set , - nor parameters that are not allowed in postgresql.conf - (e.g., preset options). + nor parameters that are not allowed in postgresql.conf + (e.g., preset options). @@ -108,7 +108,7 @@ ALTER SYSTEM RESET ALL Examples - Set the wal_level: + Set the wal_level: ALTER SYSTEM SET wal_level = replica; @@ -116,7 +116,7 @@ ALTER SYSTEM SET wal_level = replica; Undo that, restoring whatever setting was effective - in postgresql.conf: + in postgresql.conf: ALTER SYSTEM RESET wal_level; diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 0559f80549..68393d70b4 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -126,7 +126,7 @@ ALTER TABLE [ IF EXISTS ] name Multivariate statistics referencing the dropped column will also be removed if the removal of the column would cause the statistics to contain data for only a single column. - You will need to say CASCADE if anything outside the table + You will need to say CASCADE if anything outside the table depends on the column, for example, foreign key references or views. If IF EXISTS is specified and the column does not exist, no error is thrown. In this case a notice @@ -162,7 +162,7 @@ ALTER TABLE [ IF EXISTS ] name These forms set or remove the default value for a column. Default values only apply in subsequent INSERT - or UPDATE commands; they do not cause rows already in the + or UPDATE commands; they do not cause rows already in the table to change. @@ -174,7 +174,7 @@ ALTER TABLE [ IF EXISTS ] name These forms change whether a column is marked to allow null values or to reject null values. You can only use SET - NOT NULL when the column contains no null values. + NOT NULL when the column contains no null values. @@ -182,7 +182,7 @@ ALTER TABLE [ IF EXISTS ] name on a column if it is marked NOT NULL in the parent table. To drop the NOT NULL constraint from all the partitions, perform DROP NOT NULL on the parent - table. Even if there is no NOT NULL constraint on the + table. Even if there is no NOT NULL constraint on the parent, such a constraint can still be added to individual partitions, if desired; that is, the children can disallow nulls even if the parent allows them, but not the other way around. @@ -249,17 +249,17 @@ ALTER TABLE [ IF EXISTS ] name This form sets or resets per-attribute options. Currently, the only - defined per-attribute options are n_distinct and - n_distinct_inherited, which override the + defined per-attribute options are n_distinct and + n_distinct_inherited, which override the number-of-distinct-values estimates made by subsequent - operations. n_distinct affects the statistics for the table - itself, while n_distinct_inherited affects the statistics + operations. 
n_distinct affects the statistics for the table + itself, while n_distinct_inherited affects the statistics gathered for the table plus its inheritance children. When set to a - positive value, ANALYZE will assume that the column contains + positive value, ANALYZE will assume that the column contains exactly the specified number of distinct nonnull values. When set to a negative value, which must be greater - than or equal to -1, ANALYZE will assume that the number of + than or equal to -1, ANALYZE will assume that the number of distinct nonnull values in the column is linear in the size of the table; the exact count is to be computed by multiplying the estimated table size by the absolute value of the given number. For example, @@ -290,7 +290,7 @@ ALTER TABLE [ IF EXISTS ] name This form sets the storage mode for a column. This controls whether this - column is held inline or in a secondary TOAST table, and + column is held inline or in a secondary TOAST table, and whether the data should be compressed or not. PLAIN must be used for fixed-length values such as integer and is @@ -302,7 +302,7 @@ ALTER TABLE [ IF EXISTS ] name Use of EXTERNAL will make substring operations on very large text and bytea values run faster, at the penalty of increased storage space. Note that - SET STORAGE doesn't itself change anything in the table, + SET STORAGE doesn't itself change anything in the table, it just sets the strategy to be pursued during future table updates. See for more information. @@ -335,7 +335,7 @@ ALTER TABLE [ IF EXISTS ] name ADD table_constraint_using_index - This form adds a new PRIMARY KEY or UNIQUE + This form adds a new PRIMARY KEY or UNIQUE constraint to a table based on an existing unique index. All the columns of the index will be included in the constraint. @@ -344,14 +344,14 @@ ALTER TABLE [ IF EXISTS ] name The index cannot have expression columns nor be a partial index. Also, it must be a b-tree index with default sort ordering. These restrictions ensure that the index is equivalent to one that would be - built by a regular ADD PRIMARY KEY or ADD UNIQUE + built by a regular ADD PRIMARY KEY or ADD UNIQUE command. - If PRIMARY KEY is specified, and the index's columns are not - already marked NOT NULL, then this command will attempt to - do ALTER COLUMN SET NOT NULL against each such column. + If PRIMARY KEY is specified, and the index's columns are not + already marked NOT NULL, then this command will attempt to + do ALTER COLUMN SET NOT NULL against each such column. That requires a full table scan to verify the column(s) contain no nulls. In all other cases, this is a fast operation. @@ -363,9 +363,9 @@ ALTER TABLE [ IF EXISTS ] name - After this command is executed, the index is owned by the + After this command is executed, the index is owned by the constraint, in the same way as if the index had been built by - a regular ADD PRIMARY KEY or ADD UNIQUE + a regular ADD PRIMARY KEY or ADD UNIQUE command. In particular, dropping the constraint will make the index disappear too. @@ -375,7 +375,7 @@ ALTER TABLE [ IF EXISTS ] name Adding a constraint using an existing index can be helpful in situations where a new constraint needs to be added without blocking table updates for a long time. To do that, create the index using - CREATE INDEX CONCURRENTLY, and then install it as an + CREATE INDEX CONCURRENTLY, and then install it as an official constraint using this syntax. See the example below. @@ -447,9 +447,9 @@ ALTER TABLE [ IF EXISTS ] name triggers are not executed. 
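Picking up the ADD table_constraint_using_index form described earlier, a minimal sketch of the low-locking workflow it enables (table, column, and index names are hypothetical):

CREATE UNIQUE INDEX CONCURRENTLY orders_id_uidx ON orders (order_id);  -- built without blocking writes
ALTER TABLE orders ADD CONSTRAINT orders_pkey PRIMARY KEY USING INDEX orders_id_uidx;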
The trigger firing mechanism is also affected by the configuration variable . Simply enabled - triggers will fire when the replication role is origin - (the default) or local. Triggers configured as ENABLE - REPLICA will only fire if the session is in replica + triggers will fire when the replication role is origin + (the default) or local. Triggers configured as ENABLE + REPLICA will only fire if the session is in replica mode, and triggers configured as ENABLE ALWAYS will fire regardless of the current replication mode. @@ -542,9 +542,9 @@ ALTER TABLE [ IF EXISTS ] name - Note that this is not equivalent to ADD COLUMN oid oid; + Note that this is not equivalent to ADD COLUMN oid oid; that would add a normal column that happened to be named - oid, not a system column. + oid, not a system column. @@ -609,8 +609,8 @@ ALTER TABLE [ IF EXISTS ] name will not be modified immediately by this command; depending on the parameter you might need to rewrite the table to get the desired effects. That can be done with VACUUM - FULL, or one of the forms - of ALTER TABLE that forces a table rewrite. + FULL, or one of the forms + of ALTER TABLE that forces a table rewrite. For planner related parameters, changes will take effect from the next time the table is locked so currently executing queries will not be affected. @@ -620,18 +620,18 @@ ALTER TABLE [ IF EXISTS ] name SHARE UPDATE EXCLUSIVE lock will be taken for fillfactor and autovacuum storage parameters, as well as the following planner related parameters: - effective_io_concurrency, parallel_workers, seq_page_cost, - random_page_cost, n_distinct and n_distinct_inherited. + effective_io_concurrency, parallel_workers, seq_page_cost, + random_page_cost, n_distinct and n_distinct_inherited. - While CREATE TABLE allows OIDS to be specified + While CREATE TABLE allows OIDS to be specified in the WITH (storage_parameter) syntax, - ALTER TABLE does not treat OIDS as a - storage parameter. Instead use the SET WITH OIDS - and SET WITHOUT OIDS forms to change OID status. + class="parameter">storage_parameter) syntax, + ALTER TABLE does not treat OIDS as a + storage parameter. Instead use the SET WITH OIDS + and SET WITHOUT OIDS forms to change OID status. @@ -642,7 +642,7 @@ ALTER TABLE [ IF EXISTS ] name This form resets one or more storage parameters to their - defaults. As with SET, a table rewrite might be + defaults. As with SET, a table rewrite might be needed to update the table entirely. @@ -693,11 +693,11 @@ ALTER TABLE [ IF EXISTS ] name This form links the table to a composite type as though CREATE - TABLE OF had formed it. The table's list of column names and types + TABLE OF had formed it. The table's list of column names and types must precisely match that of the composite type; the presence of - an oid system column is permitted to differ. The table must + an oid system column is permitted to differ. The table must not inherit from any other table. These restrictions ensure - that CREATE TABLE OF would permit an equivalent table + that CREATE TABLE OF would permit an equivalent table definition. @@ -728,13 +728,13 @@ ALTER TABLE [ IF EXISTS ] name This form changes the information which is written to the write-ahead log to identify rows which are updated or deleted. This option has no effect - except when logical replication is in use. DEFAULT + except when logical replication is in use. DEFAULT (the default for non-system tables) records the - old values of the columns of the primary key, if any. 
USING INDEX + old values of the columns of the primary key, if any. USING INDEX records the old values of the columns covered by the named index, which must be unique, not partial, not deferrable, and include only columns marked - NOT NULL. FULL records the old values of all columns - in the row. NOTHING records no information about the old row. + NOT NULL. FULL records the old values of all columns + in the row. NOTHING records no information about the old row. (This is the default for system tables.) In all cases, no old values are logged unless at least one of the columns that would be logged differs between the old and new versions of the row. @@ -853,7 +853,7 @@ ALTER TABLE [ IF EXISTS ] name - You must own the table to use ALTER TABLE. + You must own the table to use ALTER TABLE. To change the schema or tablespace of a table, you must also have CREATE privilege on the new schema or tablespace. To add the table as a new child of a parent table, you must own the parent @@ -890,10 +890,10 @@ ALTER TABLE [ IF EXISTS ] name The name (optionally schema-qualified) of an existing table to - alter. If ONLY is specified before the table name, only - that table is altered. If ONLY is not specified, the table + alter. If ONLY is specified before the table name, only + that table is altered. If ONLY is not specified, the table and all its descendant tables (if any) are altered. Optionally, - * can be specified after the table name to explicitly + * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -1106,28 +1106,28 @@ ALTER TABLE [ IF EXISTS ] name When a column is added with ADD COLUMN, all existing rows in the table are initialized with the column's default value - (NULL if no DEFAULT clause is specified). - If there is no DEFAULT clause, this is merely a metadata + (NULL if no DEFAULT clause is specified). + If there is no DEFAULT clause, this is merely a metadata change and does not require any immediate update of the table's data; the added NULL values are supplied on readout, instead. - Adding a column with a DEFAULT clause or changing the type of + Adding a column with a DEFAULT clause or changing the type of an existing column will require the entire table and its indexes to be rewritten. As an exception when changing the type of an existing column, - if the USING clause does not change the column + if the USING clause does not change the column contents and the old type is either binary coercible to the new type or an unconstrained domain over the new type, a table rewrite is not needed; but any indexes on the affected columns must still be rebuilt. Adding or - removing a system oid column also requires rewriting the entire + removing a system oid column also requires rewriting the entire table. Table and/or index rebuilds may take a significant amount of time for a large table; and will temporarily require as much as double the disk space. - Adding a CHECK or NOT NULL constraint requires + Adding a CHECK or NOT NULL constraint requires scanning the table to verify that existing rows meet the constraint, but does not require a table rewrite. @@ -1139,7 +1139,7 @@ ALTER TABLE [ IF EXISTS ] name The main reason for providing the option to specify multiple changes - in a single ALTER TABLE is that multiple table scans or + in a single ALTER TABLE is that multiple table scans or rewrites can thereby be combined into a single pass over the table. 
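For instance, a hedged sketch of combining several actions so that the table is scanned or rewritten only once (table, column names, and types are hypothetical):

ALTER TABLE measurements
    ADD COLUMN recorded_by text NOT NULL DEFAULT 'unknown',   -- non-null default: needs a rewrite
    ALTER COLUMN reading TYPE numeric(9,3),                    -- type change: also needs a rewrite
    ALTER COLUMN sensor_id SET NOT NULL;                       -- needs a full-table scan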
@@ -1151,37 +1151,37 @@ ALTER TABLE [ IF EXISTS ] name reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated. (These statements do - not apply when dropping the system oid column; that is done + not apply when dropping the system oid column; that is done with an immediate rewrite.) To force immediate reclamation of space occupied by a dropped column, - you can execute one of the forms of ALTER TABLE that + you can execute one of the forms of ALTER TABLE that performs a rewrite of the whole table. This results in reconstructing each row with the dropped column replaced by a null value. - The rewriting forms of ALTER TABLE are not MVCC-safe. + The rewriting forms of ALTER TABLE are not MVCC-safe. After a table rewrite, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the rewrite occurred. See for more details. - The USING option of SET DATA TYPE can actually + The USING option of SET DATA TYPE can actually specify any expression involving the old values of the row; that is, it can refer to other columns as well as the one being converted. This allows - very general conversions to be done with the SET DATA TYPE + very general conversions to be done with the SET DATA TYPE syntax. Because of this flexibility, the USING expression is not applied to the column's default value (if any); the result might not be a constant expression as required for a default. This means that when there is no implicit or assignment cast from old to - new type, SET DATA TYPE might fail to convert the default even + new type, SET DATA TYPE might fail to convert the default even though a USING clause is supplied. In such cases, - drop the default with DROP DEFAULT, perform the ALTER - TYPE, and then use SET DEFAULT to add a suitable new + drop the default with DROP DEFAULT, perform the ALTER + TYPE, and then use SET DEFAULT to add a suitable new default. Similar considerations apply to indexes and constraints involving the column. @@ -1216,11 +1216,11 @@ ALTER TABLE [ IF EXISTS ] name The actions for identity columns (ADD GENERATED, SET etc., DROP IDENTITY), as well as the actions - TRIGGER, CLUSTER, OWNER, - and TABLESPACE never recurse to descendant tables; - that is, they always act as though ONLY were specified. - Adding a constraint recurses only for CHECK constraints - that are not marked NO INHERIT. + TRIGGER, CLUSTER, OWNER, + and TABLESPACE never recurse to descendant tables; + that is, they always act as though ONLY were specified. + Adding a constraint recurses only for CHECK constraints + that are not marked NO INHERIT. @@ -1434,17 +1434,17 @@ ALTER TABLE measurement The forms ADD (without USING INDEX), - DROP [COLUMN], DROP IDENTITY, RESTART, - SET DEFAULT, SET DATA TYPE (without USING), + DROP [COLUMN], DROP IDENTITY, RESTART, + SET DEFAULT, SET DATA TYPE (without USING), SET GENERATED, and SET sequence_option conform with the SQL standard. The other forms are PostgreSQL extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single - ALTER TABLE command is an extension. + ALTER TABLE command is an extension. - ALTER TABLE DROP COLUMN can be used to drop the only + ALTER TABLE DROP COLUMN can be used to drop the only column of a table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column tables. 
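A hedged sketch of the DROP DEFAULT / ALTER TYPE ... USING / SET DEFAULT sequence recommended above (table, column, and conversion expression are hypothetical):

ALTER TABLE tickets ALTER COLUMN status DROP DEFAULT;
ALTER TABLE tickets ALTER COLUMN status TYPE integer
    USING (CASE status WHEN 'open' THEN 1 WHEN 'closed' THEN 2 ELSE 0 END);
ALTER TABLE tickets ALTER COLUMN status SET DEFAULT 1;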
diff --git a/doc/src/sgml/ref/alter_tablespace.sgml b/doc/src/sgml/ref/alter_tablespace.sgml index 4542bd90a2..def554bfb3 100644 --- a/doc/src/sgml/ref/alter_tablespace.sgml +++ b/doc/src/sgml/ref/alter_tablespace.sgml @@ -83,8 +83,8 @@ ALTER TABLESPACE name RESET ( name ON The ability to temporarily enable or disable a trigger is provided by , not by - ALTER TRIGGER, because ALTER TRIGGER has no + ALTER TRIGGER, because ALTER TRIGGER has no convenient way to express the option of enabling or disabling all of a table's triggers at once. @@ -117,7 +117,7 @@ ALTER TRIGGER emp_stamp ON emp DEPENDS ON EXTENSION emplib; Compatibility - ALTER TRIGGER is a PostgreSQL + ALTER TRIGGER is a PostgreSQL extension of the SQL standard. diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml index 72a719b862..b44aac9bf5 100644 --- a/doc/src/sgml/ref/alter_tsconfig.sgml +++ b/doc/src/sgml/ref/alter_tsconfig.sgml @@ -49,7 +49,7 @@ ALTER TEXT SEARCH CONFIGURATION name SET SCHEMA You must be the owner of the configuration to use - ALTER TEXT SEARCH CONFIGURATION. + ALTER TEXT SEARCH CONFIGURATION. @@ -136,20 +136,20 @@ ALTER TEXT SEARCH CONFIGURATION name SET SCHEMA - The ADD MAPPING FOR form installs a list of dictionaries to be + The ADD MAPPING FOR form installs a list of dictionaries to be consulted for the specified token type(s); it is an error if there is already a mapping for any of the token types. - The ALTER MAPPING FOR form does the same, but first removing + The ALTER MAPPING FOR form does the same, but first removing any existing mapping for those token types. - The ALTER MAPPING REPLACE forms substitute ALTER MAPPING REPLACE forms substitute new_dictionary for old_dictionary anywhere the latter appears. - This is done for only the specified token types when FOR + This is done for only the specified token types when FOR appears, or for all mappings of the configuration when it doesn't. - The DROP MAPPING form removes all dictionaries for the + The DROP MAPPING form removes all dictionaries for the specified token type(s), causing tokens of those types to be ignored by the text search configuration. It is an error if there is no mapping - for the token types, unless IF EXISTS appears. + for the token types, unless IF EXISTS appears. @@ -158,9 +158,9 @@ ALTER TEXT SEARCH CONFIGURATION name SET SCHEMA Examples - The following example replaces the english dictionary - with the swedish dictionary anywhere that english - is used within my_config. + The following example replaces the english dictionary + with the swedish dictionary anywhere that english + is used within my_config. diff --git a/doc/src/sgml/ref/alter_tsdictionary.sgml b/doc/src/sgml/ref/alter_tsdictionary.sgml index 7cecabea83..16d76687ab 100644 --- a/doc/src/sgml/ref/alter_tsdictionary.sgml +++ b/doc/src/sgml/ref/alter_tsdictionary.sgml @@ -41,7 +41,7 @@ ALTER TEXT SEARCH DICTIONARY name SET SCHEMA You must be the owner of the dictionary to use - ALTER TEXT SEARCH DICTIONARY. + ALTER TEXT SEARCH DICTIONARY. @@ -126,7 +126,7 @@ ALTER TEXT SEARCH DICTIONARY my_dict ( StopWords = newrussian ); - The following example command changes the language option to dutch, + The following example command changes the language option to dutch, and removes the stopword option entirely. 
@@ -135,7 +135,7 @@ ALTER TEXT SEARCH DICTIONARY my_dict ( language = dutch, StopWords ); - The following example command updates the dictionary's + The following example command updates the dictionary's definition without actually changing anything. @@ -144,7 +144,7 @@ ALTER TEXT SEARCH DICTIONARY my_dict ( dummy ); (The reason this works is that the option removal code doesn't complain if there is no such option.) This trick is useful when changing - configuration files for the dictionary: the ALTER will + configuration files for the dictionary: the ALTER will force existing database sessions to re-read the configuration files, which otherwise they would never do if they had read them earlier. diff --git a/doc/src/sgml/ref/alter_tsparser.sgml b/doc/src/sgml/ref/alter_tsparser.sgml index e2b6060a17..737a507565 100644 --- a/doc/src/sgml/ref/alter_tsparser.sgml +++ b/doc/src/sgml/ref/alter_tsparser.sgml @@ -36,7 +36,7 @@ ALTER TEXT SEARCH PARSER name SET SCHEMA - You must be a superuser to use ALTER TEXT SEARCH PARSER. + You must be a superuser to use ALTER TEXT SEARCH PARSER. diff --git a/doc/src/sgml/ref/alter_tstemplate.sgml b/doc/src/sgml/ref/alter_tstemplate.sgml index e7ae91c0a0..d9a753017b 100644 --- a/doc/src/sgml/ref/alter_tstemplate.sgml +++ b/doc/src/sgml/ref/alter_tstemplate.sgml @@ -36,7 +36,7 @@ ALTER TEXT SEARCH TEMPLATE name SET SCHEMA - You must be a superuser to use ALTER TEXT SEARCH TEMPLATE. + You must be a superuser to use ALTER TEXT SEARCH TEMPLATE. diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml index 6c5201ccb5..75be3187f1 100644 --- a/doc/src/sgml/ref/alter_type.sgml +++ b/doc/src/sgml/ref/alter_type.sgml @@ -147,7 +147,7 @@ ALTER TYPE name RENAME VALUE - You must own the type to use ALTER TYPE. + You must own the type to use ALTER TYPE. To change the schema of a type, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new @@ -290,7 +290,7 @@ ALTER TYPE name RENAME VALUE Notes - ALTER TYPE ... ADD VALUE (the form that adds a new value to an + ALTER TYPE ... ADD VALUE (the form that adds a new value to an enum type) cannot be executed inside a transaction block. @@ -301,7 +301,7 @@ ALTER TYPE name RENAME VALUE wrapped - around since the original creation of the enum type). The slowdown is + around since the original creation of the enum type). The slowdown is usually insignificant; but if it matters, optimal performance can be regained by dropping and recreating the enum type, or by dumping and reloading the database. diff --git a/doc/src/sgml/ref/alter_user_mapping.sgml b/doc/src/sgml/ref/alter_user_mapping.sgml index 5cc49210ed..18271d5199 100644 --- a/doc/src/sgml/ref/alter_user_mapping.sgml +++ b/doc/src/sgml/ref/alter_user_mapping.sgml @@ -38,7 +38,7 @@ ALTER USER MAPPING FOR { user_name The owner of a foreign server can alter user mappings for that server for any user. Also, a user can alter a user mapping for - their own user name if USAGE privilege on the server has + their own user name if USAGE privilege on the server has been granted to the user. @@ -51,9 +51,9 @@ ALTER USER MAPPING FOR { user_name user_name - User name of the mapping. CURRENT_USER - and USER match the name of the current - user. PUBLIC is used to match all present and future + User name of the mapping. CURRENT_USER + and USER match the name of the current + user. PUBLIC is used to match all present and future user names in the system. 
@@ -74,8 +74,8 @@ ALTER USER MAPPING FOR { user_name Change options for the user mapping. The new options override any previously specified - options. ADD, SET, and DROP - specify the action to be performed. ADD is assumed + options. ADD, SET, and DROP + specify the action to be performed. ADD is assumed if no operation is explicitly specified. Option names must be unique; options are also validated by the server's foreign-data wrapper. @@ -89,7 +89,7 @@ ALTER USER MAPPING FOR { user_name Examples - Change the password for user mapping bob, server foo: + Change the password for user mapping bob, server foo: ALTER USER MAPPING FOR bob SERVER foo OPTIONS (SET password 'public'); diff --git a/doc/src/sgml/ref/alter_view.sgml b/doc/src/sgml/ref/alter_view.sgml index 788eda5d58..e7180b4409 100644 --- a/doc/src/sgml/ref/alter_view.sgml +++ b/doc/src/sgml/ref/alter_view.sgml @@ -37,12 +37,12 @@ ALTER VIEW [ IF EXISTS ] name RESET ALTER VIEW changes various auxiliary properties of a view. (If you want to modify the view's defining query, - use CREATE OR REPLACE VIEW.) + use CREATE OR REPLACE VIEW.) - You must own the view to use ALTER VIEW. - To change a view's schema, you must also have CREATE + You must own the view to use ALTER VIEW. + To change a view's schema, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on @@ -81,7 +81,7 @@ ALTER VIEW [ IF EXISTS ] name RESET These forms set or remove the default value for a column. A view column's default value is substituted into any - INSERT or UPDATE command whose target is the + INSERT or UPDATE command whose target is the view, before applying any rules or triggers for the view. The view's default will therefore take precedence over any default values from underlying relations. @@ -185,7 +185,7 @@ INSERT INTO a_view(id) VALUES(2); -- ts will receive the current time Compatibility - ALTER VIEW is a PostgreSQL + ALTER VIEW is a PostgreSQL extension of the SQL standard. diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml index eae7fe92e0..12f2f09337 100644 --- a/doc/src/sgml/ref/analyze.sgml +++ b/doc/src/sgml/ref/analyze.sgml @@ -35,7 +35,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns ANALYZE collects statistics about the contents of tables in the database, and stores the results in the pg_statistic + linkend="catalog-pg-statistic">pg_statistic system catalog. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries. @@ -93,7 +93,7 @@ ANALYZE [ VERBOSE ] [ table_and_columnsOutputs - When VERBOSE is specified, ANALYZE emits + When VERBOSE is specified, ANALYZE emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well. @@ -104,8 +104,8 @@ ANALYZE [ VERBOSE ] [ table_and_columns Foreign tables are analyzed only when explicitly selected. Not all - foreign data wrappers support ANALYZE. If the table's - wrapper does not support ANALYZE, the command prints a + foreign data wrappers support ANALYZE. If the table's + wrapper does not support ANALYZE, the command prints a warning and does nothing. @@ -172,8 +172,8 @@ ANALYZE [ VERBOSE ] [ table_and_columnspg_statistic. In particular, setting the statistics target to zero disables collection of statistics for that column. 
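For example, a minimal hedged sketch of disabling statistics collection for one column (table and column names are hypothetical):

ALTER TABLE app_events ALTER COLUMN raw_payload SET STATISTICS 0;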
It might be useful to do that for columns that are - never used as part of the WHERE, GROUP BY, - or ORDER BY clauses of queries, since the planner will + never used as part of the WHERE, GROUP BY, + or ORDER BY clauses of queries, since the planner will have no use for statistics on such columns. @@ -191,7 +191,7 @@ ANALYZE [ VERBOSE ] [ table_and_columnsALTER TABLE ... ALTER COLUMN ... SET (n_distinct = ...) + ALTER TABLE ... ALTER COLUMN ... SET (n_distinct = ...) (see ). @@ -210,7 +210,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns If any of the child tables are foreign tables whose foreign data wrappers - do not support ANALYZE, those child tables are ignored while + do not support ANALYZE, those child tables are ignored while gathering inheritance statistics. diff --git a/doc/src/sgml/ref/begin.sgml b/doc/src/sgml/ref/begin.sgml index c04f1c8064..fd6f073d18 100644 --- a/doc/src/sgml/ref/begin.sgml +++ b/doc/src/sgml/ref/begin.sgml @@ -91,7 +91,7 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_mode has the same functionality - as BEGIN. + as BEGIN. @@ -101,7 +101,7 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_mode - Issuing BEGIN when already inside a transaction block will + Issuing BEGIN when already inside a transaction block will provoke a warning message. The state of the transaction is not affected. To nest transactions within a transaction block, use savepoints (see ). diff --git a/doc/src/sgml/ref/close.sgml b/doc/src/sgml/ref/close.sgml index aaa2f89a30..4d71c45797 100644 --- a/doc/src/sgml/ref/close.sgml +++ b/doc/src/sgml/ref/close.sgml @@ -90,7 +90,7 @@ CLOSE { name | ALL } You can see all available cursors by querying the pg_cursors system view. + linkend="view-pg-cursors">pg_cursors system view. @@ -115,7 +115,7 @@ CLOSE liahona; CLOSE is fully conforming with the SQL - standard. CLOSE ALL is a PostgreSQL + standard. CLOSE ALL is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/cluster.sgml b/doc/src/sgml/ref/cluster.sgml index b55734d35c..5c5db75077 100644 --- a/doc/src/sgml/ref/cluster.sgml +++ b/doc/src/sgml/ref/cluster.sgml @@ -128,7 +128,7 @@ CLUSTER [VERBOSE] - CLUSTER can re-sort the table using either an index scan + CLUSTER can re-sort the table using either an index scan on the specified index, or (if the index is a b-tree) a sequential scan followed by sorting. It will attempt to choose the method that will be faster, based on planner cost parameters and available statistical @@ -148,13 +148,13 @@ CLUSTER [VERBOSE] as double the table size, plus the index sizes. This method is often faster than the index scan method, but if the disk space requirement is intolerable, you can disable this choice by temporarily setting to off. + linkend="guc-enable-sort"> to off. It is advisable to set to a reasonably large value (but not more than the amount of RAM you can - dedicate to the CLUSTER operation) before clustering. + dedicate to the CLUSTER operation) before clustering. @@ -168,7 +168,7 @@ CLUSTER [VERBOSE] Because CLUSTER remembers which indexes are clustered, one can cluster the tables one wants clustered manually the first time, then set up a periodic maintenance script that executes - CLUSTER without any parameters, so that the desired tables + CLUSTER without any parameters, so that the desired tables are periodically reclustered. @@ -212,7 +212,7 @@ CLUSTER; CLUSTER index_name ON table_name - is also supported for compatibility with pre-8.3 PostgreSQL + is also supported for compatibility with pre-8.3 PostgreSQL versions. 
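A hedged sketch of the maintenance pattern described above for CLUSTER, where the clustering index is recorded once and later reclustering needs no parameters (table and index names are hypothetical):

CLUSTER employees USING employees_dept_idx;   -- records employees_dept_idx as the clustering index
CLUSTER;                                      -- later: reclusters all previously clustered tables you own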
diff --git a/doc/src/sgml/ref/clusterdb.sgml b/doc/src/sgml/ref/clusterdb.sgml index 67582fd6e6..081bbc5f7a 100644 --- a/doc/src/sgml/ref/clusterdb.sgml +++ b/doc/src/sgml/ref/clusterdb.sgml @@ -76,8 +76,8 @@ PostgreSQL documentation - - + + Cluster all databases. @@ -86,8 +86,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to be clustered. @@ -101,8 +101,8 @@ PostgreSQL documentation - - + + Echo the commands that clusterdb generates @@ -112,8 +112,8 @@ PostgreSQL documentation - - + + Do not display progress messages. @@ -122,20 +122,20 @@ PostgreSQL documentation - - + + Cluster table only. Multiple tables can be clustered by writing multiple - switches. - - + + Print detailed information during processing. @@ -144,8 +144,8 @@ PostgreSQL documentation - - + + Print the clusterdb version and exit. @@ -154,8 +154,8 @@ PostgreSQL documentation - - + + Show help about clusterdb command line @@ -173,8 +173,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the server is @@ -185,8 +185,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -197,8 +197,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -207,8 +207,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -222,8 +222,8 @@ PostgreSQL documentation - - + + Force clusterdb to prompt for a @@ -236,14 +236,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, clusterdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. - + Specifies the name of the database to connect to discover what other @@ -277,8 +277,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/comment.sgml b/doc/src/sgml/ref/comment.sgml index 059d6f41d8..ab2e09d521 100644 --- a/doc/src/sgml/ref/comment.sgml +++ b/doc/src/sgml/ref/comment.sgml @@ -83,16 +83,16 @@ COMMENT ON Only one comment string is stored for each object, so to modify a comment, - issue a new COMMENT command for the same object. To remove a + issue a new COMMENT command for the same object. To remove a comment, write NULL in place of the text string. Comments are automatically dropped when their object is dropped. For most kinds of object, only the object's owner can set the comment. - Roles don't have owners, so the rule for COMMENT ON ROLE is + Roles don't have owners, so the rule for COMMENT ON ROLE is that you must be superuser to comment on a superuser role, or have the - CREATEROLE privilege to comment on non-superuser roles. + CREATEROLE privilege to comment on non-superuser roles. Likewise, access methods don't have owners either; you must be superuser to comment on an access method. Of course, a superuser can comment on anything. @@ -103,8 +103,8 @@ COMMENT ON \d family of commands. Other user interfaces to retrieve comments can be built atop the same built-in functions that psql uses, namely - obj_description, col_description, - and shobj_description + obj_description, col_description, + and shobj_description (see ). @@ -171,14 +171,14 @@ COMMENT ON The mode of a function or aggregate - argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. 
+ argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that COMMENT does not actually pay - any attention to OUT arguments, since only the input + any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. + So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. @@ -219,7 +219,7 @@ COMMENT ON The data type(s) of the operator's arguments (optionally - schema-qualified). Write NONE for the missing argument + schema-qualified). Write NONE for the missing argument of a prefix or postfix operator. @@ -258,7 +258,7 @@ COMMENT ON text - The new comment, written as a string literal; or NULL + The new comment, written as a string literal; or NULL to drop the comment. diff --git a/doc/src/sgml/ref/commit.sgml b/doc/src/sgml/ref/commit.sgml index e93c216849..8e3f53957e 100644 --- a/doc/src/sgml/ref/commit.sgml +++ b/doc/src/sgml/ref/commit.sgml @@ -60,7 +60,7 @@ COMMIT [ WORK | TRANSACTION ] - Issuing COMMIT when not inside a transaction does + Issuing COMMIT when not inside a transaction does no harm, but it will provoke a warning message. diff --git a/doc/src/sgml/ref/commit_prepared.sgml b/doc/src/sgml/ref/commit_prepared.sgml index 716aed3ac2..35bbf85af7 100644 --- a/doc/src/sgml/ref/commit_prepared.sgml +++ b/doc/src/sgml/ref/commit_prepared.sgml @@ -75,7 +75,7 @@ COMMIT PREPARED transaction_id Examples Commit the transaction identified by the transaction - identifier foobar: + identifier foobar: COMMIT PREPARED 'foobar'; diff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml index 732efe69e6..8f0974b256 100644 --- a/doc/src/sgml/ref/copy.sgml +++ b/doc/src/sgml/ref/copy.sgml @@ -54,10 +54,10 @@ COPY { table_name [ ( COPY moves data between PostgreSQL tables and standard file-system files. COPY TO copies the contents of a table - to a file, while COPY FROM copies - data from a file to a table (appending the data to + to a file, while COPY FROM copies + data from a file to a table (appending the data to whatever is in the table already). COPY TO - can also copy the results of a SELECT query. + can also copy the results of a SELECT query. @@ -118,10 +118,10 @@ COPY { table_name [ ( - For INSERT, UPDATE and - DELETE queries a RETURNING clause must be provided, + For INSERT, UPDATE and + DELETE queries a RETURNING clause must be provided, and the target relation must not have a conditional rule, nor - an ALSO rule, nor an INSTEAD rule + an ALSO rule, nor an INSTEAD rule that expands to multiple statements. @@ -133,7 +133,7 @@ COPY { table_name [ ( The path name of the input or output file. An input file name can be an absolute or relative path, but an output file name must be an absolute - path. Windows users might need to use an E'' string and + path. Windows users might need to use an E'' string and double any backslashes used in the path name. @@ -144,7 +144,7 @@ COPY { table_name [ ( A command to execute. In COPY FROM, the input is - read from standard output of the command, and in COPY TO, + read from standard output of the command, and in COPY TO, the output is written to the standard input of the command. @@ -181,9 +181,9 @@ COPY { table_name [ ( Specifies whether the selected option should be turned on or off. - You can write TRUE, ON, or + You can write TRUE, ON, or 1 to enable the option, and FALSE, - OFF, or 0 to disable it. The + OFF, or 0 to disable it. 
The boolean value can also be omitted, in which case TRUE is assumed. @@ -195,10 +195,10 @@ COPY { table_name [ ( Selects the data format to be read or written: - text, - csv (Comma Separated Values), - or binary. - The default is text. + text, + csv (Comma Separated Values), + or binary. + The default is text. @@ -220,7 +220,7 @@ COPY { table_name [ ( Requests copying the data with rows already frozen, just as they - would be after running the VACUUM FREEZE command. + would be after running the VACUUM FREEZE command. This is intended as a performance option for initial data loading. Rows will be frozen only if the table being loaded has been created or truncated in the current subtransaction, there are no cursors @@ -241,9 +241,9 @@ COPY { table_name [ ( Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, - a comma in CSV format. + a comma in CSV format. This must be a single one-byte character. - This option is not allowed when using binary format. + This option is not allowed when using binary format. @@ -254,10 +254,10 @@ COPY { table_name [ ( Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty - string in CSV format. You might prefer an + string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. - This option is not allowed when using binary format. + This option is not allowed when using binary format. @@ -279,7 +279,7 @@ COPY { table_name [ ( CSV format. + This option is allowed only when using CSV format. @@ -291,7 +291,7 @@ COPY { table_name [ ( CSV format. + This option is allowed only when using CSV format. @@ -301,59 +301,59 @@ COPY { table_name [ ( Specifies the character that should appear before a - data character that matches the QUOTE value. - The default is the same as the QUOTE value (so that + data character that matches the QUOTE value. + The default is the same as the QUOTE value (so that the quoting character is doubled if it appears in the data). This must be a single one-byte character. - This option is allowed only when using CSV format. + This option is allowed only when using CSV format. - FORCE_QUOTE + FORCE_QUOTE Forces quoting to be - used for all non-NULL values in each specified column. - NULL output is never quoted. If * is specified, - non-NULL values will be quoted in all columns. - This option is allowed only in COPY TO, and only when - using CSV format. + used for all non-NULL values in each specified column. + NULL output is never quoted. If * is specified, + non-NULL values will be quoted in all columns. + This option is allowed only in COPY TO, and only when + using CSV format. - FORCE_NOT_NULL + FORCE_NOT_NULL Do not match the specified columns' values against the null string. In the default case where the null string is empty, this means that empty values will be read as zero-length strings rather than nulls, even when they are not quoted. - This option is allowed only in COPY FROM, and only when - using CSV format. + This option is allowed only in COPY FROM, and only when + using CSV format. - FORCE_NULL + FORCE_NULL Match the specified columns' values against the null string, even if it has been quoted, and if a match is found set the value to - NULL. In the default case where the null string is empty, + NULL. In the default case where the null string is empty, this converts a quoted empty string into NULL. 
- This option is allowed only in COPY FROM, and only when - using CSV format. + This option is allowed only in COPY FROM, and only when + using CSV format. - ENCODING + ENCODING Specifies that the file is encoded in the table_name [ ( Outputs - On successful completion, a COPY command returns a command + On successful completion, a COPY command returns a command tag of the form COPY count @@ -382,10 +382,10 @@ COPY count - psql will print this command tag only if the command - was not COPY ... TO STDOUT, or the - equivalent psql meta-command - \copy ... to stdout. This is to prevent confusing the + psql will print this command tag only if the command + was not COPY ... TO STDOUT, or the + equivalent psql meta-command + \copy ... to stdout. This is to prevent confusing the command tag with the data that was just printed. @@ -403,16 +403,16 @@ COPY count COPY FROM can be used with plain tables and with views - that have INSTEAD OF INSERT triggers. + that have INSTEAD OF INSERT triggers. COPY only deals with the specific table named; it does not copy data to or from child tables. Thus for example - COPY table TO + COPY table TO shows the same data as SELECT * FROM ONLY table. But COPY - (SELECT * FROM table) TO ... + class="parameter">table. But COPY + (SELECT * FROM table) TO ... can be used to dump all of the data in an inheritance hierarchy. @@ -427,7 +427,7 @@ COPY count If row-level security is enabled for the table, the relevant SELECT policies will apply to COPY - table TO statements. + table TO statements. Currently, COPY FROM is not supported for tables with row-level security. Use equivalent INSERT statements instead. @@ -491,10 +491,10 @@ COPY count DateStyle. To ensure portability to other PostgreSQL installations that might use non-default DateStyle settings, - DateStyle should be set to ISO before - using COPY TO. It is also a good idea to avoid dumping + DateStyle should be set to ISO before + using COPY TO. It is also a good idea to avoid dumping data with IntervalStyle set to - sql_standard, because negative interval values might be + sql_standard, because negative interval values might be misinterpreted by a server that has a different setting for IntervalStyle. @@ -519,7 +519,7 @@ COPY count - FORCE_NULL and FORCE_NOT_NULL can be used + FORCE_NULL and FORCE_NOT_NULL can be used simultaneously on the same column. This results in converting quoted null strings to null values and unquoted null strings to empty strings. @@ -533,7 +533,7 @@ COPY count Text Format - When the text format is used, + When the text format is used, the data read or written is a text file with one line per table row. Columns in a row are separated by the delimiter character. The column values themselves are strings generated by the @@ -548,17 +548,17 @@ COPY count End of data can be represented by a single line containing just - backslash-period (\.). An end-of-data marker is + backslash-period (\.). An end-of-data marker is not necessary when reading from a file, since the end of file serves perfectly well; it is needed only when copying data to or from client applications using pre-3.0 client protocol. - Backslash characters (\) can be used in the + Backslash characters (\) can be used in the COPY data to quote data characters that might otherwise be taken as row or column delimiters. 
In particular, the - following characters must be preceded by a backslash if + following characters must be preceded by a backslash if they appear as part of a column value: backslash itself, newline, carriage return, and the current delimiter character. @@ -587,37 +587,37 @@ COPY count - \b + \b Backspace (ASCII 8) - \f + \f Form feed (ASCII 12) - \n + \n Newline (ASCII 10) - \r + \r Carriage return (ASCII 13) - \t + \t Tab (ASCII 9) - \v + \v Vertical tab (ASCII 11) - \digits + \digits Backslash followed by one to three octal digits specifies the character with that numeric code - \xdigits - Backslash x followed by one or two hex digits specifies + \xdigits + Backslash x followed by one or two hex digits specifies the character with that numeric code @@ -633,15 +633,15 @@ COPY count Any other backslashed character that is not mentioned in the above table will be taken to represent itself. However, beware of adding backslashes unnecessarily, since that might accidentally produce a string matching the - end-of-data marker (\.) or the null string (\N by + end-of-data marker (\.) or the null string (\N by default). These strings will be recognized before any other backslash processing is done. It is strongly recommended that applications generating COPY data convert - data newlines and carriage returns to the \n and - \r sequences respectively. At present it is + data newlines and carriage returns to the \n and + \r sequences respectively. At present it is possible to represent a data carriage return by a backslash and carriage return, and to represent a data newline by a backslash and newline. However, these representations might not be accepted in future releases. @@ -652,10 +652,10 @@ COPY count COPY TO will terminate each row with a Unix-style - newline (\n). Servers running on Microsoft Windows instead - output carriage return/newline (\r\n), but only for - COPY to a server file; for consistency across platforms, - COPY TO STDOUT always sends \n + newline (\n). Servers running on Microsoft Windows instead + output carriage return/newline (\r\n), but only for + COPY to a server file; for consistency across platforms, + COPY TO STDOUT always sends \n regardless of server platform. COPY FROM can handle lines ending with newlines, carriage returns, or carriage return/newlines. To reduce the risk of @@ -670,62 +670,62 @@ COPY count This format option is used for importing and exporting the Comma - Separated Value (CSV) file format used by many other + Separated Value (CSV) file format used by many other programs, such as spreadsheets. Instead of the escaping rules used by PostgreSQL's standard text format, it produces and recognizes the common CSV escaping mechanism. - The values in each record are separated by the DELIMITER + The values in each record are separated by the DELIMITER character. If the value contains the delimiter character, the - QUOTE character, the NULL string, a carriage + QUOTE character, the NULL string, a carriage return, or line feed character, then the whole value is prefixed and - suffixed by the QUOTE character, and any occurrence - within the value of a QUOTE character or the - ESCAPE character is preceded by the escape character. - You can also use FORCE_QUOTE to force quotes when outputting - non-NULL values in specific columns. + suffixed by the QUOTE character, and any occurrence + within the value of a QUOTE character or the + ESCAPE character is preceded by the escape character. 
+ You can also use FORCE_QUOTE to force quotes when outputting + non-NULL values in specific columns. - The CSV format has no standard way to distinguish a - NULL value from an empty string. - PostgreSQL's COPY handles this by quoting. - A NULL is output as the NULL parameter string - and is not quoted, while a non-NULL value matching the - NULL parameter string is quoted. For example, with the - default settings, a NULL is written as an unquoted empty + The CSV format has no standard way to distinguish a + NULL value from an empty string. + PostgreSQL's COPY handles this by quoting. + A NULL is output as the NULL parameter string + and is not quoted, while a non-NULL value matching the + NULL parameter string is quoted. For example, with the + default settings, a NULL is written as an unquoted empty string, while an empty string data value is written with double quotes - (""). Reading values follows similar rules. You can - use FORCE_NOT_NULL to prevent NULL input + (""). Reading values follows similar rules. You can + use FORCE_NOT_NULL to prevent NULL input comparisons for specific columns. You can also use - FORCE_NULL to convert quoted null string data values to - NULL. + FORCE_NULL to convert quoted null string data values to + NULL. - Because backslash is not a special character in the CSV - format, \., the end-of-data marker, could also appear - as a data value. To avoid any misinterpretation, a \. + Because backslash is not a special character in the CSV + format, \., the end-of-data marker, could also appear + as a data value. To avoid any misinterpretation, a \. data value appearing as a lone entry on a line is automatically quoted on output, and on input, if quoted, is not interpreted as the end-of-data marker. If you are loading a file created by another application that has a single unquoted column and might have a - value of \., you might need to quote that value in the + value of \., you might need to quote that value in the input file. - In CSV format, all characters are significant. A quoted value + In CSV format, all characters are significant. A quoted value surrounded by white space, or any characters other than - DELIMITER, will include those characters. This can cause - errors if you import data from a system that pads CSV + DELIMITER, will include those characters. This can cause + errors if you import data from a system that pads CSV lines with white space out to some fixed width. If such a situation - arises you might need to preprocess the CSV file to remove + arises you might need to preprocess the CSV file to remove the trailing white space, before importing the data into - PostgreSQL. + PostgreSQL. @@ -743,7 +743,7 @@ COPY count Many programs produce strange and occasionally perverse CSV files, so the file format is more a convention than a standard. Thus you might encounter some files that cannot be imported using this - mechanism, and COPY might produce files that other + mechanism, and COPY might produce files that other programs cannot process. @@ -756,17 +756,17 @@ COPY count The binary format option causes all data to be stored/read as binary format rather than as text. It is - somewhat faster than the text and CSV formats, + somewhat faster than the text and CSV formats, but a binary-format file is less portable across machine architectures and PostgreSQL versions. 
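Stepping back to the CSV-handling options discussed above, a minimal hedged sketch (the server-side file path and the column name are hypothetical; country is the table used in the surrounding examples):

COPY country FROM '/tmp/country.csv'
    WITH (FORMAT csv, NULL '', FORCE_NULL (country_name));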
Also, the binary format is very data type specific; for example - it will not work to output binary data from a smallint column - and read it into an integer column, even though that would work + it will not work to output binary data from a smallint column + and read it into an integer column, even though that would work fine in text format. - The binary file format consists + The binary file format consists of a file header, zero or more tuples containing the row data, and a file trailer. Headers and data are in network byte order. @@ -790,7 +790,7 @@ COPY count Signature -11-byte sequence PGCOPY\n\377\r\n\0 — note that the zero byte +11-byte sequence PGCOPY\n\377\r\n\0 — note that the zero byte is a required part of the signature. (The signature is designed to allow easy identification of files that have been munged by a non-8-bit-clean transfer. This signature will be changed by end-of-line-translation @@ -804,7 +804,7 @@ filters, dropped zero bytes, dropped high bits, or parity changes.) 32-bit integer bit mask to denote important aspects of the file format. Bits -are numbered from 0 (LSB) to 31 (MSB). Note that +are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues; a reader @@ -880,7 +880,7 @@ to be specified. To determine the appropriate binary format for the actual tuple data you should consult the PostgreSQL source, in -particular the *send and *recv functions for +particular the *send and *recv functions for each column's data type (typically these functions are found in the src/backend/utils/adt/ directory of the source distribution). @@ -924,7 +924,7 @@ COPY country TO STDOUT (DELIMITER '|'); - To copy data from a file into the country table: + To copy data from a file into the country table: COPY country FROM '/usr1/proj/bray/sql/country_data'; @@ -986,7 +986,7 @@ ZW ZIMBABWE - The following syntax was used before PostgreSQL + The following syntax was used before PostgreSQL version 9.0 and is still supported: @@ -1015,13 +1015,13 @@ COPY { table_name [ ( column_name [, ...] | * } ] ] ] - Note that in this syntax, BINARY and CSV are - treated as independent keywords, not as arguments of a FORMAT + Note that in this syntax, BINARY and CSV are + treated as independent keywords, not as arguments of a FORMAT option. - The following syntax was used before PostgreSQL + The following syntax was used before PostgreSQL version 7.3 and is still supported: diff --git a/doc/src/sgml/ref/create_access_method.sgml b/doc/src/sgml/ref/create_access_method.sgml index 891926dba5..1bb1a79bd2 100644 --- a/doc/src/sgml/ref/create_access_method.sgml +++ b/doc/src/sgml/ref/create_access_method.sgml @@ -73,7 +73,7 @@ CREATE ACCESS METHOD name handler_function is the name (possibly schema-qualified) of a previously registered function that represents the access method. The handler function must be - declared to take a single argument of type internal, + declared to take a single argument of type internal, and its return type depends on the type of access method; for INDEX access methods, it must be index_am_handler. 
The C-level API that the handler @@ -89,8 +89,8 @@ CREATE ACCESS METHOD name Examples - Create an index access method heptree with - handler function heptree_handler: + Create an index access method heptree with + handler function heptree_handler: CREATE ACCESS METHOD heptree TYPE INDEX HANDLER heptree_handler; @@ -101,7 +101,7 @@ CREATE ACCESS METHOD heptree TYPE INDEX HANDLER heptree_handler; CREATE ACCESS METHOD is a - PostgreSQL extension. + PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_aggregate.sgml b/doc/src/sgml/ref/create_aggregate.sgml index ee79c90df2..3de30fa580 100644 --- a/doc/src/sgml/ref/create_aggregate.sgml +++ b/doc/src/sgml/ref/create_aggregate.sgml @@ -98,7 +98,7 @@ CREATE AGGREGATE name ( If a schema name is given (for example, CREATE AGGREGATE - myschema.myagg ...) then the aggregate function is created in the + myschema.myagg ...) then the aggregate function is created in the specified schema. Otherwise it is created in the current schema. @@ -191,57 +191,57 @@ CREATE AGGREGATE name ( is polymorphic and the state value's data type would be inadequate to pin down the result type. These extra parameters are always passed as NULL (and so the final function must not be strict when - the FINALFUNC_EXTRA option is used), but nonetheless they + the FINALFUNC_EXTRA option is used), but nonetheless they are valid parameters. The final function could for example make use - of get_fn_expr_argtype to identify the actual argument type + of get_fn_expr_argtype to identify the actual argument type in the current call. - An aggregate can optionally support moving-aggregate mode, + An aggregate can optionally support moving-aggregate mode, as described in . This requires - specifying the MSFUNC, MINVFUNC, - and MSTYPE parameters, and optionally - the MSPACE, MFINALFUNC, - MFINALFUNC_EXTRA, MFINALFUNC_MODIFY, - and MINITCOND parameters. Except for MINVFUNC, + specifying the MSFUNC, MINVFUNC, + and MSTYPE parameters, and optionally + the MSPACE, MFINALFUNC, + MFINALFUNC_EXTRA, MFINALFUNC_MODIFY, + and MINITCOND parameters. Except for MINVFUNC, these parameters work like the corresponding simple-aggregate parameters - without M; they define a separate implementation of the + without M; they define a separate implementation of the aggregate that includes an inverse transition function. The syntax with ORDER BY in the parameter list creates a special type of aggregate called an ordered-set - aggregate; or if HYPOTHETICAL is specified, then + aggregate; or if HYPOTHETICAL is specified, then a hypothetical-set aggregate is created. These aggregates operate over groups of sorted values in order-dependent ways, so that specification of an input sort order is an essential part of a - call. Also, they can have direct arguments, which are + call. Also, they can have direct arguments, which are arguments that are evaluated only once per aggregation rather than once per input row. Hypothetical-set aggregates are a subclass of ordered-set aggregates in which some of the direct arguments are required to match, in number and data types, the aggregated argument columns. This allows the values of those direct arguments to be added to the collection of - aggregate-input rows as an additional hypothetical row. + aggregate-input rows as an additional hypothetical row. - An aggregate can optionally support partial aggregation, + An aggregate can optionally support partial aggregation, as described in . - This requires specifying the COMBINEFUNC parameter. 
+ This requires specifying the COMBINEFUNC parameter. If the state_data_type - is internal, it's usually also appropriate to provide the - SERIALFUNC and DESERIALFUNC parameters so that + is internal, it's usually also appropriate to provide the + SERIALFUNC and DESERIALFUNC parameters so that parallel aggregation is possible. Note that the aggregate must also be - marked PARALLEL SAFE to enable parallel aggregation. + marked PARALLEL SAFE to enable parallel aggregation. - Aggregates that behave like MIN or MAX can + Aggregates that behave like MIN or MAX can sometimes be optimized by looking into an index instead of scanning every input row. If this aggregate can be so optimized, indicate it by - specifying a sort operator. The basic requirement is that + specifying a sort operator. The basic requirement is that the aggregate must yield the first element in the sort ordering induced by the operator; in other words: @@ -253,9 +253,9 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Further assumptions are that the aggregate ignores null inputs, and that it delivers a null result if and only if there were no non-null inputs. - Ordinarily, a data type's < operator is the proper sort - operator for MIN, and > is the proper sort - operator for MAX. Note that the optimization will never + Ordinarily, a data type's < operator is the proper sort + operator for MIN, and > is the proper sort + operator for MAX. Note that the optimization will never actually take effect unless the specified operator is the less than or greater than strategy member of a B-tree index operator class. @@ -288,10 +288,10 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - The mode of an argument: IN or VARIADIC. - (Aggregate functions do not support OUT arguments.) - If omitted, the default is IN. Only the last argument - can be marked VARIADIC. + The mode of an argument: IN or VARIADIC. + (Aggregate functions do not support OUT arguments.) + If omitted, the default is IN. Only the last argument + can be marked VARIADIC. @@ -312,7 +312,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; An input data type on which this aggregate function operates. - To create a zero-argument aggregate function, write * + To create a zero-argument aggregate function, write * in place of the list of argument specifications. (An example of such an aggregate is count(*).) @@ -323,12 +323,12 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; base_type - In the old syntax for CREATE AGGREGATE, the input data type - is specified by a basetype parameter rather than being + In the old syntax for CREATE AGGREGATE, the input data type + is specified by a basetype parameter rather than being written next to the aggregate name. Note that this syntax allows only one input parameter. To define a zero-argument aggregate function - with this syntax, specify the basetype as - "ANY" (not *). + with this syntax, specify the basetype as + "ANY" (not *). Ordered-set aggregates cannot be defined with the old syntax. @@ -339,9 +339,9 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The name of the state transition function to be called for each - input row. For a normal N-argument - aggregate function, the sfunc - must take N+1 arguments, + input row. For a normal N-argument + aggregate function, the sfunc + must take N+1 arguments, the first being of type state_data_type and the rest matching the declared input data type(s) of the aggregate. 
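   To make these basic parameters concrete, here is a minimal sketch of a
   one-argument aggregate; the name my_sum is hypothetical, and the sketch
   assumes the built-in numeric_add function as the state transition function:

CREATE AGGREGATE my_sum (numeric) (
    SFUNC = numeric_add,   -- sfunc: called as numeric_add(state, input)
    STYPE = numeric,       -- state_data_type
    INITCOND = '0'         -- initial_condition for the state value
);

   If INITCOND were omitted, the state value would start out null; because
   numeric_add is strict, the first non-null input would then simply be stored
   as the initial state.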
@@ -375,7 +375,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The approximate average size (in bytes) of the aggregate's state value. If this parameter is omitted or is zero, a default estimate is used - based on the state_data_type. + based on the state_data_type. The planner uses this value to estimate the memory required for a grouped aggregate query. The planner will consider using hash aggregation for such a query only if the hash table is estimated to fit @@ -408,7 +408,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - If FINALFUNC_EXTRA is specified, then in addition to the + If FINALFUNC_EXTRA is specified, then in addition to the final state value and any direct arguments, the final function receives extra NULL values corresponding to the aggregate's regular (aggregated) arguments. This is mainly useful to allow correct @@ -419,16 +419,16 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - FINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } + FINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } This option specifies whether the final function is a pure function - that does not modify its arguments. READ_ONLY indicates + that does not modify its arguments. READ_ONLY indicates it does not; the other two values indicate that it may change the transition state value. See below for more detail. The - default is READ_ONLY, except for ordered-set aggregates, - for which the default is READ_WRITE. + default is READ_ONLY, except for ordered-set aggregates, + for which the default is READ_WRITE. @@ -482,11 +482,11 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; An aggregate function whose state_data_type - is internal can participate in parallel aggregation only if it + is internal can participate in parallel aggregation only if it has a serialfunc function, - which must serialize the aggregate state into a bytea value for + which must serialize the aggregate state into a bytea value for transmission to another process. This function must take a single - argument of type internal and return type bytea. A + argument of type internal and return type bytea. A corresponding deserialfunc is also required. @@ -499,9 +499,9 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Deserialize a previously serialized aggregate state back into state_data_type. This - function must take two arguments of types bytea - and internal, and produce a result of type internal. - (Note: the second, internal argument is unused, but is required + function must take two arguments of types bytea + and internal, and produce a result of type internal. + (Note: the second, internal argument is unused, but is required for type safety reasons.) @@ -526,8 +526,8 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The name of the forward state transition function to be called for each input row in moving-aggregate mode. This is exactly like the regular transition function, except that its first argument and result are of - type mstate_data_type, which might be different - from state_data_type. + type mstate_data_type, which might be different + from state_data_type. @@ -538,7 +538,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The name of the inverse state transition function to be used in moving-aggregate mode. This function has the same argument and - result types as msfunc, but it is used to remove + result types as msfunc, but it is used to remove a value from the current aggregate state, rather than add a value to it. 
The inverse transition function must have the same strictness attribute as the forward state transition function. @@ -562,7 +562,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The approximate average size (in bytes) of the aggregate's state value, when using moving-aggregate mode. This works the same as - state_data_size. + state_data_size. @@ -573,22 +573,22 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The name of the final function called to compute the aggregate's result after all input rows have been traversed, when using - moving-aggregate mode. This works the same as ffunc, + moving-aggregate mode. This works the same as ffunc, except that its first argument's type - is mstate_data_type and extra dummy arguments are - specified by writing MFINALFUNC_EXTRA. - The aggregate result type determined by mffunc - or mstate_data_type must match that determined by the + is mstate_data_type and extra dummy arguments are + specified by writing MFINALFUNC_EXTRA. + The aggregate result type determined by mffunc + or mstate_data_type must match that determined by the aggregate's regular implementation. - MFINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } + MFINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } - This option is like FINALFUNC_MODIFY, but it describes + This option is like FINALFUNC_MODIFY, but it describes the behavior of the moving-aggregate final function. @@ -599,7 +599,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The initial setting for the state value, when using moving-aggregate - mode. This works the same as initial_condition. + mode. This works the same as initial_condition. @@ -608,8 +608,8 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; sort_operator - The associated sort operator for a MIN- or - MAX-like aggregate. + The associated sort operator for a MIN- or + MAX-like aggregate. This is just an operator name (possibly schema-qualified). The operator is assumed to have the same input data types as the aggregate (which must be a single-argument normal aggregate). @@ -618,14 +618,14 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - PARALLEL = { SAFE | RESTRICTED | UNSAFE } + PARALLEL = { SAFE | RESTRICTED | UNSAFE } - The meanings of PARALLEL SAFE, PARALLEL - RESTRICTED, and PARALLEL UNSAFE are the same as + The meanings of PARALLEL SAFE, PARALLEL + RESTRICTED, and PARALLEL UNSAFE are the same as in . An aggregate will not be considered for parallelization if it is marked PARALLEL - UNSAFE (which is the default!) or PARALLEL RESTRICTED. + UNSAFE (which is the default!) or PARALLEL RESTRICTED. Note that the parallel-safety markings of the aggregate's support functions are not consulted by the planner, only the marking of the aggregate itself. @@ -640,7 +640,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; For ordered-set aggregates only, this flag specifies that the aggregate arguments are to be processed according to the requirements for hypothetical-set aggregates: that is, the last few direct arguments must - match the data types of the aggregated (WITHIN GROUP) + match the data types of the aggregated (WITHIN GROUP) arguments. The HYPOTHETICAL flag has no effect on run-time behavior, only on parse-time resolution of the data types and collations of the aggregate's arguments. @@ -660,7 +660,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; In parameters that specify support function names, you can write - a schema name if needed, for example SFUNC = public.sum. 
+ a schema name if needed, for example SFUNC = public.sum. Do not write argument types there, however — the argument types of the support functions are determined from other parameters. @@ -668,7 +668,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Ordinarily, PostgreSQL functions are expected to be true functions that do not modify their input values. However, an aggregate transition - function, when used in the context of an aggregate, + function, when used in the context of an aggregate, is allowed to cheat and modify its transition-state argument in place. This can provide substantial performance benefits compared to making a fresh copy of the transition state each time. @@ -678,26 +678,26 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Likewise, while an aggregate final function is normally expected not to modify its input values, sometimes it is impractical to avoid modifying the transition-state argument. Such behavior must be declared using - the FINALFUNC_MODIFY parameter. The READ_WRITE + the FINALFUNC_MODIFY parameter. The READ_WRITE value indicates that the final function modifies the transition state in unspecified ways. This value prevents use of the aggregate as a window function, and it also prevents merging of transition states for aggregate calls that share the same input values and transition functions. - The SHARABLE value indicates that the transition function + The SHARABLE value indicates that the transition function cannot be applied after the final function, but multiple final-function calls can be performed on the ending transition state value. This value prevents use of the aggregate as a window function, but it allows merging of transition states. (That is, the optimization of interest here is not applying the same final function repeatedly, but applying different final functions to the same ending transition state value. This is allowed as - long as none of the final functions are marked READ_WRITE.) + long as none of the final functions are marked READ_WRITE.) If an aggregate supports moving-aggregate mode, it will improve calculation efficiency when the aggregate is used as a window function for a window with moving frame start (that is, a frame start mode other - than UNBOUNDED PRECEDING). Conceptually, the forward + than UNBOUNDED PRECEDING). Conceptually, the forward transition function adds input values to the aggregate's state when they enter the window frame from the bottom, and the inverse transition function removes them again when they leave the frame at the top. So, @@ -738,20 +738,20 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - The syntax for ordered-set aggregates allows VARIADIC + The syntax for ordered-set aggregates allows VARIADIC to be specified for both the last direct parameter and the last - aggregated (WITHIN GROUP) parameter. However, the - current implementation restricts use of VARIADIC + aggregated (WITHIN GROUP) parameter. However, the + current implementation restricts use of VARIADIC in two ways. First, ordered-set aggregates can only use - VARIADIC "any", not other variadic array types. - Second, if the last direct parameter is VARIADIC "any", + VARIADIC "any", not other variadic array types. + Second, if the last direct parameter is VARIADIC "any", then there can be only one aggregated parameter and it must also - be VARIADIC "any". (In the representation used in the + be VARIADIC "any". 
(In the representation used in the system catalogs, these two parameters are merged into a single - VARIADIC "any" item, since pg_proc cannot - represent functions with more than one VARIADIC parameter.) + VARIADIC "any" item, since pg_proc cannot + represent functions with more than one VARIADIC parameter.) If the aggregate is a hypothetical-set aggregate, the direct arguments - that match the VARIADIC "any" parameter are the hypothetical + that match the VARIADIC "any" parameter are the hypothetical ones; any preceding parameters represent additional direct arguments that are not constrained to match the aggregated arguments. @@ -764,7 +764,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Partial (including parallel) aggregation is currently not supported for ordered-set aggregates. Also, it will never be used for aggregate calls - that include DISTINCT or ORDER BY clauses, since + that include DISTINCT or ORDER BY clauses, since those semantics cannot be supported during partial aggregation. diff --git a/doc/src/sgml/ref/create_cast.sgml b/doc/src/sgml/ref/create_cast.sgml index a7d13edc22..89af1e5051 100644 --- a/doc/src/sgml/ref/create_cast.sgml +++ b/doc/src/sgml/ref/create_cast.sgml @@ -44,7 +44,7 @@ SELECT CAST(42 AS float8); converts the integer constant 42 to type float8 by invoking a previously specified function, in this case - float8(int4). (If no suitable cast has been defined, the + float8(int4). (If no suitable cast has been defined, the conversion fails.) @@ -64,7 +64,7 @@ SELECT CAST(42 AS float8); - You can define a cast as an I/O conversion cast by using + You can define a cast as an I/O conversion cast by using the WITH INOUT syntax. An I/O conversion cast is performed by invoking the output function of the source data type, and passing the resulting string to the input function of the target data type. @@ -75,14 +75,14 @@ SELECT CAST(42 AS float8); By default, a cast can be invoked only by an explicit cast request, - that is an explicit CAST(x AS - typename) or - x::typename + that is an explicit CAST(x AS + typename) or + x::typename construct. - If the cast is marked AS ASSIGNMENT then it can be invoked + If the cast is marked AS ASSIGNMENT then it can be invoked implicitly when assigning a value to a column of the target data type. For example, supposing that foo.f1 is a column of type text, then: @@ -90,13 +90,13 @@ SELECT CAST(42 AS float8); INSERT INTO foo (f1) VALUES (42); will be allowed if the cast from type integer to type - text is marked AS ASSIGNMENT, otherwise not. + text is marked AS ASSIGNMENT, otherwise not. (We generally use the term assignment cast to describe this kind of cast.) - If the cast is marked AS IMPLICIT then it can be invoked + If the cast is marked AS IMPLICIT then it can be invoked implicitly in any context, whether assignment or internally in an expression. (We generally use the term implicit cast to describe this kind of cast.) @@ -104,12 +104,12 @@ INSERT INTO foo (f1) VALUES (42); SELECT 2 + 4.0; - The parser initially marks the constants as being of type integer - and numeric respectively. There is no integer - + numeric operator in the system catalogs, - but there is a numeric + numeric operator. - The query will therefore succeed if a cast from integer to - numeric is available and is marked AS IMPLICIT — + The parser initially marks the constants as being of type integer + and numeric respectively. There is no integer + + numeric operator in the system catalogs, + but there is a numeric + numeric operator. 
+ The query will therefore succeed if a cast from integer to + numeric is available and is marked AS IMPLICIT — which in fact it is. The parser will apply the implicit cast and resolve the query as if it had been written @@ -118,17 +118,17 @@ SELECT CAST ( 2 AS numeric ) + 4.0; - Now, the catalogs also provide a cast from numeric to - integer. If that cast were marked AS IMPLICIT — + Now, the catalogs also provide a cast from numeric to + integer. If that cast were marked AS IMPLICIT — which it is not — then the parser would be faced with choosing between the above interpretation and the alternative of casting the - numeric constant to integer and applying the - integer + integer operator. Lacking any + numeric constant to integer and applying the + integer + integer operator. Lacking any knowledge of which choice to prefer, it would give up and declare the query ambiguous. The fact that only one of the two casts is implicit is the way in which we teach the parser to prefer resolution - of a mixed numeric-and-integer expression as - numeric; there is no built-in knowledge about that. + of a mixed numeric-and-integer expression as + numeric; there is no built-in knowledge about that. @@ -142,8 +142,8 @@ SELECT CAST ( 2 AS numeric ) + 4.0; general type category. For example, the cast from int2 to int4 can reasonably be implicit, but the cast from float8 to int4 should probably be - assignment-only. Cross-type-category casts, such as text - to int4, are best made explicit-only. + assignment-only. Cross-type-category casts, such as text + to int4, are best made explicit-only. @@ -151,8 +151,8 @@ SELECT CAST ( 2 AS numeric ) + 4.0; Sometimes it is necessary for usability or standards-compliance reasons to provide multiple implicit casts among a set of types, resulting in ambiguity that cannot be avoided as above. The parser has a fallback - heuristic based on type categories and preferred - types that can help to provide desired behavior in such cases. See + heuristic based on type categories and preferred + types that can help to provide desired behavior in such cases. See for more information. @@ -255,11 +255,11 @@ SELECT CAST ( 2 AS numeric ) + 4.0; Cast implementation functions can have one to three arguments. The first argument type must be identical to or binary-coercible from the cast's source type. The second argument, - if present, must be type integer; it receives the type - modifier associated with the destination type, or -1 + if present, must be type integer; it receives the type + modifier associated with the destination type, or -1 if there is none. The third argument, - if present, must be type boolean; it receives true - if the cast is an explicit cast, false otherwise. + if present, must be type boolean; it receives true + if the cast is an explicit cast, false otherwise. (Bizarrely, the SQL standard demands different behaviors for explicit and implicit casts in some cases. This argument is supplied for functions that must implement such casts. It is not recommended that you design @@ -316,9 +316,9 @@ SELECT CAST ( 2 AS numeric ) + 4.0; It is normally not necessary to create casts between user-defined types - and the standard string types (text, varchar, and - char(n), as well as user-defined types that - are defined to be in the string category). PostgreSQL + and the standard string types (text, varchar, and + char(n), as well as user-defined types that + are defined to be in the string category). PostgreSQL provides automatic I/O conversion casts for that. 
The automatic casts to string types are treated as assignment casts, while the automatic casts from string types are @@ -338,11 +338,11 @@ SELECT CAST ( 2 AS numeric ) + 4.0; convention of naming cast implementation functions after the target data type. Many users are used to being able to cast data types using a function-style notation, that is - typename(x). This notation is in fact + typename(x). This notation is in fact nothing more nor less than a call of the cast implementation function; it is not specially treated as a cast. If your conversion functions are not named to support this convention then you will have surprised users. - Since PostgreSQL allows overloading of the same function + Since PostgreSQL allows overloading of the same function name with different argument types, there is no difficulty in having multiple conversion functions from different types that all use the target type's name. @@ -353,14 +353,14 @@ SELECT CAST ( 2 AS numeric ) + 4.0; Actually the preceding paragraph is an oversimplification: there are two cases in which a function-call construct will be treated as a cast request without having matched it to an actual function. - If a function call name(x) does not - exactly match any existing function, but name is the name - of a data type and pg_cast provides a binary-coercible cast - to this type from the type of x, then the call will be + If a function call name(x) does not + exactly match any existing function, but name is the name + of a data type and pg_cast provides a binary-coercible cast + to this type from the type of x, then the call will be construed as a binary-coercible cast. This exception is made so that binary-coercible casts can be invoked using functional syntax, even though they lack any function. Likewise, if there is no - pg_cast entry but the cast would be to or from a string + pg_cast entry but the cast would be to or from a string type, the call will be construed as an I/O conversion cast. This exception allows I/O conversion casts to be invoked using functional syntax. @@ -372,7 +372,7 @@ SELECT CAST ( 2 AS numeric ) + 4.0; There is also an exception to the exception: I/O conversion casts from composite types to string types cannot be invoked using functional syntax, but must be written in explicit cast syntax (either - CAST or :: notation). This exception was added + CAST or :: notation). This exception was added because after the introduction of automatically-provided I/O conversion casts, it was found too easy to accidentally invoke such a cast when a function or column reference was intended. @@ -402,7 +402,7 @@ CREATE CAST (bigint AS int4) WITH FUNCTION int4(bigint) AS ASSIGNMENT; SQL standard, except that SQL does not make provisions for binary-coercible types or extra arguments to implementation functions. - AS IMPLICIT is a PostgreSQL + AS IMPLICIT is a PostgreSQL extension, too. diff --git a/doc/src/sgml/ref/create_collation.sgml b/doc/src/sgml/ref/create_collation.sgml index f88758095f..d4e99e925f 100644 --- a/doc/src/sgml/ref/create_collation.sgml +++ b/doc/src/sgml/ref/create_collation.sgml @@ -116,7 +116,7 @@ CREATE COLLATION [ IF NOT EXISTS ] name FROM Specifies the provider to use for locale services associated with this collation. Possible values - are: icu,ICU + are: icu,ICU libc. libc is the default. The available choices depend on the operating system and build options. 
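   As a short, hedged illustration of the provider setting described just
   above (the collation names are invented; the libc example assumes the named
   operating system locale exists, and the ICU example assumes the server was
   built with ICU support):

CREATE COLLATION german_libc (provider = libc, locale = 'de_DE.utf8');
CREATE COLLATION german_icu  (provider = icu,  locale = 'de-DE');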
diff --git a/doc/src/sgml/ref/create_conversion.sgml b/doc/src/sgml/ref/create_conversion.sgml index d2e2c010ef..03e0315eef 100644 --- a/doc/src/sgml/ref/create_conversion.sgml +++ b/doc/src/sgml/ref/create_conversion.sgml @@ -29,7 +29,7 @@ CREATE [ DEFAULT ] CONVERSION name CREATE CONVERSION defines a new conversion between character set encodings. Also, conversions that - are marked DEFAULT can be used for automatic encoding + are marked DEFAULT can be used for automatic encoding conversion between client and server. For this purpose, two conversions, from encoding A to B and from encoding B to A, must be defined. @@ -51,7 +51,7 @@ CREATE [ DEFAULT ] CONVERSION name - The DEFAULT clause indicates that this conversion + The DEFAULT clause indicates that this conversion is the default for this particular source to destination encoding. There should be only one default encoding in a schema for the encoding pair. @@ -137,7 +137,7 @@ conv_proc( To create a conversion from encoding UTF8 to - LATIN1 using myfunc: + LATIN1 using myfunc: CREATE CONVERSION myconv FOR 'UTF8' TO 'LATIN1' FROM myfunc; diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml index 8e2a73402f..8adfa3a37b 100644 --- a/doc/src/sgml/ref/create_database.sgml +++ b/doc/src/sgml/ref/create_database.sgml @@ -44,21 +44,21 @@ CREATE DATABASE name To create a database, you must be a superuser or have the special - CREATEDB privilege. + CREATEDB privilege. See . By default, the new database will be created by cloning the standard - system database template1. A different template can be + system database template1. A different template can be specified by writing TEMPLATE name. In particular, - by writing TEMPLATE template0, you can create a virgin + by writing TEMPLATE template0, you can create a virgin database containing only the standard objects predefined by your version of PostgreSQL. This is useful if you wish to avoid copying any installation-local objects that might have been added to - template1. + template1. @@ -115,7 +115,7 @@ CREATE DATABASE name lc_collate - Collation order (LC_COLLATE) to use in the new database. + Collation order (LC_COLLATE) to use in the new database. This affects the sort order applied to strings, e.g. in queries with ORDER BY, as well as the order used in indexes on text columns. The default is to use the collation order of the template database. @@ -127,7 +127,7 @@ CREATE DATABASE name lc_ctype - Character classification (LC_CTYPE) to use in the new + Character classification (LC_CTYPE) to use in the new database. This affects the categorization of characters, e.g. lower, upper and digit. The default is to use the character classification of the template database. See below for additional restrictions. @@ -155,7 +155,7 @@ CREATE DATABASE name If false then no one can connect to this database. The default is true, allowing connections (except as restricted by other mechanisms, - such as GRANT/REVOKE CONNECT). + such as GRANT/REVOKE CONNECT). @@ -192,12 +192,12 @@ CREATE DATABASE name Notes - CREATE DATABASE cannot be executed inside a transaction + CREATE DATABASE cannot be executed inside a transaction block. - Errors along the line of could not initialize database directory + Errors along the line of could not initialize database directory are most likely related to insufficient permissions on the data directory, a full disk, or other file system problems. 
@@ -218,26 +218,26 @@ CREATE DATABASE name - Although it is possible to copy a database other than template1 + Although it is possible to copy a database other than template1 by specifying its name as the template, this is not (yet) intended as a general-purpose COPY DATABASE facility. The principal limitation is that no other sessions can be connected to the template database while it is being copied. CREATE - DATABASE will fail if any other connection exists when it starts; + DATABASE will fail if any other connection exists when it starts; otherwise, new connections to the template database are locked out - until CREATE DATABASE completes. + until CREATE DATABASE completes. See for more information. The character set encoding specified for the new database must be - compatible with the chosen locale settings (LC_COLLATE and - LC_CTYPE). If the locale is C (or equivalently - POSIX), then all encodings are allowed, but for other + compatible with the chosen locale settings (LC_COLLATE and + LC_CTYPE). If the locale is C (or equivalently + POSIX), then all encodings are allowed, but for other locale settings there is only one encoding that will work properly. (On Windows, however, UTF-8 encoding can be used with any locale.) - CREATE DATABASE will allow superusers to specify - SQL_ASCII encoding regardless of the locale settings, + CREATE DATABASE will allow superusers to specify + SQL_ASCII encoding regardless of the locale settings, but this choice is deprecated and may result in misbehavior of character-string functions if data that is not encoding-compatible with the locale is stored in the database. @@ -245,19 +245,19 @@ CREATE DATABASE name The encoding and locale settings must match those of the template database, - except when template0 is used as template. This is because + except when template0 is used as template. This is because other databases might contain data that does not match the specified encoding, or might contain indexes whose sort ordering is affected by - LC_COLLATE and LC_CTYPE. Copying such data would + LC_COLLATE and LC_CTYPE. Copying such data would result in a database that is corrupt according to the new settings. template0, however, is known to not contain any data or indexes that would be affected. - The CONNECTION LIMIT option is only enforced approximately; + The CONNECTION LIMIT option is only enforced approximately; if two new sessions start at about the same time when just one - connection slot remains for the database, it is possible that + connection slot remains for the database, it is possible that both will fail. Also, the limit is not enforced against superusers or background worker processes. @@ -275,8 +275,8 @@ CREATE DATABASE lusiadas; - To create a database sales owned by user salesapp - with a default tablespace of salesspace: + To create a database sales owned by user salesapp + with a default tablespace of salesspace: CREATE DATABASE sales OWNER salesapp TABLESPACE salesspace; @@ -284,19 +284,19 @@ CREATE DATABASE sales OWNER salesapp TABLESPACE salesspace; - To create a database music with a different locale: + To create a database music with a different locale: CREATE DATABASE music LC_COLLATE 'sv_SE.utf8' LC_CTYPE 'sv_SE.utf8' TEMPLATE template0; - In this example, the TEMPLATE template0 clause is required if - the specified locale is different from the one in template1. + In this example, the TEMPLATE template0 clause is required if + the specified locale is different from the one in template1. 
(If it is not, then specifying the locale explicitly is redundant.) - To create a database music2 with a different locale and a + To create a database music2 with a different locale and a different character set encoding: CREATE DATABASE music2 diff --git a/doc/src/sgml/ref/create_domain.sgml b/doc/src/sgml/ref/create_domain.sgml index 85ed57dd08..705ff55c49 100644 --- a/doc/src/sgml/ref/create_domain.sgml +++ b/doc/src/sgml/ref/create_domain.sgml @@ -45,7 +45,7 @@ CREATE DOMAIN name [ AS ] If a schema name is given (for example, CREATE DOMAIN - myschema.mydomain ...) then the domain is created in the + myschema.mydomain ...) then the domain is created in the specified schema. Otherwise it is created in the current schema. The domain name must be unique among the types and domains existing in its schema. @@ -95,7 +95,7 @@ CREATE DOMAIN name [ AS ] An optional collation for the domain. If no collation is specified, the underlying data type's default collation is used. - The underlying type must be collatable if COLLATE + The underlying type must be collatable if COLLATE is specified. @@ -106,7 +106,7 @@ CREATE DOMAIN name [ AS ] - The DEFAULT clause specifies a default value for + The DEFAULT clause specifies a default value for columns of the domain data type. The value is any variable-free expression (but subqueries are not allowed). The data type of the default expression must match the data @@ -136,7 +136,7 @@ CREATE DOMAIN name [ AS ] - NOT NULL + NOT NULL Values of this domain are prevented from being null @@ -146,7 +146,7 @@ CREATE DOMAIN name [ AS ] - NULL + NULL Values of this domain are allowed to be null. This is the default. @@ -163,10 +163,10 @@ CREATE DOMAIN name [ AS ] CHECK (expression) - CHECK clauses specify integrity constraints or tests + CHECK clauses specify integrity constraints or tests which values of the domain must satisfy. Each constraint must be an expression - producing a Boolean result. It should use the key word VALUE + producing a Boolean result. It should use the key word VALUE to refer to the value being tested. Expressions evaluating to TRUE or UNKNOWN succeed. If the expression produces a FALSE result, an error is reported and the value is not allowed to be converted @@ -175,13 +175,13 @@ CREATE DOMAIN name [ AS ] Currently, CHECK expressions cannot contain - subqueries nor refer to variables other than VALUE. + subqueries nor refer to variables other than VALUE. When a domain has multiple CHECK constraints, they will be tested in alphabetical order by name. - (PostgreSQL versions before 9.5 did not honor any + (PostgreSQL versions before 9.5 did not honor any particular firing order for CHECK constraints.) @@ -193,7 +193,7 @@ CREATE DOMAIN name [ AS ] Notes - Domain constraints, particularly NOT NULL, are checked when + Domain constraints, particularly NOT NULL, are checked when converting a value to the domain type. It is possible for a column that is nominally of the domain type to read as null despite there being such a constraint. For example, this can happen in an outer-join query, if @@ -211,7 +211,7 @@ INSERT INTO tab (domcol) VALUES ((SELECT domcol FROM tab WHERE false)); It is very difficult to avoid such problems, because of SQL's general assumption that a null value is a valid value of every data type. 
Best practice therefore is to design a domain's constraints so that a null value is allowed, - and then to apply column NOT NULL constraints to columns of + and then to apply column NOT NULL constraints to columns of the domain type as needed, rather than directly to the domain type. diff --git a/doc/src/sgml/ref/create_event_trigger.sgml b/doc/src/sgml/ref/create_event_trigger.sgml index 7decfbb983..9652f02412 100644 --- a/doc/src/sgml/ref/create_event_trigger.sgml +++ b/doc/src/sgml/ref/create_event_trigger.sgml @@ -33,7 +33,7 @@ CREATE EVENT TRIGGER name CREATE EVENT TRIGGER creates a new event trigger. - Whenever the designated event occurs and the WHEN condition + Whenever the designated event occurs and the WHEN condition associated with the trigger, if any, is satisfied, the trigger function will be executed. For a general introduction to event triggers, see . The user who creates an event trigger @@ -85,8 +85,8 @@ CREATE EVENT TRIGGER name A list of values for the associated filter_variable - for which the trigger should fire. For TAG, this means a - list of command tags (e.g. 'DROP FUNCTION'). + for which the trigger should fire. For TAG, this means a + list of command tags (e.g. 'DROP FUNCTION'). diff --git a/doc/src/sgml/ref/create_extension.sgml b/doc/src/sgml/ref/create_extension.sgml index 14e910115a..a3a7892812 100644 --- a/doc/src/sgml/ref/create_extension.sgml +++ b/doc/src/sgml/ref/create_extension.sgml @@ -39,7 +39,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name Loading an extension essentially amounts to running the extension's script - file. The script will typically create new SQL objects such as + file. The script will typically create new SQL objects such as functions, data types, operators and index support methods. CREATE EXTENSION additionally records the identities of all the created objects, so that they can be dropped again if @@ -62,7 +62,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if an extension with the same name already @@ -97,17 +97,17 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name - If the extension specifies a schema parameter in its + If the extension specifies a schema parameter in its control file, then that schema cannot be overridden with - a SCHEMA clause. Normally, an error will be raised if - a SCHEMA clause is given and it conflicts with the - extension's schema parameter. However, if - the CASCADE clause is also given, + a SCHEMA clause. Normally, an error will be raised if + a SCHEMA clause is given and it conflicts with the + extension's schema parameter. However, if + the CASCADE clause is also given, then schema_name is ignored when it conflicts. The given schema_name will be used for installation of any needed extensions that do not - specify schema in their control files. + specify schema in their control files. @@ -134,13 +134,13 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name old_version - FROM old_version + FROM old_version must be specified when, and only when, you are attempting to install - an extension that replaces an old style module that is just + an extension that replaces an old style module that is just a collection of objects not packaged into an extension. This option - causes CREATE EXTENSION to run an alternative installation + causes CREATE EXTENSION to run an alternative installation script that absorbs the existing objects into the extension, instead - of creating new objects. Be careful that SCHEMA specifies + of creating new objects. 
Be careful that SCHEMA specifies the schema containing these pre-existing objects. @@ -150,7 +150,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name extension's author, and might vary if there is more than one version of the old-style module that can be upgraded into an extension. For the standard additional modules supplied with pre-9.1 - PostgreSQL, use unpackaged + PostgreSQL, use unpackaged for old_version when updating a module to extension style. @@ -158,12 +158,12 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name - CASCADE + CASCADE Automatically install any extensions that this extension depends on that are not already installed. Their dependencies are likewise - automatically installed, recursively. The SCHEMA clause, + automatically installed, recursively. The SCHEMA clause, if given, applies to all extensions that get installed this way. Other options of the statement are not applied to automatically-installed extensions; in particular, their default @@ -178,7 +178,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name Notes - Before you can use CREATE EXTENSION to load an extension + Before you can use CREATE EXTENSION to load an extension into a database, the extension's supporting files must be installed. Information about installing the extensions supplied with PostgreSQL can be found in @@ -211,13 +211,13 @@ CREATE EXTENSION hstore; - Update a pre-9.1 installation of hstore into + Update a pre-9.1 installation of hstore into extension style: CREATE EXTENSION hstore SCHEMA public FROM unpackaged; Be careful to specify the schema in which you installed the existing - hstore objects. + hstore objects. @@ -225,7 +225,7 @@ CREATE EXTENSION hstore SCHEMA public FROM unpackaged; Compatibility - CREATE EXTENSION is a PostgreSQL + CREATE EXTENSION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml index 1161e05d1c..87403a55e3 100644 --- a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml @@ -117,7 +117,7 @@ CREATE FOREIGN DATA WRAPPER name Notes - PostgreSQL's foreign-data functionality is still under + PostgreSQL's foreign-data functionality is still under active development. Optimization of queries is primitive (and mostly left to the wrapper, too). Thus, there is considerable room for future performance improvements. @@ -128,22 +128,22 @@ CREATE FOREIGN DATA WRAPPER name Examples - Create a useless foreign-data wrapper dummy: + Create a useless foreign-data wrapper dummy: CREATE FOREIGN DATA WRAPPER dummy; - Create a foreign-data wrapper file with - handler function file_fdw_handler: + Create a foreign-data wrapper file with + handler function file_fdw_handler: CREATE FOREIGN DATA WRAPPER file HANDLER file_fdw_handler; - Create a foreign-data wrapper mywrapper with some + Create a foreign-data wrapper mywrapper with some options: CREATE FOREIGN DATA WRAPPER mywrapper @@ -159,7 +159,7 @@ CREATE FOREIGN DATA WRAPPER mywrapper 9075-9 (SQL/MED), with the exception that the HANDLER and VALIDATOR clauses are extensions and the standard clauses LIBRARY and LANGUAGE - are not implemented in PostgreSQL. + are not implemented in PostgreSQL. 
diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml index f514b2d59f..47705fd187 100644 --- a/doc/src/sgml/ref/create_foreign_table.sgml +++ b/doc/src/sgml/ref/create_foreign_table.sgml @@ -62,7 +62,7 @@ CHECK ( expression ) [ NO INHERIT ] If a schema name is given (for example, CREATE FOREIGN TABLE - myschema.mytable ...) then the table is created in the specified + myschema.mytable ...) then the table is created in the specified schema. Otherwise it is created in the current schema. The name of the foreign table must be distinct from the name of any other foreign table, table, sequence, index, @@ -95,7 +95,7 @@ CHECK ( expression ) [ NO INHERIT ] - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a relation with the same name already exists. @@ -140,7 +140,7 @@ CHECK ( expression ) [ NO INHERIT ] COLLATE collation - The COLLATE clause assigns a collation to + The COLLATE clause assigns a collation to the column (which must be of a collatable data type). If not specified, the column data type's default collation is used. @@ -151,7 +151,7 @@ CHECK ( expression ) [ NO INHERIT ] INHERITS ( parent_table [, ... ] ) - The optional INHERITS clause specifies a list of + The optional INHERITS clause specifies a list of tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables. See the similar form of @@ -166,7 +166,7 @@ CHECK ( expression ) [ NO INHERIT ] An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, - so constraint names like col must be positive can be used + so constraint names like col must be positive can be used to communicate helpful constraint information to client applications. (Double-quotes are needed to specify constraint names that contain spaces.) If a constraint name is not specified, the system generates a name. @@ -175,7 +175,7 @@ CHECK ( expression ) [ NO INHERIT ] - NOT NULL + NOT NULL The column is not allowed to contain null values. @@ -184,7 +184,7 @@ CHECK ( expression ) [ NO INHERIT ] - NULL + NULL The column is allowed to contain null values. This is the default. @@ -202,7 +202,7 @@ CHECK ( expression ) [ NO INHERIT ] CHECK ( expression ) [ NO INHERIT ] - The CHECK clause specifies an expression producing a + The CHECK clause specifies an expression producing a Boolean result which each row in the foreign table is expected to satisfy; that is, the expression should produce TRUE or UNKNOWN, never FALSE, for all rows in the foreign table. @@ -219,7 +219,7 @@ CHECK ( expression ) [ NO INHERIT ] - A constraint marked with NO INHERIT will not propagate to + A constraint marked with NO INHERIT will not propagate to child tables. @@ -230,7 +230,7 @@ CHECK ( expression ) [ NO INHERIT ] default_expr - The DEFAULT clause assigns a default data value for + The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). 
The @@ -279,9 +279,9 @@ CHECK ( expression ) [ NO INHERIT ] Notes - Constraints on foreign tables (such as CHECK - or NOT NULL clauses) are not enforced by the - core PostgreSQL system, and most foreign data wrappers + Constraints on foreign tables (such as CHECK + or NOT NULL clauses) are not enforced by the + core PostgreSQL system, and most foreign data wrappers do not attempt to enforce them either; that is, the constraint is simply assumed to hold true. There would be little point in such enforcement since it would only apply to rows inserted or updated via @@ -300,7 +300,7 @@ CHECK ( expression ) [ NO INHERIT ] - Although PostgreSQL does not attempt to enforce + Although PostgreSQL does not attempt to enforce constraints on foreign tables, it does assume that they are correct for purposes of query optimization. If there are rows visible in the foreign table that do not satisfy a declared constraint, queries on @@ -314,8 +314,8 @@ CHECK ( expression ) [ NO INHERIT ] Examples - Create foreign table films, which will be accessed through - the server film_server: + Create foreign table films, which will be accessed through + the server film_server: CREATE FOREIGN TABLE films ( @@ -330,9 +330,9 @@ SERVER film_server; - Create foreign table measurement_y2016m07, which will be - accessed through the server server_07, as a partition - of the range partitioned table measurement: + Create foreign table measurement_y2016m07, which will be + accessed through the server server_07, as a partition + of the range partitioned table measurement: CREATE FOREIGN TABLE measurement_y2016m07 @@ -348,10 +348,10 @@ CREATE FOREIGN TABLE measurement_y2016m07 The CREATE FOREIGN TABLE command largely conforms to the SQL standard; however, much as with - CREATE TABLE, - NULL constraints and zero-column foreign tables are permitted. + CREATE TABLE, + NULL constraints and zero-column foreign tables are permitted. The ability to specify column default values is also - a PostgreSQL extension. Table inheritance, in the form + a PostgreSQL extension. Table inheritance, in the form defined by PostgreSQL, is nonstandard. diff --git a/doc/src/sgml/ref/create_function.sgml b/doc/src/sgml/ref/create_function.sgml index 072e033687..97cb9b7fc8 100644 --- a/doc/src/sgml/ref/create_function.sgml +++ b/doc/src/sgml/ref/create_function.sgml @@ -58,7 +58,7 @@ CREATE [ OR REPLACE ] FUNCTION The name of the new function must not match any existing function with the same input argument types in the same schema. However, functions of different argument types can share a name (this is - called overloading). + called overloading). @@ -68,13 +68,13 @@ CREATE [ OR REPLACE ] FUNCTION tried, you would actually be creating a new, distinct function). Also, CREATE OR REPLACE FUNCTION will not let you change the return type of an existing function. To do that, - you must drop and recreate the function. (When using OUT + you must drop and recreate the function. (When using OUT parameters, that means you cannot change the types of any - OUT parameters except by dropping the function.) + OUT parameters except by dropping the function.) - When CREATE OR REPLACE FUNCTION is used to replace an + When CREATE OR REPLACE FUNCTION is used to replace an existing function, the ownership and permissions of the function do not change. All other function properties are assigned the values specified or implied in the command. You must own the function @@ -87,7 +87,7 @@ CREATE [ OR REPLACE ] FUNCTION triggers, etc. that refer to the old function. 
Use CREATE OR REPLACE FUNCTION to change a function definition without breaking objects that refer to the function. - Also, ALTER FUNCTION can be used to change most of the + Also, ALTER FUNCTION can be used to change most of the auxiliary properties of an existing function. @@ -121,12 +121,12 @@ CREATE [ OR REPLACE ] FUNCTION - The mode of an argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. - Only OUT arguments can follow a VARIADIC one. - Also, OUT and INOUT arguments cannot be used - together with the RETURNS TABLE notation. + The mode of an argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. + Only OUT arguments can follow a VARIADIC one. + Also, OUT and INOUT arguments cannot be used + together with the RETURNS TABLE notation. @@ -160,7 +160,7 @@ CREATE [ OR REPLACE ] FUNCTION Depending on the implementation language it might also be allowed - to specify pseudo-types such as cstring. + to specify pseudo-types such as cstring. Pseudo-types indicate that the actual argument type is either incompletely specified, or outside the set of ordinary SQL data types. @@ -183,7 +183,7 @@ CREATE [ OR REPLACE ] FUNCTION An expression to be used as default value if the parameter is not specified. The expression has to be coercible to the argument type of the parameter. - Only input (including INOUT) parameters can have a default + Only input (including INOUT) parameters can have a default value. All input parameters following a parameter with a default value must have default values as well. @@ -199,15 +199,15 @@ CREATE [ OR REPLACE ] FUNCTION can be a base, composite, or domain type, or can reference the type of a table column. Depending on the implementation language it might also be allowed - to specify pseudo-types such as cstring. + to specify pseudo-types such as cstring. If the function is not supposed to return a value, specify - void as the return type. + void as the return type. - When there are OUT or INOUT parameters, - the RETURNS clause can be omitted. If present, it + When there are OUT or INOUT parameters, + the RETURNS clause can be omitted. If present, it must agree with the result type implied by the output parameters: - RECORD if there are multiple output parameters, or + RECORD if there are multiple output parameters, or the same type as the single output parameter. @@ -229,10 +229,10 @@ CREATE [ OR REPLACE ] FUNCTION - The name of an output column in the RETURNS TABLE + The name of an output column in the RETURNS TABLE syntax. This is effectively another way of declaring a named - OUT parameter, except that RETURNS TABLE - also implies RETURNS SETOF. + OUT parameter, except that RETURNS TABLE + also implies RETURNS SETOF. @@ -242,7 +242,7 @@ CREATE [ OR REPLACE ] FUNCTION - The data type of an output column in the RETURNS TABLE + The data type of an output column in the RETURNS TABLE syntax. @@ -284,9 +284,9 @@ CREATE [ OR REPLACE ] FUNCTION WINDOW indicates that the function is a - window function rather than a plain function. + window function rather than a plain function. This is currently only useful for functions written in C. - The WINDOW attribute cannot be changed when + The WINDOW attribute cannot be changed when replacing an existing function definition. @@ -321,20 +321,20 @@ CREATE [ OR REPLACE ] FUNCTION result could change across SQL statements. This is the appropriate selection for functions whose results depend on database lookups, parameter variables (such as the current time zone), etc. 
(It is - inappropriate for AFTER triggers that wish to + inappropriate for AFTER triggers that wish to query rows modified by the current command.) Also note - that the current_timestamp family of functions qualify + that the current_timestamp family of functions qualify as stable, since their values do not change within a transaction. VOLATILE indicates that the function value can change even within a single table scan, so no optimizations can be made. Relatively few database functions are volatile in this sense; - some examples are random(), currval(), - timeofday(). But note that any function that has + some examples are random(), currval(), + timeofday(). But note that any function that has side-effects must be classified volatile, even if its result is quite predictable, to prevent calls from being optimized away; an example is - setval(). + setval(). @@ -430,11 +430,11 @@ CREATE [ OR REPLACE ] FUNCTION Functions should be labeled parallel unsafe if they modify any database state, or if they make changes to the transaction such as using sub-transactions, or if they access sequences or attempt to make - persistent changes to settings (e.g. setval). They should + persistent changes to settings (e.g. setval). They should be labeled as parallel restricted if they access temporary tables, client connection state, cursors, prepared statements, or miscellaneous backend-local state which the system cannot synchronize in parallel mode - (e.g. setseed cannot be executed other than by the group + (e.g. setseed cannot be executed other than by the group leader because a change made by another process would not be reflected in the leader). In general, if a function is labeled as being safe when it is restricted or unsafe, or if it is labeled as being restricted when @@ -443,7 +443,7 @@ CREATE [ OR REPLACE ] FUNCTION exhibit totally undefined behavior if mislabeled, since there is no way for the system to protect itself against arbitrary C code, but in most likely cases the result will be no worse than for any other function. - If in doubt, functions should be labeled as UNSAFE, which is + If in doubt, functions should be labeled as UNSAFE, which is the default. @@ -483,23 +483,23 @@ CREATE [ OR REPLACE ] FUNCTION value - The SET clause causes the specified configuration + The SET clause causes the specified configuration parameter to be set to the specified value when the function is entered, and then restored to its prior value when the function exits. - SET FROM CURRENT saves the value of the parameter that - is current when CREATE FUNCTION is executed as the value + SET FROM CURRENT saves the value of the parameter that + is current when CREATE FUNCTION is executed as the value to be applied when the function is entered. - If a SET clause is attached to a function, then - the effects of a SET LOCAL command executed inside the + If a SET clause is attached to a function, then + the effects of a SET LOCAL command executed inside the function for the same variable are restricted to the function: the configuration parameter's prior value is still restored at function exit. However, an ordinary - SET command (without LOCAL) overrides the - SET clause, much as it would do for a previous SET - LOCAL command: the effects of such a command will persist after + SET command (without LOCAL) overrides the + SET clause, much as it would do for a previous SET + LOCAL command: the effects of such a command will persist after function exit, unless the current transaction is rolled back. 
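   A brief sketch tying together the volatility and SET clauses discussed
   above (the function name is hypothetical):

CREATE FUNCTION report_day() RETURNS date
    LANGUAGE sql STABLE
    SET timezone = 'UTC'    -- restored to its previous value when the function exits
    AS $$ SELECT current_date $$;

   Marking the function STABLE is appropriate because current_date does not
   change within a transaction, and the SET clause pins the time zone only for
   the duration of each call.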
@@ -570,7 +570,7 @@ CREATE [ OR REPLACE ] FUNCTION - isStrict + isStrict Equivalent to STRICT or RETURNS NULL ON NULL INPUT. @@ -579,7 +579,7 @@ CREATE [ OR REPLACE ] FUNCTION - isCachable + isCachable isCachable is an obsolete equivalent of IMMUTABLE; it's still accepted for @@ -619,7 +619,7 @@ CREATE [ OR REPLACE ] FUNCTION Two functions are considered the same if they have the same names and - input argument types, ignoring any OUT + input argument types, ignoring any OUT parameters. Thus for example these declarations conflict: CREATE FUNCTION foo(int) ... @@ -635,7 +635,7 @@ CREATE FUNCTION foo(int, out text) ... CREATE FUNCTION foo(int) ... CREATE FUNCTION foo(int, int default 42) ... - A call foo(10) will fail due to the ambiguity about which + A call foo(10) will fail due to the ambiguity about which function should be called. @@ -648,16 +648,16 @@ CREATE FUNCTION foo(int, int default 42) ... The full SQL type syntax is allowed for declaring a function's arguments and return value. However, parenthesized type modifiers (e.g., the precision field for - type numeric) are discarded by CREATE FUNCTION. + type numeric) are discarded by CREATE FUNCTION. Thus for example - CREATE FUNCTION foo (varchar(10)) ... + CREATE FUNCTION foo (varchar(10)) ... is exactly the same as - CREATE FUNCTION foo (varchar) .... + CREATE FUNCTION foo (varchar) .... When replacing an existing function with CREATE OR REPLACE - FUNCTION, there are restrictions on changing parameter names. + FUNCTION, there are restrictions on changing parameter names. You cannot change the name already assigned to any input parameter (although you can add names to parameters that had none before). If there is more than one output parameter, you cannot change the @@ -668,9 +668,9 @@ CREATE FUNCTION foo(int, int default 42) ... - If a function is declared STRICT with a VARIADIC + If a function is declared STRICT with a VARIADIC argument, the strictness check tests that the variadic array as - a whole is non-null. The function will still be called if the + a whole is non-null. The function will still be called if the array has null elements. @@ -723,7 +723,7 @@ CREATE FUNCTION dup(int) RETURNS dup_result SELECT * FROM dup(42); - Another way to return multiple columns is to use a TABLE + Another way to return multiple columns is to use a TABLE function: CREATE FUNCTION dup(int) RETURNS TABLE(f1 int, f2 text) @@ -732,8 +732,8 @@ CREATE FUNCTION dup(int) RETURNS TABLE(f1 int, f2 text) SELECT * FROM dup(42); - However, a TABLE function is different from the - preceding examples, because it actually returns a set + However, a TABLE function is different from the + preceding examples, because it actually returns a set of records, not just one record. @@ -742,8 +742,8 @@ SELECT * FROM dup(42); Writing <literal>SECURITY DEFINER</literal> Functions Safely - search_path configuration parameter - use in securing functions + search_path configuration parameter + use in securing functions @@ -758,7 +758,7 @@ SELECT * FROM dup(42); temporary-table schema, which is searched first by default, and is normally writable by anyone. A secure arrangement can be obtained by forcing the temporary schema to be searched last. To do this, - write pg_temppg_tempsecuring functions as the last entry in search_path. + write pg_temppg_tempsecuring functions as the last entry in search_path. This function illustrates safe usage: @@ -778,27 +778,27 @@ $$ LANGUAGE plpgsql SET search_path = admin, pg_temp; - This function's intention is to access a table admin.pwds. 
- But without the SET clause, or with a SET clause - mentioning only admin, the function could be subverted by - creating a temporary table named pwds. + This function's intention is to access a table admin.pwds. + But without the SET clause, or with a SET clause + mentioning only admin, the function could be subverted by + creating a temporary table named pwds. Before PostgreSQL version 8.3, the - SET clause was not available, and so older functions may + SET clause was not available, and so older functions may contain rather complicated logic to save, set, and restore - search_path. The SET clause is far easier + search_path. The SET clause is far easier to use for this purpose. Another point to keep in mind is that by default, execute privilege - is granted to PUBLIC for newly created functions + is granted to PUBLIC for newly created functions (see for more information). Frequently you will wish to restrict use of a security definer function to only some users. To do that, you must revoke - the default PUBLIC privileges and then grant execute + the default PUBLIC privileges and then grant execute privilege selectively. To avoid having a window where the new function is accessible to all, create it and set the privileges within a single transaction. For example: diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index a462be790f..bb2601dc8c 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -51,8 +51,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] upper(col) would allow the clause - WHERE upper(col) = 'JIM' to use an index. + upper(col) would allow the clause + WHERE upper(col) = 'JIM' to use an index. @@ -85,7 +85,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] All functions and operators used in an index definition must be - immutable, that is, their results must depend only on + immutable, that is, their results must depend only on their arguments and never on any outside influence (such as the contents of another table or the current time). This restriction ensures that the behavior of the index is well-defined. To use a @@ -115,7 +115,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] CONCURRENTLY - When this option is used, PostgreSQL will build the + When this option is used, PostgreSQL will build the index without taking any locks that prevent concurrent inserts, updates, or deletes on the table; whereas a standard index build locks out writes (but not reads) on the table until it's done. @@ -144,7 +144,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] The name of the index to be created. No schema name can be included here; the index is always created in the same schema as its parent - table. If the name is omitted, PostgreSQL chooses a + table. If the name is omitted, PostgreSQL chooses a suitable name based on the parent table's name and the indexed column name(s). @@ -166,8 +166,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] The name of the index method to be used. Choices are btree, hash, - gist, spgist, gin, and - brin. + gist, spgist, gin, and + brin. The default method is btree. @@ -217,7 +217,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - ASC + ASC Specifies ascending sort order (which is the default). @@ -226,7 +226,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - DESC + DESC Specifies descending sort order. 
@@ -235,21 +235,21 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - NULLS FIRST + NULLS FIRST Specifies that nulls sort before non-nulls. This is the default - when DESC is specified. + when DESC is specified. - NULLS LAST + NULLS LAST Specifies that nulls sort after non-nulls. This is the default - when DESC is not specified. + when DESC is not specified. @@ -292,15 +292,15 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] Index Storage Parameters - The optional WITH clause specifies storage - parameters for the index. Each index method has its own set of allowed + The optional WITH clause specifies storage + parameters for the index. Each index method has its own set of allowed storage parameters. The B-tree, hash, GiST and SP-GiST index methods all accept this parameter: - fillfactor + fillfactor The fillfactor for an index is a percentage that determines how full @@ -327,14 +327,14 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - buffering + buffering Determines whether the buffering build technique described in is used to build the index. With - OFF it is disabled, with ON it is enabled, and - with AUTO it is initially disabled, but turned on - on-the-fly once the index size reaches . The default is AUTO. + OFF it is disabled, with ON it is enabled, and + with AUTO it is initially disabled, but turned on + on-the-fly once the index size reaches . The default is AUTO. @@ -346,23 +346,23 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - fastupdate + fastupdate This setting controls usage of the fast update technique described in . It is a Boolean parameter: - ON enables fast update, OFF disables it. - (Alternative spellings of ON and OFF are + ON enables fast update, OFF disables it. + (Alternative spellings of ON and OFF are allowed as described in .) The - default is ON. + default is ON. - Turning fastupdate off via ALTER INDEX prevents + Turning fastupdate off via ALTER INDEX prevents future insertions from going into the list of pending index entries, but does not in itself flush previous entries. You might want to - VACUUM the table or call gin_clean_pending_list + VACUUM the table or call gin_clean_pending_list function afterward to ensure the pending list is emptied. @@ -371,7 +371,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - gin_pending_list_limit + gin_pending_list_limit Custom parameter. @@ -382,23 +382,23 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - BRIN indexes accept different parameters: + BRIN indexes accept different parameters: - pages_per_range + pages_per_range Defines the number of table blocks that make up one block range for - each entry of a BRIN index (see - for more details). The default is 128. + each entry of a BRIN index (see + for more details). The default is 128. - autosummarize + autosummarize Defines whether a summarization run is invoked for the previous page @@ -419,7 +419,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] Creating an index can interfere with regular operation of a database. - Normally PostgreSQL locks the table to be indexed against + Normally PostgreSQL locks the table to be indexed against writes and performs the entire index build with a single scan of the table. 
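Tying the storage parameters listed above to concrete commands, a minimal sketch (relation names hypothetical):

-- B-tree with a looser fillfactor for update-heavy data.
CREATE INDEX acct_balance_idx ON accounts (balance) WITH (fillfactor = 70);
-- BRIN with a smaller block range than the default of 128.
CREATE INDEX meas_logged_brin ON measurements USING brin (logged_at) WITH (pages_per_range = 32);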
Other transactions can still read the table, but if they try to insert, update, or delete rows in the table they will block until the @@ -430,11 +430,11 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - PostgreSQL supports building indexes without locking + PostgreSQL supports building indexes without locking out writes. This method is invoked by specifying the - CONCURRENTLY option of CREATE INDEX. + CONCURRENTLY option of CREATE INDEX. When this option is used, - PostgreSQL must perform two scans of the table, and in + PostgreSQL must perform two scans of the table, and in addition it must wait for all existing transactions that could potentially modify or use the index to terminate. Thus this method requires more total work than a standard index build and takes @@ -452,7 +452,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] ) predating the second scan to terminate. Then finally the index can be marked ready for use, - and the CREATE INDEX command terminates. + and the CREATE INDEX command terminates. Even then, however, the index may not be immediately usable for queries: in the worst case, it cannot be used as long as transactions exist that predate the start of the index build. @@ -460,11 +460,11 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] If a problem arises while scanning the table, such as a deadlock or a - uniqueness violation in a unique index, the CREATE INDEX - command will fail but leave behind an invalid index. This index + uniqueness violation in a unique index, the CREATE INDEX + command will fail but leave behind an invalid index. This index will be ignored for querying purposes because it might be incomplete; - however it will still consume update overhead. The psql - \d command will report such an index as INVALID: + however it will still consume update overhead. The psql + \d command will report such an index as INVALID: postgres=# \d tab @@ -478,8 +478,8 @@ Indexes: The recommended recovery method in such cases is to drop the index and try again to perform - CREATE INDEX CONCURRENTLY. (Another possibility is to rebuild - the index with REINDEX. However, since REINDEX + CREATE INDEX CONCURRENTLY. (Another possibility is to rebuild + the index with REINDEX. However, since REINDEX does not support concurrent builds, this option is unlikely to seem attractive.) @@ -490,7 +490,7 @@ Indexes: when the second table scan begins. This means that constraint violations could be reported in other queries prior to the index becoming available for use, or even in cases where the index build eventually fails. Also, - if a failure does occur in the second scan, the invalid index + if a failure does occur in the second scan, the invalid index continues to enforce its uniqueness constraint afterwards. @@ -505,8 +505,8 @@ Indexes: same table to occur in parallel, but only one concurrent index build can occur on a table at a time. In both cases, no other types of schema modification on the table are allowed meanwhile. Another difference - is that a regular CREATE INDEX command can be performed within - a transaction block, but CREATE INDEX CONCURRENTLY cannot. + is that a regular CREATE INDEX command can be performed within + a transaction block, but CREATE INDEX CONCURRENTLY cannot. 
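A sketch of the recommended recovery path after a failed concurrent build (the index and column names are hypothetical):

DROP INDEX CONCURRENTLY tab_col_idx;
CREATE INDEX CONCURRENTLY tab_col_idx ON tab (col);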
@@ -547,17 +547,17 @@ Indexes: For index methods that support ordered scans (currently, only B-tree), - the optional clauses ASC, DESC, NULLS - FIRST, and/or NULLS LAST can be specified to modify + the optional clauses ASC, DESC, NULLS + FIRST, and/or NULLS LAST can be specified to modify the sort ordering of the index. Since an ordered index can be scanned either forward or backward, it is not normally useful to create a - single-column DESC index — that sort ordering is already + single-column DESC index — that sort ordering is already available with a regular index. The value of these options is that multicolumn indexes can be created that match the sort ordering requested by a mixed-ordering query, such as SELECT ... ORDER BY x ASC, y - DESC. The NULLS options are useful if you need to support - nulls sort low behavior, rather than the default nulls - sort high, in queries that depend on indexes to avoid sorting steps. + DESC. The NULLS options are useful if you need to support + nulls sort low behavior, rather than the default nulls + sort high, in queries that depend on indexes to avoid sorting steps. @@ -577,8 +577,8 @@ Indexes: Prior releases of PostgreSQL also had an R-tree index method. This method has been removed because it had no significant advantages over the GiST method. - If USING rtree is specified, CREATE INDEX - will interpret it as USING gist, to simplify conversion + If USING rtree is specified, CREATE INDEX + will interpret it as USING gist, to simplify conversion of old databases to GiST. @@ -595,13 +595,13 @@ CREATE UNIQUE INDEX title_idx ON films (title); - To create an index on the expression lower(title), + To create an index on the expression lower(title), allowing efficient case-insensitive searches: CREATE INDEX ON films ((lower(title))); (In this example we have chosen to omit the index name, so the system - will choose a name, typically films_lower_idx.) + will choose a name, typically films_lower_idx.) @@ -626,16 +626,16 @@ CREATE UNIQUE INDEX title_idx ON films (title) WITH (fillfactor = 70); - To create a GIN index with fast updates disabled: + To create a GIN index with fast updates disabled: CREATE INDEX gin_idx ON documents_table USING GIN (locations) WITH (fastupdate = off); - To create an index on the column code in the table - films and have the index reside in the tablespace - indexspace: + To create an index on the column code in the table + films and have the index reside in the tablespace + indexspace: CREATE INDEX code_idx ON films (code) TABLESPACE indexspace; diff --git a/doc/src/sgml/ref/create_language.sgml b/doc/src/sgml/ref/create_language.sgml index 75165b677f..20d56a766f 100644 --- a/doc/src/sgml/ref/create_language.sgml +++ b/doc/src/sgml/ref/create_language.sgml @@ -40,14 +40,14 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE not CREATE LANGUAGE. Direct use of CREATE LANGUAGE should now be confined to - extension installation scripts. If you have a bare + extension installation scripts. If you have a bare language in your database, perhaps as a result of an upgrade, you can convert it to an extension using - CREATE EXTENSION langname FROM + CREATE EXTENSION langname FROM unpackaged. @@ -67,11 +67,11 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE pg_pltemplate catalog and is marked - as allowed to be created by database owners (tmpldbacreate + as allowed to be created by database owners (tmpldbacreate is true). 
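Relating to the conversion note above, a minimal sketch assuming the bare language in question is plpgsql:

CREATE EXTENSION plpgsql FROM unpackaged;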
The default is that trusted languages can be created by database owners, but this can be adjusted by superusers by modifying the contents of pg_pltemplate. @@ -101,9 +101,9 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE internal, which will be the DO command's + type internal, which will be the DO command's internal representation, and it will typically return - void. The return value of the handler is ignored. + void. The return value of the handler is ignored. @@ -204,7 +204,7 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE - The TRUSTED option and the support function name(s) are + The TRUSTED option and the support function name(s) are ignored if the server has an entry for the specified language - name in pg_pltemplate. + name in pg_pltemplate. @@ -243,7 +243,7 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE pg_pltemplate. But when there is an entry, + in pg_pltemplate. But when there is an entry, the functions need not already exist; they will be automatically defined if not present in the database. - (This might result in CREATE LANGUAGE failing, if the + (This might result in CREATE LANGUAGE failing, if the shared library that implements the language is not available in the installation.) @@ -269,11 +269,11 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE name [ DEFAUL CREATE OPERATOR CLASS creates a new operator class. An operator class defines how a particular data type can be used with an index. The operator class specifies that certain operators will fill - particular roles or strategies for this data type and this + particular roles or strategies for this data type and this index method. The operator class also specifies the support procedures to be used by the index method when the operator class is selected for an @@ -69,8 +69,8 @@ CREATE OPERATOR CLASS name [ DEFAUL Related operator classes can be grouped into operator - families. To add a new operator class to an existing family, - specify the FAMILY option in CREATE OPERATOR + families. To add a new operator class to an existing family, + specify the FAMILY option in CREATE OPERATOR CLASS. Without this option, the new class is placed into a family named the same as the new class (creating that family if it doesn't already exist). @@ -96,7 +96,7 @@ CREATE OPERATOR CLASS name [ DEFAUL - DEFAULT + DEFAULT If present, the operator class will become the default @@ -159,15 +159,15 @@ CREATE OPERATOR CLASS name [ DEFAUL op_type - In an OPERATOR clause, - the operand data type(s) of the operator, or NONE to + In an OPERATOR clause, + the operand data type(s) of the operator, or NONE to signify a left-unary or right-unary operator. The operand data types can be omitted in the normal case where they are the same as the operator class's data type. - In a FUNCTION clause, the operand data type(s) the + In a FUNCTION clause, the operand data type(s) the function is intended to support, if different from the input data type(s) of the function (for B-tree comparison functions and hash functions) @@ -175,7 +175,7 @@ CREATE OPERATOR CLASS name [ DEFAUL functions in GiST, SP-GiST, GIN and BRIN operator classes). These defaults are correct, and so op_type need not be specified in - FUNCTION clauses, except for the case of a B-tree sort + FUNCTION clauses, except for the case of a B-tree sort support function that is meant to support cross-data-type comparisons. 
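A purely hypothetical skeleton of the OPERATOR and FUNCTION clause structure being described (the type, operator, and support function are invented and would have to exist before this could run):

CREATE OPERATOR CLASS my_box_ops
    DEFAULT FOR TYPE mybox USING gist AS
        OPERATOR 3 && ,
        FUNCTION 1 my_box_consistent(internal, mybox, smallint, oid, internal);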
@@ -191,8 +191,8 @@ CREATE OPERATOR CLASS name [ DEFAUL - If neither FOR SEARCH nor FOR ORDER BY is - specified, FOR SEARCH is the default. + If neither FOR SEARCH nor FOR ORDER BY is + specified, FOR SEARCH is the default. @@ -233,11 +233,11 @@ CREATE OPERATOR CLASS name [ DEFAUL The data type actually stored in the index. Normally this is the same as the column data type, but some index methods (currently GiST, GIN and BRIN) allow it to be different. The - STORAGE clause must be omitted unless the index + STORAGE clause must be omitted unless the index method allows a different type to be used. - If the column data_type is specified - as anyarray, the storage_type - can be declared as anyelement to indicate that the index + If the column data_type is specified + as anyarray, the storage_type + can be declared as anyelement to indicate that the index entries are members of the element type belonging to the actual array type that each particular index is created for. @@ -246,7 +246,7 @@ CREATE OPERATOR CLASS name [ DEFAUL - The OPERATOR, FUNCTION, and STORAGE + The OPERATOR, FUNCTION, and STORAGE clauses can appear in any order. @@ -269,9 +269,9 @@ CREATE OPERATOR CLASS name [ DEFAUL - Before PostgreSQL 8.4, the OPERATOR - clause could include a RECHECK option. This is no longer - supported because whether an index operator is lossy is now + Before PostgreSQL 8.4, the OPERATOR + clause could include a RECHECK option. This is no longer + supported because whether an index operator is lossy is now determined on-the-fly at run time. This allows efficient handling of cases where an operator might or might not be lossy. @@ -282,7 +282,7 @@ CREATE OPERATOR CLASS name [ DEFAUL The following example command defines a GiST index operator class - for the data type _int4 (array of int4). See the + for the data type _int4 (array of int4). See the module for the complete example. diff --git a/doc/src/sgml/ref/create_operator.sgml b/doc/src/sgml/ref/create_operator.sgml index 818e3a2315..11c38fd38b 100644 --- a/doc/src/sgml/ref/create_operator.sgml +++ b/doc/src/sgml/ref/create_operator.sgml @@ -43,7 +43,7 @@ CREATE OPERATOR name ( - The operator name is a sequence of up to NAMEDATALEN-1 + The operator name is a sequence of up to NAMEDATALEN-1 (63 by default) characters from the following list: + - * / < > = ~ ! @ # % ^ & | ` ? @@ -72,7 +72,7 @@ CREATE OPERATOR name ( - The use of => as an operator name is deprecated. It may + The use of => as an operator name is deprecated. It may be disallowed altogether in a future release. @@ -86,10 +86,10 @@ CREATE OPERATOR name ( - At least one of LEFTARG and RIGHTARG must be defined. For + At least one of LEFTARG and RIGHTARG must be defined. For binary operators, both must be defined. For right unary - operators, only LEFTARG should be defined, while for left - unary operators only RIGHTARG should be defined. + operators, only LEFTARG should be defined, while for left + unary operators only RIGHTARG should be defined. @@ -122,11 +122,11 @@ CREATE OPERATOR name ( The name of the operator to be defined. See above for allowable characters. The name can be schema-qualified, for example - CREATE OPERATOR myschema.+ (...). If not, then + CREATE OPERATOR myschema.+ (...). If not, then the operator is created in the current schema. Two operators in the same schema can have the same name if they operate on different data types. This is called - overloading. + overloading. 
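As a hedged sketch of a complete operator definition (the support procedure is assumed to exist; in this release the clause is spelled PROCEDURE):

CREATE OPERATOR === (
    LEFTARG = box,
    RIGHTARG = box,
    PROCEDURE = area_equal_procedure,
    COMMUTATOR = ===
);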
@@ -218,7 +218,7 @@ CREATE OPERATOR name ( To give a schema-qualified operator name in com_op or the other optional - arguments, use the OPERATOR() syntax, for example: + arguments, use the OPERATOR() syntax, for example: COMMUTATOR = OPERATOR(myschema.===) , @@ -233,18 +233,18 @@ COMMUTATOR = OPERATOR(myschema.===) , It is not possible to specify an operator's lexical precedence in - CREATE OPERATOR, because the parser's precedence behavior + CREATE OPERATOR, because the parser's precedence behavior is hard-wired. See for precedence details. - The obsolete options SORT1, SORT2, - LTCMP, and GTCMP were formerly used to + The obsolete options SORT1, SORT2, + LTCMP, and GTCMP were formerly used to specify the names of sort operators associated with a merge-joinable operator. This is no longer necessary, since information about associated operators is found by looking at B-tree operator families instead. If one of these options is given, it is ignored except - for implicitly setting MERGES true. + for implicitly setting MERGES true. diff --git a/doc/src/sgml/ref/create_opfamily.sgml b/doc/src/sgml/ref/create_opfamily.sgml index c4bcf0863e..ca5261b7a0 100644 --- a/doc/src/sgml/ref/create_opfamily.sgml +++ b/doc/src/sgml/ref/create_opfamily.sgml @@ -35,7 +35,7 @@ CREATE OPERATOR FAMILY name USING < compatible with these operator classes but not essential for the functioning of any individual index. (Operators and functions that are essential to indexes should be grouped within the relevant operator - class, rather than being loose in the operator family. + class, rather than being loose in the operator family. Typically, single-data-type operators are bound to operator classes, while cross-data-type operators can be loose in an operator family containing operator classes for both data types.) @@ -45,7 +45,7 @@ CREATE OPERATOR FAMILY name USING < The new operator family is initially empty. It should be populated by issuing subsequent CREATE OPERATOR CLASS commands to add contained operator classes, and optionally - ALTER OPERATOR FAMILY commands to add loose + ALTER OPERATOR FAMILY commands to add loose operators and their corresponding support functions. diff --git a/doc/src/sgml/ref/create_policy.sgml b/doc/src/sgml/ref/create_policy.sgml index 70df22c059..1bcf2de429 100644 --- a/doc/src/sgml/ref/create_policy.sgml +++ b/doc/src/sgml/ref/create_policy.sgml @@ -88,7 +88,7 @@ CREATE POLICY name ON If row-level security is enabled for a table, but no applicable policies - exist, a default deny policy is assumed, so that no rows will + exist, a default deny policy is assumed, so that no rows will be visible or updatable. @@ -188,9 +188,9 @@ CREATE POLICY name ON SELECT), and will not be - available for modification (in an UPDATE - or DELETE). Such rows are silently suppressed; no error + visible to the user (in a SELECT), and will not be + available for modification (in an UPDATE + or DELETE). Such rows are silently suppressed; no error is reported. 
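To make the row-filtering behavior concrete, a minimal sketch (table and column names hypothetical):

ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
-- Rows whose owner is not the current user simply vanish from
-- SELECT, UPDATE, and DELETE; no error is raised.
CREATE POLICY account_owner_only ON accounts
    USING (owner = current_user);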
@@ -223,7 +223,7 @@ CREATE POLICY name ON - ALL + ALL Using ALL for a policy means that it will apply @@ -254,7 +254,7 @@ CREATE POLICY name ON - SELECT + SELECT Using SELECT for a policy means that it will apply @@ -274,7 +274,7 @@ CREATE POLICY name ON - INSERT + INSERT Using INSERT for a policy means that it will apply @@ -295,7 +295,7 @@ CREATE POLICY name ON - UPDATE + UPDATE Using UPDATE for a policy means that it will apply @@ -347,14 +347,14 @@ CREATE POLICY name ON UPDATE command, if the existing row does not pass the USING expressions, an error will be thrown (the - UPDATE path will never be silently + UPDATE path will never be silently avoided). - DELETE + DELETE Using DELETE for a policy means that it will apply diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml index 62a5fd432e..b997d387e7 100644 --- a/doc/src/sgml/ref/create_publication.sgml +++ b/doc/src/sgml/ref/create_publication.sgml @@ -64,10 +64,10 @@ CREATE PUBLICATION name Specifies a list of tables to add to the publication. If - ONLY is specified before the table name, only - that table is added to the publication. If ONLY is not + ONLY is specified before the table name, only + that table is added to the publication. If ONLY is not specified, the table and all its descendant tables (if any) are added. - Optionally, * can be specified after the table name to + Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -138,7 +138,7 @@ CREATE PUBLICATION name To create a publication, the invoking user must have the - CREATE privilege for the current database. + CREATE privilege for the current database. (Of course, superusers bypass this check.) @@ -151,12 +151,12 @@ CREATE PUBLICATION name The tables added to a publication that publishes UPDATE and/or DELETE operations must have - REPLICA IDENTITY defined. Otherwise those operations will be + REPLICA IDENTITY defined. Otherwise those operations will be disallowed on those tables. - For an INSERT ... ON CONFLICT command, the publication will + For an INSERT ... ON CONFLICT command, the publication will publish the operation that actually results from the command. So depending of the outcome, it may be published as either INSERT or UPDATE, or it may not be published at all. @@ -203,7 +203,7 @@ CREATE PUBLICATION insert_only FOR TABLE mydata Compatibility - CREATE PUBLICATION is a PostgreSQL + CREATE PUBLICATION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml index 41670c4b05..4a4061a237 100644 --- a/doc/src/sgml/ref/create_role.sgml +++ b/doc/src/sgml/ref/create_role.sgml @@ -51,11 +51,11 @@ CREATE ROLE name [ [ WITH ] CREATE ROLE adds a new role to a PostgreSQL database cluster. A role is an entity that can own database objects and have database privileges; - a role can be considered a user, a group, or both + a role can be considered a user, a group, or both depending on how it is used. Refer to and for information about managing - users and authentication. You must have CREATEROLE + users and authentication. You must have CREATEROLE privilege or be a database superuser to use this command. @@ -83,7 +83,7 @@ CREATE ROLE name [ [ WITH ] NOSUPERUSER - These clauses determine whether the new role is a superuser, + These clauses determine whether the new role is a superuser, who can override all access restrictions within the database. Superuser status is dangerous and should be used only when really needed. 
You must yourself be a superuser to create a new superuser. @@ -94,8 +94,8 @@ CREATE ROLE name [ [ WITH ] - CREATEDB - NOCREATEDB + CREATEDB + NOCREATEDB These clauses define a role's ability to create databases. If @@ -128,13 +128,13 @@ CREATE ROLE name [ [ WITH ] NOINHERIT - These clauses determine whether a role inherits the + These clauses determine whether a role inherits the privileges of roles it is a member of. A role with the INHERIT attribute can automatically use whatever database privileges have been granted to all roles it is directly or indirectly a member of. Without INHERIT, membership in another role - only grants the ability to SET ROLE to that other role; + only grants the ability to SET ROLE to that other role; the privileges of the other role are only available after having done so. If not specified, @@ -156,7 +156,7 @@ CREATE ROLE name [ [ WITH ] NOLOGIN is the default, except when - CREATE ROLE is invoked through its alternative spelling + CREATE ROLE is invoked through its alternative spelling . @@ -172,7 +172,7 @@ CREATE ROLE name [ [ WITH ] REPLICATION attribute is a very + A role having the REPLICATION attribute is a very highly privileged role, and should only be used on roles actually used for replication. If not specified, NOREPLICATION is the default. @@ -210,7 +210,7 @@ CREATE ROLE name [ [ WITH ] - [ ENCRYPTED ] PASSWORD password + [ ENCRYPTED ] PASSWORD password Sets the role's password. (A password is only of use for @@ -225,7 +225,7 @@ CREATE ROLE name [ [ WITH ] Specifying an empty string will also set the password to null, - but that was not the case before PostgreSQL + but that was not the case before PostgreSQL version 10. In earlier versions, an empty string could be used, or not, depending on the authentication method and the exact version, and libpq would refuse to use it in any case. @@ -235,12 +235,12 @@ CREATE ROLE name [ [ WITH ] The password is always stored encrypted in the system catalogs. The - ENCRYPTED keyword has no effect, but is accepted for + ENCRYPTED keyword has no effect, but is accepted for backwards compatibility. The method of encryption is determined by the configuration parameter . If the presented password string is already in MD5-encrypted or SCRAM-encrypted format, then it is stored as-is regardless of - password_encryption (since the system cannot decrypt + password_encryption (since the system cannot decrypt the specified encrypted password string, to encrypt it in a different format). This allows reloading of encrypted passwords during dump/restore. @@ -260,61 +260,61 @@ CREATE ROLE name [ [ WITH ] - IN ROLE role_name + IN ROLE role_name The IN ROLE clause lists one or more existing roles to which the new role will be immediately added as a new member. (Note that there is no option to add the new role as an - administrator; use a separate GRANT command to do that.) + administrator; use a separate GRANT command to do that.) - IN GROUP role_name + IN GROUP role_name IN GROUP is an obsolete spelling of - IN ROLE. + IN ROLE. - ROLE role_name + ROLE role_name The ROLE clause lists one or more existing roles which are automatically added as members of the new role. - (This in effect makes the new role a group.) + (This in effect makes the new role a group.) - ADMIN role_name + ADMIN role_name The ADMIN clause is like ROLE, but the named roles are added to the new role WITH ADMIN - OPTION, giving them the right to grant membership in this role + OPTION, giving them the right to grant membership in this role to others. 
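A small sketch combining these membership clauses (role names hypothetical):

CREATE ROLE alice LOGIN;
CREATE ROLE bob LOGIN;
-- alice becomes an administrator of staff, bob a plain member.
CREATE ROLE staff ADMIN alice ROLE bob;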
- USER role_name + USER role_name The USER clause is an obsolete spelling of - the ROLE clause. + the ROLE clause. - SYSID uid + SYSID uid The SYSID clause is ignored, but is accepted @@ -332,8 +332,8 @@ CREATE ROLE name [ [ WITH ] to change the attributes of a role, and to remove a role. All the attributes - specified by CREATE ROLE can be modified by later - ALTER ROLE commands. + specified by CREATE ROLE can be modified by later + ALTER ROLE commands. @@ -344,42 +344,42 @@ CREATE ROLE name [ [ WITH ] - The VALID UNTIL clause defines an expiration time for a - password only, not for the role per se. In + The VALID UNTIL clause defines an expiration time for a + password only, not for the role per se. In particular, the expiration time is not enforced when logging in using a non-password-based authentication method. - The INHERIT attribute governs inheritance of grantable + The INHERIT attribute governs inheritance of grantable privileges (that is, access privileges for database objects and role memberships). It does not apply to the special role attributes set by - CREATE ROLE and ALTER ROLE. For example, being - a member of a role with CREATEDB privilege does not immediately - grant the ability to create databases, even if INHERIT is set; + CREATE ROLE and ALTER ROLE. For example, being + a member of a role with CREATEDB privilege does not immediately + grant the ability to create databases, even if INHERIT is set; it would be necessary to become that role via before creating a database. - The INHERIT attribute is the default for reasons of backwards + The INHERIT attribute is the default for reasons of backwards compatibility: in prior releases of PostgreSQL, users always had access to all privileges of groups they were members of. - However, NOINHERIT provides a closer match to the semantics + However, NOINHERIT provides a closer match to the semantics specified in the SQL standard. - Be careful with the CREATEROLE privilege. There is no concept of - inheritance for the privileges of a CREATEROLE-role. That + Be careful with the CREATEROLE privilege. There is no concept of + inheritance for the privileges of a CREATEROLE-role. That means that even if a role does not have a certain privilege but is allowed to create other roles, it can easily create another role with different privileges than its own (except for creating roles with superuser - privileges). For example, if the role user has the - CREATEROLE privilege but not the CREATEDB privilege, - nonetheless it can create a new role with the CREATEDB - privilege. Therefore, regard roles that have the CREATEROLE + privileges). For example, if the role user has the + CREATEROLE privilege but not the CREATEDB privilege, + nonetheless it can create a new role with the CREATEDB + privilege. Therefore, regard roles that have the CREATEROLE privilege as almost-superuser-roles. @@ -391,9 +391,9 @@ CREATE ROLE name [ [ WITH ] - The CONNECTION LIMIT option is only enforced approximately; + The CONNECTION LIMIT option is only enforced approximately; if two new sessions start at about the same time when just one - connection slot remains for the role, it is possible that + connection slot remains for the role, it is possible that both will fail. Also, the limit is never enforced for superusers. @@ -425,8 +425,8 @@ CREATE ROLE jonathan LOGIN; CREATE USER davide WITH PASSWORD 'jw8s0F4'; - (CREATE USER is the same as CREATE ROLE except - that it implies LOGIN.) + (CREATE USER is the same as CREATE ROLE except + that it implies LOGIN.) 
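Continuing in the same vein (the values are illustrative), attributes such as a password expiration time and a connection cap can be combined in one command:

CREATE ROLE reporting WITH LOGIN PASSWORD 'jw8s0F4'
    VALID UNTIL '2018-01-01' CONNECTION LIMIT 5;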
@@ -453,7 +453,7 @@ CREATE ROLE admin WITH CREATEDB CREATEROLE; The CREATE ROLE statement is in the SQL standard, but the standard only requires the syntax -CREATE ROLE name [ WITH ADMIN role_name ] +CREATE ROLE name [ WITH ADMIN role_name ] Multiple initial administrators, and all the other options of CREATE ROLE, are @@ -471,8 +471,8 @@ CREATE ROLE name [ WITH ADMIN The behavior specified by the SQL standard is most closely approximated - by giving users the NOINHERIT attribute, while roles are - given the INHERIT attribute. + by giving users the NOINHERIT attribute, while roles are + given the INHERIT attribute. diff --git a/doc/src/sgml/ref/create_rule.sgml b/doc/src/sgml/ref/create_rule.sgml index 53fdf56621..c772c38399 100644 --- a/doc/src/sgml/ref/create_rule.sgml +++ b/doc/src/sgml/ref/create_rule.sgml @@ -76,13 +76,13 @@ CREATE [ OR REPLACE ] RULE name AS ON DELETE rules (or any subset of those that's sufficient for your purposes) to replace update actions on the view with appropriate updates on other tables. If you want to support - INSERT RETURNING and so on, then be sure to put a suitable - RETURNING clause into each of these rules. + INSERT RETURNING and so on, then be sure to put a suitable + RETURNING clause into each of these rules. There is a catch if you try to use conditional rules for complex view - updates: there must be an unconditional + updates: there must be an unconditional INSTEAD rule for each action you wish to allow on the view. If the rule is conditional, or is not INSTEAD, then the system will still reject @@ -95,7 +95,7 @@ CREATE [ OR REPLACE ] RULE name AS Then make the conditional rules non-INSTEAD; in the cases where they are applied, they add to the default INSTEAD NOTHING action. (This method does not - currently work to support RETURNING queries, however.) + currently work to support RETURNING queries, however.) @@ -108,7 +108,7 @@ CREATE [ OR REPLACE ] RULE name AS - Another alternative worth considering is to use INSTEAD OF + Another alternative worth considering is to use INSTEAD OF triggers (see ) in place of rules. @@ -161,7 +161,7 @@ CREATE [ OR REPLACE ] RULE name AS Any SQL conditional expression (returning boolean). The condition expression cannot refer - to any tables except NEW and OLD, and + to any tables except NEW and OLD, and cannot contain aggregate functions. @@ -171,7 +171,7 @@ CREATE [ OR REPLACE ] RULE name AS INSTEAD indicates that the commands should be - executed instead of the original command. + executed instead of the original command. @@ -227,19 +227,19 @@ CREATE [ OR REPLACE ] RULE name AS In a rule for INSERT, UPDATE, or - DELETE on a view, you can add a RETURNING + DELETE on a view, you can add a RETURNING clause that emits the view's columns. This clause will be used to compute - the outputs if the rule is triggered by an INSERT RETURNING, - UPDATE RETURNING, or DELETE RETURNING command + the outputs if the rule is triggered by an INSERT RETURNING, + UPDATE RETURNING, or DELETE RETURNING command respectively. When the rule is triggered by a command without - RETURNING, the rule's RETURNING clause will be + RETURNING, the rule's RETURNING clause will be ignored. The current implementation allows only unconditional - INSTEAD rules to contain RETURNING; furthermore - there can be at most one RETURNING clause among all the rules + INSTEAD rules to contain RETURNING; furthermore + there can be at most one RETURNING clause among all the rules for the same event. 
(This ensures that there is only one candidate - RETURNING clause to be used to compute the results.) - RETURNING queries on the view will be rejected if - there is no RETURNING clause in any available rule. + RETURNING clause to be used to compute the results.) + RETURNING queries on the view will be rejected if + there is no RETURNING clause in any available rule. diff --git a/doc/src/sgml/ref/create_schema.sgml b/doc/src/sgml/ref/create_schema.sgml index ce145f96a0..ce3530c048 100644 --- a/doc/src/sgml/ref/create_schema.sgml +++ b/doc/src/sgml/ref/create_schema.sgml @@ -48,9 +48,9 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp A schema is essentially a namespace: it contains named objects (tables, data types, functions, and operators) whose names can duplicate those of other objects existing in other - schemas. Named objects are accessed either by qualifying + schemas. Named objects are accessed either by qualifying their names with the schema name as a prefix, or by setting a search - path that includes the desired schema(s). A CREATE command + path that includes the desired schema(s). A CREATE command specifying an unqualified object name creates the object in the current schema (the one at the front of the search path, which can be determined with the function current_schema). @@ -60,7 +60,7 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp Optionally, CREATE SCHEMA can include subcommands to create objects within the new schema. The subcommands are treated essentially the same as separate commands issued after creating the - schema, except that if the AUTHORIZATION clause is used, + schema, except that if the AUTHORIZATION clause is used, all the created objects will be owned by that user. @@ -100,10 +100,10 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp An SQL statement defining an object to be created within the schema. Currently, only CREATE - TABLE, CREATE VIEW, CREATE - INDEX, CREATE SEQUENCE, CREATE - TRIGGER and GRANT are accepted as clauses - within CREATE SCHEMA. Other kinds of objects may + TABLE, CREATE VIEW, CREATE + INDEX, CREATE SEQUENCE, CREATE + TRIGGER and GRANT are accepted as clauses + within CREATE SCHEMA. Other kinds of objects may be created in separate commands after the schema is created. @@ -114,7 +114,7 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp Do nothing (except issuing a notice) if a schema with the same name - already exists. schema_element + already exists. schema_element subcommands cannot be included when this option is used. @@ -127,7 +127,7 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp To create a schema, the invoking user must have the - CREATE privilege for the current database. + CREATE privilege for the current database. (Of course, superusers bypass this check.) @@ -143,17 +143,17 @@ CREATE SCHEMA myschema; - Create a schema for user joe; the schema will also be - named joe: + Create a schema for user joe; the schema will also be + named joe: CREATE SCHEMA AUTHORIZATION joe; - Create a schema named test that will be owned by user - joe, unless there already is a schema named test. - (It does not matter whether joe owns the pre-existing schema.) + Create a schema named test that will be owned by user + joe, unless there already is a schema named test. + (It does not matter whether joe owns the pre-existing schema.) 
CREATE SCHEMA IF NOT EXISTS test AUTHORIZATION joe; @@ -185,7 +185,7 @@ CREATE VIEW hollywood.winners AS Compatibility - The SQL standard allows a DEFAULT CHARACTER SET clause + The SQL standard allows a DEFAULT CHARACTER SET clause in CREATE SCHEMA, as well as more subcommand types than are presently accepted by PostgreSQL. @@ -205,7 +205,7 @@ CREATE VIEW hollywood.winners AS all objects within it. PostgreSQL allows schemas to contain objects owned by users other than the schema owner. This can happen only if the schema owner grants the - CREATE privilege on their schema to someone else, or a + CREATE privilege on their schema to someone else, or a superuser chooses to create objects in it. diff --git a/doc/src/sgml/ref/create_sequence.sgml b/doc/src/sgml/ref/create_sequence.sgml index 2af8c8d23e..9248b1d459 100644 --- a/doc/src/sgml/ref/create_sequence.sgml +++ b/doc/src/sgml/ref/create_sequence.sgml @@ -67,10 +67,10 @@ SELECT * FROM name; to examine the parameters and current state of a sequence. In particular, - the last_value field of the sequence shows the last value + the last_value field of the sequence shows the last value allocated by any session. (Of course, this value might be obsolete by the time it's printed, if other sessions are actively doing - nextval calls.) + nextval calls.) @@ -250,14 +250,14 @@ SELECT * FROM name; - Sequences are based on bigint arithmetic, so the range + Sequences are based on bigint arithmetic, so the range cannot exceed the range of an eight-byte integer (-9223372036854775808 to 9223372036854775807). - Because nextval and setval calls are never - rolled back, sequence objects cannot be used if gapless + Because nextval and setval calls are never + rolled back, sequence objects cannot be used if gapless assignment of sequence numbers is needed. It is possible to build gapless assignment by using exclusive locking of a table containing a counter; but this solution is much more expensive than sequence @@ -271,9 +271,9 @@ SELECT * FROM name; used for a sequence object that will be used concurrently by multiple sessions. Each session will allocate and cache successive sequence values during one access to the sequence object and - increase the sequence object's last_value accordingly. + increase the sequence object's last_value accordingly. Then, the next cache-1 - uses of nextval within that session simply return the + uses of nextval within that session simply return the preallocated values without touching the sequence object. So, any numbers allocated but not used within a session will be lost when that session ends, resulting in holes in the @@ -290,18 +290,18 @@ SELECT * FROM name; 11..20 and return nextval=11 before session A has generated nextval=2. Thus, with a cache setting of one - it is safe to assume that nextval values are generated + it is safe to assume that nextval values are generated sequentially; with a cache setting greater than one you - should only assume that the nextval values are all + should only assume that the nextval values are all distinct, not that they are generated purely sequentially. Also, - last_value will reflect the latest value reserved by + last_value will reflect the latest value reserved by any session, whether or not it has yet been returned by - nextval. + nextval. - Another consideration is that a setval executed on + Another consideration is that a setval executed on such a sequence will not be noticed by other sessions until they have used up any preallocated values they have cached. 
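A sketch of the caching behavior just described (sequence name hypothetical):

CREATE SEQUENCE order_seq CACHE 20;
SELECT nextval('order_seq');        -- returns 1; this session reserves 1..20
SELECT last_value FROM order_seq;   -- already shows 20, the last value reserved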
@@ -365,14 +365,14 @@ END; - Obtaining the next value is done using the nextval() + Obtaining the next value is done using the nextval() function instead of the standard's NEXT VALUE FOR expression. - The OWNED BY clause is a PostgreSQL + The OWNED BY clause is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_server.sgml b/doc/src/sgml/ref/create_server.sgml index 47b8a6291b..e14ce43bf9 100644 --- a/doc/src/sgml/ref/create_server.sgml +++ b/doc/src/sgml/ref/create_server.sgml @@ -47,7 +47,7 @@ CREATE SERVER [IF NOT EXISTS] server_name - Creating a server requires USAGE privilege on the + Creating a server requires USAGE privilege on the foreign-data wrapper being used. @@ -57,7 +57,7 @@ CREATE SERVER [IF NOT EXISTS] server_name - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a server with the same name already exists. @@ -135,8 +135,8 @@ CREATE SERVER [IF NOT EXISTS] server_nameExamples - Create a server myserver that uses the - foreign-data wrapper postgres_fdw: + Create a server myserver that uses the + foreign-data wrapper postgres_fdw: CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'foo', dbname 'foodb', port '5432'); diff --git a/doc/src/sgml/ref/create_statistics.sgml b/doc/src/sgml/ref/create_statistics.sgml index 0d68ca06b7..066af8a4b4 100644 --- a/doc/src/sgml/ref/create_statistics.sgml +++ b/doc/src/sgml/ref/create_statistics.sgml @@ -41,7 +41,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na If a schema name is given (for example, CREATE STATISTICS - myschema.mystat ...) then the statistics object is created in the + myschema.mystat ...) then the statistics object is created in the specified schema. Otherwise it is created in the current schema. The name of the statistics object must be distinct from the name of any other statistics object in the same schema. @@ -54,7 +54,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a statistics object with the same name already @@ -129,7 +129,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na Examples - Create table t1 with two functionally dependent columns, i.e. + Create table t1 with two functionally dependent columns, i.e. knowledge of a value in the first column is sufficient for determining the value in the other column. Then functional dependency statistics are built on those columns: @@ -157,10 +157,10 @@ EXPLAIN ANALYZE SELECT * FROM t1 WHERE (a = 1) AND (b = 0); Without functional-dependency statistics, the planner would assume - that the two WHERE conditions are independent, and would + that the two WHERE conditions are independent, and would multiply their selectivities together to arrive at a much-too-small row count estimate. - With such statistics, the planner recognizes that the WHERE + With such statistics, the planner recognizes that the WHERE conditions are redundant and does not underestimate the rowcount. diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml index bae9f839bd..cd51b7fcac 100644 --- a/doc/src/sgml/ref/create_subscription.sgml +++ b/doc/src/sgml/ref/create_subscription.sgml @@ -201,7 +201,7 @@ CREATE SUBSCRIPTION subscription_namefalse, the tables are not subscribed, and so after you enable the subscription nothing will be replicated. It is required to run - ALTER SUBSCRIPTION ... REFRESH PUBLICATION in order + ALTER SUBSCRIPTION ... REFRESH PUBLICATION in order for tables to be subscribed. 
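A hedged sketch of that workflow (connection string and object names are assumptions):

CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher.example dbname=foo'
    PUBLICATION mypub
    WITH (connect = false);
-- Later, once a replication slot named mysub exists on the publisher,
-- enable the subscription and pick up the published tables.
ALTER SUBSCRIPTION mysub ENABLE;
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;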
@@ -272,7 +272,7 @@ CREATE SUBSCRIPTION mysub Compatibility - CREATE SUBSCRIPTION is a PostgreSQL + CREATE SUBSCRIPTION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index d15795857b..2db2e9fc44 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -113,7 +113,7 @@ FROM ( { numeric_literal | If a schema name is given (for example, CREATE TABLE - myschema.mytable ...) then the table is created in the specified + myschema.mytable ...) then the table is created in the specified schema. Otherwise it is created in the current schema. Temporary tables exist in a special schema, so a schema name cannot be given when creating a temporary table. The name of the table must be @@ -158,7 +158,7 @@ FROM ( { numeric_literal | - TEMPORARY or TEMP + TEMPORARY or TEMP If specified, the table is created as a temporary table. @@ -177,13 +177,13 @@ FROM ( { numeric_literal | ANALYZE on the temporary table after it is populated. + ANALYZE on the temporary table after it is populated. Optionally, GLOBAL or LOCAL - can be written before TEMPORARY or TEMP. - This presently makes no difference in PostgreSQL + can be written before TEMPORARY or TEMP. + This presently makes no difference in PostgreSQL and is deprecated; see . @@ -192,7 +192,7 @@ FROM ( { numeric_literal | - UNLOGGED + UNLOGGED If specified, the table is created as an unlogged table. Data written @@ -208,7 +208,7 @@ FROM ( { numeric_literal | - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a relation with the same name already exists. @@ -263,14 +263,14 @@ FROM ( { numeric_literal | partition_bound_spec must correspond to the partitioning method and partition key of the parent table, and must not overlap with any existing partition of that - parent. The form with IN is used for list partitioning, - while the form with FROM and TO is used for + parent. The form with IN is used for list partitioning, + while the form with FROM and TO is used for range partitioning. Each of the values specified in - the partition_bound_spec is + the partition_bound_spec is a literal, NULL, MINVALUE, or MAXVALUE. Each literal value must be either a numeric constant that is coercible to the corresponding partition key @@ -294,52 +294,52 @@ FROM ( { numeric_literal | TO list are not. Note that this statement must be understood according to the rules of row-wise comparison (). - For example, given PARTITION BY RANGE (x,y), a partition + For example, given PARTITION BY RANGE (x,y), a partition bound FROM (1, 2) TO (3, 4) - allows x=1 with any y>=2, - x=2 with any non-null y, - and x=3 with any y<4. + allows x=1 with any y>=2, + x=2 with any non-null y, + and x=3 with any y<4. - The special values MINVALUE and MAXVALUE + The special values MINVALUE and MAXVALUE may be used when creating a range partition to indicate that there is no lower or upper bound on the column's value. For example, a - partition defined using FROM (MINVALUE) TO (10) allows + partition defined using FROM (MINVALUE) TO (10) allows any values less than 10, and a partition defined using - FROM (10) TO (MAXVALUE) allows any values greater than + FROM (10) TO (MAXVALUE) allows any values greater than or equal to 10. When creating a range partition involving more than one column, it - can also make sense to use MAXVALUE as part of the lower - bound, and MINVALUE as part of the upper bound. 
For + can also make sense to use MAXVALUE as part of the lower + bound, and MINVALUE as part of the upper bound. For example, a partition defined using - FROM (0, MAXVALUE) TO (10, MAXVALUE) allows any rows + FROM (0, MAXVALUE) TO (10, MAXVALUE) allows any rows where the first partition key column is greater than 0 and less than or equal to 10. Similarly, a partition defined using - FROM ('a', MINVALUE) TO ('b', MINVALUE) allows any rows + FROM ('a', MINVALUE) TO ('b', MINVALUE) allows any rows where the first partition key column starts with "a". - Note that if MINVALUE or MAXVALUE is used for + Note that if MINVALUE or MAXVALUE is used for one column of a partitioning bound, the same value must be used for all - subsequent columns. For example, (10, MINVALUE, 0) is not - a valid bound; you should write (10, MINVALUE, MINVALUE). + subsequent columns. For example, (10, MINVALUE, 0) is not + a valid bound; you should write (10, MINVALUE, MINVALUE). - Also note that some element types, such as timestamp, + Also note that some element types, such as timestamp, have a notion of "infinity", which is just another value that can - be stored. This is different from MINVALUE and - MAXVALUE, which are not real values that can be stored, + be stored. This is different from MINVALUE and + MAXVALUE, which are not real values that can be stored, but rather they are ways of saying that the value is unbounded. - MAXVALUE can be thought of as being greater than any - other value, including "infinity" and MINVALUE as being + MAXVALUE can be thought of as being greater than any + other value, including "infinity" and MINVALUE as being less than any other value, including "minus infinity". Thus the range - FROM ('infinity') TO (MAXVALUE) is not an empty range; it + FROM ('infinity') TO (MAXVALUE) is not an empty range; it allows precisely one value to be stored — "infinity". @@ -370,9 +370,9 @@ FROM ( { numeric_literal | CHECK constraints will be inherited + to all partitions. CHECK constraints will be inherited automatically by every partition, but an individual partition may specify - additional CHECK constraints; additional constraints with + additional CHECK constraints; additional constraints with the same name and condition as in the parent will be merged with the parent constraint. Defaults may be specified separately for each partition. @@ -421,7 +421,7 @@ FROM ( { numeric_literal | COLLATE collation - The COLLATE clause assigns a collation to + The COLLATE clause assigns a collation to the column (which must be of a collatable data type). If not specified, the column data type's default collation is used. @@ -432,13 +432,13 @@ FROM ( { numeric_literal | INHERITS ( parent_table [, ... ] ) - The optional INHERITS clause specifies a list of + The optional INHERITS clause specifies a list of tables from which the new table automatically inherits all columns. Parent tables can be plain tables or foreign tables. - Use of INHERITS creates a persistent relationship + Use of INHERITS creates a persistent relationship between the new child table and its parent table(s). 
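A classic two-table sketch of this relationship (table names hypothetical):

CREATE TABLE cities (name text, population int);
-- capitals inherits every column of cities and adds one of its own.
CREATE TABLE capitals (state char(2)) INHERITS (cities);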
Schema modifications to the parent(s) normally propagate to children as well, and by default the data of the child table is included in @@ -462,19 +462,19 @@ FROM ( { numeric_literal | - CHECK constraints are merged in essentially the same way as + CHECK constraints are merged in essentially the same way as columns: if multiple parent tables and/or the new table definition - contain identically-named CHECK constraints, these + contain identically-named CHECK constraints, these constraints must all have the same check expression, or an error will be reported. Constraints having the same name and expression will - be merged into one copy. A constraint marked NO INHERIT in a - parent will not be considered. Notice that an unnamed CHECK + be merged into one copy. A constraint marked NO INHERIT in a + parent will not be considered. Notice that an unnamed CHECK constraint in the new table will never be merged, since a unique name will always be chosen for it. - Column STORAGE settings are also copied from parent tables. + Column STORAGE settings are also copied from parent tables. @@ -504,7 +504,7 @@ FROM ( { numeric_literal | A partitioned table is divided into sub-tables (called partitions), - which are created using separate CREATE TABLE commands. + which are created using separate CREATE TABLE commands. The partitioned table is itself empty. A data row inserted into the table is routed to a partition based on the value of columns or expressions in the partition key. If no existing partition matches @@ -542,7 +542,7 @@ FROM ( { numeric_literal | nextval, may create a functional linkage between + such as nextval, may create a functional linkage between the original and new tables. @@ -559,8 +559,8 @@ FROM ( { numeric_literal | - Indexes, PRIMARY KEY, UNIQUE, - and EXCLUDE constraints on the original table will be + Indexes, PRIMARY KEY, UNIQUE, + and EXCLUDE constraints on the original table will be created on the new table only if INCLUDING INDEXES is specified. Names for the new indexes and constraints are chosen according to the default rules, regardless of how the originals @@ -568,11 +568,11 @@ FROM ( { numeric_literal | - STORAGE settings for the copied column definitions will be + STORAGE settings for the copied column definitions will be copied only if INCLUDING STORAGE is specified. The - default behavior is to exclude STORAGE settings, resulting + default behavior is to exclude STORAGE settings, resulting in the copied columns in the new table having type-specific default - settings. For more on STORAGE settings, see + settings. For more on STORAGE settings, see . @@ -587,7 +587,7 @@ FROM ( { numeric_literal | Note that unlike INHERITS, columns and - constraints copied by LIKE are not merged with similarly + constraints copied by LIKE are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another LIKE clause, an error is signaled. @@ -607,7 +607,7 @@ FROM ( { numeric_literal | An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, - so constraint names like col must be positive can be used + so constraint names like col must be positive can be used to communicate helpful constraint information to client applications. (Double-quotes are needed to specify constraint names that contain spaces.) If a constraint name is not specified, the system generates a name. 
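For instance (hypothetical table), a descriptive constraint name surfaces directly in error messages:

CREATE TABLE items (
    price numeric CONSTRAINT "price must be positive" CHECK (price > 0)
);
-- A violating INSERT reports: ... violates check constraint "price must be positive"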
@@ -616,7 +616,7 @@ FROM ( { numeric_literal | - NOT NULL + NOT NULL The column is not allowed to contain null values. @@ -625,7 +625,7 @@ FROM ( { numeric_literal | - NULL + NULL The column is allowed to contain null values. This is the default. @@ -643,7 +643,7 @@ FROM ( { numeric_literal | CHECK ( expression ) [ NO INHERIT ] - The CHECK clause specifies an expression producing a + The CHECK clause specifies an expression producing a Boolean result which new or updated rows must satisfy for an insert or update operation to succeed. Expressions evaluating to TRUE or UNKNOWN succeed. Should any row of an insert or @@ -662,15 +662,15 @@ FROM ( { numeric_literal | - A constraint marked with NO INHERIT will not propagate to + A constraint marked with NO INHERIT will not propagate to child tables. When a table has multiple CHECK constraints, they will be tested for each row in alphabetical order by name, - after checking NOT NULL constraints. - (PostgreSQL versions before 9.5 did not honor any + after checking NOT NULL constraints. + (PostgreSQL versions before 9.5 did not honor any particular firing order for CHECK constraints.) @@ -681,7 +681,7 @@ FROM ( { numeric_literal | default_expr - The DEFAULT clause assigns a default data value for + The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). The @@ -729,8 +729,8 @@ FROM ( { numeric_literal | - UNIQUE (column constraint) - UNIQUE ( column_name [, ... ] ) (table constraint) + UNIQUE (column constraint) + UNIQUE ( column_name [, ... ] ) (table constraint) @@ -756,11 +756,11 @@ FROM ( { numeric_literal | - PRIMARY KEY (column constraint) - PRIMARY KEY ( column_name [, ... ] ) (table constraint) + PRIMARY KEY (column constraint) + PRIMARY KEY ( column_name [, ... ] ) (table constraint) - The PRIMARY KEY constraint specifies that a column or + The PRIMARY KEY constraint specifies that a column or columns of a table can contain only unique (non-duplicate), nonnull values. Only one primary key can be specified for a table, whether as a column constraint or a table constraint. @@ -775,7 +775,7 @@ FROM ( { numeric_literal | PRIMARY KEY enforces the same data constraints as - a combination of UNIQUE and NOT NULL, but + a combination of UNIQUE and NOT NULL, but identifying a set of columns as the primary key also provides metadata about the design of the schema, since a primary key implies that other tables can rely on this set of columns as a unique identifier for rows. @@ -787,19 +787,19 @@ FROM ( { numeric_literal | EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ] - The EXCLUDE clause defines an exclusion + The EXCLUDE clause defines an exclusion constraint, which guarantees that if any two rows are compared on the specified column(s) or expression(s) using the specified operator(s), not all of these - comparisons will return TRUE. If all of the + comparisons will return TRUE. If all of the specified operators test for equality, this is equivalent to a - UNIQUE constraint, although an ordinary unique constraint + UNIQUE constraint, although an ordinary unique constraint will be faster. However, exclusion constraints can specify constraints that are more general than simple equality. 
For example, you can specify a constraint that no two rows in the table contain overlapping circles (see ) by using the - && operator. + && operator. @@ -807,7 +807,7 @@ FROM ( { numeric_literal | ) for the index access - method index_method. + method index_method. The operators are required to be commutative. Each exclude_element can optionally specify an operator class and/or ordering options; @@ -816,17 +816,17 @@ FROM ( { numeric_literal | - The access method must support amgettuple (see ); at present this means GIN + The access method must support amgettuple (see ); at present this means GIN cannot be used. Although it's allowed, there is little point in using B-tree or hash indexes with an exclusion constraint, because this does nothing that an ordinary unique constraint doesn't do better. - So in practice the access method will always be GiST or - SP-GiST. + So in practice the access method will always be GiST or + SP-GiST. - The predicate allows you to specify an + The predicate allows you to specify an exclusion constraint on a subset of the table; internally this creates a partial index. Note that parentheses are required around the predicate. @@ -853,7 +853,7 @@ FROM ( { numeric_literal | reftable is used. The referenced columns must be the columns of a non-deferrable unique or primary key constraint in the referenced table. The user - must have REFERENCES permission on the referenced table + must have REFERENCES permission on the referenced table (either the whole table, or the specific referenced columns). Note that foreign key constraints cannot be defined between temporary tables and permanent tables. @@ -863,16 +863,16 @@ FROM ( { numeric_literal | MATCH - FULL, MATCH PARTIAL, and MATCH + FULL, MATCH PARTIAL, and MATCH SIMPLE (which is the default). MATCH - FULL will not allow one column of a multicolumn foreign key + FULL will not allow one column of a multicolumn foreign key to be null unless all foreign key columns are null; if they are all null, the row is not required to have a match in the referenced table. MATCH SIMPLE allows any of the foreign key columns to be null; if any of them are null, the row is not required to have a match in the referenced table. - MATCH PARTIAL is not yet implemented. - (Of course, NOT NULL constraints can be applied to the + MATCH PARTIAL is not yet implemented. + (Of course, NOT NULL constraints can be applied to the referencing column(s) to prevent these cases from arising.) @@ -969,13 +969,13 @@ FROM ( { numeric_literal | command). NOT DEFERRABLE is the default. - Currently, only UNIQUE, PRIMARY KEY, - EXCLUDE, and - REFERENCES (foreign key) constraints accept this - clause. NOT NULL and CHECK constraints are not + Currently, only UNIQUE, PRIMARY KEY, + EXCLUDE, and + REFERENCES (foreign key) constraints accept this + clause. NOT NULL and CHECK constraints are not deferrable. Note that deferrable constraints cannot be used as conflict arbitrators in an INSERT statement that - includes an ON CONFLICT DO UPDATE clause. + includes an ON CONFLICT DO UPDATE clause. @@ -1003,16 +1003,16 @@ FROM ( { numeric_literal | for more - information. The WITH clause for a - table can also include OIDS=TRUE (or just OIDS) + information. The WITH clause for a + table can also include OIDS=TRUE (or just OIDS) to specify that rows of the new table should have OIDs (object identifiers) assigned to them, or - OIDS=FALSE to specify that the rows should not have OIDs. 
- If OIDS is not specified, the default setting depends upon + OIDS=FALSE to specify that the rows should not have OIDs. + If OIDS is not specified, the default setting depends upon the configuration parameter. (If the new table inherits from any tables that have OIDs, then - OIDS=TRUE is forced even if the command says - OIDS=FALSE.) + OIDS=TRUE is forced even if the command says + OIDS=FALSE.) @@ -1035,14 +1035,14 @@ FROM ( { numeric_literal | - WITH OIDS - WITHOUT OIDS + WITH OIDS + WITHOUT OIDS - These are obsolescent syntaxes equivalent to WITH (OIDS) - and WITH (OIDS=FALSE), respectively. If you wish to give - both an OIDS setting and storage parameters, you must use - the WITH ( ... ) syntax; see above. + These are obsolescent syntaxes equivalent to WITH (OIDS) + and WITH (OIDS=FALSE), respectively. If you wish to give + both an OIDS setting and storage parameters, you must use + the WITH ( ... ) syntax; see above. @@ -1110,7 +1110,7 @@ FROM ( { numeric_literal | This clause allows selection of the tablespace in which the index associated with a UNIQUE, PRIMARY - KEY, or EXCLUDE constraint will be created. + KEY, or EXCLUDE constraint will be created. If not specified, is consulted, or if the table is temporary. @@ -1128,16 +1128,16 @@ FROM ( { numeric_literal | - The WITH clause can specify storage parameters + The WITH clause can specify storage parameters for tables, and for indexes associated with a UNIQUE, - PRIMARY KEY, or EXCLUDE constraint. + PRIMARY KEY, or EXCLUDE constraint. Storage parameters for indexes are documented in . The storage parameters currently available for tables are listed below. For many of these parameters, as shown, there is an additional parameter with the same name prefixed with toast., which controls the behavior of the - table's secondary TOAST table, if any + table's secondary TOAST table, if any (see for more information about TOAST). If a table parameter value is set and the equivalent toast. parameter is not, the TOAST table @@ -1149,14 +1149,14 @@ FROM ( { numeric_literal | - fillfactor (integer) + fillfactor (integer) The fillfactor for a table is a percentage between 10 and 100. 100 (complete packing) is the default. When a smaller fillfactor - is specified, INSERT operations pack table pages only + is specified, INSERT operations pack table pages only to the indicated percentage; the remaining space on each page is - reserved for updating rows on that page. This gives UPDATE + reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page as the original, which is more efficient than placing it on a different page. For a table whose entries are never updated, complete packing is the @@ -1167,7 +1167,7 @@ FROM ( { numeric_literal | - parallel_workers (integer) + parallel_workers (integer) This sets the number of workers that should be used to assist a parallel @@ -1180,12 +1180,12 @@ FROM ( { numeric_literal | - autovacuum_enabled, toast.autovacuum_enabled (boolean) + autovacuum_enabled, toast.autovacuum_enabled (boolean) Enables or disables the autovacuum daemon for a particular table. - If true, the autovacuum daemon will perform automatic VACUUM - and/or ANALYZE operations on this table following the rules + If true, the autovacuum daemon will perform automatic VACUUM + and/or ANALYZE operations on this table following the rules discussed in . If false, this table will not be autovacuumed, except to prevent transaction ID wraparound. 
See for @@ -1194,14 +1194,14 @@ FROM ( { numeric_literal | parameter is false; setting individual tables' storage parameters does not override that. Therefore there is seldom much point in explicitly - setting this storage parameter to true, only - to false. + setting this storage parameter to true, only + to false. - autovacuum_vacuum_threshold, toast.autovacuum_vacuum_threshold (integer) + autovacuum_vacuum_threshold, toast.autovacuum_vacuum_threshold (integer) Per-table value for @@ -1211,7 +1211,7 @@ FROM ( { numeric_literal | - autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor (float4) + autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor (float4) Per-table value for @@ -1221,7 +1221,7 @@ FROM ( { numeric_literal | - autovacuum_analyze_threshold (integer) + autovacuum_analyze_threshold (integer) Per-table value for @@ -1231,7 +1231,7 @@ FROM ( { numeric_literal | - autovacuum_analyze_scale_factor (float4) + autovacuum_analyze_scale_factor (float4) Per-table value for @@ -1241,7 +1241,7 @@ FROM ( { numeric_literal | - autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay (integer) + autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay (integer) Per-table value for @@ -1251,7 +1251,7 @@ FROM ( { numeric_literal | - autovacuum_vacuum_cost_limit, toast.autovacuum_vacuum_cost_limit (integer) + autovacuum_vacuum_cost_limit, toast.autovacuum_vacuum_cost_limit (integer) Per-table value for @@ -1261,12 +1261,12 @@ FROM ( { numeric_literal | - autovacuum_freeze_min_age, toast.autovacuum_freeze_min_age (integer) + autovacuum_freeze_min_age, toast.autovacuum_freeze_min_age (integer) Per-table value for parameter. Note that autovacuum will ignore - per-table autovacuum_freeze_min_age parameters that are + per-table autovacuum_freeze_min_age parameters that are larger than half the system-wide setting. @@ -1274,12 +1274,12 @@ FROM ( { numeric_literal | - autovacuum_freeze_max_age, toast.autovacuum_freeze_max_age (integer) + autovacuum_freeze_max_age, toast.autovacuum_freeze_max_age (integer) Per-table value for parameter. Note that autovacuum will ignore - per-table autovacuum_freeze_max_age parameters that are + per-table autovacuum_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller). @@ -1301,7 +1301,7 @@ FROM ( { numeric_literal | Per-table value for parameter. Note that autovacuum will ignore - per-table autovacuum_multixact_freeze_min_age parameters + per-table autovacuum_multixact_freeze_min_age parameters that are larger than half the system-wide setting. @@ -1316,7 +1316,7 @@ FROM ( { numeric_literal | parameter. Note that autovacuum will ignore - per-table autovacuum_multixact_freeze_max_age parameters + per-table autovacuum_multixact_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller). @@ -1369,11 +1369,11 @@ FROM ( { numeric_literal | oid column of that table, to ensure that + on the oid column of that table, to ensure that OIDs in the table will indeed uniquely identify rows even after counter wraparound. Avoid assuming that OIDs are unique across tables; if you need a database-wide unique identifier, use the - combination of tableoid and row OID for the + combination of tableoid and row OID for the purpose. 
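A minimal sketch of per-table storage parameters, assuming a hypothetical measurements table:

CREATE TABLE measurements (
    logdate date,
    reading numeric
) WITH (
    fillfactor = 90,                        -- pack pages to 90%, leaving room for updates
    autovacuum_vacuum_scale_factor = 0.05,  -- vacuum once roughly 5% of the rows are dead
    toast.autovacuum_enabled = false        -- applies to the table's TOAST table only
);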
@@ -1411,8 +1411,8 @@ FROM ( { numeric_literal | Examples - Create table films and table - distributors: + Create table films and table + distributors: CREATE TABLE films ( @@ -1484,7 +1484,7 @@ CREATE TABLE distributors ( Define a primary key table constraint for the table - films: + films: CREATE TABLE films ( @@ -1501,7 +1501,7 @@ CREATE TABLE films ( Define a primary key constraint for table - distributors. The following two examples are + distributors. The following two examples are equivalent, the first using the table constraint syntax, the second the column constraint syntax: @@ -1537,7 +1537,7 @@ CREATE TABLE distributors ( - Define two NOT NULL column constraints on the table + Define two NOT NULL column constraints on the table distributors, one of which is explicitly given a name: @@ -1585,7 +1585,7 @@ WITH (fillfactor=70); - Create table circles with an exclusion + Create table circles with an exclusion constraint that prevents any two circles from overlapping: @@ -1597,7 +1597,7 @@ CREATE TABLE circles ( - Create table cinemas in tablespace diskvol1: + Create table cinemas in tablespace diskvol1: CREATE TABLE cinemas ( @@ -1761,8 +1761,8 @@ CREATE TABLE cities_partdef The ON COMMIT clause for temporary tables also resembles the SQL standard, but has some differences. - If the ON COMMIT clause is omitted, SQL specifies that the - default behavior is ON COMMIT DELETE ROWS. However, the + If the ON COMMIT clause is omitted, SQL specifies that the + default behavior is ON COMMIT DELETE ROWS. However, the default behavior in PostgreSQL is ON COMMIT PRESERVE ROWS. The ON COMMIT DROP option does not exist in SQL. @@ -1773,15 +1773,15 @@ CREATE TABLE cities_partdef Non-deferred Uniqueness Constraints - When a UNIQUE or PRIMARY KEY constraint is + When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL checks for uniqueness immediately whenever a row is inserted or modified. The SQL standard says that uniqueness should be enforced only at the end of the statement; this makes a difference when, for example, a single command updates multiple key values. To obtain standard-compliant behavior, declare the constraint as - DEFERRABLE but not deferred (i.e., INITIALLY - IMMEDIATE). Be aware that this can be significantly slower than + DEFERRABLE but not deferred (i.e., INITIALLY + IMMEDIATE). Be aware that this can be significantly slower than immediate uniqueness checking. @@ -1790,8 +1790,8 @@ CREATE TABLE cities_partdef Column Check Constraints - The SQL standard says that CHECK column constraints - can only refer to the column they apply to; only CHECK + The SQL standard says that CHECK column constraints + can only refer to the column they apply to; only CHECK table constraints can refer to multiple columns. PostgreSQL does not enforce this restriction; it treats column and table check constraints alike. @@ -1802,7 +1802,7 @@ CREATE TABLE cities_partdef <literal>EXCLUDE</literal> Constraint - The EXCLUDE constraint type is a + The EXCLUDE constraint type is a PostgreSQL extension. @@ -1811,7 +1811,7 @@ CREATE TABLE cities_partdef <literal>NULL</literal> <quote>Constraint</quote> - The NULL constraint (actually a + The NULL constraint (actually a non-constraint) is a PostgreSQL extension to the SQL standard that is included for compatibility with some other database systems (and for symmetry with the NOT @@ -1838,11 +1838,11 @@ CREATE TABLE cities_partdef PostgreSQL allows a table of no columns - to be created (for example, CREATE TABLE foo();). 
This + to be created (for example, CREATE TABLE foo();). This is an extension from the SQL standard, which does not allow zero-column tables. Zero-column tables are not in themselves very useful, but disallowing them creates odd special cases for ALTER TABLE - DROP COLUMN, so it seems cleaner to ignore this spec restriction. + DROP COLUMN, so it seems cleaner to ignore this spec restriction. @@ -1861,10 +1861,10 @@ CREATE TABLE cities_partdef - <literal>LIKE</> Clause + <literal>LIKE</literal> Clause - While a LIKE clause exists in the SQL standard, many of the + While a LIKE clause exists in the SQL standard, many of the options that PostgreSQL accepts for it are not in the standard, and some of the standard's options are not implemented by PostgreSQL. @@ -1872,10 +1872,10 @@ CREATE TABLE cities_partdef - <literal>WITH</> Clause + <literal>WITH</literal> Clause - The WITH clause is a PostgreSQL + The WITH clause is a PostgreSQL extension; neither storage parameters nor OIDs are in the standard. @@ -1904,19 +1904,19 @@ CREATE TABLE cities_partdef - <literal>PARTITION BY</> Clause + <literal>PARTITION BY</literal> Clause - The PARTITION BY clause is a + The PARTITION BY clause is a PostgreSQL extension. - <literal>PARTITION OF</> Clause + <literal>PARTITION OF</literal> Clause - The PARTITION OF clause is a + The PARTITION OF clause is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml index 0fa28a11fa..8198442a97 100644 --- a/doc/src/sgml/ref/create_table_as.sgml +++ b/doc/src/sgml/ref/create_table_as.sgml @@ -71,7 +71,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI - TEMPORARY or TEMP + TEMPORARY or TEMP If specified, the table is created as a temporary table. @@ -81,7 +81,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI - UNLOGGED + UNLOGGED If specified, the table is created as an unlogged table. @@ -91,7 +91,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a relation with the same name already exists. @@ -127,25 +127,25 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI This clause specifies optional storage parameters for the new table; see for more - information. The WITH clause - can also include OIDS=TRUE (or just OIDS) + information. The WITH clause + can also include OIDS=TRUE (or just OIDS) to specify that rows of the new table should have OIDs (object identifiers) assigned to them, or - OIDS=FALSE to specify that the rows should not have OIDs. + OIDS=FALSE to specify that the rows should not have OIDs. See for more information. - WITH OIDS - WITHOUT OIDS + WITH OIDS + WITHOUT OIDS - These are obsolescent syntaxes equivalent to WITH (OIDS) - and WITH (OIDS=FALSE), respectively. If you wish to give - both an OIDS setting and storage parameters, you must use - the WITH ( ... ) syntax; see above. + These are obsolescent syntaxes equivalent to WITH (OIDS) + and WITH (OIDS=FALSE), respectively. If you wish to give + both an OIDS setting and storage parameters, you must use + the WITH ( ... ) syntax; see above. @@ -214,14 +214,14 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI A , TABLE, or command, or an command that runs a - prepared SELECT, TABLE, or - VALUES query. + prepared SELECT, TABLE, or + VALUES query. 
- WITH [ NO ] DATA + WITH [ NO ] DATA This clause specifies whether or not the data produced by the query @@ -241,7 +241,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI This command is functionally similar to , but it is preferred since it is less likely to be confused with other uses of - the SELECT INTO syntax. Furthermore, CREATE + the SELECT INTO syntax. Furthermore, CREATE TABLE AS offers a superset of the functionality offered by SELECT INTO. @@ -315,7 +315,7 @@ CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS - PostgreSQL handles temporary tables in a way + PostgreSQL handles temporary tables in a way rather different from the standard; see for details. @@ -324,7 +324,7 @@ CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS - The WITH clause is a PostgreSQL + The WITH clause is a PostgreSQL extension; neither storage parameters nor OIDs are in the standard. diff --git a/doc/src/sgml/ref/create_tablespace.sgml b/doc/src/sgml/ref/create_tablespace.sgml index 2fed29ffaf..4d95cac9e5 100644 --- a/doc/src/sgml/ref/create_tablespace.sgml +++ b/doc/src/sgml/ref/create_tablespace.sgml @@ -45,9 +45,9 @@ CREATE TABLESPACE tablespace_name A user with appropriate privileges can pass - tablespace_name to - CREATE DATABASE, CREATE TABLE, - CREATE INDEX or ADD CONSTRAINT to have the data + tablespace_name to + CREATE DATABASE, CREATE TABLE, + CREATE INDEX or ADD CONSTRAINT to have the data files for these objects stored within the specified tablespace. @@ -93,7 +93,7 @@ CREATE TABLESPACE tablespace_name The directory that will be used for the tablespace. The directory should be empty and must be owned by the - PostgreSQL system user. The directory must be + PostgreSQL system user. The directory must be specified by an absolute path name. @@ -104,8 +104,8 @@ CREATE TABLESPACE tablespace_name A tablespace parameter to be set or reset. Currently, the only - available parameters are seq_page_cost, - random_page_cost and effective_io_concurrency. + available parameters are seq_page_cost, + random_page_cost and effective_io_concurrency. Setting either value for a particular tablespace will override the planner's usual estimate of the cost of reading pages from tables in that tablespace, as established by the configuration parameters of the @@ -128,7 +128,7 @@ CREATE TABLESPACE tablespace_name - CREATE TABLESPACE cannot be executed inside a transaction + CREATE TABLESPACE cannot be executed inside a transaction block. @@ -137,15 +137,15 @@ CREATE TABLESPACE tablespace_name Examples - Create a tablespace dbspace at /data/dbs: + Create a tablespace dbspace at /data/dbs: CREATE TABLESPACE dbspace LOCATION '/data/dbs'; - Create a tablespace indexspace at /data/indexes - owned by user genevieve: + Create a tablespace indexspace at /data/indexes + owned by user genevieve: CREATE TABLESPACE indexspace OWNER genevieve LOCATION '/data/indexes'; @@ -155,7 +155,7 @@ CREATE TABLESPACE indexspace OWNER genevieve LOCATION '/data/indexes'; Compatibility - CREATE TABLESPACE is a PostgreSQL + CREATE TABLESPACE is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index 7fc481d9fc..6726e3c766 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -86,10 +86,10 @@ CREATE [ CONSTRAINT ] TRIGGER name - Triggers that are specified to fire INSTEAD OF the trigger - event must be marked FOR EACH ROW, and can only be defined - on views. 
BEFORE and AFTER triggers on a view - must be marked as FOR EACH STATEMENT. + Triggers that are specified to fire INSTEAD OF the trigger + event must be marked FOR EACH ROW, and can only be defined + on views. BEFORE and AFTER triggers on a view + must be marked as FOR EACH STATEMENT. @@ -115,35 +115,35 @@ CREATE [ CONSTRAINT ] TRIGGER name - BEFORE - INSERT/UPDATE/DELETE + BEFORE + INSERT/UPDATE/DELETE Tables and foreign tables Tables, views, and foreign tables - TRUNCATE + TRUNCATE Tables - AFTER - INSERT/UPDATE/DELETE + AFTER + INSERT/UPDATE/DELETE Tables and foreign tables Tables, views, and foreign tables - TRUNCATE + TRUNCATE Tables - INSTEAD OF - INSERT/UPDATE/DELETE + INSTEAD OF + INSERT/UPDATE/DELETE Views - TRUNCATE + TRUNCATE @@ -152,11 +152,11 @@ CREATE [ CONSTRAINT ] TRIGGER name - Also, a trigger definition can specify a Boolean WHEN + Also, a trigger definition can specify a Boolean WHEN condition, which will be tested to see whether the trigger should - be fired. In row-level triggers the WHEN condition can + be fired. In row-level triggers the WHEN condition can examine the old and/or new values of columns of the row. Statement-level - triggers can also have WHEN conditions, although the feature + triggers can also have WHEN conditions, although the feature is not so useful for them since the condition cannot refer to any values in the table. @@ -167,36 +167,36 @@ CREATE [ CONSTRAINT ] TRIGGER name - When the CONSTRAINT option is specified, this command creates a - constraint trigger. This is the same as a regular trigger + When the CONSTRAINT option is specified, this command creates a + constraint trigger. This is the same as a regular trigger except that the timing of the trigger firing can be adjusted using . - Constraint triggers must be AFTER ROW triggers on plain + Constraint triggers must be AFTER ROW triggers on plain tables (not foreign tables). They can be fired either at the end of the statement causing the triggering event, or at the end of the containing transaction; in the latter case they - are said to be deferred. A pending deferred-trigger firing + are said to be deferred. A pending deferred-trigger firing can also be forced to happen immediately by using SET - CONSTRAINTS. Constraint triggers are expected to raise an exception + CONSTRAINTS. Constraint triggers are expected to raise an exception when the constraints they implement are violated. - The REFERENCING option enables collection - of transition relations, which are row sets that include all + The REFERENCING option enables collection + of transition relations, which are row sets that include all of the rows inserted, deleted, or modified by the current SQL statement. This feature lets the trigger see a global view of what the statement did, not just one row at a time. This option is only allowed for - an AFTER trigger that is not a constraint trigger; also, if - the trigger is an UPDATE trigger, it must not specify + an AFTER trigger that is not a constraint trigger; also, if + the trigger is an UPDATE trigger, it must not specify a column_name list. - OLD TABLE may only be specified once, and only for a trigger - that can fire on UPDATE or DELETE; it creates a - transition relation containing the before-images of all rows + OLD TABLE may only be specified once, and only for a trigger + that can fire on UPDATE or DELETE; it creates a + transition relation containing the before-images of all rows updated or deleted by the statement. 
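A sketch of a statement-level AFTER trigger that collects both transition relations, assuming a hypothetical accounts table and an existing log_account_changes() trigger function:

CREATE TRIGGER accounts_audit
    AFTER UPDATE ON accounts
    REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE log_account_changes();

-- inside log_account_changes(), old_rows and new_rows can be read like ordinary
-- tables, giving the before- and after-images of every row changed by the statement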
- Similarly, NEW TABLE may only be specified once, and only for - a trigger that can fire on UPDATE or INSERT; - it creates a transition relation containing the after-images + Similarly, NEW TABLE may only be specified once, and only for + a trigger that can fire on UPDATE or INSERT; + it creates a transition relation containing the after-images of all rows updated or inserted by the statement. @@ -225,7 +225,7 @@ CREATE [ CONSTRAINT ] TRIGGER name The name cannot be schema-qualified — the trigger inherits the schema of its table. For a constraint trigger, this is also the name to use when modifying the trigger's behavior using - SET CONSTRAINTS. + SET CONSTRAINTS. @@ -238,7 +238,7 @@ CREATE [ CONSTRAINT ] TRIGGER name Determines whether the function is called before, after, or instead of the event. A constraint trigger can only be specified as - AFTER. + AFTER. @@ -261,11 +261,11 @@ CREATE [ CONSTRAINT ] TRIGGER name UPDATE OF column_name1 [, column_name2 ... ] The trigger will only fire if at least one of the listed columns - is mentioned as a target of the UPDATE command. + is mentioned as a target of the UPDATE command. - INSTEAD OF UPDATE events do not allow a list of columns. + INSTEAD OF UPDATE events do not allow a list of columns. A column list cannot be specified when requesting transition relations, either. @@ -352,7 +352,7 @@ UPDATE OF column_name1 [, column_name2FOR EACH STATEMENT is the default. Constraint triggers can only - be specified FOR EACH ROW. + be specified FOR EACH ROW. @@ -362,20 +362,20 @@ UPDATE OF column_name1 [, column_name2 A Boolean expression that determines whether the trigger function - will actually be executed. If WHEN is specified, the + will actually be executed. If WHEN is specified, the function will only be called if the condition returns true. - In FOR EACH ROW triggers, the WHEN + class="parameter">condition returns true. + In FOR EACH ROW triggers, the WHEN condition can refer to columns of the old and/or new row values by writing OLD.column_name or NEW.column_name respectively. - Of course, INSERT triggers cannot refer to OLD - and DELETE triggers cannot refer to NEW. + Of course, INSERT triggers cannot refer to OLD + and DELETE triggers cannot refer to NEW. - INSTEAD OF triggers do not support WHEN + INSTEAD OF triggers do not support WHEN conditions. @@ -385,7 +385,7 @@ UPDATE OF column_name1 [, column_name2 - Note that for constraint triggers, evaluation of the WHEN + Note that for constraint triggers, evaluation of the WHEN condition is not deferred, but occurs immediately after the row update operation is performed. If the condition does not evaluate to true then the trigger is not queued for deferred execution. @@ -398,7 +398,7 @@ UPDATE OF column_name1 [, column_name2 A user-supplied function that is declared as taking no arguments - and returning type trigger, which is executed when + and returning type trigger, which is executed when the trigger fires. @@ -438,32 +438,32 @@ UPDATE OF column_name1 [, column_name2 A column-specific trigger (one defined using the UPDATE OF column_name syntax) will fire when any - of its columns are listed as targets in the UPDATE - command's SET list. It is possible for a column's value + of its columns are listed as targets in the UPDATE + command's SET list. It is possible for a column's value to change even when the trigger is not fired, because changes made to the - row's contents by BEFORE UPDATE triggers are not considered. - Conversely, a command such as UPDATE ... SET x = x ... 
- will fire a trigger on column x, even though the column's + row's contents by BEFORE UPDATE triggers are not considered. + Conversely, a command such as UPDATE ... SET x = x ... + will fire a trigger on column x, even though the column's value did not change. - In a BEFORE trigger, the WHEN condition is + In a BEFORE trigger, the WHEN condition is evaluated just before the function is or would be executed, so using - WHEN is not materially different from testing the same + WHEN is not materially different from testing the same condition at the beginning of the trigger function. Note in particular - that the NEW row seen by the condition is the current value, - as possibly modified by earlier triggers. Also, a BEFORE - trigger's WHEN condition is not allowed to examine the - system columns of the NEW row (such as oid), + that the NEW row seen by the condition is the current value, + as possibly modified by earlier triggers. Also, a BEFORE + trigger's WHEN condition is not allowed to examine the + system columns of the NEW row (such as oid), because those won't have been set yet. - In an AFTER trigger, the WHEN condition is + In an AFTER trigger, the WHEN condition is evaluated just after the row update occurs, and it determines whether an event is queued to fire the trigger at the end of statement. So when an - AFTER trigger's WHEN condition does not return + AFTER trigger's WHEN condition does not return true, it is not necessary to queue an event nor to re-fetch the row at end of statement. This can result in significant speedups in statements that modify many rows, if the trigger only needs to be fired for a few of the @@ -473,7 +473,7 @@ UPDATE OF column_name1 [, column_name2 In some cases it is possible for a single SQL command to fire more than one kind of trigger. For instance an INSERT with - an ON CONFLICT DO UPDATE clause may cause both insert and + an ON CONFLICT DO UPDATE clause may cause both insert and update operations, so it will fire both kinds of triggers as needed. The transition relations supplied to triggers are specific to their event type; thus an INSERT trigger @@ -483,14 +483,14 @@ UPDATE OF column_name1 [, column_name2 Row updates or deletions caused by foreign-key enforcement actions, such - as ON UPDATE CASCADE or ON DELETE SET NULL, are + as ON UPDATE CASCADE or ON DELETE SET NULL, are treated as part of the SQL command that caused them (note that such actions are never deferred). Relevant triggers on the affected table will be fired, so that this provides another way in which a SQL command might fire triggers not directly matching its type. In simple cases, triggers that request transition relations will see all changes caused in their table by a single original SQL command as a single transition relation. - However, there are cases in which the presence of an AFTER ROW + However, there are cases in which the presence of an AFTER ROW trigger that requests transition relations will cause the foreign-key enforcement actions triggered by a single SQL command to be split into multiple steps, each with its own transition relation(s). In such cases, @@ -516,10 +516,10 @@ UPDATE OF column_name1 [, column_name2 In PostgreSQL versions before 7.3, it was necessary to declare trigger functions as returning the placeholder - type opaque, rather than trigger. To support loading - of old dump files, CREATE TRIGGER will accept a function - declared as returning opaque, but it will issue a notice and - change the function's declared return type to trigger. 
+ type opaque, rather than trigger. To support loading + of old dump files, CREATE TRIGGER will accept a function + declared as returning opaque, but it will issue a notice and + change the function's declared return type to trigger. @@ -527,8 +527,8 @@ UPDATE OF column_name1 [, column_name2Examples - Execute the function check_account_update whenever - a row of the table accounts is about to be updated: + Execute the function check_account_update whenever + a row of the table accounts is about to be updated: CREATE TRIGGER check_update @@ -537,8 +537,8 @@ CREATE TRIGGER check_update EXECUTE PROCEDURE check_account_update(); - The same, but only execute the function if column balance - is specified as a target in the UPDATE command: + The same, but only execute the function if column balance + is specified as a target in the UPDATE command: CREATE TRIGGER check_update @@ -547,7 +547,7 @@ CREATE TRIGGER check_update EXECUTE PROCEDURE check_account_update(); - This form only executes the function if column balance + This form only executes the function if column balance has in fact changed value: @@ -558,7 +558,7 @@ CREATE TRIGGER check_update EXECUTE PROCEDURE check_account_update(); - Call a function to log updates of accounts, but only if + Call a function to log updates of accounts, but only if something changed: @@ -569,7 +569,7 @@ CREATE TRIGGER log_update EXECUTE PROCEDURE log_account_update(); - Execute the function view_insert_row for each row to insert + Execute the function view_insert_row for each row to insert rows into the tables underlying a view: @@ -579,8 +579,8 @@ CREATE TRIGGER view_insert EXECUTE PROCEDURE view_insert_row(); - Execute the function check_transfer_balances_to_zero for each - statement to confirm that the transfer rows offset to a net of + Execute the function check_transfer_balances_to_zero for each + statement to confirm that the transfer rows offset to a net of zero: @@ -591,7 +591,7 @@ CREATE TRIGGER transfer_insert EXECUTE PROCEDURE check_transfer_balances_to_zero(); - Execute the function check_matching_pairs for each row to + Execute the function check_matching_pairs for each row to confirm that changes are made to matching pairs at the same time (by the same statement): @@ -624,27 +624,27 @@ CREATE TRIGGER paired_items_update The CREATE TRIGGER statement in PostgreSQL implements a subset of the - SQL standard. The following functionalities are currently + SQL standard. The following functionalities are currently missing: - While transition table names for AFTER triggers are - specified using the REFERENCING clause in the standard way, - the row variables used in FOR EACH ROW triggers may not be - specified in a REFERENCING clause. They are available in a + While transition table names for AFTER triggers are + specified using the REFERENCING clause in the standard way, + the row variables used in FOR EACH ROW triggers may not be + specified in a REFERENCING clause. They are available in a manner that is dependent on the language in which the trigger function is written, but is fixed for any one language. Some languages - effectively behave as though there is a REFERENCING clause - containing OLD ROW AS OLD NEW ROW AS NEW. + effectively behave as though there is a REFERENCING clause + containing OLD ROW AS OLD NEW ROW AS NEW. 
The standard allows transition tables to be used with - column-specific UPDATE triggers, but then the set of rows + column-specific UPDATE triggers, but then the set of rows that should be visible in the transition tables depends on the trigger's column list. This is not currently implemented by PostgreSQL. @@ -673,7 +673,7 @@ CREATE TRIGGER paired_items_update SQL specifies that BEFORE DELETE triggers on cascaded - deletes fire after the cascaded DELETE completes. + deletes fire after the cascaded DELETE completes. The PostgreSQL behavior is for BEFORE DELETE to always fire before the delete action, even a cascading one. This is considered more consistent. There is also nonstandard @@ -685,19 +685,19 @@ CREATE TRIGGER paired_items_update The ability to specify multiple actions for a single trigger using - OR is a PostgreSQL extension of + OR is a PostgreSQL extension of the SQL standard. The ability to fire triggers for TRUNCATE is a - PostgreSQL extension of the SQL standard, as is the + PostgreSQL extension of the SQL standard, as is the ability to define statement-level triggers on views. CREATE CONSTRAINT TRIGGER is a - PostgreSQL extension of the SQL + PostgreSQL extension of the SQL standard. diff --git a/doc/src/sgml/ref/create_tsconfig.sgml b/doc/src/sgml/ref/create_tsconfig.sgml index 63321520df..d1792e5d29 100644 --- a/doc/src/sgml/ref/create_tsconfig.sgml +++ b/doc/src/sgml/ref/create_tsconfig.sgml @@ -99,7 +99,7 @@ CREATE TEXT SEARCH CONFIGURATION nameNotes - The PARSER and COPY options are mutually + The PARSER and COPY options are mutually exclusive, because when an existing configuration is copied, its parser selection is copied too. diff --git a/doc/src/sgml/ref/create_tstemplate.sgml b/doc/src/sgml/ref/create_tstemplate.sgml index 360ad41f35..e10f18b28b 100644 --- a/doc/src/sgml/ref/create_tstemplate.sgml +++ b/doc/src/sgml/ref/create_tstemplate.sgml @@ -49,7 +49,7 @@ CREATE TEXT SEARCH TEMPLATE name ( TEMPLATE. This restriction is made because an erroneous text search template definition could confuse or even crash the server. The reason for separating templates from dictionaries is that a template - encapsulates the unsafe aspects of defining a dictionary. + encapsulates the unsafe aspects of defining a dictionary. The parameters that can be set when defining a dictionary are safe for unprivileged users to set, and so creating a dictionary need not be a privileged operation. diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml index 312bd050bc..02ca27b281 100644 --- a/doc/src/sgml/ref/create_type.sgml +++ b/doc/src/sgml/ref/create_type.sgml @@ -81,8 +81,8 @@ CREATE TYPE name There are five forms of CREATE TYPE, as shown in the syntax synopsis above. They respectively create a composite - type, an enum type, a range type, a - base type, or a shell type. The first four + type, an enum type, a range type, a + base type, or a shell type. The first four of these are discussed in turn below. A shell type is simply a placeholder for a type to be defined later; it is created by issuing CREATE TYPE with no parameters except for the type name. Shell types @@ -154,7 +154,7 @@ CREATE TYPE name declared. To do this, you must first create a shell type, which is a placeholder type that has no properties except a name and an owner. This is done by issuing the command CREATE TYPE - name, with no additional parameters. Then + name, with no additional parameters. 
Then the function can be declared using the shell type as argument and result, and finally the range type can be declared using the same name. This automatically replaces the shell type entry with a valid range type. @@ -211,7 +211,7 @@ CREATE TYPE name The first argument is the input text as a C string, the second argument is the type's own OID (except for array types, which instead receive their element type's OID), - and the third is the typmod of the destination column, if known + and the third is the typmod of the destination column, if known (-1 will be passed if not). The input function must return a value of the data type itself. Usually, an input function should be declared STRICT; if it is not, @@ -264,12 +264,12 @@ CREATE TYPE name You should at this point be wondering how the input and output functions can be declared to have results or arguments of the new type, when they have to be created before the new type can be created. The answer is that - the type should first be defined as a shell type, which is a + the type should first be defined as a shell type, which is a placeholder type that has no properties except a name and an owner. This is done by issuing the command CREATE TYPE - name, with no additional parameters. Then the + name, with no additional parameters. Then the C I/O functions can be defined referencing the shell type. Finally, - CREATE TYPE with a full definition replaces the shell entry + CREATE TYPE with a full definition replaces the shell entry with a complete, valid type definition, after which the new type can be used normally. @@ -279,23 +279,23 @@ CREATE TYPE name type_modifier_input_function and type_modifier_output_function are needed if the type supports modifiers, that is optional constraints - attached to a type declaration, such as char(5) or - numeric(30,2). PostgreSQL allows + attached to a type declaration, such as char(5) or + numeric(30,2). PostgreSQL allows user-defined types to take one or more simple constants or identifiers as modifiers. However, this information must be capable of being packed into a single non-negative integer value for storage in the system catalogs. The type_modifier_input_function - is passed the declared modifier(s) in the form of a cstring + is passed the declared modifier(s) in the form of a cstring array. It must check the values for validity (throwing an error if they are wrong), and if they are correct, return a single non-negative - integer value that will be stored as the column typmod. + integer value that will be stored as the column typmod. Type modifiers will be rejected if the type does not have a type_modifier_input_function. The type_modifier_output_function converts the internal integer typmod value back to the correct form for - user display. It must return a cstring value that is the exact - string to append to the type name; for example numeric's - function might return (30,2). + user display. It must return a cstring value that is the exact + string to append to the type name; for example numeric's + function might return (30,2). It is allowed to omit the type_modifier_output_function, in which case the default display format is just the stored typmod integer @@ -305,14 +305,14 @@ CREATE TYPE name The optional analyze_function performs type-specific statistics collection for columns of the data type. 
- By default, ANALYZE will attempt to gather statistics using - the type's equals and less-than operators, if there + By default, ANALYZE will attempt to gather statistics using + the type's equals and less-than operators, if there is a default b-tree operator class for the type. For non-scalar types this behavior is likely to be unsuitable, so it can be overridden by specifying a custom analysis function. The analysis function must be - declared to take a single argument of type internal, and return - a boolean result. The detailed API for analysis functions appears - in src/include/commands/vacuum.h. + declared to take a single argument of type internal, and return + a boolean result. The detailed API for analysis functions appears + in src/include/commands/vacuum.h. @@ -327,7 +327,7 @@ CREATE TYPE name positive integer, or variable-length, indicated by setting internallength to VARIABLE. (Internally, this is represented - by setting typlen to -1.) The internal representation of all + by setting typlen to -1.) The internal representation of all variable-length types must start with a 4-byte integer giving the total length of this value of the type. (Note that the length field is often encoded, as described in ; it's unwise @@ -338,7 +338,7 @@ CREATE TYPE name The optional flag PASSEDBYVALUE indicates that values of this data type are passed by value, rather than by reference. Types passed by value must be fixed-length, and their internal - representation cannot be larger than the size of the Datum type + representation cannot be larger than the size of the Datum type (4 bytes on some machines, 8 bytes on others). @@ -347,7 +347,7 @@ CREATE TYPE name specifies the storage alignment required for the data type. The allowed values equate to alignment on 1, 2, 4, or 8 byte boundaries. Note that variable-length types must have an alignment of at least - 4, since they necessarily contain an int4 as their first component. + 4, since they necessarily contain an int4 as their first component. @@ -372,12 +372,12 @@ CREATE TYPE name All storage values other than plain imply that the functions of the data type - can handle values that have been toasted, as described + can handle values that have been toasted, as described in and . The specific other value given merely determines the default TOAST storage strategy for columns of a toastable data type; users can pick other strategies for individual columns using ALTER TABLE - SET STORAGE. + SET STORAGE. @@ -389,9 +389,9 @@ CREATE TYPE name alignment, and storage are copied from the named type. (It is possible, though usually undesirable, to override - some of these values by specifying them along with the LIKE + some of these values by specifying them along with the LIKE clause.) Specifying representation this way is especially useful when - the low-level implementation of the new type piggybacks on an + the low-level implementation of the new type piggybacks on an existing type in some fashion. @@ -400,7 +400,7 @@ CREATE TYPE name preferred parameters can be used to help control which implicit cast will be applied in ambiguous situations. Each data type belongs to a category named by a single ASCII - character, and each type is either preferred or not within its + character, and each type is either preferred or not within its category. The parser will prefer casting to preferred types (but only from other types within the same category) when this rule is helpful in resolving overloaded functions or operators. 
For more details see name other types, it is sufficient to leave these settings at the defaults. However, for a group of related types that have implicit casts, it is often helpful to mark them all as belonging to a category and select one or two - of the most general types as being preferred within the category. + of the most general types as being preferred within the category. The category parameter is especially useful when adding a user-defined type to an existing built-in category, such as the numeric or string types. However, it is also @@ -426,7 +426,7 @@ CREATE TYPE name To indicate that a type is an array, specify the type of the array - elements using the ELEMENT key word. For example, to + elements using the ELEMENT key word. For example, to define an array of 4-byte integers (int4), specify ELEMENT = int4. More details about array types appear below. @@ -465,26 +465,26 @@ CREATE TYPE name so generated collides with an existing type name, the process is repeated until a non-colliding name is found.) This implicitly-created array type is variable length and uses the - built-in input and output functions array_in and - array_out. The array type tracks any changes in its + built-in input and output functions array_in and + array_out. The array type tracks any changes in its element type's owner or schema, and is dropped if the element type is. - You might reasonably ask why there is an option, if the system makes the correct array type automatically. - The only case where it's useful to use is when you are making a fixed-length type that happens to be internally an array of a number of identical things, and you want to allow these things to be accessed directly by subscripting, in addition to whatever operations you plan - to provide for the type as a whole. For example, type point + to provide for the type as a whole. For example, type point is represented as just two floating-point numbers, which can be accessed - using point[0] and point[1]. + using point[0] and point[1]. Note that this facility only works for fixed-length types whose internal form is exactly a sequence of identical fixed-length fields. A subscriptable variable-length type must have the generalized internal representation - used by array_in and array_out. + used by array_in and array_out. For historical reasons (i.e., this is clearly wrong but it's far too late to change it), subscripting of fixed-length array types starts from zero, rather than from one as for variable-length arrays. @@ -697,7 +697,7 @@ CREATE TYPE name alignment, and storage are copied from that type, unless overridden by explicit - specification elsewhere in this CREATE TYPE command. + specification elsewhere in this CREATE TYPE command. @@ -707,7 +707,7 @@ CREATE TYPE name The category code (a single ASCII character) for this type. - The default is 'U' for user-defined type. + The default is 'U' for user-defined type. Other standard category codes can be found in . You may also choose other ASCII characters in order to create custom categories. @@ -779,7 +779,7 @@ CREATE TYPE name This is usually not an issue for the sorts of functions that are useful in a type definition. But you might want to think twice before designing a type - in a way that would require secret information to be used + in a way that would require secret information to be used while converting it to or from external form. @@ -792,7 +792,7 @@ CREATE TYPE name this in case of maximum-length names or collisions with user type names that begin with underscore. 
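A sketch of the shell-type workflow for a base type; the type name, the I/O functions (which would have to be written in C first), and the chosen parameters are hypothetical:

CREATE TYPE mytype;              -- shell type: only a name and an owner

-- ... define mytype_in and mytype_out in C, declared against the shell type ...

CREATE TYPE mytype (
    INPUT          = mytype_in,
    OUTPUT         = mytype_out,
    INTERNALLENGTH = 16,
    ELEMENT        = float8,     -- internally two float8 fields, accessible as col[0], col[1]
    CATEGORY       = 'N'         -- resolve ambiguous casts as for numeric types
);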
Writing code that depends on this convention is therefore deprecated. Instead, use - pg_type.typarray to locate the array type + pg_type.typarray to locate the array type associated with a given type. @@ -807,7 +807,7 @@ CREATE TYPE name Before PostgreSQL version 8.2, the shell-type creation syntax - CREATE TYPE name did not exist. + CREATE TYPE name did not exist. The way to create a new base type was to create its input function first. In this approach, PostgreSQL will first see the name of the new data type as the return type of the input function. @@ -824,10 +824,10 @@ CREATE TYPE name In PostgreSQL versions before 7.3, it was customary to avoid creating a shell type at all, by replacing the functions' forward references to the type name with the placeholder - pseudo-type opaque. The cstring arguments and - results also had to be declared as opaque before 7.3. To - support loading of old dump files, CREATE TYPE will - accept I/O functions declared using opaque, but it will issue + pseudo-type opaque. The cstring arguments and + results also had to be declared as opaque before 7.3. To + support loading of old dump files, CREATE TYPE will + accept I/O functions declared using opaque, but it will issue a notice and change the function declarations to use the correct types. @@ -894,7 +894,7 @@ CREATE TABLE myboxes ( If the internal structure of box were an array of four - float4 elements, we might instead use: + float4 elements, we might instead use: CREATE TYPE box ( INTERNALLENGTH = 16, @@ -933,11 +933,11 @@ CREATE TABLE big_objs ( The first form of the CREATE TYPE command, which - creates a composite type, conforms to the SQL standard. + creates a composite type, conforms to the SQL standard. The other forms are PostgreSQL extensions. The CREATE TYPE statement in - the SQL standard also defines other forms that are not - implemented in PostgreSQL. + the SQL standard also defines other forms that are not + implemented in PostgreSQL. diff --git a/doc/src/sgml/ref/create_user.sgml b/doc/src/sgml/ref/create_user.sgml index 480b6405e6..500169da98 100644 --- a/doc/src/sgml/ref/create_user.sgml +++ b/doc/src/sgml/ref/create_user.sgml @@ -51,8 +51,8 @@ CREATE USER name [ [ WITH ] CREATE USER is now an alias for . The only difference is that when the command is spelled - CREATE USER, LOGIN is assumed - by default, whereas NOLOGIN is assumed when + CREATE USER, LOGIN is assumed + by default, whereas NOLOGIN is assumed when the command is spelled CREATE ROLE. diff --git a/doc/src/sgml/ref/create_user_mapping.sgml b/doc/src/sgml/ref/create_user_mapping.sgml index d6f29c9489..10182e1426 100644 --- a/doc/src/sgml/ref/create_user_mapping.sgml +++ b/doc/src/sgml/ref/create_user_mapping.sgml @@ -41,7 +41,7 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na The owner of a foreign server can create user mappings for that server for any user. Also, a user can create a user mapping for - their own user name if USAGE privilege on the server has + their own user name if USAGE privilege on the server has been granted to the user. @@ -51,7 +51,7 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a mapping of the given user to the given foreign @@ -67,8 +67,8 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na The name of an existing user that is mapped to foreign server. - CURRENT_USER and USER match the name of - the current user. When PUBLIC is specified, a + CURRENT_USER and USER match the name of + the current user. 
When PUBLIC is specified, a so-called public mapping is created that is used when no user-specific mapping is applicable. @@ -103,7 +103,7 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na Examples - Create a user mapping for user bob, server foo: + Create a user mapping for user bob, server foo: CREATE USER MAPPING FOR bob SERVER foo OPTIONS (user 'bob', password 'secret'); diff --git a/doc/src/sgml/ref/create_view.sgml b/doc/src/sgml/ref/create_view.sgml index 695c759312..c0dd022495 100644 --- a/doc/src/sgml/ref/create_view.sgml +++ b/doc/src/sgml/ref/create_view.sgml @@ -48,7 +48,7 @@ CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW If a schema name is given (for example, CREATE VIEW - myschema.myview ...) then the view is created in the specified + myschema.myview ...) then the view is created in the specified schema. Otherwise it is created in the current schema. Temporary views exist in a special schema, so a schema name cannot be given when creating a temporary view. The name of the view must be @@ -62,7 +62,7 @@ CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW - TEMPORARY or TEMP + TEMPORARY or TEMP If specified, the view is created as a temporary view. @@ -82,16 +82,16 @@ CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW - RECURSIVE + RECURSIVE Creates a recursive view. The syntax -CREATE RECURSIVE VIEW [ schema . ] view_name (column_names) AS SELECT ...; +CREATE RECURSIVE VIEW [ schema . ] view_name (column_names) AS SELECT ...; is equivalent to -CREATE VIEW [ schema . ] view_name AS WITH RECURSIVE view_name (column_names) AS (SELECT ...) SELECT column_names FROM view_name; +CREATE VIEW [ schema . ] view_name AS WITH RECURSIVE view_name (column_names) AS (SELECT ...) SELECT column_names FROM view_name; A view column name list must be specified for a recursive view. @@ -129,9 +129,9 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR check_option (string) - This parameter may be either local or - cascaded, and is equivalent to specifying - WITH [ CASCADED | LOCAL ] CHECK OPTION (see below). + This parameter may be either local or + cascaded, and is equivalent to specifying + WITH [ CASCADED | LOCAL ] CHECK OPTION (see below). This option can be changed on existing views using . @@ -175,12 +175,12 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR This option controls the behavior of automatically updatable views. When - this option is specified, INSERT and UPDATE + this option is specified, INSERT and UPDATE commands on the view will be checked to ensure that new rows satisfy the view-defining condition (that is, the new rows are checked to ensure that they are visible through the view). If they are not, the update will be - rejected. If the CHECK OPTION is not specified, - INSERT and UPDATE commands on the view are + rejected. If the CHECK OPTION is not specified, + INSERT and UPDATE commands on the view are allowed to create rows that are not visible through the view. The following check options are supported: @@ -191,7 +191,7 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR New rows are only checked against the conditions defined directly in the view itself. Any conditions defined on underlying base views are - not checked (unless they also specify the CHECK OPTION). + not checked (unless they also specify the CHECK OPTION). @@ -201,9 +201,9 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR New rows are checked against the conditions of the view and all - underlying base views. 
If the CHECK OPTION is specified, - and neither LOCAL nor CASCADED is specified, - then CASCADED is assumed. + underlying base views. If the CHECK OPTION is specified, + and neither LOCAL nor CASCADED is specified, + then CASCADED is assumed. @@ -211,26 +211,26 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR - The CHECK OPTION may not be used with RECURSIVE + The CHECK OPTION may not be used with RECURSIVE views. - Note that the CHECK OPTION is only supported on views that - are automatically updatable, and do not have INSTEAD OF - triggers or INSTEAD rules. If an automatically updatable - view is defined on top of a base view that has INSTEAD OF - triggers, then the LOCAL CHECK OPTION may be used to check + Note that the CHECK OPTION is only supported on views that + are automatically updatable, and do not have INSTEAD OF + triggers or INSTEAD rules. If an automatically updatable + view is defined on top of a base view that has INSTEAD OF + triggers, then the LOCAL CHECK OPTION may be used to check the conditions on the automatically updatable view, but the conditions - on the base view with INSTEAD OF triggers will not be + on the base view with INSTEAD OF triggers will not be checked (a cascaded check option will not cascade down to a trigger-updatable view, and any check options defined directly on a trigger-updatable view will be ignored). If the view or any of its base - relations has an INSTEAD rule that causes the - INSERT or UPDATE command to be rewritten, then + relations has an INSTEAD rule that causes the + INSERT or UPDATE command to be rewritten, then all check options will be ignored in the rewritten query, including any checks from automatically updatable views defined on top of the relation - with the INSTEAD rule. + with the INSTEAD rule. @@ -251,8 +251,8 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR CREATE VIEW vista AS SELECT 'Hello World'; - is bad form because the column name defaults to ?column?; - also, the column data type defaults to text, which might not + is bad form because the column name defaults to ?column?; + also, the column data type defaults to text, which might not be what you wanted. Better style for a string literal in a view's result is something like: @@ -271,7 +271,7 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; - When CREATE OR REPLACE VIEW is used on an + When CREATE OR REPLACE VIEW is used on an existing view, only the view's defining SELECT rule is changed. Other view properties, including ownership, permissions, and non-SELECT rules, remain unchanged. You must own the view @@ -287,30 +287,30 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; Simple views are automatically updatable: the system will allow - INSERT, UPDATE and DELETE statements + INSERT, UPDATE and DELETE statements to be used on the view in the same way as on a regular table. A view is automatically updatable if it satisfies all of the following conditions: - The view must have exactly one entry in its FROM list, + The view must have exactly one entry in its FROM list, which must be a table or another updatable view. - The view definition must not contain WITH, - DISTINCT, GROUP BY, HAVING, - LIMIT, or OFFSET clauses at the top level. + The view definition must not contain WITH, + DISTINCT, GROUP BY, HAVING, + LIMIT, or OFFSET clauses at the top level. - The view definition must not contain set operations (UNION, - INTERSECT or EXCEPT) at the top level. 
+ The view definition must not contain set operations (UNION, + INTERSECT or EXCEPT) at the top level. @@ -327,42 +327,42 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; An automatically updatable view may contain a mix of updatable and non-updatable columns. A column is updatable if it is a simple reference to an updatable column of the underlying base relation; otherwise the - column is read-only, and an error will be raised if an INSERT - or UPDATE statement attempts to assign a value to it. + column is read-only, and an error will be raised if an INSERT + or UPDATE statement attempts to assign a value to it. If the view is automatically updatable the system will convert any - INSERT, UPDATE or DELETE statement + INSERT, UPDATE or DELETE statement on the view into the corresponding statement on the underlying base - relation. INSERT statements that have an ON - CONFLICT UPDATE clause are fully supported. + relation. INSERT statements that have an ON + CONFLICT UPDATE clause are fully supported. - If an automatically updatable view contains a WHERE + If an automatically updatable view contains a WHERE condition, the condition restricts which rows of the base relation are - available to be modified by UPDATE and DELETE - statements on the view. However, an UPDATE is allowed to - change a row so that it no longer satisfies the WHERE + available to be modified by UPDATE and DELETE + statements on the view. However, an UPDATE is allowed to + change a row so that it no longer satisfies the WHERE condition, and thus is no longer visible through the view. Similarly, - an INSERT command can potentially insert base-relation rows - that do not satisfy the WHERE condition and thus are not - visible through the view (ON CONFLICT UPDATE may + an INSERT command can potentially insert base-relation rows + that do not satisfy the WHERE condition and thus are not + visible through the view (ON CONFLICT UPDATE may similarly affect an existing row not visible through the view). - The CHECK OPTION may be used to prevent - INSERT and UPDATE commands from creating + The CHECK OPTION may be used to prevent + INSERT and UPDATE commands from creating such rows that are not visible through the view. If an automatically updatable view is marked with the - security_barrier property then all the view's WHERE + security_barrier property then all the view's WHERE conditions (and any conditions using operators which are marked as LEAKPROOF) will always be evaluated before any conditions that a user of the view has added. See for full details. Note that, due to this, rows which are not ultimately returned (because they do not - pass the user's WHERE conditions) may still end up being locked. + pass the user's WHERE conditions) may still end up being locked. EXPLAIN can be used to see which conditions are applied at the relation level (and therefore do not lock rows) and which are not. @@ -372,7 +372,7 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; A more complex view that does not satisfy all these conditions is read-only by default: the system will not allow an insert, update, or delete on the view. You can get the effect of an updatable view by - creating INSTEAD OF triggers on the view, which must + creating INSTEAD OF triggers on the view, which must convert attempted inserts, etc. on the view into appropriate actions on other tables. For more information see . 
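To make the updatability conditions above concrete, here is a small sketch (the films table is the one used in the examples on this page; the column list and the view names are illustrative assumptions): a plain single-table view accepts data-modifying statements, while a view with GROUP BY at the top level is read-only by default.

CREATE VIEW current_comedies AS
    SELECT code, title, kind FROM films WHERE kind = 'Comedy';
-- Automatically updatable: this INSERT is converted into an INSERT on films.
INSERT INTO current_comedies VALUES ('UA503', 'Bananas', 'Comedy');

CREATE VIEW films_per_kind AS
    SELECT kind, count(*) AS total FROM films GROUP BY kind;
-- Contains GROUP BY at the top level, so it is read-only by default;
-- INSERT/UPDATE/DELETE on it would require INSTEAD OF triggers or rules.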
Another possibility is to create rules @@ -404,13 +404,13 @@ CREATE VIEW comedies AS WHERE kind = 'Comedy'; This will create a view containing the columns that are in the - film table at the time of view creation. Though - * was used to create the view, columns added later to + film table at the time of view creation. Though + * was used to create the view, columns added later to the table will not be part of the view. - Create a view with LOCAL CHECK OPTION: + Create a view with LOCAL CHECK OPTION: CREATE VIEW universal_comedies AS @@ -419,16 +419,16 @@ CREATE VIEW universal_comedies AS WHERE classification = 'U' WITH LOCAL CHECK OPTION; - This will create a view based on the comedies view, showing - only films with kind = 'Comedy' and - classification = 'U'. Any attempt to INSERT or - UPDATE a row in the view will be rejected if the new row - doesn't have classification = 'U', but the film - kind will not be checked. + This will create a view based on the comedies view, showing + only films with kind = 'Comedy' and + classification = 'U'. Any attempt to INSERT or + UPDATE a row in the view will be rejected if the new row + doesn't have classification = 'U', but the film + kind will not be checked. - Create a view with CASCADED CHECK OPTION: + Create a view with CASCADED CHECK OPTION: CREATE VIEW pg_comedies AS @@ -437,8 +437,8 @@ CREATE VIEW pg_comedies AS WHERE classification = 'PG' WITH CASCADED CHECK OPTION; - This will create a view that checks both the kind and - classification of new rows. + This will create a view that checks both the kind and + classification of new rows. @@ -454,10 +454,10 @@ CREATE VIEW comedies AS FROM films f WHERE f.kind = 'Comedy'; - This view will support INSERT, UPDATE and - DELETE. All the columns from the films table will - be updatable, whereas the computed columns country and - avg_rating will be read-only. + This view will support INSERT, UPDATE and + DELETE. All the columns from the films table will + be updatable, whereas the computed columns country and + avg_rating will be read-only. @@ -469,7 +469,7 @@ UNION ALL SELECT n+1 FROM nums_1_100 WHERE n < 100; Notice that although the recursive view's name is schema-qualified in this - CREATE, its internal self-reference is not schema-qualified. + CREATE, its internal self-reference is not schema-qualified. This is because the implicitly-created CTE's name cannot be schema-qualified. @@ -482,7 +482,7 @@ UNION ALL CREATE OR REPLACE VIEW is a PostgreSQL language extension. So is the concept of a temporary view. - The WITH ( ... ) clause is an extension as well. + The WITH ( ... ) clause is an extension as well. diff --git a/doc/src/sgml/ref/createdb.sgml b/doc/src/sgml/ref/createdb.sgml index 9fc4c16a81..0112d3a848 100644 --- a/doc/src/sgml/ref/createdb.sgml +++ b/doc/src/sgml/ref/createdb.sgml @@ -86,8 +86,8 @@ PostgreSQL documentation - - + + Specifies the default tablespace for the database. (This name @@ -97,8 +97,8 @@ PostgreSQL documentation - - + + Echo the commands that createdb generates @@ -108,8 +108,8 @@ PostgreSQL documentation - - + + Specifies the character encoding scheme to be used in this @@ -121,8 +121,8 @@ PostgreSQL documentation - - + + Specifies the locale to be used in this database. This is equivalent @@ -132,7 +132,7 @@ PostgreSQL documentation - + Specifies the LC_COLLATE setting to be used in this database. @@ -141,7 +141,7 @@ PostgreSQL documentation - + Specifies the LC_CTYPE setting to be used in this database. 
@@ -150,8 +150,8 @@ PostgreSQL documentation - - + + Specifies the database user who will own the new database. @@ -161,8 +161,8 @@ PostgreSQL documentation - - + + Specifies the template database from which to build this @@ -172,8 +172,8 @@ PostgreSQL documentation - - + + Print the createdb version and exit. @@ -182,8 +182,8 @@ PostgreSQL documentation - - + + Show help about createdb command line @@ -209,8 +209,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -221,8 +221,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or the local Unix domain socket file @@ -232,8 +232,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -242,8 +242,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -257,8 +257,8 @@ PostgreSQL documentation - - + + Force createdb to prompt for a @@ -271,14 +271,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, createdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. - + Specifies the name of the database to connect to when creating the @@ -325,8 +325,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -362,7 +362,7 @@ PostgreSQL documentation To create the database demo using the - server on host eden, port 5000, using the + server on host eden, port 5000, using the template0 template database, here is the command-line command and the underlying SQL command: diff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml index fda77976ff..788ee81daf 100644 --- a/doc/src/sgml/ref/createuser.sgml +++ b/doc/src/sgml/ref/createuser.sgml @@ -34,15 +34,15 @@ PostgreSQL documentation createuser creates a new PostgreSQL user (or more precisely, a role). - Only superusers and users with CREATEROLE privilege can create + Only superusers and users with CREATEROLE privilege can create new users, so createuser must be invoked by someone who can connect as a superuser or a user with - CREATEROLE privilege. + CREATEROLE privilege. If you wish to create a new superuser, you must connect as a - superuser, not merely with CREATEROLE privilege. + superuser, not merely with CREATEROLE privilege. Being a superuser implies the ability to bypass all access permission checks within the database, so superuserdom should not be granted lightly. @@ -61,7 +61,7 @@ PostgreSQL documentation Options - createuser accepts the following command-line arguments: + createuser accepts the following command-line arguments: @@ -77,8 +77,8 @@ PostgreSQL documentation - - + + Set a maximum number of connections for the new user. @@ -88,8 +88,8 @@ PostgreSQL documentation - - + + The new user will be allowed to create databases. @@ -98,8 +98,8 @@ PostgreSQL documentation - - + + The new user will not be allowed to create databases. This is the @@ -109,8 +109,8 @@ PostgreSQL documentation - - + + Echo the commands that createuser generates @@ -120,8 +120,8 @@ PostgreSQL documentation - - + + This option is obsolete but still accepted for backward @@ -131,21 +131,21 @@ PostgreSQL documentation - - + + Indicates role to which this role will be added immediately as a new member. 
Multiple roles to which this role will be added as a member can be specified by writing multiple - switches. - - + + The new role will automatically inherit privileges of roles @@ -156,8 +156,8 @@ PostgreSQL documentation - - + + The new role will not automatically inherit privileges of roles @@ -167,7 +167,7 @@ PostgreSQL documentation - + Prompt for the user name if none is specified on the command line, and @@ -181,8 +181,8 @@ PostgreSQL documentation - - + + The new user will be allowed to log in (that is, the user name @@ -193,8 +193,8 @@ PostgreSQL documentation - - + + The new user will not be allowed to log in. @@ -205,8 +205,8 @@ PostgreSQL documentation - - + + If given, createuser will issue a prompt for @@ -217,19 +217,19 @@ PostgreSQL documentation - - + + The new user will be allowed to create new roles (that is, - this user will have CREATEROLE privilege). + this user will have CREATEROLE privilege). - - + + The new user will not be allowed to create new roles. This is the @@ -239,8 +239,8 @@ PostgreSQL documentation - - + + The new user will be a superuser. @@ -249,8 +249,8 @@ PostgreSQL documentation - - + + The new user will not be a superuser. This is the default. @@ -259,8 +259,8 @@ PostgreSQL documentation - - + + Print the createuser version and exit. @@ -269,7 +269,7 @@ PostgreSQL documentation - + The new user will have the REPLICATION privilege, @@ -280,7 +280,7 @@ PostgreSQL documentation - + The new user will not have the REPLICATION @@ -291,8 +291,8 @@ PostgreSQL documentation - - + + Show help about createuser command line @@ -310,8 +310,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -323,8 +323,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -335,8 +335,8 @@ PostgreSQL documentation - - + + User name to connect as (not the user name to create). @@ -345,8 +345,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -360,8 +360,8 @@ PostgreSQL documentation - - + + Force createuser to prompt for a @@ -375,7 +375,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, createuser will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -403,8 +403,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -451,7 +451,7 @@ PostgreSQL documentation To create the same user joe using the - server on host eden, port 5000, with attributes explicitly specified, + server on host eden, port 5000, with attributes explicitly specified, taking a look at the underlying command: $ createuser -h eden -p 5000 -S -D -R -e joe diff --git a/doc/src/sgml/ref/declare.sgml b/doc/src/sgml/ref/declare.sgml index 5cb85cc568..8eae0354af 100644 --- a/doc/src/sgml/ref/declare.sgml +++ b/doc/src/sgml/ref/declare.sgml @@ -45,7 +45,7 @@ DECLARE name [ BINARY ] [ INSENSITI This page describes usage of cursors at the SQL command level. - If you are trying to use cursors inside a PL/pgSQL + If you are trying to use cursors inside a PL/pgSQL function, the rules are different — see . @@ -144,13 +144,13 @@ DECLARE name [ BINARY ] [ INSENSITI Normal cursors return data in text format, the same as a - SELECT would produce. 
The BINARY option + SELECT would produce. The BINARY option specifies that the cursor should return data in binary format. This reduces conversion effort for both the server and client, at the cost of more programmer effort to deal with platform-dependent binary data formats. As an example, if a query returns a value of one from an integer column, - you would get a string of 1 with a default cursor, + you would get a string of 1 with a default cursor, whereas with a binary cursor you would get a 4-byte field containing the internal representation of the value (in big-endian byte order). @@ -165,8 +165,8 @@ DECLARE name [ BINARY ] [ INSENSITI - When the client application uses the extended query protocol - to issue a FETCH command, the Bind protocol message + When the client application uses the extended query protocol + to issue a FETCH command, the Bind protocol message specifies whether data is to be retrieved in text or binary format. This choice overrides the way that the cursor is defined. The concept of a binary cursor as such is thus obsolete when using extended query @@ -177,7 +177,7 @@ DECLARE name [ BINARY ] [ INSENSITI Unless WITH HOLD is specified, the cursor created by this command can only be used within the current - transaction. Thus, DECLARE without WITH + transaction. Thus, DECLARE without WITH HOLD is useless outside a transaction block: the cursor would survive only to the completion of the statement. Therefore PostgreSQL reports an error if such a @@ -204,25 +204,25 @@ DECLARE name [ BINARY ] [ INSENSITI WITH HOLD may not be specified when the query - includes FOR UPDATE or FOR SHARE. + includes FOR UPDATE or FOR SHARE. - The SCROLL option should be specified when defining a + The SCROLL option should be specified when defining a cursor that will be used to fetch backwards. This is required by the SQL standard. However, for compatibility with earlier versions, PostgreSQL will allow - backward fetches without SCROLL, if the cursor's query + backward fetches without SCROLL, if the cursor's query plan is simple enough that no extra overhead is needed to support it. However, application developers are advised not to rely on using backward fetches from a cursor that has not been created - with SCROLL. If NO SCROLL is + with SCROLL. If NO SCROLL is specified, then backward fetches are disallowed in any case. Backward fetches are also disallowed when the query - includes FOR UPDATE or FOR SHARE; therefore + includes FOR UPDATE or FOR SHARE; therefore SCROLL may not be specified in this case. @@ -241,42 +241,42 @@ DECLARE name [ BINARY ] [ INSENSITI - If the cursor's query includes FOR UPDATE or FOR - SHARE, then returned rows are locked at the time they are first + If the cursor's query includes FOR UPDATE or FOR + SHARE, then returned rows are locked at the time they are first fetched, in the same way as for a regular command with these options. In addition, the returned rows will be the most up-to-date versions; therefore these options provide the equivalent of what the SQL standard - calls a sensitive cursor. (Specifying INSENSITIVE - together with FOR UPDATE or FOR SHARE is an error.) + calls a sensitive cursor. (Specifying INSENSITIVE + together with FOR UPDATE or FOR SHARE is an error.) - It is generally recommended to use FOR UPDATE if the cursor - is intended to be used with UPDATE ... WHERE CURRENT OF or - DELETE ... WHERE CURRENT OF. Using FOR UPDATE + It is generally recommended to use FOR UPDATE if the cursor + is intended to be used with UPDATE ... 
WHERE CURRENT OF or + DELETE ... WHERE CURRENT OF. Using FOR UPDATE prevents other sessions from changing the rows between the time they are - fetched and the time they are updated. Without FOR UPDATE, - a subsequent WHERE CURRENT OF command will have no effect if + fetched and the time they are updated. Without FOR UPDATE, + a subsequent WHERE CURRENT OF command will have no effect if the row was changed since the cursor was created. - Another reason to use FOR UPDATE is that without it, a - subsequent WHERE CURRENT OF might fail if the cursor query + Another reason to use FOR UPDATE is that without it, a + subsequent WHERE CURRENT OF might fail if the cursor query does not meet the SQL standard's rules for being simply - updatable (in particular, the cursor must reference just one table - and not use grouping or ORDER BY). Cursors + updatable (in particular, the cursor must reference just one table + and not use grouping or ORDER BY). Cursors that are not simply updatable might work, or might not, depending on plan choice details; so in the worst case, an application might work in testing and then fail in production. - The main reason not to use FOR UPDATE with WHERE - CURRENT OF is if you need the cursor to be scrollable, or to be + The main reason not to use FOR UPDATE with WHERE + CURRENT OF is if you need the cursor to be scrollable, or to be insensitive to the subsequent updates (that is, continue to show the old data). If this is a requirement, pay close heed to the caveats shown above. @@ -321,13 +321,13 @@ DECLARE liahona CURSOR FOR SELECT * FROM films; The SQL standard says that it is implementation-dependent whether cursors are sensitive to concurrent updates of the underlying data by default. In PostgreSQL, cursors are insensitive by default, - and can be made sensitive by specifying FOR UPDATE. Other + and can be made sensitive by specifying FOR UPDATE. Other products may work differently. The SQL standard allows cursors only in embedded - SQL and in modules. PostgreSQL + SQL and in modules. PostgreSQL permits cursors to be used interactively. diff --git a/doc/src/sgml/ref/delete.sgml b/doc/src/sgml/ref/delete.sgml index 8ced7de7be..570e9aa710 100644 --- a/doc/src/sgml/ref/delete.sgml +++ b/doc/src/sgml/ref/delete.sgml @@ -55,12 +55,12 @@ DELETE FROM [ ONLY ] table_name [ * - The optional RETURNING clause causes DELETE + The optional RETURNING clause causes DELETE to compute and return value(s) based on each row actually deleted. Any expression using the table's columns, and/or columns of other tables mentioned in USING, can be computed. - The syntax of the RETURNING list is identical to that of the - output list of SELECT. + The syntax of the RETURNING list is identical to that of the + output list of SELECT. @@ -81,7 +81,7 @@ DELETE FROM [ ONLY ] table_name [ * The WITH clause allows you to specify one or more - subqueries that can be referenced by name in the DELETE + subqueries that can be referenced by name in the DELETE query. See and for details. @@ -93,11 +93,11 @@ DELETE FROM [ ONLY ] table_name [ * The name (optionally schema-qualified) of the table to delete rows - from. If ONLY is specified before the table name, + from. If ONLY is specified before the table name, matching rows are deleted from the named table only. If - ONLY is not specified, matching rows are also deleted + ONLY is not specified, matching rows are also deleted from any tables inheriting from the named table. 
Optionally, - * can be specified after the table name to explicitly + * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -109,9 +109,9 @@ DELETE FROM [ ONLY ] table_name [ * A substitute name for the target table. When an alias is provided, it completely hides the actual name of the table. For - example, given DELETE FROM foo AS f, the remainder + example, given DELETE FROM foo AS f, the remainder of the DELETE statement must refer to this - table as f not foo. + table as f not foo. @@ -121,7 +121,7 @@ DELETE FROM [ ONLY ] table_name [ * A list of table expressions, allowing columns from other tables - to appear in the WHERE condition. This is similar + to appear in the WHERE condition. This is similar to the list of tables that can be specified in the of a SELECT statement; for example, an alias for @@ -137,7 +137,7 @@ DELETE FROM [ ONLY ] table_name [ * An expression that returns a value of type boolean. - Only rows for which this expression returns true + Only rows for which this expression returns true will be deleted. @@ -147,15 +147,15 @@ DELETE FROM [ ONLY ] table_name [ * cursor_name - The name of the cursor to use in a WHERE CURRENT OF + The name of the cursor to use in a WHERE CURRENT OF condition. The row to be deleted is the one most recently fetched from this cursor. The cursor must be a non-grouping - query on the DELETE's target table. - Note that WHERE CURRENT OF cannot be + query on the DELETE's target table. + Note that WHERE CURRENT OF cannot be specified together with a Boolean condition. See for more information about using cursors with - WHERE CURRENT OF. + WHERE CURRENT OF. @@ -164,11 +164,11 @@ DELETE FROM [ ONLY ] table_name [ * output_expression - An expression to be computed and returned by the DELETE + An expression to be computed and returned by the DELETE command after each row is deleted. The expression can use any column names of the table named by table_name - or table(s) listed in USING. - Write * to return all columns. + or table(s) listed in USING. + Write * to return all columns. @@ -188,7 +188,7 @@ DELETE FROM [ ONLY ] table_name [ * Outputs - On successful completion, a DELETE command returns a command + On successful completion, a DELETE command returns a command tag of the form DELETE count @@ -197,16 +197,16 @@ DELETE count of rows deleted. Note that the number may be less than the number of rows that matched the condition when deletes were - suppressed by a BEFORE DELETE trigger. If BEFORE DELETE trigger. If count is 0, no rows were deleted by the query (this is not considered an error). - If the DELETE command contains a RETURNING - clause, the result will be similar to that of a SELECT + If the DELETE command contains a RETURNING + clause, the result will be similar to that of a SELECT statement containing the columns and values defined in the - RETURNING list, computed over the row(s) deleted by the + RETURNING list, computed over the row(s) deleted by the command. @@ -216,16 +216,16 @@ DELETE count PostgreSQL lets you reference columns of - other tables in the WHERE condition by specifying the + other tables in the WHERE condition by specifying the other tables in the USING clause. For example, to delete all films produced by a given producer, one can do: DELETE FROM films USING producers WHERE producer_id = producers.id AND producers.name = 'foo'; - What is essentially happening here is a join between films - and producers, with all successfully joined - films rows being marked for deletion. 
+ What is essentially happening here is a join between films + and producers, with all successfully joined + films rows being marked for deletion. This syntax is not standard. A more standard way to do it is: DELETE FROM films @@ -261,8 +261,8 @@ DELETE FROM tasks WHERE status = 'DONE' RETURNING *; - Delete the row of tasks on which the cursor - c_tasks is currently positioned: + Delete the row of tasks on which the cursor + c_tasks is currently positioned: DELETE FROM tasks WHERE CURRENT OF c_tasks; @@ -273,9 +273,9 @@ DELETE FROM tasks WHERE CURRENT OF c_tasks; This command conforms to the SQL standard, except - that the USING and RETURNING clauses + that the USING and RETURNING clauses are PostgreSQL extensions, as is the ability - to use WITH with DELETE. + to use WITH with DELETE. diff --git a/doc/src/sgml/ref/discard.sgml b/doc/src/sgml/ref/discard.sgml index e859bf7bab..f432e70430 100644 --- a/doc/src/sgml/ref/discard.sgml +++ b/doc/src/sgml/ref/discard.sgml @@ -29,10 +29,10 @@ DISCARD { ALL | PLANS | SEQUENCES | TEMPORARY | TEMP } Description - DISCARD releases internal resources associated with a + DISCARD releases internal resources associated with a database session. This command is useful for partially or fully resetting the session's state. There are several subcommands to - release different types of resources; the DISCARD ALL + release different types of resources; the DISCARD ALL variant subsumes all the others, and also resets additional state. @@ -57,9 +57,9 @@ DISCARD { ALL | PLANS | SEQUENCES | TEMPORARY | TEMP } Discards all cached sequence-related state, - including currval()/lastval() + including currval()/lastval() information and any preallocated sequence values that have not - yet been returned by nextval(). + yet been returned by nextval(). (See for a description of preallocated sequence values.) @@ -104,7 +104,7 @@ DISCARD TEMP; Notes - DISCARD ALL cannot be executed inside a transaction block. + DISCARD ALL cannot be executed inside a transaction block. diff --git a/doc/src/sgml/ref/do.sgml b/doc/src/sgml/ref/do.sgml index d4da32c34d..5d2e9b1b8c 100644 --- a/doc/src/sgml/ref/do.sgml +++ b/doc/src/sgml/ref/do.sgml @@ -39,12 +39,12 @@ DO [ LANGUAGE lang_name ] The code block is treated as though it were the body of a function - with no parameters, returning void. It is parsed and + with no parameters, returning void. It is parsed and executed a single time. - The optional LANGUAGE clause can be written either + The optional LANGUAGE clause can be written either before or after the code block. @@ -58,7 +58,7 @@ DO [ LANGUAGE lang_name ] The procedural language code to be executed. This must be specified - as a string literal, just as in CREATE FUNCTION. + as a string literal, just as in CREATE FUNCTION. Use of a dollar-quoted literal is recommended. @@ -69,7 +69,7 @@ DO [ LANGUAGE lang_name ] The name of the procedural language the code is written in. - If omitted, the default is plpgsql. + If omitted, the default is plpgsql. @@ -81,12 +81,12 @@ DO [ LANGUAGE lang_name ] The procedural language to be used must already have been installed - into the current database by means of CREATE LANGUAGE. - plpgsql is installed by default, but other languages are not. + into the current database by means of CREATE LANGUAGE. + plpgsql is installed by default, but other languages are not. - The user must have USAGE privilege for the procedural + The user must have USAGE privilege for the procedural language, or must be a superuser if the language is untrusted. 
This is the same privilege requirement as for creating a function in the language. @@ -96,8 +96,8 @@ DO [ LANGUAGE lang_name ] Examples - Grant all privileges on all views in schema public to - role webuser: + Grant all privileges on all views in schema public to + role webuser: DO $$DECLARE r record; BEGIN diff --git a/doc/src/sgml/ref/drop_access_method.sgml b/doc/src/sgml/ref/drop_access_method.sgml index 8aa9197fe4..aa5d2505c7 100644 --- a/doc/src/sgml/ref/drop_access_method.sgml +++ b/doc/src/sgml/ref/drop_access_method.sgml @@ -85,7 +85,7 @@ DROP ACCESS METHOD [ IF EXISTS ] nameExamples - Drop the access method heptree: + Drop the access method heptree: DROP ACCESS METHOD heptree; @@ -96,7 +96,7 @@ DROP ACCESS METHOD heptree; DROP ACCESS METHOD is a - PostgreSQL extension. + PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_aggregate.sgml b/doc/src/sgml/ref/drop_aggregate.sgml index dde1ea2444..ac29e7a419 100644 --- a/doc/src/sgml/ref/drop_aggregate.sgml +++ b/doc/src/sgml/ref/drop_aggregate.sgml @@ -70,8 +70,8 @@ DROP AGGREGATE [ IF EXISTS ] name ( aggr - The mode of an argument: IN or VARIADIC. - If omitted, the default is IN. + The mode of an argument: IN or VARIADIC. + If omitted, the default is IN. @@ -94,10 +94,10 @@ DROP AGGREGATE [ IF EXISTS ] name ( aggr An input data type on which the aggregate function operates. - To reference a zero-argument aggregate function, write * + To reference a zero-argument aggregate function, write * in place of the list of argument specifications. To reference an ordered-set aggregate function, write - ORDER BY between the direct and aggregated argument + ORDER BY between the direct and aggregated argument specifications. @@ -148,7 +148,7 @@ DROP AGGREGATE myavg(integer); - To remove the hypothetical-set aggregate function myrank, + To remove the hypothetical-set aggregate function myrank, which takes an arbitrary list of ordering columns and a matching list of direct arguments: diff --git a/doc/src/sgml/ref/drop_collation.sgml b/doc/src/sgml/ref/drop_collation.sgml index 2177d8e5d6..23f8e88fc9 100644 --- a/doc/src/sgml/ref/drop_collation.sgml +++ b/doc/src/sgml/ref/drop_collation.sgml @@ -83,7 +83,7 @@ DROP COLLATION [ IF EXISTS ] name [ CASCADE | RESTRIC Examples - To drop the collation named german: + To drop the collation named german: DROP COLLATION german; @@ -95,7 +95,7 @@ DROP COLLATION german; The DROP COLLATION command conforms to the SQL standard, apart from the IF - EXISTS option, which is a PostgreSQL extension. + EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_conversion.sgml b/doc/src/sgml/ref/drop_conversion.sgml index 1a33b3dcc5..9d56ec51a5 100644 --- a/doc/src/sgml/ref/drop_conversion.sgml +++ b/doc/src/sgml/ref/drop_conversion.sgml @@ -74,7 +74,7 @@ DROP CONVERSION [ IF EXISTS ] name [ CASCADE | RESTRI Examples - To drop the conversion named myname: + To drop the conversion named myname: DROP CONVERSION myname; diff --git a/doc/src/sgml/ref/drop_database.sgml b/doc/src/sgml/ref/drop_database.sgml index 44436ad48d..7e5fbe7396 100644 --- a/doc/src/sgml/ref/drop_database.sgml +++ b/doc/src/sgml/ref/drop_database.sgml @@ -71,7 +71,7 @@ DROP DATABASE [ IF EXISTS ] name Notes - DROP DATABASE cannot be executed inside a transaction + DROP DATABASE cannot be executed inside a transaction block. 
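A minimal sketch of the transaction-block restriction just noted (the database name scratch_db is a placeholder):

BEGIN;
DROP DATABASE scratch_db;            -- rejected: DROP DATABASE cannot run inside a transaction block
ROLLBACK;

DROP DATABASE IF EXISTS scratch_db;  -- accepted as a standalone statement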
diff --git a/doc/src/sgml/ref/drop_domain.sgml b/doc/src/sgml/ref/drop_domain.sgml index ba546165c2..b1dac01e65 100644 --- a/doc/src/sgml/ref/drop_domain.sgml +++ b/doc/src/sgml/ref/drop_domain.sgml @@ -58,7 +58,7 @@ DROP DOMAIN [ IF EXISTS ] name [, . - CASCADE + CASCADE Automatically drop objects that depend on the domain (such as @@ -70,7 +70,7 @@ DROP DOMAIN [ IF EXISTS ] name [, . - RESTRICT + RESTRICT Refuse to drop the domain if any objects depend on it. This is @@ -97,7 +97,7 @@ DROP DOMAIN box; This command conforms to the SQL standard, except for the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_extension.sgml b/doc/src/sgml/ref/drop_extension.sgml index ba52922013..f75308a20d 100644 --- a/doc/src/sgml/ref/drop_extension.sgml +++ b/doc/src/sgml/ref/drop_extension.sgml @@ -79,7 +79,7 @@ DROP EXTENSION [ IF EXISTS ] name [ Refuse to drop the extension if any objects depend on it (other than its own member objects and other extensions listed in the same - DROP command). This is the default. + DROP command). This is the default. @@ -97,7 +97,7 @@ DROP EXTENSION hstore; This command will fail if any of hstore's objects are in use in the database, for example if any tables have columns - of the hstore type. Add the CASCADE option to + of the hstore type. Add the CASCADE option to forcibly remove those dependent objects as well. @@ -106,7 +106,7 @@ DROP EXTENSION hstore; Compatibility - DROP EXTENSION is a PostgreSQL + DROP EXTENSION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml b/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml index 702cc021db..a3c73a0d46 100644 --- a/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml @@ -86,7 +86,7 @@ DROP FOREIGN DATA WRAPPER [ IF EXISTS ] nameExamples - Drop the foreign-data wrapper dbi: + Drop the foreign-data wrapper dbi: DROP FOREIGN DATA WRAPPER dbi; @@ -97,8 +97,8 @@ DROP FOREIGN DATA WRAPPER dbi; DROP FOREIGN DATA WRAPPER conforms to ISO/IEC - 9075-9 (SQL/MED). The IF EXISTS clause is - a PostgreSQL extension. + 9075-9 (SQL/MED). The IF EXISTS clause is + a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_foreign_table.sgml b/doc/src/sgml/ref/drop_foreign_table.sgml index 173eadadd3..456d55d112 100644 --- a/doc/src/sgml/ref/drop_foreign_table.sgml +++ b/doc/src/sgml/ref/drop_foreign_table.sgml @@ -95,7 +95,7 @@ DROP FOREIGN TABLE films, distributors; This command conforms to the ISO/IEC 9075-9 (SQL/MED), except that the standard only allows one foreign table to be dropped per command, and apart - from the IF EXISTS option, which is a PostgreSQL + from the IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_function.sgml b/doc/src/sgml/ref/drop_function.sgml index 0aa984528d..9c9adb9a46 100644 --- a/doc/src/sgml/ref/drop_function.sgml +++ b/doc/src/sgml/ref/drop_function.sgml @@ -67,14 +67,14 @@ DROP FUNCTION [ IF EXISTS ] name [ - The mode of an argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. + The mode of an argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that DROP FUNCTION does not actually pay - any attention to OUT arguments, since only the input + any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. 
+ So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. diff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml index 4c838fffff..de36c135d1 100644 --- a/doc/src/sgml/ref/drop_index.sgml +++ b/doc/src/sgml/ref/drop_index.sgml @@ -44,19 +44,19 @@ DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] name Drop the index without locking out concurrent selects, inserts, updates, - and deletes on the index's table. A normal DROP INDEX + and deletes on the index's table. A normal DROP INDEX acquires exclusive lock on the table, blocking other accesses until the index drop can be completed. With this option, the command instead waits until conflicting transactions have completed. There are several caveats to be aware of when using this option. - Only one index name can be specified, and the CASCADE option - is not supported. (Thus, an index that supports a UNIQUE or - PRIMARY KEY constraint cannot be dropped this way.) - Also, regular DROP INDEX commands can be + Only one index name can be specified, and the CASCADE option + is not supported. (Thus, an index that supports a UNIQUE or + PRIMARY KEY constraint cannot be dropped this way.) + Also, regular DROP INDEX commands can be performed within a transaction block, but - DROP INDEX CONCURRENTLY cannot. + DROP INDEX CONCURRENTLY cannot. diff --git a/doc/src/sgml/ref/drop_language.sgml b/doc/src/sgml/ref/drop_language.sgml index 081bd5fe3e..524d758370 100644 --- a/doc/src/sgml/ref/drop_language.sgml +++ b/doc/src/sgml/ref/drop_language.sgml @@ -31,13 +31,13 @@ DROP [ PROCEDURAL ] LANGUAGE [ IF EXISTS ] name DROP LANGUAGE removes the definition of a previously registered procedural language. You must be a superuser - or the owner of the language to use DROP LANGUAGE. + or the owner of the language to use DROP LANGUAGE. As of PostgreSQL 9.1, most procedural - languages have been made into extensions, and should + languages have been made into extensions, and should therefore be removed with not DROP LANGUAGE. diff --git a/doc/src/sgml/ref/drop_opclass.sgml b/doc/src/sgml/ref/drop_opclass.sgml index 423a211bca..83af6d7e48 100644 --- a/doc/src/sgml/ref/drop_opclass.sgml +++ b/doc/src/sgml/ref/drop_opclass.sgml @@ -37,7 +37,7 @@ DROP OPERATOR CLASS [ IF EXISTS ] nameDROP OPERATOR CLASS does not drop any of the operators or functions referenced by the class. If there are any indexes depending on the operator class, you will need to specify - CASCADE for the drop to complete. + CASCADE for the drop to complete. @@ -101,13 +101,13 @@ DROP OPERATOR CLASS [ IF EXISTS ] nameNotes - DROP OPERATOR CLASS will not drop the operator family + DROP OPERATOR CLASS will not drop the operator family containing the class, even if there is nothing else left in the family (in particular, in the case where the family was implicitly - created by CREATE OPERATOR CLASS). An empty operator + created by CREATE OPERATOR CLASS). An empty operator family is harmless, but for the sake of tidiness you might wish to - remove the family with DROP OPERATOR FAMILY; or perhaps - better, use DROP OPERATOR FAMILY in the first place. + remove the family with DROP OPERATOR FAMILY; or perhaps + better, use DROP OPERATOR FAMILY in the first place. @@ -122,7 +122,7 @@ DROP OPERATOR CLASS widget_ops USING btree; This command will not succeed if there are any existing indexes - that use the operator class. Add CASCADE to drop + that use the operator class. Add CASCADE to drop such indexes along with the operator class. 
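Following the note above, a hedged variant of the same example drops the dependent indexes along with the class; the implicitly created operator family (assumed here to carry the class's name) can then be removed separately for tidiness:

DROP OPERATOR CLASS widget_ops USING btree CASCADE;
-- Optional clean-up of the now-empty family, as suggested above:
DROP OPERATOR FAMILY widget_ops USING btree;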
diff --git a/doc/src/sgml/ref/drop_opfamily.sgml b/doc/src/sgml/ref/drop_opfamily.sgml index a7b90f306c..b825978aee 100644 --- a/doc/src/sgml/ref/drop_opfamily.sgml +++ b/doc/src/sgml/ref/drop_opfamily.sgml @@ -38,7 +38,7 @@ DROP OPERATOR FAMILY [ IF EXISTS ] nameCASCADE for the drop to complete. + CASCADE for the drop to complete. @@ -109,7 +109,7 @@ DROP OPERATOR FAMILY float_ops USING btree; This command will not succeed if there are any existing indexes - that use operator classes within the family. Add CASCADE to + that use operator classes within the family. Add CASCADE to drop such indexes along with the operator family. diff --git a/doc/src/sgml/ref/drop_owned.sgml b/doc/src/sgml/ref/drop_owned.sgml index 0426373d2d..8b4b3644e6 100644 --- a/doc/src/sgml/ref/drop_owned.sgml +++ b/doc/src/sgml/ref/drop_owned.sgml @@ -92,7 +92,7 @@ DROP OWNED BY { name | CURRENT_USER The command is an alternative that reassigns the ownership of all the database objects owned by one or - more roles. However, REASSIGN OWNED does not deal with + more roles. However, REASSIGN OWNED does not deal with privileges for other objects. diff --git a/doc/src/sgml/ref/drop_publication.sgml b/doc/src/sgml/ref/drop_publication.sgml index bf43db3dac..1c129c0444 100644 --- a/doc/src/sgml/ref/drop_publication.sgml +++ b/doc/src/sgml/ref/drop_publication.sgml @@ -89,7 +89,7 @@ DROP PUBLICATION mypublication; Compatibility - DROP PUBLICATION is a PostgreSQL + DROP PUBLICATION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_role.sgml b/doc/src/sgml/ref/drop_role.sgml index fcddfeb172..3c1bbaba6f 100644 --- a/doc/src/sgml/ref/drop_role.sgml +++ b/doc/src/sgml/ref/drop_role.sgml @@ -31,7 +31,7 @@ DROP ROLE [ IF EXISTS ] name [, ... DROP ROLE removes the specified role(s). To drop a superuser role, you must be a superuser yourself; - to drop non-superuser roles, you must have CREATEROLE + to drop non-superuser roles, you must have CREATEROLE privilege. @@ -47,7 +47,7 @@ DROP ROLE [ IF EXISTS ] name [, ... However, it is not necessary to remove role memberships involving - the role; DROP ROLE automatically revokes any memberships + the role; DROP ROLE automatically revokes any memberships of the target role in other roles, and of other roles in the target role. The other roles are not dropped nor otherwise affected. diff --git a/doc/src/sgml/ref/drop_schema.sgml b/doc/src/sgml/ref/drop_schema.sgml index fd1fcd7e03..bb3af1e186 100644 --- a/doc/src/sgml/ref/drop_schema.sgml +++ b/doc/src/sgml/ref/drop_schema.sgml @@ -114,7 +114,7 @@ DROP SCHEMA mystuff CASCADE; DROP SCHEMA is fully conforming with the SQL standard, except that the standard only allows one schema to be dropped per command, and apart from the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_sequence.sgml b/doc/src/sgml/ref/drop_sequence.sgml index 9d827f0cb1..5027129b38 100644 --- a/doc/src/sgml/ref/drop_sequence.sgml +++ b/doc/src/sgml/ref/drop_sequence.sgml @@ -98,7 +98,7 @@ DROP SEQUENCE serial; DROP SEQUENCE conforms to the SQL standard, except that the standard only allows one sequence to be dropped per command, and apart from the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. 
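Tying together the DROP OWNED, REASSIGN OWNED, and DROP ROLE descriptions above, a typical retirement sequence for a role looks roughly like this (role names are placeholders; the first two commands act only on the current database):

REASSIGN OWNED BY retiring_user TO postgres;  -- hand the role's objects to another role
DROP OWNED BY retiring_user;                  -- remove privileges still granted to the role
DROP ROLE retiring_user;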
diff --git a/doc/src/sgml/ref/drop_server.sgml b/doc/src/sgml/ref/drop_server.sgml index 42acdd41dc..8ef0e014e4 100644 --- a/doc/src/sgml/ref/drop_server.sgml +++ b/doc/src/sgml/ref/drop_server.sgml @@ -86,7 +86,7 @@ DROP SERVER [ IF EXISTS ] name [, . Examples - Drop a server foo if it exists: + Drop a server foo if it exists: DROP SERVER IF EXISTS foo; @@ -97,8 +97,8 @@ DROP SERVER IF EXISTS foo; DROP SERVER conforms to ISO/IEC 9075-9 - (SQL/MED). The IF EXISTS clause is - a PostgreSQL extension. + (SQL/MED). The IF EXISTS clause is + a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_subscription.sgml b/doc/src/sgml/ref/drop_subscription.sgml index f5734e6f30..58b1489475 100644 --- a/doc/src/sgml/ref/drop_subscription.sgml +++ b/doc/src/sgml/ref/drop_subscription.sgml @@ -114,7 +114,7 @@ DROP SUBSCRIPTION mysub; Compatibility - DROP SUBSCRIPTION is a PostgreSQL + DROP SUBSCRIPTION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_table.sgml b/doc/src/sgml/ref/drop_table.sgml index ae96cf0657..cea7e00351 100644 --- a/doc/src/sgml/ref/drop_table.sgml +++ b/doc/src/sgml/ref/drop_table.sgml @@ -40,8 +40,8 @@ DROP TABLE [ IF EXISTS ] name [, .. DROP TABLE always removes any indexes, rules, triggers, and constraints that exist for the target table. However, to drop a table that is referenced by a view or a foreign-key - constraint of another table, CASCADE must be - specified. (CASCADE will remove a dependent view entirely, + constraint of another table, CASCADE must be + specified. (CASCADE will remove a dependent view entirely, but in the foreign-key case it will only remove the foreign-key constraint, not the other table entirely.) @@ -112,7 +112,7 @@ DROP TABLE films, distributors; This command conforms to the SQL standard, except that the standard only allows one table to be dropped per command, and apart from the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_tablespace.sgml b/doc/src/sgml/ref/drop_tablespace.sgml index ee40cc6b0c..4343035ebb 100644 --- a/doc/src/sgml/ref/drop_tablespace.sgml +++ b/doc/src/sgml/ref/drop_tablespace.sgml @@ -39,7 +39,7 @@ DROP TABLESPACE [ IF EXISTS ] name in the tablespace even if no objects in the current database are using the tablespace. Also, if the tablespace is listed in the setting of any active session, the - DROP might fail due to temporary files residing in the + DROP might fail due to temporary files residing in the tablespace. @@ -74,7 +74,7 @@ DROP TABLESPACE [ IF EXISTS ] name Notes - DROP TABLESPACE cannot be executed inside a transaction block. + DROP TABLESPACE cannot be executed inside a transaction block. @@ -93,7 +93,7 @@ DROP TABLESPACE mystuff; Compatibility - DROP TABLESPACE is a PostgreSQL + DROP TABLESPACE is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_tsconfig.sgml b/doc/src/sgml/ref/drop_tsconfig.sgml index e4a1738bae..cc053beceb 100644 --- a/doc/src/sgml/ref/drop_tsconfig.sgml +++ b/doc/src/sgml/ref/drop_tsconfig.sgml @@ -94,8 +94,8 @@ DROP TEXT SEARCH CONFIGURATION my_english; This command will not succeed if there are any existing indexes - that reference the configuration in to_tsvector calls. - Add CASCADE to + that reference the configuration in to_tsvector calls. + Add CASCADE to drop such indexes along with the text search configuration. 
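For example, combining the CASCADE notes above (a sketch; whether CASCADE is actually required depends on what currently depends on each object):

DROP TABLE films CASCADE;
-- Dependent views are dropped entirely; foreign-key constraints on other
-- tables are removed, but those tables themselves are kept.

DROP TEXT SEARCH CONFIGURATION my_english CASCADE;
-- Also drops indexes that reference the configuration in to_tsvector() calls.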
diff --git a/doc/src/sgml/ref/drop_tsdictionary.sgml b/doc/src/sgml/ref/drop_tsdictionary.sgml index faa4b3a1e5..66af10fb0f 100644 --- a/doc/src/sgml/ref/drop_tsdictionary.sgml +++ b/doc/src/sgml/ref/drop_tsdictionary.sgml @@ -94,7 +94,7 @@ DROP TEXT SEARCH DICTIONARY english; This command will not succeed if there are any existing text search - configurations that use the dictionary. Add CASCADE to + configurations that use the dictionary. Add CASCADE to drop such configurations along with the dictionary. diff --git a/doc/src/sgml/ref/drop_tsparser.sgml b/doc/src/sgml/ref/drop_tsparser.sgml index bc9dae17a5..3fa9467ebd 100644 --- a/doc/src/sgml/ref/drop_tsparser.sgml +++ b/doc/src/sgml/ref/drop_tsparser.sgml @@ -92,7 +92,7 @@ DROP TEXT SEARCH PARSER my_parser; This command will not succeed if there are any existing text search - configurations that use the parser. Add CASCADE to + configurations that use the parser. Add CASCADE to drop such configurations along with the parser. diff --git a/doc/src/sgml/ref/drop_tstemplate.sgml b/doc/src/sgml/ref/drop_tstemplate.sgml index 98f5523e51..ad83275457 100644 --- a/doc/src/sgml/ref/drop_tstemplate.sgml +++ b/doc/src/sgml/ref/drop_tstemplate.sgml @@ -93,7 +93,7 @@ DROP TEXT SEARCH TEMPLATE thesaurus; This command will not succeed if there are any existing text search - dictionaries that use the template. Add CASCADE to + dictionaries that use the template. Add CASCADE to drop such dictionaries along with the template. diff --git a/doc/src/sgml/ref/drop_type.sgml b/doc/src/sgml/ref/drop_type.sgml index 4ec1b92f32..92ac2729ca 100644 --- a/doc/src/sgml/ref/drop_type.sgml +++ b/doc/src/sgml/ref/drop_type.sgml @@ -96,8 +96,8 @@ DROP TYPE box; This command is similar to the corresponding command in the SQL - standard, apart from the IF EXISTS - option, which is a PostgreSQL extension. + standard, apart from the IF EXISTS + option, which is a PostgreSQL extension. But note that much of the CREATE TYPE command and the data type extension mechanisms in PostgreSQL differ from the SQL standard. diff --git a/doc/src/sgml/ref/drop_user_mapping.sgml b/doc/src/sgml/ref/drop_user_mapping.sgml index eb4c320293..27284acae4 100644 --- a/doc/src/sgml/ref/drop_user_mapping.sgml +++ b/doc/src/sgml/ref/drop_user_mapping.sgml @@ -36,7 +36,7 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_name The owner of a foreign server can drop user mappings for that server for any user. Also, a user can drop a user mapping for their own - user name if USAGE privilege on the server has been + user name if USAGE privilege on the server has been granted to the user. @@ -59,9 +59,9 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_nameuser_name - User name of the mapping. CURRENT_USER - and USER match the name of the current - user. PUBLIC is used to match all present and + User name of the mapping. CURRENT_USER + and USER match the name of the current + user. PUBLIC is used to match all present and future user names in the system. @@ -82,7 +82,7 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_nameExamples - Drop a user mapping bob, server foo if it exists: + Drop a user mapping bob, server foo if it exists: DROP USER MAPPING IF EXISTS FOR bob SERVER foo; @@ -93,8 +93,8 @@ DROP USER MAPPING IF EXISTS FOR bob SERVER foo; DROP USER MAPPING conforms to ISO/IEC 9075-9 - (SQL/MED). The IF EXISTS clause is - a PostgreSQL extension. + (SQL/MED). The IF EXISTS clause is + a PostgreSQL extension. 
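As a hedged variant of the example above, CURRENT_USER can be used in place of an explicit user name when dropping a mapping for the current user:

DROP USER MAPPING IF EXISTS FOR CURRENT_USER SERVER foo;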
diff --git a/doc/src/sgml/ref/drop_view.sgml b/doc/src/sgml/ref/drop_view.sgml index 002d2c6dd6..a33b33335b 100644 --- a/doc/src/sgml/ref/drop_view.sgml +++ b/doc/src/sgml/ref/drop_view.sgml @@ -97,7 +97,7 @@ DROP VIEW kinds; This command conforms to the SQL standard, except that the standard only allows one view to be dropped per command, and apart from the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/dropdb.sgml b/doc/src/sgml/ref/dropdb.sgml index 16c49e7928..9dd44be882 100644 --- a/doc/src/sgml/ref/dropdb.sgml +++ b/doc/src/sgml/ref/dropdb.sgml @@ -53,7 +53,7 @@ PostgreSQL documentation Options - dropdb accepts the following command-line arguments: + dropdb accepts the following command-line arguments: @@ -66,8 +66,8 @@ PostgreSQL documentation - - + + Echo the commands that dropdb generates @@ -77,8 +77,8 @@ PostgreSQL documentation - - + + Issues a verification prompt before doing anything destructive. @@ -87,8 +87,8 @@ PostgreSQL documentation - - + + Print the dropdb version and exit. @@ -97,7 +97,7 @@ PostgreSQL documentation - + Do not throw an error if the database does not exist. A notice is issued @@ -107,8 +107,8 @@ PostgreSQL documentation - - + + Show help about dropdb command line @@ -127,8 +127,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -140,8 +140,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -152,8 +152,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -162,8 +162,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -177,8 +177,8 @@ PostgreSQL documentation - - + + Force dropdb to prompt for a @@ -191,14 +191,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, dropdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. - + Specifies the name of the database to connect to in order to drop the @@ -231,8 +231,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/dropuser.sgml b/doc/src/sgml/ref/dropuser.sgml index d7ad61b3d6..1387b7dc2d 100644 --- a/doc/src/sgml/ref/dropuser.sgml +++ b/doc/src/sgml/ref/dropuser.sgml @@ -35,7 +35,7 @@ PostgreSQL documentation dropuser removes an existing PostgreSQL user. - Only superusers and users with the CREATEROLE privilege can + Only superusers and users with the CREATEROLE privilege can remove PostgreSQL users. (To remove a superuser, you must yourself be a superuser.) @@ -70,8 +70,8 @@ PostgreSQL documentation - - + + Echo the commands that dropuser generates @@ -81,8 +81,8 @@ PostgreSQL documentation - - + + Prompt for confirmation before actually removing the user, and prompt @@ -92,8 +92,8 @@ PostgreSQL documentation - - + + Print the dropuser version and exit. @@ -102,7 +102,7 @@ PostgreSQL documentation - + Do not throw an error if the user does not exist. 
A notice is @@ -112,8 +112,8 @@ PostgreSQL documentation - - + + Show help about dropuser command line @@ -131,8 +131,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -144,8 +144,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -156,8 +156,8 @@ PostgreSQL documentation - - + + User name to connect as (not the user name to drop). @@ -166,8 +166,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -181,8 +181,8 @@ PostgreSQL documentation - - + + Force dropuser to prompt for a @@ -195,7 +195,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, dropuser will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -223,8 +223,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/ecpg-ref.sgml b/doc/src/sgml/ref/ecpg-ref.sgml index 8bfb47c4d7..a9eaff815d 100644 --- a/doc/src/sgml/ref/ecpg-ref.sgml +++ b/doc/src/sgml/ref/ecpg-ref.sgml @@ -220,9 +220,9 @@ PostgreSQL documentation When compiling the preprocessed C code files, the compiler needs to - be able to find the ECPG header files in the - PostgreSQL include directory. Therefore, you might - have to use the option when invoking the compiler (e.g., -I/usr/local/pgsql/include). diff --git a/doc/src/sgml/ref/end.sgml b/doc/src/sgml/ref/end.sgml index 10e414515b..1f74118efd 100644 --- a/doc/src/sgml/ref/end.sgml +++ b/doc/src/sgml/ref/end.sgml @@ -62,7 +62,7 @@ END [ WORK | TRANSACTION ] - Issuing END when not inside a transaction does + Issuing END when not inside a transaction does no harm, but it will provoke a warning message. diff --git a/doc/src/sgml/ref/execute.sgml b/doc/src/sgml/ref/execute.sgml index 6ab5e54fa7..6ac413d808 100644 --- a/doc/src/sgml/ref/execute.sgml +++ b/doc/src/sgml/ref/execute.sgml @@ -87,12 +87,12 @@ EXECUTE name [ ( Outputs The command tag returned by EXECUTE - is that of the prepared statement, and not EXECUTE. + is that of the prepared statement, and not EXECUTE. - Examples</> + <title>Examples Examples are given in the section of the hit means that a read was avoided because the block was + A hit means that a read was avoided because the block was found already in cache when needed. Shared blocks contain data from regular tables and indexes; local blocks contain data from temporary tables and indexes; while temp blocks contain short-term working data used in sorts, hashes, Materialize plan nodes, and similar cases. - The number of blocks dirtied indicates the number of + The number of blocks dirtied indicates the number of previously unmodified blocks that were changed by this query; while the - number of blocks written indicates the number of + number of blocks written indicates the number of previously-dirtied blocks evicted from cache by this backend during query processing. The number of blocks shown for an @@ -229,9 +229,9 @@ ROLLBACK; Specifies whether the selected option should be turned on or off. - You can write TRUE, ON, or + You can write TRUE, ON, or 1 to enable the option, and FALSE, - OFF, or 0 to disable it. The + OFF, or 0 to disable it. 
The boolean value can also be omitted, in which case TRUE is assumed. @@ -242,10 +242,10 @@ ROLLBACK; statement - Any SELECT, INSERT, UPDATE, - DELETE, VALUES, EXECUTE, - DECLARE, CREATE TABLE AS, or - CREATE MATERIALIZED VIEW AS statement, whose execution + Any SELECT, INSERT, UPDATE, + DELETE, VALUES, EXECUTE, + DECLARE, CREATE TABLE AS, or + CREATE MATERIALIZED VIEW AS statement, whose execution plan you wish to see. diff --git a/doc/src/sgml/ref/fetch.sgml b/doc/src/sgml/ref/fetch.sgml index 7651dcd0f8..fb79a1ac61 100644 --- a/doc/src/sgml/ref/fetch.sgml +++ b/doc/src/sgml/ref/fetch.sgml @@ -57,20 +57,20 @@ FETCH [ direction [ FROM | IN ] ] < A cursor has an associated position, which is used by - FETCH. The cursor position can be before the first row of the + FETCH. The cursor position can be before the first row of the query result, on any particular row of the result, or after the last row of the result. When created, a cursor is positioned before the first row. After fetching some rows, the cursor is positioned on the row most recently - retrieved. If FETCH runs off the end of the available rows + retrieved. If FETCH runs off the end of the available rows then the cursor is left positioned after the last row, or before the first - row if fetching backward. FETCH ALL or FETCH BACKWARD - ALL will always leave the cursor positioned after the last row or before + row if fetching backward. FETCH ALL or FETCH BACKWARD + ALL will always leave the cursor positioned after the last row or before the first row. - The forms NEXT, PRIOR, FIRST, - LAST, ABSOLUTE, RELATIVE fetch + The forms NEXT, PRIOR, FIRST, + LAST, ABSOLUTE, RELATIVE fetch a single row after moving the cursor appropriately. If there is no such row, an empty result is returned, and the cursor is left positioned before the first row or after the last row as @@ -78,7 +78,7 @@ FETCH [ direction [ FROM | IN ] ] < - The forms using FORWARD and BACKWARD + The forms using FORWARD and BACKWARD retrieve the indicated number of rows moving in the forward or backward direction, leaving the cursor positioned on the last-returned row (or after/before all rows, if the direction [ FROM | IN ] ] < - RELATIVE 0, FORWARD 0, and - BACKWARD 0 all request fetching the current row without + RELATIVE 0, FORWARD 0, and + BACKWARD 0 all request fetching the current row without moving the cursor, that is, re-fetching the most recently fetched row. This will succeed unless the cursor is positioned before the first row or after the last row; in which case, no row is returned. @@ -97,7 +97,7 @@ FETCH [ direction [ FROM | IN ] ] < This page describes usage of cursors at the SQL command level. - If you are trying to use cursors inside a PL/pgSQL + If you are trying to use cursors inside a PL/pgSQL function, the rules are different — see . @@ -274,10 +274,10 @@ FETCH [ direction [ FROM | IN ] ] < count is a possibly-signed integer constant, determining the location or - number of rows to fetch. For FORWARD and - BACKWARD cases, specifying a negative FORWARD and + BACKWARD cases, specifying a negative count is equivalent to changing - the sense of FORWARD and BACKWARD. + the sense of FORWARD and BACKWARD. 
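
    A minimal cursor session sketching these rules (the cursor name liahona
    and the films table follow the examples used elsewhere in these pages):

    BEGIN;
    DECLARE liahona SCROLL CURSOR FOR SELECT * FROM films;
    FETCH FORWARD 5 FROM liahona;    -- the next five rows
    FETCH FORWARD -2 FROM liahona;   -- negative count: same as FETCH BACKWARD 2
    FETCH RELATIVE 0 FROM liahona;   -- re-fetch the current row
    CLOSE liahona;
    COMMIT;
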
@@ -297,7 +297,7 @@ FETCH [ direction [ FROM | IN ] ] < Outputs - On successful completion, a FETCH command returns a command + On successful completion, a FETCH command returns a command tag of the form FETCH count @@ -315,8 +315,8 @@ FETCH count The cursor should be declared with the SCROLL - option if one intends to use any variants of FETCH - other than FETCH NEXT or FETCH FORWARD with + option if one intends to use any variants of FETCH + other than FETCH NEXT or FETCH FORWARD with a positive count. For simple queries PostgreSQL will allow backwards fetch from cursors not declared with SCROLL, but this @@ -400,8 +400,8 @@ COMMIT WORK; - The SQL standard allows only FROM preceding the cursor - name; the option to use IN, or to leave them out altogether, is + The SQL standard allows only FROM preceding the cursor + name; the option to use IN, or to leave them out altogether, is an extension. diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml index 385cfe6a9c..fd9fe03a6a 100644 --- a/doc/src/sgml/ref/grant.sgml +++ b/doc/src/sgml/ref/grant.sgml @@ -116,7 +116,7 @@ GRANT role_name [, ...] TO ALL - TABLES is considered to include views and foreign tables). + TABLES is considered to include views and foreign tables). @@ -174,7 +174,7 @@ GRANT role_name [, ...] TO REVOKE both default and expressly granted privileges. (For maximum - security, issue the REVOKE in the same transaction that + security, issue the REVOKE in the same transaction that creates the object; then there is no window in which another user can use the object.) Also, these initial default privilege settings can be changed using the @@ -211,7 +211,7 @@ GRANT role_name [, ...] TO Allows of a new row into the specified table. If specific columns are listed, - only those columns may be assigned to in the INSERT + only those columns may be assigned to in the INSERT command (other columns will therefore receive default values). Also allows FROM. @@ -224,8 +224,8 @@ GRANT role_name [, ...] TO Allows of any column, or the specific columns listed, of the specified table. - (In practice, any nontrivial UPDATE command will require - SELECT privilege as well, since it must reference table + (In practice, any nontrivial UPDATE command will require + SELECT privilege as well, since it must reference table columns to determine which rows to update, and/or to compute new values for columns.) SELECT ... FOR UPDATE @@ -246,8 +246,8 @@ GRANT role_name [, ...] TO Allows of a row from the specified table. - (In practice, any nontrivial DELETE command will require - SELECT privilege as well, since it must reference table + (In practice, any nontrivial DELETE command will require + SELECT privilege as well, since it must reference table columns to determine which rows to delete.) @@ -292,7 +292,7 @@ GRANT role_name [, ...] TO For schemas, allows new objects to be created within the schema. - To rename an existing object, you must own the object and + To rename an existing object, you must own the object and have this privilege for the containing schema. @@ -310,7 +310,7 @@ GRANT role_name [, ...] TO Allows the user to connect to the specified database. This privilege is checked at connection startup (in addition to checking - any restrictions imposed by pg_hba.conf). + any restrictions imposed by pg_hba.conf). @@ -348,7 +348,7 @@ GRANT role_name [, ...] TO For schemas, allows access to objects contained in the specified schema (assuming that the objects' own privilege requirements are - also met). 
Essentially this allows the grantee to look up + also met). Essentially this allows the grantee to look up objects within the schema. Without this permission, it is still possible to see the object names, e.g. by querying the system tables. Also, after revoking this permission, existing backends might have @@ -416,14 +416,14 @@ GRANT role_name [, ...] TO on itself, but it may grant or revoke membership in itself from a database session where the session user matches the role. Database superusers can grant or revoke membership in any role - to anyone. Roles having CREATEROLE privilege can grant + to anyone. Roles having CREATEROLE privilege can grant or revoke membership in any role that is not a superuser. Unlike the case with privileges, membership in a role cannot be granted - to PUBLIC. Note also that this form of the command does not - allow the noise word GROUP. + to PUBLIC. Note also that this form of the command does not + allow the noise word GROUP. @@ -440,13 +440,13 @@ GRANT role_name [, ...] TO Since PostgreSQL 8.1, the concepts of users and groups have been unified into a single kind of entity called a role. - It is therefore no longer necessary to use the keyword GROUP - to identify whether a grantee is a user or a group. GROUP + It is therefore no longer necessary to use the keyword GROUP + to identify whether a grantee is a user or a group. GROUP is still allowed in the command, but it is a noise word. - A user may perform SELECT, INSERT, etc. on a + A user may perform SELECT, INSERT, etc. on a column if they hold that privilege for either the specific column or its whole table. Granting the privilege at the table level and then revoking it for one column will not do what one might wish: the @@ -454,12 +454,12 @@ GRANT role_name [, ...] TO - When a non-owner of an object attempts to GRANT privileges + When a non-owner of an object attempts to GRANT privileges on the object, the command will fail outright if the user has no privileges whatsoever on the object. As long as some privilege is available, the command will proceed, but it will grant only those privileges for which the user has grant options. The GRANT ALL - PRIVILEGES forms will issue a warning message if no grant options are + PRIVILEGES forms will issue a warning message if no grant options are held, while the other forms will issue a warning if grant options for any of the privileges specifically named in the command are not held. (In principle these statements apply to the object owner as well, but @@ -470,13 +470,13 @@ GRANT role_name [, ...] TO It should be noted that database superusers can access all objects regardless of object privilege settings. This - is comparable to the rights of root in a Unix system. - As with root, it's unwise to operate as a superuser + is comparable to the rights of root in a Unix system. + As with root, it's unwise to operate as a superuser except when absolutely necessary. - If a superuser chooses to issue a GRANT or REVOKE + If a superuser chooses to issue a GRANT or REVOKE command, the command is performed as though it were issued by the owner of the affected object. In particular, privileges granted via such a command will appear to have been granted by the object owner. @@ -485,32 +485,32 @@ GRANT role_name [, ...] 
TO - GRANT and REVOKE can also be done by a role + GRANT and REVOKE can also be done by a role that is not the owner of the affected object, but is a member of the role that owns the object, or is a member of a role that holds privileges WITH GRANT OPTION on the object. In this case the privileges will be recorded as having been granted by the role that actually owns the object or holds the privileges WITH GRANT OPTION. For example, if table - t1 is owned by role g1, of which role - u1 is a member, then u1 can grant privileges - on t1 to u2, but those privileges will appear - to have been granted directly by g1. Any other member - of role g1 could revoke them later. + t1 is owned by role g1, of which role + u1 is a member, then u1 can grant privileges + on t1 to u2, but those privileges will appear + to have been granted directly by g1. Any other member + of role g1 could revoke them later. - If the role executing GRANT holds the required privileges + If the role executing GRANT holds the required privileges indirectly via more than one role membership path, it is unspecified which containing role will be recorded as having done the grant. In such - cases it is best practice to use SET ROLE to become the - specific role you want to do the GRANT as. + cases it is best practice to use SET ROLE to become the + specific role you want to do the GRANT as. Granting permission on a table does not automatically extend permissions to any sequences used by the table, including - sequences tied to SERIAL columns. Permissions on + sequences tied to SERIAL columns. Permissions on sequences must be set separately. @@ -551,8 +551,8 @@ rolename=xxxx -- privileges granted to a role /yyyy -- role that granted this privilege - The above example display would be seen by user miriam after - creating table mytable and doing: + The above example display would be seen by user miriam after + creating table mytable and doing: GRANT SELECT ON mytable TO PUBLIC; @@ -562,31 +562,31 @@ GRANT SELECT (col1), UPDATE (col1) ON mytable TO miriam_rw; - For non-table objects there are other \d commands + For non-table objects there are other \d commands that can display their privileges. - If the Access privileges column is empty for a given object, + If the Access privileges column is empty for a given object, it means the object has default privileges (that is, its privileges column is null). Default privileges always include all privileges for the owner, - and can include some privileges for PUBLIC depending on the - object type, as explained above. The first GRANT or - REVOKE on an object + and can include some privileges for PUBLIC depending on the + object type, as explained above. The first GRANT or + REVOKE on an object will instantiate the default privileges (producing, for example, - {miriam=arwdDxt/miriam}) and then modify them per the + {miriam=arwdDxt/miriam}) and then modify them per the specified request. Similarly, entries are shown in Column access - privileges only for columns with nondefault privileges. - (Note: for this purpose, default privileges always means the + privileges only for columns with nondefault privileges. + (Note: for this purpose, default privileges always means the built-in default privileges for the object's type. An object whose - privileges have been affected by an ALTER DEFAULT PRIVILEGES + privileges have been affected by an ALTER DEFAULT PRIVILEGES command will always be shown with an explicit privilege entry that - includes the effects of the ALTER.) 
+ includes the effects of the ALTER.) Notice that the owner's implicit grant options are not marked in the - access privileges display. A * will appear only when + access privileges display. A * will appear only when grant options have been explicitly granted to someone. @@ -617,7 +617,7 @@ GRANT ALL PRIVILEGES ON kinds TO manuel; - Grant membership in role admins to user joe: + Grant membership in role admins to user joe: GRANT admins TO joe; @@ -637,14 +637,14 @@ GRANT admins TO joe; PostgreSQL allows an object owner to revoke their own ordinary privileges: for example, a table owner can make the table - read-only to themselves by revoking their own INSERT, - UPDATE, DELETE, and TRUNCATE + read-only to themselves by revoking their own INSERT, + UPDATE, DELETE, and TRUNCATE privileges. This is not possible according to the SQL standard. The reason is that PostgreSQL treats the owner's privileges as having been granted by the owner to themselves; therefore they can revoke them too. In the SQL standard, the owner's privileges are - granted by an assumed entity _SYSTEM. Not being - _SYSTEM, the owner cannot revoke these rights. + granted by an assumed entity _SYSTEM. Not being + _SYSTEM, the owner cannot revoke these rights. diff --git a/doc/src/sgml/ref/import_foreign_schema.sgml b/doc/src/sgml/ref/import_foreign_schema.sgml index f22893f137..9bc83f1c6a 100644 --- a/doc/src/sgml/ref/import_foreign_schema.sgml +++ b/doc/src/sgml/ref/import_foreign_schema.sgml @@ -124,9 +124,9 @@ IMPORT FOREIGN SCHEMA remote_schema Examples - Import table definitions from a remote schema foreign_films - on server film_server, creating the foreign tables in - local schema films: + Import table definitions from a remote schema foreign_films + on server film_server, creating the foreign tables in + local schema films: IMPORT FOREIGN SCHEMA foreign_films @@ -135,8 +135,8 @@ IMPORT FOREIGN SCHEMA foreign_films - As above, but import only the two tables actors and - directors (if they exist): + As above, but import only the two tables actors and + directors (if they exist): IMPORT FOREIGN SCHEMA foreign_films LIMIT TO (actors, directors) @@ -149,8 +149,8 @@ IMPORT FOREIGN SCHEMA foreign_films LIMIT TO (actors, directors) The IMPORT FOREIGN SCHEMA command conforms to the - SQL standard, except that the OPTIONS - clause is a PostgreSQL extension. + SQL standard, except that the OPTIONS + clause is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/initdb.sgml b/doc/src/sgml/ref/initdb.sgml index 732fecab8e..6696d4d05a 100644 --- a/doc/src/sgml/ref/initdb.sgml +++ b/doc/src/sgml/ref/initdb.sgml @@ -79,8 +79,8 @@ PostgreSQL documentation initdb initializes the database cluster's default locale and character set encoding. The character set encoding, - collation order (LC_COLLATE) and character set classes - (LC_CTYPE, e.g. upper, lower, digit) can be set separately + collation order (LC_COLLATE) and character set classes + (LC_CTYPE, e.g. upper, lower, digit) can be set separately for a database when it is created. initdb determines those settings for the template1 database, which will serve as the default for all other databases. @@ -89,7 +89,7 @@ PostgreSQL documentation To alter the default collation order or character set classes, use the and options. - Collation orders other than C or POSIX also have + Collation orders other than C or POSIX also have a performance penalty. For these reasons it is important to choose the right locale when running initdb. 
@@ -98,8 +98,8 @@ PostgreSQL documentation The remaining locale categories can be changed later when the server is started. You can also use to set the default for all locale categories, including collation order and - character set classes. All server locale values (lc_*) can - be displayed via SHOW ALL. + character set classes. All server locale values (lc_*) can + be displayed via SHOW ALL. More details can be found in . @@ -121,7 +121,7 @@ PostgreSQL documentation This option specifies the default authentication method for local - users used in pg_hba.conf (host + users used in pg_hba.conf (host and local lines). initdb will prepopulate pg_hba.conf entries using the specified authentication method for non-replication as well as @@ -129,8 +129,8 @@ PostgreSQL documentation - Do not use trust unless you trust all local users on your - system. trust is the default for ease of installation. + Do not use trust unless you trust all local users on your + system. trust is the default for ease of installation. @@ -140,7 +140,7 @@ PostgreSQL documentation This option specifies the authentication method for local users via - TCP/IP connections used in pg_hba.conf + TCP/IP connections used in pg_hba.conf (host lines). @@ -151,7 +151,7 @@ PostgreSQL documentation This option specifies the authentication method for local users via - Unix-domain socket connections used in pg_hba.conf + Unix-domain socket connections used in pg_hba.conf (local lines). @@ -255,7 +255,7 @@ PostgreSQL documentation - + Makes initdb read the database superuser's password @@ -270,14 +270,14 @@ PostgreSQL documentation Safely write all database files to disk and exit. This does not - perform any of the normal initdb operations. + perform any of the normal initdb operations. - - + + Sets the default text search configuration. @@ -319,7 +319,7 @@ PostgreSQL documentation - Set the WAL segment size, in megabytes. This is + Set the WAL segment size, in megabytes. This is the size of each individual file in the WAL log. It may be useful to adjust this size to control the granularity of WAL log shipping. This option can only be set during initialization, and cannot be @@ -395,8 +395,8 @@ PostgreSQL documentation - - + + Print the initdb version and exit. @@ -405,8 +405,8 @@ PostgreSQL documentation - - + + Show help about initdb command line @@ -449,8 +449,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml index ce037e5902..7f44ec31d1 100644 --- a/doc/src/sgml/ref/insert.sgml +++ b/doc/src/sgml/ref/insert.sgml @@ -56,10 +56,10 @@ INSERT INTO table_name [ AS The target column names can be listed in any order. If no list of column names is given at all, the default is all the columns of the - table in their declared order; or the first N column - names, if there are only N columns supplied by the - VALUES clause or query. The values - supplied by the VALUES clause or query are + table in their declared order; or the first N column + names, if there are only N columns supplied by the + VALUES clause or query. The values + supplied by the VALUES clause or query are associated with the explicit or implicit column list left-to-right. 
@@ -75,21 +75,21 @@ INSERT INTO table_name [ AS - ON CONFLICT can be used to specify an alternative + ON CONFLICT can be used to specify an alternative action to raising a unique constraint or exclusion constraint violation error. (See below.) - The optional RETURNING clause causes INSERT + The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted - (or updated, if an ON CONFLICT DO UPDATE clause was + (or updated, if an ON CONFLICT DO UPDATE clause was used). This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number. However, any expression using the table's columns is allowed. The syntax of - the RETURNING list is identical to that of the output - list of SELECT. Only rows that were successfully + the RETURNING list is identical to that of the output + list of SELECT. Only rows that were successfully inserted or updated will be returned. For example, if a row was locked but not updated because an ON CONFLICT DO UPDATE ... WHERE clause table_name [ AS You must have INSERT privilege on a table in - order to insert into it. If ON CONFLICT DO UPDATE is + order to insert into it. If ON CONFLICT DO UPDATE is present, UPDATE privilege on the table is also required. @@ -107,17 +107,17 @@ INSERT INTO table_name [ AS If a column list is specified, you only need INSERT privilege on the listed columns. - Similarly, when ON CONFLICT DO UPDATE is specified, you - only need UPDATE privilege on the column(s) that are - listed to be updated. However, ON CONFLICT DO UPDATE - also requires SELECT privilege on any column whose - values are read in the ON CONFLICT DO UPDATE - expressions or condition. + Similarly, when ON CONFLICT DO UPDATE is specified, you + only need UPDATE privilege on the column(s) that are + listed to be updated. However, ON CONFLICT DO UPDATE + also requires SELECT privilege on any column whose + values are read in the ON CONFLICT DO UPDATE + expressions or condition. - Use of the RETURNING clause requires SELECT - privilege on all columns mentioned in RETURNING. + Use of the RETURNING clause requires SELECT + privilege on all columns mentioned in RETURNING. If you use the query clause to insert rows from a query, you of course need to have SELECT privilege on @@ -144,7 +144,7 @@ INSERT INTO table_name [ AS The WITH clause allows you to specify one or more - subqueries that can be referenced by name in the INSERT + subqueries that can be referenced by name in the INSERT query. See and for details. @@ -175,8 +175,8 @@ INSERT INTO table_name [ AS table_name. When an alias is provided, it completely hides the actual name of the table. - This is particularly useful when ON CONFLICT DO UPDATE - targets a table named excluded, since that will otherwise + This is particularly useful when ON CONFLICT DO UPDATE + targets a table named excluded, since that will otherwise be taken as the name of the special table representing rows proposed for insertion. @@ -193,11 +193,11 @@ INSERT INTO table_name [ AS ON CONFLICT DO UPDATE, do not include + column with ON CONFLICT DO UPDATE, do not include the table's name in the specification of a target column. For example, INSERT INTO table_name ... ON CONFLICT DO UPDATE - SET table_name.col = 1 is invalid (this follows the general - behavior for UPDATE). + SET table_name.col = 1 is invalid (this follows the general + behavior for UPDATE). 
@@ -281,11 +281,11 @@ INSERT INTO table_name [ AS An expression to be computed and returned by the - INSERT command after each row is inserted or + INSERT command after each row is inserted or updated. The expression can use any column names of the table named by table_name. Write - * to return all columns of the inserted or updated + * to return all columns of the inserted or updated row(s). @@ -386,7 +386,7 @@ INSERT INTO table_name [ AS have access to the existing row using the table's name (or an alias), and to rows proposed for insertion using the special excluded table. - SELECT privilege is required on any column in the + SELECT privilege is required on any column in the target table where corresponding excluded columns are read. @@ -406,7 +406,7 @@ INSERT INTO table_name [ AS table_name column. Used to infer arbiter indexes. Follows CREATE - INDEX format. SELECT privilege on + INDEX format. SELECT privilege on index_column_name is required. @@ -422,7 +422,7 @@ INSERT INTO table_name [ AS table_name columns appearing within index definitions (not simple columns). Follows - CREATE INDEX format. SELECT + CREATE INDEX format. SELECT privilege on any column appearing within index_expression is required. @@ -469,7 +469,7 @@ INSERT INTO table_name [ AS CREATE - INDEX format. SELECT privilege on any + INDEX format. SELECT privilege on any column appearing within index_predicate is required. @@ -494,7 +494,7 @@ INSERT INTO table_name [ AS boolean. Only rows for which this expression returns true will be updated, although all - rows will be locked when the ON CONFLICT DO UPDATE + rows will be locked when the ON CONFLICT DO UPDATE action is taken. Note that condition is evaluated last, after a conflict has been identified as a candidate to update. @@ -510,7 +510,7 @@ INSERT INTO table_name [ AS - INSERT with an ON CONFLICT DO UPDATE + INSERT with an ON CONFLICT DO UPDATE clause is a deterministic statement. This means that the command will not be allowed to affect any single existing row more than once; a cardinality violation error will be raised @@ -538,7 +538,7 @@ INSERT INTO table_name [ AS Outputs - On successful completion, an INSERT command returns a command + On successful completion, an INSERT command returns a command tag of the form INSERT oid count @@ -554,10 +554,10 @@ INSERT oid count - If the INSERT command contains a RETURNING - clause, the result will be similar to that of a SELECT + If the INSERT command contains a RETURNING + clause, the result will be similar to that of a SELECT statement containing the columns and values defined in the - RETURNING list, computed over the row(s) inserted or + RETURNING list, computed over the row(s) inserted or updated by the command. @@ -616,7 +616,7 @@ INSERT INTO films DEFAULT VALUES; - To insert multiple rows using the multirow VALUES syntax: + To insert multiple rows using the multirow VALUES syntax: INSERT INTO films (code, title, did, date_prod, kind) VALUES @@ -675,7 +675,7 @@ INSERT INTO employees_log SELECT *, current_timestamp FROM upd; Insert or update new distributors as appropriate. Assumes a unique index has been defined that constrains values appearing in the did column. Note that the special - excluded table is used to reference values originally + excluded table is used to reference values originally proposed for insertion: INSERT INTO distributors (did, dname) @@ -697,7 +697,7 @@ INSERT INTO distributors (did, dname) VALUES (7, 'Redline GmbH') Insert or update new distributors as appropriate. 
Example assumes a unique index has been defined that constrains values appearing in - the did column. WHERE clause is + the did column. WHERE clause is used to limit the rows actually updated (any existing row not updated will still be locked, though): @@ -734,13 +734,13 @@ INSERT INTO distributors (did, dname) VALUES (10, 'Conrad International') INSERT conforms to the SQL standard, except that - the RETURNING clause is a + the RETURNING clause is a PostgreSQL extension, as is the ability - to use WITH with INSERT, and the ability to - specify an alternative action with ON CONFLICT. + to use WITH with INSERT, and the ability to + specify an alternative action with ON CONFLICT. Also, the case in which a column name list is omitted, but not all the columns are - filled from the VALUES clause or query, + filled from the VALUES clause or query, is disallowed by the standard. diff --git a/doc/src/sgml/ref/listen.sgml b/doc/src/sgml/ref/listen.sgml index 76215716d6..6527562717 100644 --- a/doc/src/sgml/ref/listen.sgml +++ b/doc/src/sgml/ref/listen.sgml @@ -54,12 +54,12 @@ LISTEN channel The method a client application must use to detect notification events depends on which PostgreSQL application programming interface it - uses. With the libpq library, the application issues + uses. With the libpq library, the application issues LISTEN as an ordinary SQL command, and then must periodically call the function PQnotifies to find out whether any notification events have been received. Other interfaces such as - libpgtcl provide higher-level methods for handling notify events; indeed, - with libpgtcl the application programmer should not even issue + libpgtcl provide higher-level methods for handling notify events; indeed, + with libpgtcl the application programmer should not even issue LISTEN or UNLISTEN directly. See the documentation for the interface you are using for more details. diff --git a/doc/src/sgml/ref/load.sgml b/doc/src/sgml/ref/load.sgml index 2be28e6d15..b9e3fe8b25 100644 --- a/doc/src/sgml/ref/load.sgml +++ b/doc/src/sgml/ref/load.sgml @@ -28,12 +28,12 @@ LOAD 'filename' Description - This command loads a shared library file into the PostgreSQL + This command loads a shared library file into the PostgreSQL server's address space. If the file has been loaded already, the command does nothing. Shared library files that contain C functions are automatically loaded whenever one of their functions is called. - Therefore, an explicit LOAD is usually only needed to - load a library that modifies the server's behavior through hooks + Therefore, an explicit LOAD is usually only needed to + load a library that modifies the server's behavior through hooks rather than providing a set of functions. @@ -47,15 +47,15 @@ LOAD 'filename' - $libdir/plugins + $libdir/plugins - Non-superusers can only apply LOAD to library files - located in $libdir/plugins/ — the specified + Non-superusers can only apply LOAD to library files + located in $libdir/plugins/ — the specified filename must begin with exactly that string. (It is the database administrator's - responsibility to ensure that only safe libraries + responsibility to ensure that only safe libraries are installed there.) diff --git a/doc/src/sgml/ref/lock.sgml b/doc/src/sgml/ref/lock.sgml index f1dbb8e65a..6d68ec6c53 100644 --- a/doc/src/sgml/ref/lock.sgml +++ b/doc/src/sgml/ref/lock.sgml @@ -51,13 +51,13 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] restrictive lock mode possible. LOCK TABLE provides for cases when you might need more restrictive locking. 
For example, suppose an application runs a transaction at the - READ COMMITTED isolation level and needs to ensure that + READ COMMITTED isolation level and needs to ensure that data in a table remains stable for the duration of the transaction. - To achieve this you could obtain SHARE lock mode over the + To achieve this you could obtain SHARE lock mode over the table before querying. This will prevent concurrent data changes and ensure subsequent reads of the table see a stable view of - committed data, because SHARE lock mode conflicts with - the ROW EXCLUSIVE lock acquired by writers, and your + committed data, because SHARE lock mode conflicts with + the ROW EXCLUSIVE lock acquired by writers, and your LOCK TABLE name IN SHARE MODE statement will wait until any concurrent holders of ROW @@ -68,28 +68,28 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] To achieve a similar effect when running a transaction at the - REPEATABLE READ or SERIALIZABLE - isolation level, you have to execute the LOCK TABLE statement - before executing any SELECT or data modification statement. - A REPEATABLE READ or SERIALIZABLE transaction's + REPEATABLE READ or SERIALIZABLE + isolation level, you have to execute the LOCK TABLE statement + before executing any SELECT or data modification statement. + A REPEATABLE READ or SERIALIZABLE transaction's view of data will be frozen when its first - SELECT or data modification statement begins. A LOCK - TABLE later in the transaction will still prevent concurrent writes + SELECT or data modification statement begins. A LOCK + TABLE later in the transaction will still prevent concurrent writes — but it won't ensure that what the transaction reads corresponds to the latest committed values. If a transaction of this sort is going to change the data in the - table, then it should use SHARE ROW EXCLUSIVE lock mode - instead of SHARE mode. This ensures that only one + table, then it should use SHARE ROW EXCLUSIVE lock mode + instead of SHARE mode. This ensures that only one transaction of this type runs at a time. Without this, a deadlock - is possible: two transactions might both acquire SHARE - mode, and then be unable to also acquire ROW EXCLUSIVE + is possible: two transactions might both acquire SHARE + mode, and then be unable to also acquire ROW EXCLUSIVE mode to actually perform their updates. (Note that a transaction's own locks never conflict, so a transaction can acquire ROW - EXCLUSIVE mode when it holds SHARE mode — but not - if anyone else holds SHARE mode.) To avoid deadlocks, + EXCLUSIVE mode when it holds SHARE mode — but not + if anyone else holds SHARE mode.) To avoid deadlocks, make sure all transactions acquire locks on the same objects in the same order, and if multiple lock modes are involved for a single object, then transactions should always acquire the most @@ -111,16 +111,16 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] The name (optionally schema-qualified) of an existing table to - lock. If ONLY is specified before the table name, only that - table is locked. If ONLY is not specified, the table and all - its descendant tables (if any) are locked. Optionally, * + lock. If ONLY is specified before the table name, only that + table is locked. If ONLY is not specified, the table and all + its descendant tables (if any) are locked. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. - The command LOCK TABLE a, b; is equivalent to - LOCK TABLE a; LOCK TABLE b;. 
The tables are locked + The command LOCK TABLE a, b; is equivalent to + LOCK TABLE a; LOCK TABLE b;. The tables are locked one-by-one in the order specified in the LOCK TABLE command. @@ -160,18 +160,18 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] Notes - LOCK TABLE ... IN ACCESS SHARE MODE requires SELECT + LOCK TABLE ... IN ACCESS SHARE MODE requires SELECT privileges on the target table. LOCK TABLE ... IN ROW EXCLUSIVE - MODE requires INSERT, UPDATE, DELETE, - or TRUNCATE privileges on the target table. All other forms of - LOCK require table-level UPDATE, DELETE, - or TRUNCATE privileges. + MODE requires INSERT, UPDATE, DELETE, + or TRUNCATE privileges on the target table. All other forms of + LOCK require table-level UPDATE, DELETE, + or TRUNCATE privileges. - LOCK TABLE is useless outside a transaction block: the lock + LOCK TABLE is useless outside a transaction block: the lock would remain held only to the completion of the statement. Therefore - PostgreSQL reports an error if LOCK + PostgreSQL reports an error if LOCK is used outside a transaction block. Use and @@ -181,13 +181,13 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] - LOCK TABLE only deals with table-level locks, and so - the mode names involving ROW are all misnomers. These + LOCK TABLE only deals with table-level locks, and so + the mode names involving ROW are all misnomers. These mode names should generally be read as indicating the intention of the user to acquire row-level locks within the locked table. Also, - ROW EXCLUSIVE mode is a shareable table lock. Keep in + ROW EXCLUSIVE mode is a shareable table lock. Keep in mind that all the lock modes have identical semantics so far as - LOCK TABLE is concerned, differing only in the rules + LOCK TABLE is concerned, differing only in the rules about which modes conflict with which. For information on how to acquire an actual row-level lock, see and the name [ * ] Examples - Obtain a SHARE lock on a primary key table when going to perform + Obtain a SHARE lock on a primary key table when going to perform inserts into a foreign key table: @@ -216,7 +216,7 @@ COMMIT WORK; - Take a SHARE ROW EXCLUSIVE lock on a primary key table when going to perform + Take a SHARE ROW EXCLUSIVE lock on a primary key table when going to perform a delete operation: @@ -240,8 +240,8 @@ COMMIT WORK; - Except for ACCESS SHARE, ACCESS EXCLUSIVE, - and SHARE UPDATE EXCLUSIVE lock modes, the + Except for ACCESS SHARE, ACCESS EXCLUSIVE, + and SHARE UPDATE EXCLUSIVE lock modes, the PostgreSQL lock modes and the LOCK TABLE syntax are compatible with those present in Oracle. diff --git a/doc/src/sgml/ref/move.sgml b/doc/src/sgml/ref/move.sgml index 6b809b961d..4bf7896858 100644 --- a/doc/src/sgml/ref/move.sgml +++ b/doc/src/sgml/ref/move.sgml @@ -69,7 +69,7 @@ MOVE [ direction [ FROM | IN ] ] Outputs - On successful completion, a MOVE command returns a command + On successful completion, a MOVE command returns a command tag of the form MOVE count diff --git a/doc/src/sgml/ref/notify.sgml b/doc/src/sgml/ref/notify.sgml index 09debd6685..4376b9fdd7 100644 --- a/doc/src/sgml/ref/notify.sgml +++ b/doc/src/sgml/ref/notify.sgml @@ -30,9 +30,9 @@ NOTIFY channel [ , The NOTIFY command sends a notification event together - with an optional payload string to each client application that + with an optional payload string to each client application that has previously executed - LISTEN channel + LISTEN channel for the specified channel name in the current database. Notifications are visible to all users. 
@@ -49,7 +49,7 @@ NOTIFY channel [ , The information passed to the client for a notification event includes the notification channel - name, the notifying session's server process PID, and the + name, the notifying session's server process PID, and the payload string, which is an empty string if it has not been specified. @@ -115,9 +115,9 @@ NOTIFY channel [ , PID (supplied in the + session's server process PID (supplied in the notification event message) is the same as one's own session's - PID (available from libpq). When they + PID (available from libpq). When they are the same, the notification event is one's own work bouncing back, and can be ignored. @@ -139,7 +139,7 @@ NOTIFY channel [ , payload - The payload string to be communicated along with the + The payload string to be communicated along with the notification. This must be specified as a simple string literal. In the default configuration it must be shorter than 8000 bytes. (If binary data or large amounts of information need to be communicated, diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml index f790c56003..1944c185cb 100644 --- a/doc/src/sgml/ref/pg_basebackup.sgml +++ b/doc/src/sgml/ref/pg_basebackup.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation pg_basebackup - option + option @@ -69,7 +69,7 @@ PostgreSQL documentation pg_basebackup can make a base backup from not only the master but also the standby. To take a backup from the standby, set up the standby so that it can accept replication connections (that is, set - max_wal_senders and , + max_wal_senders and , and configure host-based authentication). You will also need to enable on the master. @@ -85,7 +85,7 @@ PostgreSQL documentation - If you are using -X none, there is no guarantee that all + If you are using -X none, there is no guarantee that all WAL files required for the backup are archived at the end of backup. @@ -97,9 +97,9 @@ PostgreSQL documentation All WAL records required for the backup must contain sufficient full-page writes, - which requires you to enable full_page_writes on the master and - not to use a tool like pg_compresslog as - archive_command to remove full-page writes from WAL files. + which requires you to enable full_page_writes on the master and + not to use a tool like pg_compresslog as + archive_command to remove full-page writes from WAL files. @@ -193,8 +193,8 @@ PostgreSQL documentation The maximum transfer rate of data transferred from the server. Values are - in kilobytes per second. Use a suffix of M to indicate megabytes - per second. A suffix of k is also accepted, and has no effect. + in kilobytes per second. Use a suffix of M to indicate megabytes + per second. A suffix of k is also accepted, and has no effect. Valid values are between 32 kilobytes per second and 1024 megabytes per second. @@ -534,7 +534,7 @@ PostgreSQL documentation string. See for more information. - The option is called --dbname for consistency with other + The option is called --dbname for consistency with other client applications, but because pg_basebackup doesn't connect to any particular database in the cluster, database name in the connection string will be ignored. @@ -594,8 +594,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -623,7 +623,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_basebackup will waste a connection attempt finding out that the server wants a password. 
- In some cases it is worth typing to avoid the extra connection attempt. @@ -636,8 +636,8 @@ PostgreSQL documentation - - + + Print the pg_basebackup version and exit. @@ -646,8 +646,8 @@ PostgreSQL documentation - - + + Show help about pg_basebackup command line @@ -665,8 +665,8 @@ PostgreSQL documentation Environment - This utility, like most other PostgreSQL utilities, - uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + uses the environment variables supported by libpq (see ). @@ -709,8 +709,8 @@ PostgreSQL documentation tar file before starting the PostgreSQL server. If there are additional tablespaces, the tar files for them need to be unpacked in the correct locations. In this case the symbolic links for those tablespaces will be created by the server - according to the contents of the tablespace_map file that is - included in the base.tar file. + according to the contents of the tablespace_map file that is + included in the base.tar file. diff --git a/doc/src/sgml/ref/pg_config-ref.sgml b/doc/src/sgml/ref/pg_config-ref.sgml index 0210f6389d..b819f3f345 100644 --- a/doc/src/sgml/ref/pg_config-ref.sgml +++ b/doc/src/sgml/ref/pg_config-ref.sgml @@ -13,7 +13,7 @@ pg_config - retrieve information about the installed version of PostgreSQL + retrieve information about the installed version of PostgreSQL @@ -24,12 +24,12 @@ - Description</> + <title>Description - The pg_config utility prints configuration parameters - of the currently installed version of PostgreSQL. It is + The pg_config utility prints configuration parameters + of the currently installed version of PostgreSQL. It is intended, for example, to be used by software packages that want to interface - to PostgreSQL to facilitate finding the required header files + to PostgreSQL to facilitate finding the required header files and libraries. @@ -39,22 +39,22 @@ Options - To use pg_config, supply one or more of the following + To use pg_config, supply one or more of the following options: - + Print the location of user executables. Use this, for example, to find - the psql program. This is normally also the location - where the pg_config program resides. + the psql program. This is normally also the location + where the pg_config program resides. - + Print the location of documentation files. @@ -63,7 +63,7 @@ - + Print the location of HTML documentation files. @@ -72,7 +72,7 @@ - + Print the location of C header files of the client interfaces. @@ -81,7 +81,7 @@ - + Print the location of other C header files. @@ -90,7 +90,7 @@ - + Print the location of C header files for server programming. @@ -99,7 +99,7 @@ - + Print the location of object code libraries. @@ -108,7 +108,7 @@ - + Print the location of dynamically loadable modules, or where @@ -120,18 +120,18 @@ - + Print the location of locale support files. (This will be an empty string if locale support was not configured when - PostgreSQL was built.) + PostgreSQL was built.) - + Print the location of manual pages. @@ -140,7 +140,7 @@ - + Print the location of architecture-independent support files. @@ -149,7 +149,7 @@ - + Print the location of system-wide configuration files. @@ -158,7 +158,7 @@ - + Print the location of extension makefiles. @@ -167,11 +167,11 @@ - + - Print the options that were given to the configure - script when PostgreSQL was configured for building. + Print the options that were given to the configure + script when PostgreSQL was configured for building. 
This can be used to reproduce the identical configuration, or to find out with what options a binary package was built. (Note however that binary packages often contain vendor-specific custom @@ -181,102 +181,102 @@ - + Print the value of the CC variable that was used for building - PostgreSQL. This shows the C compiler used. + PostgreSQL. This shows the C compiler used. - + Print the value of the CPPFLAGS variable that was used for building - PostgreSQL. This shows C compiler switches needed - at preprocessing time (typically, -I switches). + PostgreSQL. This shows C compiler switches needed + at preprocessing time (typically, -I switches). - + Print the value of the CFLAGS variable that was used for building - PostgreSQL. This shows C compiler switches. + PostgreSQL. This shows C compiler switches. - + Print the value of the CFLAGS_SL variable that was used for building - PostgreSQL. This shows extra C compiler switches + PostgreSQL. This shows extra C compiler switches used for building shared libraries. - + Print the value of the LDFLAGS variable that was used for building - PostgreSQL. This shows linker switches. + PostgreSQL. This shows linker switches. - + Print the value of the LDFLAGS_EX variable that was used for building - PostgreSQL. This shows linker switches + PostgreSQL. This shows linker switches used for building executables only. - + Print the value of the LDFLAGS_SL variable that was used for building - PostgreSQL. This shows linker switches + PostgreSQL. This shows linker switches used for building shared libraries only. - + Print the value of the LIBS variable that was used for building - PostgreSQL. This normally contains -l - switches for external libraries linked into PostgreSQL. + PostgreSQL. This normally contains -l + switches for external libraries linked into PostgreSQL. - + - Print the version of PostgreSQL. + Print the version of PostgreSQL. - - + + Show help about pg_config command line @@ -303,9 +303,9 @@ , , , , , , - and were added in PostgreSQL 8.1. - The option was added in PostgreSQL 8.4. - The option was added in PostgreSQL 9.0. + and were added in PostgreSQL 8.1. + The option was added in PostgreSQL 8.4. + The option was added in PostgreSQL 9.0. diff --git a/doc/src/sgml/ref/pg_controldata.sgml b/doc/src/sgml/ref/pg_controldata.sgml index 4a360d61fd..4d4feacb93 100644 --- a/doc/src/sgml/ref/pg_controldata.sgml +++ b/doc/src/sgml/ref/pg_controldata.sgml @@ -31,7 +31,7 @@ PostgreSQL documentation Description pg_controldata prints information initialized during - initdb, such as the catalog version. + initdb, such as the catalog version. It also shows information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. @@ -41,10 +41,10 @@ PostgreSQL documentation This utility can only be run by the user who initialized the cluster because it requires read access to the data directory. You can specify the data directory on the command line, or use - the environment variable PGDATA. This utility supports the options - and , which print the pg_controldata version and exit. It also - supports options and , which output the supported arguments. diff --git a/doc/src/sgml/ref/pg_ctl-ref.sgml b/doc/src/sgml/ref/pg_ctl-ref.sgml index 12fa011c4e..3bcf0a2e9f 100644 --- a/doc/src/sgml/ref/pg_ctl-ref.sgml +++ b/doc/src/sgml/ref/pg_ctl-ref.sgml @@ -159,13 +159,13 @@ PostgreSQL documentation mode launches a new server. 
The server is started in the background, and its standard input is attached - to /dev/null (or nul on Windows). + to /dev/null (or nul on Windows). On Unix-like systems, by default, the server's standard output and standard error are sent to pg_ctl's standard output (not standard error). The standard output of pg_ctl should then be redirected to a file or piped to another process such as a log rotating program - like rotatelogs; otherwise postgres + like rotatelogs; otherwise postgres will write its output to the controlling terminal (from the background) and will not leave the shell's process group. On Windows, by default the server's standard output and standard error @@ -203,7 +203,7 @@ PostgreSQL documentation mode simply sends the - postgres server process a SIGHUP + postgres server process a SIGHUP signal, causing it to reread its configuration files (postgresql.conf, pg_hba.conf, etc.). This allows changing @@ -228,14 +228,14 @@ PostgreSQL documentation mode sends a signal to a specified process. - This is primarily valuable on Microsoft Windows - which does not have a built-in kill command. Use - --help to see a list of supported signal names. + This is primarily valuable on Microsoft Windows + which does not have a built-in kill command. Use + --help to see a list of supported signal names. - mode registers the PostgreSQL - server as a system service on Microsoft Windows. + mode registers the PostgreSQL + server as a system service on Microsoft Windows. The option allows selection of service start type, either auto (start service automatically on system startup) or demand (start service on demand). @@ -243,7 +243,7 @@ PostgreSQL documentation mode unregisters a system service - on Microsoft Windows. This undoes the effects of the + on Microsoft Windows. This undoes the effects of the command. @@ -286,7 +286,7 @@ PostgreSQL documentation Append the server log output to filename. If the file does not - exist, it is created. The umask is set to 077, + exist, it is created. The umask is set to 077, so access to the log file is disallowed to other users by default. @@ -313,11 +313,11 @@ PostgreSQL documentation Specifies options to be passed directly to the postgres command. - can be specified multiple times, with all the given options being passed through. - The options should usually be surrounded by single or + The options should usually be surrounded by single or double quotes to ensure that they are passed through as a group. @@ -330,11 +330,11 @@ PostgreSQL documentation Specifies options to be passed directly to the initdb command. - can be specified multiple times, with all the given options being passed through. - The options should usually be surrounded by single or + The options should usually be surrounded by single or double quotes to ensure that they are passed through as a group. @@ -377,15 +377,15 @@ PostgreSQL documentation Specifies the maximum number of seconds to wait when waiting for an operation to complete (see option ). Defaults to - the value of the PGCTLTIMEOUT environment variable or, if + the value of the PGCTLTIMEOUT environment variable or, if not set, to 60 seconds. - - + + Print the pg_ctl version and exit. @@ -446,8 +446,8 @@ PostgreSQL documentation - - + + Show help about pg_ctl command line @@ -507,7 +507,7 @@ PostgreSQL documentation - Start type of the system service. start-type can + Start type of the system service. start-type can be auto, or demand, or the first letter of one of these two. If this option is omitted, auto is the default. 
@@ -559,14 +559,14 @@ PostgreSQL documentation Most pg_ctl modes require knowing the data directory - location; therefore, the option is required unless PGDATA is set. - pg_ctl, like most other PostgreSQL + pg_ctl, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + also uses the environment variables supported by libpq (see ). @@ -661,8 +661,8 @@ PostgreSQL documentation - But if diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml index 7ccbee4855..79a9ee0983 100644 --- a/doc/src/sgml/ref/pg_dump.sgml +++ b/doc/src/sgml/ref/pg_dump.sgml @@ -116,8 +116,8 @@ PostgreSQL documentation - - + + Dump only the data, not the schema (data definitions). @@ -126,19 +126,19 @@ PostgreSQL documentation This option is similar to, but for historical reasons not identical - to, specifying . - - + + Include large objects in the dump. This is the default behavior - except when , , or + is specified. The switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that blobs are considered data and therefore will be included when @@ -148,17 +148,17 @@ PostgreSQL documentation - - + + Exclude large objects in the dump. - When both and are given, the behavior is to output large objects, when data is being dumped, see the - documentation. @@ -170,7 +170,7 @@ PostgreSQL documentation Output commands to clean (drop) database objects prior to outputting the commands for creating them. - (Unless is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.) @@ -184,8 +184,8 @@ PostgreSQL documentation - - + + Begin the output with a command to create the @@ -242,8 +242,8 @@ PostgreSQL documentation - p - plain + p + plain Output a plain-text SQL script file (the default). @@ -252,8 +252,8 @@ PostgreSQL documentation - c - custom + c + custom Output a custom-format archive suitable for input into @@ -267,8 +267,8 @@ PostgreSQL documentation - d - directory + d + directory Output a directory-format archive suitable for input into @@ -286,8 +286,8 @@ PostgreSQL documentation - t - tar + t + tar Output a tar-format archive suitable for input @@ -305,8 +305,8 @@ PostgreSQL documentation - - + + Run the dump in parallel by dumping njobs @@ -315,13 +315,13 @@ PostgreSQL documentation directory output format because this is the only output format where multiple processes can write their data at the same time. - pg_dump will open njobs + pg_dump will open njobs + 1 connections to the database, so make sure your setting is high enough to accommodate all connections. Requesting exclusive locks on database objects while running a parallel dump could - cause the dump to fail. The reason is that the pg_dump master process + cause the dump to fail. The reason is that the pg_dump master process requests shared locks on the objects that the worker processes are going to dump later in order to make sure that nobody deletes them and makes them go away while the dump is running. @@ -330,10 +330,10 @@ PostgreSQL documentation released. Consequently any other access to the table will not be granted either and will queue after the exclusive lock request. This includes the worker process trying to dump the table. Without any precautions this would be a classic deadlock situation. - To detect this conflict, the pg_dump worker process requests another - shared lock using the NOWAIT option. 
If the worker process is not granted + To detect this conflict, the pg_dump worker process requests another + shared lock using the NOWAIT option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime - and there is no way to continue with the dump, so pg_dump has no choice + and there is no way to continue with the dump, so pg_dump has no choice but to abort the dump. @@ -371,10 +371,10 @@ PostgreSQL documentation schema itself, and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be - selected by writing multiple switches. Also, the schema parameter is interpreted as a pattern according to the same rules used by - psql's \d commands (see psql's \d commands (see ), so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern @@ -384,7 +384,7 @@ PostgreSQL documentation - When is specified, pg_dump makes no attempt to dump any other database objects that the selected schema(s) might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully @@ -394,9 +394,9 @@ PostgreSQL documentation - Non-schema objects such as blobs are not dumped when is specified. You can add blobs back to the dump with the - switch. @@ -410,29 +410,29 @@ PostgreSQL documentation Do not dump any schemas matching the schema pattern. The pattern is - interpreted according to the same rules as for . + can be given more than once to exclude schemas matching any of several patterns. - When both and are given, the behavior + is to dump just the schemas that match at least one + switch but no switches. If appears + without , then schemas matching are excluded from what is otherwise a normal dump. - - + + Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application references - the OID + the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used. @@ -440,21 +440,21 @@ PostgreSQL documentation - + Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues - ALTER OWNER or + ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give - that user ownership of all the objects, specify . @@ -484,18 +484,18 @@ PostgreSQL documentation Dump only the object definitions (schema), not data. - This option is the inverse of . It is similar to, but for historical reasons not identical to, specifying - . - (Do not confuse this with the To exclude table data for only a subset of tables in the database, - see . @@ -506,7 +506,7 @@ PostgreSQL documentation Specify the superuser user name to use when disabling triggers. - This is relevant only if is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.) @@ -520,12 +520,12 @@ PostgreSQL documentation Dump only tables with names matching table. - For this purpose, table includes views, materialized views, + For this purpose, table includes views, materialized views, sequences, and foreign tables. Multiple tables - can be selected by writing multiple switches. 
Also, the table parameter is interpreted as a pattern according to the same rules used by - psql's \d commands (see psql's \d commands (see ), so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern @@ -534,15 +534,15 @@ PostgreSQL documentation - The and switches have no effect when + is used, because tables selected by will be dumped regardless of those switches, and non-table objects will not be dumped. - When is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully @@ -552,14 +552,14 @@ PostgreSQL documentation - The behavior of the switch is not entirely upward compatible with pre-8.2 PostgreSQL - versions. Formerly, writing -t tab would dump all - tables named tab, but now it just dumps whichever one + versions. Formerly, writing -t tab would dump all + tables named tab, but now it just dumps whichever one is visible in your default search path. To get the old behavior - you can write -t '*.tab'. Also, you must write something - like -t sch.tab to select a table in a particular schema, - rather than the old locution of -n sch -t tab. + you can write -t '*.tab'. Also, you must write something + like -t sch.tab to select a table in a particular schema, + rather than the old locution of -n sch -t tab. @@ -572,24 +572,24 @@ PostgreSQL documentation Do not dump any tables matching the table pattern. The pattern is - interpreted according to the same rules as for . + can be given more than once to exclude tables matching any of several patterns. - When both and are given, the behavior + is to dump just the tables that match at least one + switch but no switches. If appears + without , then tables matching are excluded from what is otherwise a normal dump. - - + + Specifies verbose mode. This will cause @@ -601,8 +601,8 @@ PostgreSQL documentation - - + + Print the pg_dump version and exit. @@ -611,9 +611,9 @@ PostgreSQL documentation - - - + + + Prevent dumping of access privileges (grant/revoke commands). @@ -632,7 +632,7 @@ PostgreSQL documentation at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been - fed through gzip; but the default is not to compress. + fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all. @@ -670,7 +670,7 @@ PostgreSQL documentation - + This option disables the use of dollar quoting for function bodies, @@ -680,7 +680,7 @@ PostgreSQL documentation - + This option is relevant only when creating a data-only dump. @@ -692,9 +692,9 @@ PostgreSQL documentation - Presently, the commands emitted for must be done as superuser. So, you should also specify - a superuser name with , or preferably be careful to start the resulting script as a superuser. @@ -707,7 +707,7 @@ PostgreSQL documentation - + This option is relevant only when dumping the contents of a table @@ -734,14 +734,14 @@ PostgreSQL documentation Do not dump data for any tables matching the table pattern. The pattern is - interpreted according to the same rules as for . + can be given more than once to exclude tables matching any of several patterns. This option is useful when you need the definition of a particular table even though you do not need the data in it. 
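For illustration, assuming a large table named audit_log whose contents are not needed (the table name is hypothetical), a dump that keeps the table definition but omits its rows could look like:

$ pg_dump --exclude-table-data='audit_log' mydb > db.sql

The same pattern rules apply as for table selection, so a wildcard such as 'audit_*' would exclude the data of several tables at once.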
- To exclude data for all tables in the database, see . @@ -752,7 +752,7 @@ PostgreSQL documentation Use conditional commands (i.e. add an IF EXISTS clause) when cleaning database objects. This option is not valid - unless is also specified. @@ -782,9 +782,9 @@ PostgreSQL documentation Do not wait forever to acquire shared table locks at the beginning of the dump. Instead fail if unable to lock a table within the specified - timeout. The timeout may be + timeout. The timeout may be specified in any of the formats accepted by SET - statement_timeout. (Allowed formats vary depending on the server + statement_timeout. (Allowed formats vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions.) @@ -833,10 +833,10 @@ PostgreSQL documentation - + - This option allows running pg_dump -j against a pre-9.2 + This option allows running pg_dump -j against a pre-9.2 server, see the documentation of the parameter for more details. @@ -873,25 +873,25 @@ PostgreSQL documentation - + Force quoting of all identifiers. This option is recommended when - dumping a database from a server whose PostgreSQL - major version is different from pg_dump's, or when + dumping a database from a server whose PostgreSQL + major version is different from pg_dump's, or when the output is intended to be loaded into a server of a different - major version. By default, pg_dump quotes only + major version. By default, pg_dump quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets - of reserved words. Using prevents such issues, at the price of a harder-to-read dump script. - + When dumping a COPY or INSERT statement for a partitioned table, @@ -910,7 +910,7 @@ PostgreSQL documentation Only dump the named section. The section name can be - , , or . This option can be specified more than once to select multiple sections. The default is to dump all sections. @@ -981,7 +981,7 @@ PostgreSQL documentation - + Require that each schema @@ -1003,23 +1003,23 @@ PostgreSQL documentation - + - Output SQL-standard SET SESSION AUTHORIZATION commands - instead of ALTER OWNER commands to determine object + Output SQL-standard SET SESSION AUTHORIZATION commands + instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore - properly. Also, a dump using SET SESSION AUTHORIZATION + properly. Also, a dump using SET SESSION AUTHORIZATION will certainly require superuser privileges to restore correctly, - whereas ALTER OWNER requires lesser privileges. + whereas ALTER OWNER requires lesser privileges. - - + + Show help about pg_dump command line @@ -1036,8 +1036,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to connect to. This is @@ -1093,8 +1093,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -1122,7 +1122,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_dump will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -1133,11 +1133,11 @@ PostgreSQL documentation Specifies a role name to be used to create the dump. 
- This option causes pg_dump to issue a - SET ROLE rolename + This option causes pg_dump to issue a + SET ROLE rolename command after connecting to the database. It is useful when the - authenticated user (specified by - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -1192,7 +1192,7 @@ PostgreSQL documentation The database activity of pg_dump is normally collected by the statistics collector. If this is - undesirable, you can set parameter track_counts + undesirable, you can set parameter track_counts to false via PGOPTIONS or the ALTER USER command. @@ -1204,11 +1204,11 @@ PostgreSQL documentation Notes - If your database cluster has any local additions to the template1 database, + If your database cluster has any local additions to the template1 database, be careful to restore the output of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database - without any local additions, copy from template0 not template1, + without any local additions, copy from template0 not template1, for example: CREATE DATABASE foo WITH TEMPLATE template0; @@ -1216,7 +1216,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; - When a data-only dump is chosen and the option is used, pg_dump emits commands to disable triggers on user tables before inserting the data, and then commands to re-enable them after the data has been @@ -1232,30 +1232,30 @@ CREATE DATABASE foo WITH TEMPLATE template0; to ensure optimal performance; see and for more information. The dump file also does not - contain any ALTER DATABASE ... SET commands; + contain any ALTER DATABASE ... SET commands; these settings are dumped by , along with database users and other installation-wide settings. Because pg_dump is used to transfer data - to newer versions of PostgreSQL, the output of + to newer versions of PostgreSQL, the output of pg_dump can be expected to load into - PostgreSQL server versions newer than - pg_dump's version. pg_dump can also - dump from PostgreSQL servers older than its own version. + PostgreSQL server versions newer than + pg_dump's version. pg_dump can also + dump from PostgreSQL servers older than its own version. (Currently, servers back to version 8.0 are supported.) - However, pg_dump cannot dump from - PostgreSQL servers newer than its own major version; + However, pg_dump cannot dump from + PostgreSQL servers newer than its own major version; it will refuse to even try, rather than risk making an invalid dump. - Also, it is not guaranteed that pg_dump's output can + Also, it is not guaranteed that pg_dump's output can be loaded into a server of an older major version — not even if the dump was taken from a server of that version. Loading a dump file into an older server may require manual editing of the dump file to remove syntax not understood by the older server. Use of the option is recommended in cross-version cases, as it can prevent problems arising from varying - reserved-word lists in different PostgreSQL versions. + reserved-word lists in different PostgreSQL versions. 
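As a sketch of the cross-version case described above, a dump intended for loading into a server of a different major version might add the recommended quoting option:

$ pg_dump --quote-all-identifiers mydb > db.sql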
@@ -1276,7 +1276,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; Examples - To dump a database called mydb into a SQL-script file: + To dump a database called mydb into a SQL-script file: $ pg_dump mydb > db.sql @@ -1284,7 +1284,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; To reload such a script into a (freshly created) database named - newdb: + newdb: $ psql -d newdb -f db.sql @@ -1318,7 +1318,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; To reload an archive file into a (freshly created) database named - newdb: + newdb: $ pg_restore -d newdb db.dump @@ -1326,7 +1326,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; - To dump a single table named mytab: + To dump a single table named mytab: $ pg_dump -t mytab mydb > db.sql @@ -1334,8 +1334,8 @@ CREATE DATABASE foo WITH TEMPLATE template0; - To dump all tables whose names start with emp in the - detroit schema, except for the table named + To dump all tables whose names start with emp in the + detroit schema, except for the table named employee_log: @@ -1344,9 +1344,9 @@ CREATE DATABASE foo WITH TEMPLATE template0; - To dump all schemas whose names start with east or - west and end in gsm, excluding any schemas whose - names contain the word test: + To dump all schemas whose names start with east or + west and end in gsm, excluding any schemas whose + names contain the word test: $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql @@ -1371,7 +1371,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; - To specify an upper-case or mixed-case name in and related switches, you need to double-quote the name; else it will be folded to lower case (see ). But diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml index 1dba702ad9..0a64c3548e 100644 --- a/doc/src/sgml/ref/pg_dumpall.sgml +++ b/doc/src/sgml/ref/pg_dumpall.sgml @@ -32,7 +32,7 @@ PostgreSQL documentation pg_dumpall is a utility for writing out - (dumping) all PostgreSQL databases + (dumping) all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to to restore the databases. It does this by @@ -63,7 +63,7 @@ PostgreSQL documentation times to the PostgreSQL server (once per database). If you use password authentication it will ask for a password each time. It is convenient to have a - ~/.pgpass file in such cases. See ~/.pgpass file in such cases. See for more information. @@ -78,8 +78,8 @@ PostgreSQL documentation - - + + Dump only the data, not the schema (data definitions). @@ -93,7 +93,7 @@ PostgreSQL documentation Include SQL commands to clean (drop) databases before - recreating them. DROP commands for roles and + recreating them. DROP commands for roles and tablespaces are added as well. @@ -134,13 +134,13 @@ PostgreSQL documentation - - + + Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application references - the OID + the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used. @@ -148,21 +148,21 @@ PostgreSQL documentation - + Do not output commands to set ownership of objects to match the original database. By default, pg_dumpall issues - ALTER OWNER or + ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created schema elements. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). 
To make a script that can be restored by any user, but will give - that user ownership of all the objects, specify . @@ -193,7 +193,7 @@ PostgreSQL documentation Specify the superuser user name to use when disabling triggers. - This is relevant only if is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.) @@ -211,21 +211,21 @@ PostgreSQL documentation - - + + Specifies verbose mode. This will cause pg_dumpall to output start/stop times to the dump file, and progress messages to standard error. - It will also enable verbose output in pg_dump. + It will also enable verbose output in pg_dump. - - + + Print the pg_dumpall version and exit. @@ -234,9 +234,9 @@ PostgreSQL documentation - - - + + + Prevent dumping of access privileges (grant/revoke commands). @@ -273,7 +273,7 @@ PostgreSQL documentation - + This option disables the use of dollar quoting for function bodies, @@ -283,7 +283,7 @@ PostgreSQL documentation - + This option is relevant only when creating a data-only dump. @@ -295,9 +295,9 @@ PostgreSQL documentation - Presently, the commands emitted for must be done as superuser. So, you should also specify - a superuser name with , or preferably be careful to start the resulting script as a superuser. @@ -309,7 +309,7 @@ PostgreSQL documentation Use conditional commands (i.e. add an IF EXISTS clause) to clean databases and other objects. This option is not valid - unless is also specified. @@ -335,9 +335,9 @@ PostgreSQL documentation Do not wait forever to acquire shared table locks at the beginning of the dump. Instead, fail if unable to lock a table within the specified - timeout. The timeout may be + timeout. The timeout may be specified in any of the formats accepted by SET - statement_timeout. Allowed values vary depending on the server + statement_timeout. Allowed values vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions since 7.3. This option is ignored when dumping from a pre-7.3 server. @@ -426,25 +426,25 @@ PostgreSQL documentation - + Force quoting of all identifiers. This option is recommended when - dumping a database from a server whose PostgreSQL - major version is different from pg_dumpall's, or when + dumping a database from a server whose PostgreSQL + major version is different from pg_dumpall's, or when the output is intended to be loaded into a server of a different - major version. By default, pg_dumpall quotes only + major version. By default, pg_dumpall quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets - of reserved words. Using prevents such issues, at the price of a harder-to-read dump script. - + When dumping a COPY or INSERT statement for a partitioned table, @@ -459,11 +459,11 @@ PostgreSQL documentation - + - Output SQL-standard SET SESSION AUTHORIZATION commands - instead of ALTER OWNER commands to determine object + Output SQL-standard SET SESSION AUTHORIZATION commands + instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards compatible, but depending on the history of the objects in the dump, might not restore properly. @@ -472,8 +472,8 @@ PostgreSQL documentation - - + + Show help about pg_dumpall command line @@ -498,7 +498,7 @@ PostgreSQL documentation string. See for more information. 
- The option is called --dbname for consistency with other + The option is called --dbname for consistency with other client applications, but because pg_dumpall needs to connect to many databases, database name in the connection string will be ignored. Use -l option to specify @@ -559,8 +559,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -588,14 +588,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_dumpall will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. Note that the password prompt will occur again for each database to be dumped. Usually, it's better to set up a - ~/.pgpass file than to rely on manual password entry. + ~/.pgpass file than to rely on manual password entry. @@ -605,11 +605,11 @@ PostgreSQL documentation Specifies a role name to be used to create the dump. - This option causes pg_dumpall to issue a - SET ROLE rolename + This option causes pg_dumpall to issue a + SET ROLE rolename command after connecting to the database. It is useful when the - authenticated user (specified by - Once restored, it is wise to run ANALYZE on each + Once restored, it is wise to run ANALYZE on each database so the optimizer has useful statistics. You - can also run vacuumdb -a -z to analyze all + can also run vacuumdb -a -z to analyze all databases. diff --git a/doc/src/sgml/ref/pg_isready.sgml b/doc/src/sgml/ref/pg_isready.sgml index 2ee79a0bbe..f140c82079 100644 --- a/doc/src/sgml/ref/pg_isready.sgml +++ b/doc/src/sgml/ref/pg_isready.sgml @@ -43,8 +43,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to connect to. @@ -61,8 +61,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -74,8 +74,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or the local Unix-domain @@ -98,8 +98,8 @@ PostgreSQL documentation - - + + The maximum number of seconds to wait when attempting connection before @@ -110,8 +110,8 @@ PostgreSQL documentation - - + + Connect to the database as the user - - + + Print the pg_isready version and exit. @@ -131,8 +131,8 @@ PostgreSQL documentation - - + + Show help about pg_isready command line @@ -159,9 +159,9 @@ PostgreSQL documentation Environment - pg_isready, like most other PostgreSQL + pg_isready, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/pg_receivewal.sgml b/doc/src/sgml/ref/pg_receivewal.sgml index f0513dad2a..5395fde6d6 100644 --- a/doc/src/sgml/ref/pg_receivewal.sgml +++ b/doc/src/sgml/ref/pg_receivewal.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation pg_receivewal - option + option @@ -49,9 +49,9 @@ PostgreSQL documentation - Unlike the WAL receiver of a PostgreSQL standby server, pg_receivewal + Unlike the WAL receiver of a PostgreSQL standby server, pg_receivewal by default flushes WAL data only when a WAL file is closed. - The option must be specified to flush WAL data in real time. @@ -77,7 +77,7 @@ PostgreSQL documentation In the absence of fatal errors, pg_receivewal will run until terminated by the SIGINT signal - (ControlC). + (ControlC). @@ -108,7 +108,7 @@ PostgreSQL documentation - If there is a record with LSN exactly equal to lsn, + If there is a record with LSN exactly equal to lsn, the record will be processed. 
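A minimal sketch, assuming an archive directory /archive/wal and a replication slot named wal_archiver (both names are hypothetical), that streams WAL and flushes it in real time:

$ pg_receivewal -D /archive/wal --slot=wal_archiver --synchronous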
@@ -156,7 +156,7 @@ PostgreSQL documentation Require pg_receivewal to use an existing replication slot (see ). - When this option is used, pg_receivewal will report + When this option is used, pg_receivewal will report a flush position to the server, indicating when each segment has been synchronized to disk so that the server can remove that segment if it is not otherwise needed. @@ -181,7 +181,7 @@ PostgreSQL documentation Flush the WAL data to disk immediately after it has been received. Also send a status packet back to the server immediately after flushing, - regardless of --status-interval. + regardless of --status-interval. @@ -230,7 +230,7 @@ PostgreSQL documentation string. See for more information. - The option is called --dbname for consistency with other + The option is called --dbname for consistency with other client applications, but because pg_receivewal doesn't connect to any particular database in the cluster, database name in the connection string will be ignored. @@ -276,8 +276,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -305,7 +305,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_receivewal will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -345,8 +345,8 @@ PostgreSQL documentation - - + + Print the pg_receivewal version and exit. @@ -355,8 +355,8 @@ PostgreSQL documentation - - + + Show help about pg_receivewal command line @@ -386,8 +386,8 @@ PostgreSQL documentation Environment - This utility, like most other PostgreSQL utilities, - uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/pg_recvlogical.sgml b/doc/src/sgml/ref/pg_recvlogical.sgml index 9c7bb1907b..5add6113f3 100644 --- a/doc/src/sgml/ref/pg_recvlogical.sgml +++ b/doc/src/sgml/ref/pg_recvlogical.sgml @@ -40,11 +40,11 @@ PostgreSQL documentation - pg_recvlogical has no equivalent to the logical decoding + pg_recvlogical has no equivalent to the logical decoding SQL interface's peek and get modes. It sends replay confirmations for data lazily as it receives it and on clean exit. To examine pending data on a slot without consuming it, use - pg_logical_slot_peek_changes. + pg_logical_slot_peek_changes. @@ -125,7 +125,7 @@ PostgreSQL documentation - If there's a record with LSN exactly equal to lsn, + If there's a record with LSN exactly equal to lsn, the record will be output. @@ -145,7 +145,7 @@ PostgreSQL documentation Write received and decoded transaction data into this - file. Use - for stdout. + file. Use - for stdout. @@ -257,8 +257,8 @@ PostgreSQL documentation - - + + Enables verbose mode. @@ -353,7 +353,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_recvlogical will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -366,8 +366,8 @@ PostgreSQL documentation - - + + Print the pg_recvlogical version and exit. 
@@ -376,8 +376,8 @@ PostgreSQL documentation - - + + Show help about pg_recvlogical command line @@ -393,8 +393,8 @@ PostgreSQL documentation Environment - This utility, like most other PostgreSQL utilities, - uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/pg_resetwal.sgml b/doc/src/sgml/ref/pg_resetwal.sgml index defaf170dc..c8e5790a8e 100644 --- a/doc/src/sgml/ref/pg_resetwal.sgml +++ b/doc/src/sgml/ref/pg_resetwal.sgml @@ -34,7 +34,7 @@ PostgreSQL documentation pg_resetwal clears the write-ahead log (WAL) and optionally resets some other control information stored in the - pg_control file. This function is sometimes needed + pg_control file. This function is sometimes needed if these files have become corrupted. It should be used only as a last resort, when the server will not start due to such corruption. @@ -43,7 +43,7 @@ PostgreSQL documentation After running this command, it should be possible to start the server, but bear in mind that the database might contain inconsistent data due to partially-committed transactions. You should immediately dump your data, - run initdb, and reload. After reload, check for + run initdb, and reload. After reload, check for inconsistencies and repair as needed. @@ -52,21 +52,21 @@ PostgreSQL documentation it requires read/write access to the data directory. For safety reasons, you must specify the data directory on the command line. pg_resetwal does not use the environment variable - PGDATA. + PGDATA. If pg_resetwal complains that it cannot determine - valid data for pg_control, you can force it to proceed anyway - by specifying the (force) option. In this case plausible values will be substituted for the missing data. Most of the fields can be expected to match, but manual assistance might be needed for the next OID, next transaction ID and epoch, next multitransaction ID and offset, and WAL starting address fields. These fields can be set using the options discussed below. If you are not able to determine correct values for all - these fields, can still be used, but the recovered database must be treated with even more suspicion than - usual: an immediate dump and reload is imperative. Do not + usual: an immediate dump and reload is imperative. Do not execute any data-modifying operations in the database before you dump, as any such action is likely to make the corruption worse. @@ -81,7 +81,7 @@ PostgreSQL documentation Force pg_resetwal to proceed even if it cannot determine - valid data for pg_control, as explained above. + valid data for pg_control, as explained above. @@ -90,9 +90,9 @@ PostgreSQL documentation - The (no operation) option instructs pg_resetwal to print the values reconstructed from - pg_control and values about to be changed, and then exit + pg_control and values about to be changed, and then exit without modifying anything. This is mainly a debugging tool, but can be useful as a sanity check before allowing pg_resetwal to proceed for real. @@ -116,7 +116,7 @@ PostgreSQL documentation The following options are only needed when pg_resetwal is unable to determine appropriate values - by reading pg_control. Safe values can be determined as + by reading pg_control. Safe values can be determined as described below. For values that take numeric arguments, hexadecimal values can be specified by using the prefix 0x. 
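A minimal sketch, assuming the data directory is /usr/local/pgsql/data (a hypothetical path): first run the no-operation mode to inspect the values pg_resetwal would use, and only force the reset if they look plausible:

$ pg_resetwal -n /usr/local/pgsql/data
$ pg_resetwal -f /usr/local/pgsql/data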
@@ -134,7 +134,7 @@ PostgreSQL documentation A safe value for the oldest transaction ID for which the commit time can be retrieved (first part) can be determined by looking for the numerically smallest file name in the directory - pg_commit_ts under the data directory. Conversely, a safe + pg_commit_ts under the data directory. Conversely, a safe value for the newest transaction ID for which the commit time can be retrieved (second part) can be determined by looking for the numerically greatest file name in the same directory. The file names are in @@ -155,8 +155,8 @@ PostgreSQL documentation except in the field that is set by pg_resetwal, so any value will work so far as the database itself is concerned. You might need to adjust this value to ensure that replication - systems such as Slony-I and - Skytools work correctly — + systems such as Slony-I and + Skytools work correctly — if so, an appropriate value should be obtainable from the state of the downstream replicated database. @@ -173,22 +173,22 @@ PostgreSQL documentation The WAL starting address should be larger than any WAL segment file name currently existing in - the directory pg_wal under the data directory. + the directory pg_wal under the data directory. These names are also in hexadecimal and have three parts. The first - part is the timeline ID and should usually be kept the same. - For example, if 00000001000000320000004A is the - largest entry in pg_wal, use -l 00000001000000320000004B or higher. + part is the timeline ID and should usually be kept the same. + For example, if 00000001000000320000004A is the + largest entry in pg_wal, use -l 00000001000000320000004B or higher. pg_resetwal itself looks at the files in - pg_wal and chooses a default setting beyond the last existing file name. Therefore, manual adjustment of - @@ -204,10 +204,10 @@ PostgreSQL documentation A safe value for the next multitransaction ID (first part) can be determined by looking for the numerically largest file name in the - directory pg_multixact/offsets under the data directory, + directory pg_multixact/offsets under the data directory, adding one, and then multiplying by 65536 (0x10000). Conversely, a safe value for the oldest multitransaction ID (second part of - ) can be determined by looking for the numerically smallest file name in the same directory and multiplying by 65536. The file names are in hexadecimal, so the easiest way to do this is to specify the option value in hexadecimal and append four zeroes. @@ -239,7 +239,7 @@ PostgreSQL documentation A safe value can be determined by looking for the numerically largest - file name in the directory pg_multixact/members under the + file name in the directory pg_multixact/members under the data directory, adding one, and then multiplying by 52352 (0xCC80). The file names are in hexadecimal. There is no simple recipe such as the ones for other options of appending zeroes. @@ -256,12 +256,12 @@ PostgreSQL documentation A safe value can be determined by looking for the numerically largest - file name in the directory pg_xact under the data directory, + file name in the directory pg_xact under the data directory, adding one, and then multiplying by 1048576 (0x100000). Note that the file names are in hexadecimal. It is usually easiest to specify the option value in - hexadecimal too. For example, if 0011 is the largest entry - in pg_xact, -x 0x1200000 will work (five + hexadecimal too. 
For example, if 0011 is the largest entry + in pg_xact, -x 0x1200000 will work (five trailing zeroes provide the proper multiplier). diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml index a628e79310..ed535f6f89 100644 --- a/doc/src/sgml/ref/pg_restore.sgml +++ b/doc/src/sgml/ref/pg_restore.sgml @@ -98,7 +98,7 @@ This option is similar to, but for historical reasons not identical - to, specifying . @@ -109,7 +109,7 @@ Clean (drop) database objects before recreating them. - (Unless is used, this might generate some harmless error messages, if any objects were not present in the destination database.) @@ -128,8 +128,8 @@ When this option is used, the database named with - is used only to issue the initial DROP DATABASE and - CREATE DATABASE commands. All data is restored into the + is used only to issue the initial DROP DATABASE and + CREATE DATABASE commands. All data is restored into the database name that appears in the archive. @@ -183,8 +183,8 @@ - c - custom + c + custom The archive is in the custom format of @@ -194,8 +194,8 @@ - d - directory + d + directory The archive is a directory archive. @@ -204,8 +204,8 @@ - t - tar + t + tar The archive is a tar archive. @@ -222,7 +222,7 @@ Restore definition of named index only. Multiple indexes - may be specified with multiple switches. @@ -233,7 +233,7 @@ Run the most time-consuming parts - of pg_restore — those which load data, + of pg_restore — those which load data, create indexes, or create constraints — using multiple concurrent jobs. This option can dramatically reduce the time to restore a large database to a server running on a @@ -275,8 +275,8 @@ List the table of contents of the archive. The output of this operation can be used as input to the option. Note that - if filtering switches such as or are + used with , they will restrict the items listed. @@ -289,11 +289,11 @@ Restore only those archive elements that are listed in list-file, and restore them in the order they appear in the file. Note that - if filtering switches such as or are + used with , they will further restrict the items restored. - list-file is normally created by - editing the output of a previous - This option is the inverse of . It is similar to, but for historical reasons not identical to, specifying - . - (Do not confuse this with the @@ -401,7 +401,7 @@ Specify the superuser user name to use when disabling triggers. - This is relevant only if is used. @@ -412,16 +412,16 @@ Restore definition and/or data of only the named table. - For this purpose, table includes views, materialized views, + For this purpose, table includes views, materialized views, sequences, and foreign tables. Multiple tables - can be selected by writing multiple switches. This option can be combined with the option to specify table(s) in a particular schema. - When is specified, pg_restore + When is specified, pg_restore makes no attempt to restore any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that a specific-table restore into a clean database will @@ -433,14 +433,14 @@ This flag does not behave identically to the flag of pg_dump. There is not currently - any provision for wild-card matching in pg_restore, - nor can you include a schema name within its . - In versions prior to PostgreSQL 9.6, this flag + In versions prior to PostgreSQL 9.6, this flag matched only tables, not any other type of relation. @@ -453,7 +453,7 @@ Restore named trigger only. 
Multiple triggers may be specified with - multiple switches. @@ -469,8 +469,8 @@ - - + + Print the pg_restore version and exit. @@ -495,16 +495,16 @@ Execute the restore as a single transaction (that is, wrap the - emitted commands in BEGIN/COMMIT). This + emitted commands in BEGIN/COMMIT). This ensures that either all the commands complete successfully, or no changes are applied. This option implies - . - + This option is relevant only when performing a data-only restore. @@ -517,16 +517,16 @@ Presently, the commands emitted for - must be done as superuser. So you + should also specify a superuser name with or, preferably, run pg_restore as a - PostgreSQL superuser. + PostgreSQL superuser. - + This option is relevant only when restoring the contents of a table @@ -554,7 +554,7 @@ Use conditional commands (i.e. add an IF EXISTS clause) when cleaning database objects. This option is not valid - unless is also specified. @@ -568,8 +568,8 @@ With this option, data for such a table is skipped. This behavior is useful if the target database already contains the desired table contents. For example, - auxiliary tables for PostgreSQL extensions - such as PostGIS might already be loaded in + auxiliary tables for PostgreSQL extensions + such as PostGIS might already be loaded in the target database; specifying this option prevents duplicate or obsolete data from being loaded into them. @@ -627,7 +627,7 @@ Only restore the named section. The section name can be - , , or . This option can be specified more than once to select multiple sections. The default is to restore all sections. @@ -642,7 +642,7 @@ - + Require that each schema @@ -657,8 +657,8 @@ - Output SQL-standard SET SESSION AUTHORIZATION commands - instead of ALTER OWNER commands to determine object + Output SQL-standard SET SESSION AUTHORIZATION commands + instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly. @@ -667,8 +667,8 @@ - - + + Show help about pg_restore command line @@ -723,8 +723,8 @@ - - + + Never issue a password prompt. If the server requires @@ -752,7 +752,7 @@ for a password if the server demands password authentication. However, pg_restore will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -763,11 +763,11 @@ Specifies a role name to be used to perform the restore. - This option causes pg_restore to issue a - SET ROLE rolename + This option causes pg_restore to issue a + SET ROLE rolename command after connecting to the database. It is useful when the - authenticated user (specified by @@ -192,9 +192,9 @@ PostgreSQL documentation Environment - When option is used, pg_rewind also uses the environment variables - supported by libpq (see ). + supported by libpq (see ). @@ -224,7 +224,7 @@ PostgreSQL documentation Copy all those changed blocks from the source cluster to the target cluster, either using direct file system access - () or SQL (). @@ -237,9 +237,9 @@ PostgreSQL documentation Apply the WAL from the source cluster, starting from the checkpoint - created at failover. (Strictly speaking, pg_rewind + created at failover. (Strictly speaking, pg_rewind doesn't apply the WAL, it just creates a backup label file that - makes PostgreSQL start by replaying all WAL from + makes PostgreSQL start by replaying all WAL from that checkpoint forward.) 
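As an illustrative invocation (the data directory path and connection string are hypothetical), rewinding a former primary so that it can rejoin the new primary as a standby might look like:

$ pg_rewind --target-pgdata=/var/lib/pgsql/data --source-server='host=newprimary port=5432 user=postgres dbname=postgres'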
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml index cff88a4c1e..0b39726e30 100644 --- a/doc/src/sgml/ref/pg_waldump.sgml +++ b/doc/src/sgml/ref/pg_waldump.sgml @@ -133,7 +133,7 @@ PostgreSQL documentation Only display records generated by the specified resource manager. - If list is passed as name, print a list of valid resource manager + If list is passed as name, print a list of valid resource manager names, and exit. @@ -156,15 +156,15 @@ PostgreSQL documentation Timeline from which to read log records. The default is to use the - value in startseg, if that is specified; otherwise, the + value in startseg, if that is specified; otherwise, the default is 1. - - + + Print the pg_waldump version and exit. @@ -195,8 +195,8 @@ PostgreSQL documentation - - + + Show help about pg_waldump command line @@ -220,8 +220,8 @@ PostgreSQL documentation - pg_waldump cannot read WAL files with suffix - .partial. If those files need to be read, .partial + pg_waldump cannot read WAL files with suffix + .partial. If those files need to be read, .partial suffix needs to be removed from the file name. diff --git a/doc/src/sgml/ref/pgarchivecleanup.sgml b/doc/src/sgml/ref/pgarchivecleanup.sgml index abe01bef4f..65ba3df928 100644 --- a/doc/src/sgml/ref/pgarchivecleanup.sgml +++ b/doc/src/sgml/ref/pgarchivecleanup.sgml @@ -29,44 +29,44 @@ Description - pg_archivecleanup is designed to be used as an + pg_archivecleanup is designed to be used as an archive_cleanup_command to clean up WAL file archives when running as a standby server (see ). - pg_archivecleanup can also be used as a standalone program to + pg_archivecleanup can also be used as a standalone program to clean WAL file archives. To configure a standby - server to use pg_archivecleanup, put this into its + server to use pg_archivecleanup, put this into its recovery.conf configuration file: -archive_cleanup_command = 'pg_archivecleanup archivelocation %r' +archive_cleanup_command = 'pg_archivecleanup archivelocation %r' - where archivelocation is the directory from which WAL segment + where archivelocation is the directory from which WAL segment files should be removed. When used within , all WAL files - logically preceding the value of the %r argument will be removed - from archivelocation. This minimizes the number of files + logically preceding the value of the %r argument will be removed + from archivelocation. This minimizes the number of files that need to be retained, while preserving crash-restart capability. Use of - this parameter is appropriate if the archivelocation is a + this parameter is appropriate if the archivelocation is a transient staging area for this particular standby server, but - not when the archivelocation is intended as a + not when the archivelocation is intended as a long-term WAL archive area, or when multiple standby servers are recovering from the same archive location. When used as a standalone program all WAL files logically preceding the - oldestkeptwalfile will be removed from archivelocation. - In this mode, if you specify a .partial or .backup + oldestkeptwalfile will be removed from archivelocation. + In this mode, if you specify a .partial or .backup file name, then only the file prefix will be used as the - oldestkeptwalfile. This treatment of .backup + oldestkeptwalfile. This treatment of .backup file name allows you to remove all WAL files archived prior to a specific base backup without error. 
For example, the following example will remove all files older than - WAL file name 000000010000003700000010: + WAL file name 000000010000003700000010: pg_archivecleanup -d archive 000000010000003700000010.00000020.backup @@ -77,7 +77,7 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" pg_archivecleanup assumes that - archivelocation is a directory readable and writable by the + archivelocation is a directory readable and writable by the server-owning user. @@ -94,7 +94,7 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" - Print lots of debug logging output on stderr. + Print lots of debug logging output on stderr. @@ -103,14 +103,14 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" - Print the names of the files that would have been removed on stdout (performs a dry run). + Print the names of the files that would have been removed on stdout (performs a dry run). - - + + Print the pg_archivecleanup version and exit. @@ -119,7 +119,7 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" - extension + extension Provide an extension @@ -134,8 +134,8 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" - - + + Show help about pg_archivecleanup command line @@ -152,8 +152,8 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" pg_archivecleanup is designed to work with - PostgreSQL 8.0 and later when used as a standalone utility, - or with PostgreSQL 9.0 and later when used as an + PostgreSQL 8.0 and later when used as a standalone utility, + or with PostgreSQL 9.0 and later when used as an archive cleanup command. @@ -172,14 +172,14 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" archive_cleanup_command = 'pg_archivecleanup -d /mnt/standby/archive %r 2>>cleanup.log' where the archive directory is physically located on the standby server, - so that the archive_command is accessing it across NFS, + so that the archive_command is accessing it across NFS, but the files are local to the standby. This will: - produce debugging output in cleanup.log + produce debugging output in cleanup.log diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index f5db8d18d3..e509e6c7f6 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -34,12 +34,12 @@ Description pgbench is a simple program for running benchmark - tests on PostgreSQL. It runs the same sequence of SQL + tests on PostgreSQL. It runs the same sequence of SQL commands over and over, possibly in multiple concurrent database sessions, and then calculates the average transaction rate (transactions per second). By default, pgbench tests a scenario that is - loosely based on TPC-B, involving five SELECT, - UPDATE, and INSERT commands per transaction. + loosely based on TPC-B, involving five SELECT, + UPDATE, and INSERT commands per transaction. However, it is easy to test other cases by writing your own transaction script files. @@ -63,7 +63,7 @@ tps = 85.296346 (excluding connections establishing) settings. The next line reports the number of transactions completed and intended (the latter being just the product of number of clients and number of transactions per client); these will be equal unless the run - failed before completion. (In mode, only the actual number of transactions is printed.) The last two lines report the number of transactions per second, figured with and without counting the time to start database sessions. 
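As a concrete run that would produce a report of this form (the database name and settings are illustrative, and the standard tables must already have been initialized):

$ pgbench -c 10 -j 2 -t 1000 mydb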
@@ -71,27 +71,27 @@ tps = 85.296346 (excluding connections establishing) The default TPC-B-like transaction test requires specific tables to be - set up beforehand. pgbench should be invoked with - the (initialize) option to create and populate these tables. (When you are testing a custom script, you don't need this step, but will instead need to do whatever setup your test needs.) Initialization looks like: -pgbench -i other-options dbname +pgbench -i other-options dbname - where dbname is the name of the already-created - database to test in. (You may also need , + , and/or options to specify how to connect to the database server.) - pgbench -i creates four tables pgbench_accounts, - pgbench_branches, pgbench_history, and - pgbench_tellers, + pgbench -i creates four tables pgbench_accounts, + pgbench_branches, pgbench_history, and + pgbench_tellers, destroying any existing tables of these names. Be very careful to use another database if you have tables having these names! @@ -99,7 +99,7 @@ pgbench -i other-options dbn - At the default scale factor of 1, the tables initially + At the default scale factor of 1, the tables initially contain this many rows: table # of rows @@ -110,22 +110,22 @@ pgbench_accounts 100000 pgbench_history 0 You can (and, for most purposes, probably should) increase the number - of rows by using the (scale factor) option. The + (fillfactor) option might also be used at this point. Once you have done the necessary setup, you can run your benchmark - with a command that doesn't include , that is -pgbench options dbname +pgbench options dbname In nearly all cases, you'll need some options to make a useful test. - The most important options are (number of clients), + (number of transactions), (time limit), + and (specify a custom script file). See below for a full list. @@ -159,13 +159,13 @@ pgbench options dbname - fillfactor - fillfactor + fillfactor + fillfactor - Create the pgbench_accounts, - pgbench_tellers and - pgbench_branches tables with the given fillfactor. + Create the pgbench_accounts, + pgbench_tellers and + pgbench_branches tables with the given fillfactor. Default is 100. @@ -194,13 +194,13 @@ pgbench options dbname - scale_factor - scale_factor + scale_factor + scale_factor Multiply the number of rows generated by the scale factor. - For example, -s 100 will create 10,000,000 rows - in the pgbench_accounts table. Default is 1. + For example, -s 100 will create 10,000,000 rows + in the pgbench_accounts table. Default is 1. When the scale is 20,000 or larger, the columns used to hold account identifiers (aid columns) will switch to using larger integers (bigint), @@ -262,17 +262,17 @@ pgbench options dbname - - + scriptname[@weight] + =scriptname[@weight] Add the specified built-in script to the list of executed scripts. - An optional integer weight after @ allows to adjust the + An optional integer weight after @ allows to adjust the probability of drawing the script. If not specified, it is set to 1. - Available built-in scripts are: tpcb-like, - simple-update and select-only. + Available built-in scripts are: tpcb-like, + simple-update and select-only. Unambiguous prefixes of built-in names are accepted. - With special name list, show the list of built-in scripts + With special name list, show the list of built-in scripts and exit immediately. 
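As a sketch of script weighting with the built-in scripts (mydb is illustrative and must already be initialized), the first command draws select-only four times as often as tpcb-like, and the second simply lists the available built-ins:

$ pgbench -b tpcb-like@1 -b select-only@4 -T 60 mydb
$ pgbench -b list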
@@ -280,8 +280,8 @@ pgbench options dbname - clients - clients + clients + clients Number of clients simulated, that is, number of concurrent database @@ -313,24 +313,24 @@ pgbench options dbname - varname=value - varname=value + varname=value + varname=value Define a variable for use by a custom script (see below). - Multiple options are allowed. - - + filename[@weight] + filename[@weight] - Add a transaction script read from filename to + Add a transaction script read from filename to the list of executed scripts. - An optional integer weight after @ allows to adjust the + An optional integer weight after @ allows to adjust the probability of drawing the test. See below for details. @@ -338,8 +338,8 @@ pgbench options dbname - threads - threads + threads + threads Number of worker threads within pgbench. @@ -362,38 +362,38 @@ pgbench options dbname - limit - limit + limit + limit - Transaction which last more than limit milliseconds - are counted and reported separately, as late. + Transaction which last more than limit milliseconds + are counted and reported separately, as late. - When throttling is used ( - querymode - querymode + querymode + querymode Protocol to use for submitting queries to the server: - simple: use simple query protocol. + simple: use simple query protocol. - extended: use extended query protocol. + extended: use extended query protocol. - prepared: use extended query protocol with prepared statements. + prepared: use extended query protocol with prepared statements. The default is simple query protocol. (See @@ -408,11 +408,11 @@ pgbench options dbname Perform no vacuuming before running the test. - This option is necessary + This option is necessary if you are running a custom test scenario that does not include - the standard tables pgbench_accounts, - pgbench_branches, pgbench_history, and - pgbench_tellers. + the standard tables pgbench_accounts, + pgbench_branches, pgbench_history, and + pgbench_tellers. @@ -423,20 +423,20 @@ pgbench options dbname Run built-in simple-update script. - Shorthand for . - sec - sec + sec + sec - Show progress report every sec seconds. The report + Show progress report every sec seconds. The report includes the time since the beginning of the run, the tps since the last report, and the transaction latency average and standard - deviation since the last report. Under throttling (), the latency is computed with respect to the transaction scheduled start time, not the actual transaction beginning time, thus it also includes the average schedule lag time. @@ -457,8 +457,8 @@ pgbench options dbname - rate - rate + rate + rate Execute transactions targeting the specified rate instead of running @@ -487,7 +487,7 @@ pgbench options dbname - If is used together with , a transaction can lag behind so much that it is already over the latency limit when the previous transaction ends, because the latency is calculated from the scheduled start time. Such transactions are @@ -508,15 +508,15 @@ pgbench options dbname - scale_factor - scale_factor + scale_factor + scale_factor - Report the specified scale factor in pgbench's + Report the specified scale factor in pgbench's output. With the built-in tests, this is not necessary; the correct scale factor will be detected by counting the number of - rows in the pgbench_branches table. - However, when testing only custom benchmarks ( option), the scale factor will be reported as 1 unless this option is used. @@ -528,14 +528,14 @@ pgbench options dbname Run built-in select-only script. 
- Shorthand for . - transactions - transactions + transactions + transactions Number of transactions each client runs. Default is 10. @@ -544,8 +544,8 @@ pgbench options dbname - seconds - seconds + seconds + seconds Run the test for this many seconds, rather than a fixed number of @@ -561,15 +561,15 @@ pgbench options dbname Vacuum all four standard tables before running the test. - With neither - + Length of aggregation interval (in seconds). May be used only @@ -580,11 +580,11 @@ pgbench options dbname - + Set the filename prefix for the log files created by - @@ -593,7 +593,7 @@ pgbench options dbname - When showing progress (option ), use a timestamp (Unix epoch) instead of the number of seconds since the beginning of the run. The unit is in seconds, with millisecond precision after the dot. @@ -603,7 +603,7 @@ pgbench options dbname - + Sampling rate, used when writing data into the log, to reduce the @@ -635,8 +635,8 @@ pgbench options dbname - hostname - hostname + hostname + hostname The database server's host name @@ -645,8 +645,8 @@ pgbench options dbname - port - port + port + port The database server's port number @@ -655,8 +655,8 @@ pgbench options dbname - login - login + login + login The user name to connect as @@ -665,8 +665,8 @@ pgbench options dbname - - + + Print the pgbench version and exit. @@ -675,8 +675,8 @@ pgbench options dbname - - + + Show help about pgbench command line @@ -694,23 +694,23 @@ pgbench options dbname Notes - What is the <quote>Transaction</> Actually Performed in <application>pgbench</application>? + What is the <quote>Transaction</quote> Actually Performed in <application>pgbench</application>? - pgbench executes test scripts chosen randomly + pgbench executes test scripts chosen randomly from a specified list. - They include built-in scripts with and + user-provided custom scripts with . Each script may be given a relative weight specified after a - @ so as to change its drawing probability. - The default weight is 1. - Scripts with a weight of 0 are ignored. + @ so as to change its drawing probability. + The default weight is 1. + Scripts with a weight of 0 are ignored. - The default built-in transaction script (also invoked with @@ -726,15 +726,15 @@ pgbench options dbname - If you select the simple-update built-in (also ), steps 4 and 5 aren't included in the transaction. This will avoid update contention on these tables, but it makes the test case even less like TPC-B. - If you select the select-only built-in (also @@ -745,26 +745,26 @@ pgbench options dbname pgbench has support for running custom benchmark scenarios by replacing the default transaction script (described above) with a transaction script read from a file - ( option). In this case a transaction + ( option). In this case a transaction counts as one execution of a script file. A script file contains one or more SQL commands terminated by semicolons. Empty lines and lines beginning with - -- are ignored. Script files can also contain - meta commands, which are interpreted by pgbench + -- are ignored. Script files can also contain + meta commands, which are interpreted by pgbench itself, as described below. - Before PostgreSQL 9.6, SQL commands in script files + Before PostgreSQL 9.6, SQL commands in script files were terminated by newlines, and so they could not be continued across - lines. Now a semicolon is required to separate consecutive + lines. 
Now a semicolon is required to separate consecutive SQL commands (though a SQL command does not need one if it is followed by a meta command). If you need to create a script file that works with - both old and new versions of pgbench, be sure to write + both old and new versions of pgbench, be sure to write each SQL command on a single line ending with a semicolon. @@ -773,15 +773,15 @@ pgbench options dbname There is a simple variable-substitution facility for script files. Variable names must consist of letters (including non-Latin letters), digits, and underscores. - Variables can be set by the command-line option, explained above, or by the meta commands explained below. - In addition to any variables preset by command-line options, there are a few variables that are preset automatically, listed in . A value specified for these - variables using takes precedence over the automatic presets. Once set, a variable's value can be inserted into a SQL command by writing - :variablename. When running more than + :variablename. When running more than one client session, each session has its own set of variables. @@ -810,7 +810,7 @@ pgbench options dbname
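A minimal custom-script sketch, assuming the standard pgbench_accounts table exists; the file name myscript.sql and the variable range are hypothetical. The script file contains:

\set aid random(1, :range)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

and could be run with the variable supplied on the command line:

$ pgbench -f myscript.sql -D range=100000 -c 4 -T 30 mydb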
- Script file meta commands begin with a backslash (\) and + Script file meta commands begin with a backslash (\) and normally extend to the end of the line, although they can be continued to additional lines by writing backslash-return. Arguments to a meta command are separated by white space. @@ -820,20 +820,20 @@ pgbench options dbname - \set varname expression + \set varname expression - Sets variable varname to a value calculated - from expression. - The expression may contain integer constants such as 5432, - double constants such as 3.14159, - references to variables :variablename, - unary operators (+, -) and binary operators - (+, -, *, /, - %) with their usual precedence and associativity, - function calls, and + Sets variable varname to a value calculated + from expression. + The expression may contain integer constants such as 5432, + double constants such as 3.14159, + references to variables :variablename, + unary operators (+, -) and binary operators + (+, -, *, /, + %) with their usual precedence and associativity, + function calls, and parentheses. @@ -849,16 +849,16 @@ pgbench options dbname - \sleep number [ us | ms | s ] + \sleep number [ us | ms | s ] Causes script execution to sleep for the specified duration in - microseconds (us), milliseconds (ms) or seconds - (s). If the unit is omitted then seconds are the default. - number can be either an integer constant or a - :variablename reference to a variable + microseconds (us), milliseconds (ms) or seconds + (s). If the unit is omitted then seconds are the default. + number can be either an integer constant or a + :variablename reference to a variable having an integer value. @@ -872,22 +872,22 @@ pgbench options dbname - \setshell varname command [ argument ... ] + \setshell varname command [ argument ... ] - Sets variable varname to the result of the shell command - command with the given argument(s). + Sets variable varname to the result of the shell command + command with the given argument(s). The command must return an integer value through its standard output. - command and each argument can be either - a text constant or a :variablename reference - to a variable. If you want to use an argument starting + command and each argument can be either + a text constant or a :variablename reference + to a variable. If you want to use an argument starting with a colon, write an additional colon at the beginning of - argument. + argument. @@ -900,7 +900,7 @@ pgbench options dbname - \shell command [ argument ... ] + \shell command [ argument ... ] @@ -924,7 +924,7 @@ pgbench options dbname The functions listed in are built - into pgbench and may be used in expressions appearing in + into pgbench and may be used in expressions appearing in \set. @@ -943,123 +943,123 @@ pgbench options dbname - abs(a) - same as a - absolute value - abs(-17) - 17 + abs(a) + same as a + absolute value + abs(-17) + 17 - debug(a) - same as a - print a to stderr, - and return a - debug(5432.1) - 5432.1 + debug(a) + same as a + print a to stderr, + and return a + debug(5432.1) + 5432.1 - double(i) - double - cast to double - double(5432) - 5432.0 + double(i) + double + cast to double + double(5432) + 5432.0 - greatest(a [, ... ] ) - double if any a is double, else integer - largest value among arguments - greatest(5, 4, 3, 2) - 5 + greatest(a [, ... 
] ) + double if any a is double, else integer + largest value among arguments + greatest(5, 4, 3, 2) + 5 - int(x) - integer - cast to int - int(5.4 + 3.8) - 9 + int(x) + integer + cast to int + int(5.4 + 3.8) + 9 - least(a [, ... ] ) - double if any a is double, else integer - smallest value among arguments - least(5, 4, 3, 2.1) - 2.1 + least(a [, ... ] ) + double if any a is double, else integer + smallest value among arguments + least(5, 4, 3, 2.1) + 2.1 - pi() - double - value of the constant PI - pi() - 3.14159265358979323846 + pi() + double + value of the constant PI + pi() + 3.14159265358979323846 - random(lb, ub) - integer - uniformly-distributed random integer in [lb, ub] - random(1, 10) - an integer between 1 and 10 + random(lb, ub) + integer + uniformly-distributed random integer in [lb, ub] + random(1, 10) + an integer between 1 and 10 - random_exponential(lb, ub, parameter) - integer - exponentially-distributed random integer in [lb, ub], - see below - random_exponential(1, 10, 3.0) - an integer between 1 and 10 + random_exponential(lb, ub, parameter) + integer + exponentially-distributed random integer in [lb, ub], + see below + random_exponential(1, 10, 3.0) + an integer between 1 and 10 - random_gaussian(lb, ub, parameter) - integer - Gaussian-distributed random integer in [lb, ub], - see below - random_gaussian(1, 10, 2.5) - an integer between 1 and 10 + random_gaussian(lb, ub, parameter) + integer + Gaussian-distributed random integer in [lb, ub], + see below + random_gaussian(1, 10, 2.5) + an integer between 1 and 10 - sqrt(x) - double - square root - sqrt(2.0) - 1.414213562 + sqrt(x) + double + square root + sqrt(2.0) + 1.414213562 - The random function generates values using a uniform + The random function generates values using a uniform distribution, that is all the values are drawn within the specified - range with equal probability. The random_exponential and - random_gaussian functions require an additional double + range with equal probability. The random_exponential and + random_gaussian functions require an additional double parameter which determines the precise shape of the distribution. - For an exponential distribution, parameter + For an exponential distribution, parameter controls the distribution by truncating a quickly-decreasing - exponential distribution at parameter, and then + exponential distribution at parameter, and then projecting onto integers between the bounds. To be precise, with f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter)) - Then value i between min and - max inclusive is drawn with probability: - f(i) - f(i + 1). + Then value i between min and + max inclusive is drawn with probability: + f(i) - f(i + 1). - Intuitively, the larger the parameter, the more - frequently values close to min are accessed, and the - less frequently values close to max are accessed. - The closer to 0 parameter is, the flatter (more + Intuitively, the larger the parameter, the more + frequently values close to min are accessed, and the + less frequently values close to max are accessed. + The closer to 0 parameter is, the flatter (more uniform) the access distribution. A crude approximation of the distribution is that the most frequent 1% - values in the range, close to min, are drawn - parameter% of the time. - The parameter value must be strictly positive. + values in the range, close to min, are drawn + parameter% of the time. + The parameter value must be strictly positive. 
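 Typeset as display math (this is only a restatement of the definition above,
 with theta written for parameter), the exponential case is:

f(x) = \frac{\exp\bigl(-\theta\,(x - \min)/(\max - \min + 1)\bigr)}{1 - \exp(-\theta)},
\qquad
\Pr[i] = f(i) - f(i + 1) \quad \text{for } \min \le i \le \max .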
@@ -1067,32 +1067,32 @@ f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter)) For a Gaussian distribution, the interval is mapped onto a standard normal distribution (the classical bell-shaped Gaussian curve) truncated - at -parameter on the left and +parameter + at -parameter on the left and +parameter on the right. Values in the middle of the interval are more likely to be drawn. - To be precise, if PHI(x) is the cumulative distribution - function of the standard normal distribution, with mean mu - defined as (max + min) / 2.0, with + To be precise, if PHI(x) is the cumulative distribution + function of the standard normal distribution, with mean mu + defined as (max + min) / 2.0, with f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) / (2.0 * PHI(parameter) - 1) - then value i between min and - max inclusive is drawn with probability: - f(i + 0.5) - f(i - 0.5). - Intuitively, the larger the parameter, the more + then value i between min and + max inclusive is drawn with probability: + f(i + 0.5) - f(i - 0.5). + Intuitively, the larger the parameter, the more frequently values close to the middle of the interval are drawn, and the - less frequently values close to the min and - max bounds. About 67% of values are drawn from the - middle 1.0 / parameter, that is a relative - 0.5 / parameter around the mean, and 95% in the middle - 2.0 / parameter, that is a relative - 1.0 / parameter around the mean; for instance, if - parameter is 4.0, 67% of values are drawn from the + less frequently values close to the min and + max bounds. About 67% of values are drawn from the + middle 1.0 / parameter, that is a relative + 0.5 / parameter around the mean, and 95% in the middle + 2.0 / parameter, that is a relative + 1.0 / parameter around the mean; for instance, if + parameter is 4.0, 67% of values are drawn from the middle quarter (1.0 / 4.0) of the interval (i.e. from - 3.0 / 8.0 to 5.0 / 8.0) and 95% from - the middle half (2.0 / 4.0) of the interval (second and third - quartiles). The minimum parameter is 2.0 for performance + 3.0 / 8.0 to 5.0 / 8.0) and 95% from + the middle half (2.0 / 4.0) of the interval (second and third + quartiles). The minimum parameter is 2.0 for performance of the Box-Muller transform. @@ -1128,21 +1128,21 @@ END; Per-Transaction Logging - With the option (but without the option), - pgbench writes information about each transaction + pgbench writes information about each transaction to a log file. The log file will be named - prefix.nnn, - where prefix defaults to pgbench_log, and - nnn is the PID of the + prefix.nnn, + where prefix defaults to pgbench_log, and + nnn is the PID of the pgbench process. - The prefix can be changed by using the option. + If the option is 2 or higher, so that there are multiple worker threads, each will have its own log file. The first worker will use the same name for its log file as in the standard single worker case. The additional log files for the other workers will be named - prefix.nnn.mmm, - where mmm is a sequential number for each worker starting + prefix.nnn.mmm, + where mmm is a sequential number for each worker starting with 1. 
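 As a concrete illustration (the PID 12345 and the database name mydb below
 are only examples), a run with four clients spread over two worker threads:

pgbench -c 4 -j 2 -T 60 -l mydb
# with a pgbench PID of 12345 this produces two log files:
#   pgbench_log.12345      worker 0 (same name as the single-worker case)
#   pgbench_log.12345.1    worker 1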
@@ -1150,27 +1150,27 @@ END; The format of the log is: -client_id transaction_no time script_no time_epoch time_us schedule_lag +client_id transaction_no time script_no time_epoch time_us schedule_lag where - client_id indicates which client session ran the transaction, - transaction_no counts how many transactions have been + client_id indicates which client session ran the transaction, + transaction_no counts how many transactions have been run by that session, - time is the total elapsed transaction time in microseconds, - script_no identifies which script file was used (useful when - multiple scripts were specified with @@ -1182,9 +1182,9 @@ END; 0 202 2038 0 1175850569 2663 - Another example with --rate=100 - and --latency-limit=5 (note the additional - schedule_lag column): + Another example with --rate=100 + and --latency-limit=5 (note the additional + schedule_lag column): 0 81 4621 0 1412881037 912698 3005 0 82 6173 0 1412881037 914578 4304 @@ -1201,7 +1201,7 @@ END; When running a long test on hardware that can handle a lot of transactions, - the log files can become very large. The option can be used to log only a random sample of transactions. @@ -1214,30 +1214,30 @@ END; format is used for the log files: -interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency sum_lag sum_lag_2 min_lag max_lag skipped +interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency sum_lag sum_lag_2 min_lag max_lag skipped where - interval_start is the start of the interval (as a Unix + interval_start is the start of the interval (as a Unix epoch time stamp), - num_transactions is the number of transactions + num_transactions is the number of transactions within the interval, sum_latency is the sum of the transaction latencies within the interval, sum_latency_2 is the sum of squares of the transaction latencies within the interval, - min_latency is the minimum latency within the interval, + min_latency is the minimum latency within the interval, and - max_latency is the maximum latency within the interval. + max_latency is the maximum latency within the interval. The next fields, - sum_lag, sum_lag_2, min_lag, - and max_lag, are only present if the option is used. They provide statistics about the time each transaction had to wait for the previous one to finish, i.e. the difference between each transaction's scheduled start time and the time it actually started. - The very last field, skipped, - is only present if the option is used, too. It counts the number of transactions skipped because they would have started too late. Each transaction is counted in the interval when it was committed. @@ -1265,7 +1265,7 @@ END; Per-Statement Latencies - With the @@ -79,8 +79,8 @@ - - + + Print the pg_test_fsync version and exit. @@ -89,8 +89,8 @@ - - + + Show help about pg_test_fsync command line diff --git a/doc/src/sgml/ref/pgtesttiming.sgml b/doc/src/sgml/ref/pgtesttiming.sgml index c659101361..966546747e 100644 --- a/doc/src/sgml/ref/pgtesttiming.sgml +++ b/doc/src/sgml/ref/pgtesttiming.sgml @@ -27,7 +27,7 @@ Description - pg_test_timing is a tool to measure the timing overhead + pg_test_timing is a tool to measure the timing overhead on your system and confirm that the system time never moves backwards. Systems that are slow to collect timing data can give less accurate EXPLAIN ANALYZE results. @@ -57,8 +57,8 @@ - - + + Print the pg_test_timing version and exit. 
@@ -67,8 +67,8 @@ - - + + Show help about pg_test_timing command line diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index c3df343571..8785a3ded2 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -35,38 +35,38 @@ Description - pg_upgrade (formerly called pg_migrator) allows data - stored in PostgreSQL data files to be upgraded to a later PostgreSQL + pg_upgrade (formerly called pg_migrator) allows data + stored in PostgreSQL data files to be upgraded to a later PostgreSQL major version without the data dump/reload typically required for major version upgrades, e.g. from 9.6.3 to the current major release - of PostgreSQL. It is not required for minor version upgrades, e.g. from + of PostgreSQL. It is not required for minor version upgrades, e.g. from 9.6.2 to 9.6.3. Major PostgreSQL releases regularly add new features that often change the layout of the system tables, but the internal data storage - format rarely changes. pg_upgrade uses this fact + format rarely changes. pg_upgrade uses this fact to perform rapid upgrades by creating new system tables and simply reusing the old user data files. If a future major release ever changes the data storage format in a way that makes the old data - format unreadable, pg_upgrade will not be usable + format unreadable, pg_upgrade will not be usable for such upgrades. (The community will attempt to avoid such situations.) - pg_upgrade does its best to + pg_upgrade does its best to make sure the old and new clusters are binary-compatible, e.g. by checking for compatible compile-time settings, including 32/64-bit binaries. It is important that any external modules are also binary compatible, though this cannot - be checked by pg_upgrade. + be checked by pg_upgrade. pg_upgrade supports upgrades from 8.4.X and later to the current - major release of PostgreSQL, including snapshot and beta releases. + major release of PostgreSQL, including snapshot and beta releases. @@ -79,17 +79,17 @@ - bindir - bindir + bindir + bindir the old PostgreSQL executable directory; - environment variable PGBINOLD + environment variable PGBINOLD - bindir - bindir + bindir + bindir the new PostgreSQL executable directory; - environment variable PGBINNEW + environment variable PGBINNEW
@@ -99,17 +99,17 @@ - datadir - datadir + datadir + datadir the old cluster data directory; environment - variable PGDATAOLD + variable PGDATAOLD - datadir - datadir + datadir + datadir the new cluster data directory; environment - variable PGDATANEW + variable PGDATANEW @@ -143,17 +143,17 @@ - port - port + port + port the old cluster port number; environment - variable PGPORTOLD + variable PGPORTOLD - port - port + port + port the new cluster port number; environment - variable PGPORTNEW + variable PGPORTNEW @@ -164,10 +164,10 @@ - username - username + username + username cluster's install user name; environment - variable PGUSER + variable PGUSER @@ -207,17 +207,17 @@ If you are using a version-specific installation directory, e.g. - /opt/PostgreSQL/&majorversion;, you do not need to move the old cluster. The + /opt/PostgreSQL/&majorversion;, you do not need to move the old cluster. The graphical installers all use version-specific installation directories. If your installation directory is not version-specific, e.g. - /usr/local/pgsql, it is necessary to move the current PostgreSQL install - directory so it does not interfere with the new PostgreSQL installation. - Once the current PostgreSQL server is shut down, it is safe to rename the + /usr/local/pgsql, it is necessary to move the current PostgreSQL install + directory so it does not interfere with the new PostgreSQL installation. + Once the current PostgreSQL server is shut down, it is safe to rename the PostgreSQL installation directory; assuming the old directory is - /usr/local/pgsql, you can do: + /usr/local/pgsql, you can do: mv /usr/local/pgsql /usr/local/pgsql.old @@ -230,8 +230,8 @@ mv /usr/local/pgsql /usr/local/pgsql.old For source installs, build the new version - Build the new PostgreSQL source with configure flags that are compatible - with the old cluster. pg_upgrade will check pg_controldata to make + Build the new PostgreSQL source with configure flags that are compatible + with the old cluster. pg_upgrade will check pg_controldata to make sure all settings are compatible before starting the upgrade. @@ -241,7 +241,7 @@ mv /usr/local/pgsql /usr/local/pgsql.old Install the new server's binaries and support - files. pg_upgrade is included in a default installation. + files. pg_upgrade is included in a default installation. @@ -273,7 +273,7 @@ make prefix=/usr/local/pgsql.new install into the new cluster, e.g. pgcrypto.so, whether they are from contrib or some other source. Do not install the schema definitions, e.g. - CREATE EXTENSION pgcrypto, because these will be upgraded + CREATE EXTENSION pgcrypto, because these will be upgraded from the old cluster. Also, any custom full text search files (dictionary, synonym, thesaurus, stop words) must also be copied to the new cluster. @@ -284,9 +284,9 @@ make prefix=/usr/local/pgsql.new install Adjust authentication - pg_upgrade will connect to the old and new servers several - times, so you might want to set authentication to peer - in pg_hba.conf or use a ~/.pgpass file + pg_upgrade will connect to the old and new servers several + times, so you might want to set authentication to peer + in pg_hba.conf or use a ~/.pgpass file (see ). @@ -322,23 +322,23 @@ NET STOP postgresql-&majorversion; If you are upgrading standby servers using methods outlined in section , verify that the old standby - servers are caught up by running pg_controldata + servers are caught up by running pg_controldata against the old primary and standby clusters. 
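 For example (the data directory path below is a placeholder), run this
 against the old primary's and each old standby's data directory and compare
 the reported values:

pg_controldata /usr/local/pgsql.old/data | grep 'Latest checkpoint location'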
Verify that the - Latest checkpoint location values match in all clusters. + Latest checkpoint location values match in all clusters. (There will be a mismatch if old standby servers were shut down - before the old primary.) Also, change wal_level to - replica in the postgresql.conf file on the + before the old primary.) Also, change wal_level to + replica in the postgresql.conf file on the new primary cluster. - Run <application>pg_upgrade</> + Run <application>pg_upgrade</application> - Always run the pg_upgrade binary of the new server, not the old one. - pg_upgrade requires the specification of the old and new cluster's - data and executable (bin) directories. You can also specify + Always run the pg_upgrade binary of the new server, not the old one. + pg_upgrade requires the specification of the old and new cluster's + data and executable (bin) directories. You can also specify user and port values, and whether you want the data files linked instead of the default copy behavior. @@ -349,13 +349,13 @@ NET STOP postgresql-&majorversion; your old cluster once you start the new cluster after the upgrade. Link mode also requires that the old and new cluster data directories be in the - same file system. (Tablespaces and pg_wal can be on - different file systems.) See pg_upgrade --help for a full + same file system. (Tablespaces and pg_wal can be on + different file systems.) See pg_upgrade --help for a full list of options. - The option allows multiple CPU cores to be used for copying/linking of files and to dump and reload database schemas in parallel; a good place to start is the maximum of the number of CPU cores and tablespaces. This option can dramatically reduce the @@ -365,14 +365,14 @@ NET STOP postgresql-&majorversion; For Windows users, you must be logged into an administrative account, and - then start a shell as the postgres user and set the proper path: + then start a shell as the postgres user and set the proper path: RUNAS /USER:postgres "CMD.EXE" SET PATH=%PATH%;C:\Program Files\PostgreSQL\&majorversion;\bin; - and then run pg_upgrade with quoted directories, e.g.: + and then run pg_upgrade with quoted directories, e.g.: pg_upgrade.exe @@ -382,19 +382,19 @@ pg_upgrade.exe --new-bindir "C:/Program Files/PostgreSQL/&majorversion;/bin" - Once started, pg_upgrade will verify the two clusters are compatible - and then do the upgrade. You can use pg_upgrade --check + Once started, pg_upgrade will verify the two clusters are compatible + and then do the upgrade. You can use pg_upgrade --check to perform only the checks, even if the old server is still - running. pg_upgrade --check will also outline any + running. pg_upgrade --check will also outline any manual adjustments you will need to make after the upgrade. If you - are going to be using link mode, you should use the option with to enable link-mode-specific checks. - pg_upgrade requires write permission in the current directory. + pg_upgrade requires write permission in the current directory. Obviously, no one should be accessing the clusters during the - upgrade. pg_upgrade defaults to running servers + upgrade. pg_upgrade defaults to running servers on port 50432 to avoid unintended client connections. 
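 A typical Unix invocation might look like this (all directories below are
 placeholders; run it once with --check first, then again without that option
 to perform the actual upgrade):

pg_upgrade --check --link \
    --old-bindir  /usr/local/pgsql.old/bin \
    --new-bindir  /usr/local/pgsql/bin \
    --old-datadir /usr/local/pgsql.old/data \
    --new-datadir /usr/local/pgsql/data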
You can use the same port number for both clusters when doing an upgrade because the old and new clusters will not be running at the @@ -403,7 +403,7 @@ pg_upgrade.exe - If an error occurs while restoring the database schema, pg_upgrade will + If an error occurs while restoring the database schema, pg_upgrade will exit and you will have to revert to the old cluster as outlined in below. To try pg_upgrade again, you will need to modify the old cluster so the pg_upgrade schema restore succeeds. If the problem is a @@ -420,16 +420,16 @@ pg_upgrade.exe If you used link mode and have Streaming Replication (see ) or Log-Shipping (see ) standby servers, you can follow these steps to - quickly upgrade them. You will not be running pg_upgrade on - the standby servers, but rather rsync on the primary. + quickly upgrade them. You will not be running pg_upgrade on + the standby servers, but rather rsync on the primary. Do not start any servers yet. - If you did not use link mode, do not have or do not - want to use rsync, or want an easier solution, skip + If you did not use link mode, do not have or do not + want to use rsync, or want an easier solution, skip the instructions in this section and simply recreate the standby - servers once pg_upgrade completes and the new primary + servers once pg_upgrade completes and the new primary is running. @@ -445,11 +445,11 @@ pg_upgrade.exe - Make sure the new standby data directories do <emphasis>not</> exist + Make sure the new standby data directories do <emphasis>not</emphasis> exist - Make sure the new standby data directories do not - exist or are empty. If initdb was run, delete + Make sure the new standby data directories do not + exist or are empty. If initdb was run, delete the standby servers' new data directories. @@ -477,32 +477,32 @@ pg_upgrade.exe Save any configuration files from the old standbys' data - directories you need to keep, e.g. postgresql.conf, - recovery.conf, because these will be overwritten or + directories you need to keep, e.g. postgresql.conf, + recovery.conf, because these will be overwritten or removed in the next step. - Run <application>rsync</> + Run <application>rsync</application> When using link mode, standby servers can be quickly upgraded using - rsync. To accomplish this, from a directory on + rsync. To accomplish this, from a directory on the primary server that is above the old and new database cluster - directories, run this on the primary for each standby + directories, run this on the primary for each standby server: rsync --archive --delete --hard-links --size-only --no-inc-recursive old_pgdata new_pgdata remote_dir - where What this does is to record the links created by - pg_upgrade's link mode that connect files in the + pg_upgrade's link mode that connect files in the old and new clusters on the primary server. It then finds matching files in the standby's old cluster and creates links for them in the standby's new cluster. Files that were not linked on the primary are copied from the primary to the standby. (They are usually small.) This provides rapid standby upgrades. Unfortunately, - rsync needlessly copies files associated with + rsync needlessly copies files associated with temporary and unlogged tables because these files don't normally exist on standby servers. 
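 For instance, assuming version-specific installation directories
 /opt/PostgreSQL/9.6 and /opt/PostgreSQL/10 hold the old and new clusters
 (these paths and the standby host name are placeholders), the command run on
 the primary would be:

rsync --archive --delete --hard-links --size-only --no-inc-recursive \
    /opt/PostgreSQL/9.6 /opt/PostgreSQL/10 standby.example.com:/opt/PostgreSQL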
If you have tablespaces, you will need to run a similar - rsync command for each tablespace directory, e.g.: + rsync command for each tablespace directory, e.g.: rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tblsp/PG_9.5_201510051 \ /vol1/pg_tblsp/PG_9.6_201608131 standby.example.com:/vol1/pg_tblsp - If you have relocated pg_wal outside the data - directories, rsync must be run on those directories + If you have relocated pg_wal outside the data + directories, rsync must be run on those directories too. @@ -551,7 +551,7 @@ rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tb Configure the servers for log shipping. (You do not need to run - pg_start_backup() and pg_stop_backup() + pg_start_backup() and pg_stop_backup() or take a file system backup as the standbys are still synchronized with the primary.) @@ -562,12 +562,12 @@ rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tb - Restore <filename>pg_hba.conf</> + Restore <filename>pg_hba.conf</filename> - If you modified pg_hba.conf, restore its original settings. + If you modified pg_hba.conf, restore its original settings. It might also be necessary to adjust other configuration files in the new - cluster to match the old cluster, e.g. postgresql.conf. + cluster to match the old cluster, e.g. postgresql.conf. @@ -576,7 +576,7 @@ rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tb The new server can now be safely started, and then any - rsync'ed standby servers. + rsync'ed standby servers. @@ -612,7 +612,7 @@ psql --username=postgres --file=script.sql postgres Statistics - Because optimizer statistics are not transferred by pg_upgrade, you will + Because optimizer statistics are not transferred by pg_upgrade, you will be instructed to run a command to regenerate that information at the end of the upgrade. You might need to set connection parameters to match your new cluster. @@ -628,7 +628,7 @@ psql --username=postgres --file=script.sql postgres pg_upgrade completes. (Automatic deletion is not possible if you have user-defined tablespaces inside the old data directory.) You can also delete the old installation directories - (e.g. bin, share). + (e.g. bin, share). @@ -643,7 +643,7 @@ psql --username=postgres --file=script.sql postgres If you ran pg_upgrade - with , no modifications were made to the old cluster and you can re-use it anytime. @@ -651,7 +651,7 @@ psql --username=postgres --file=script.sql postgres If you ran pg_upgrade - with , the data files are shared between the old and new cluster. If you started the new cluster, the new server has written to those shared files and it is unsafe to use the old cluster. @@ -660,13 +660,13 @@ psql --username=postgres --file=script.sql postgres - If you ran pg_upgrade without - or did not start the new server, the old cluster was not modified except that, if linking - started, a .old suffix was appended to - $PGDATA/global/pg_control. To reuse the old - cluster, possibly remove the .old suffix from - $PGDATA/global/pg_control; you can then restart the + started, a .old suffix was appended to + $PGDATA/global/pg_control. To reuse the old + cluster, possibly remove the .old suffix from + $PGDATA/global/pg_control; you can then restart the old cluster. 
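 For example, assuming $PGDATA points at the old cluster's data directory:

mv "$PGDATA/global/pg_control.old" "$PGDATA/global/pg_control"
pg_ctl -D "$PGDATA" start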
@@ -681,16 +681,16 @@ psql --username=postgres --file=script.sql postgres Notes - pg_upgrade does not support upgrading of databases - containing these reg* OID-referencing system data types: - regproc, regprocedure, regoper, - regoperator, regconfig, and - regdictionary. (regtype can be upgraded.) + pg_upgrade does not support upgrading of databases + containing these reg* OID-referencing system data types: + regproc, regprocedure, regoper, + regoperator, regconfig, and + regdictionary. (regtype can be upgraded.) All failure, rebuild, and reindex cases will be reported by - pg_upgrade if they affect your installation; + pg_upgrade if they affect your installation; post-upgrade scripts to rebuild tables and indexes will be generated automatically. If you are trying to automate the upgrade of many clusters, you should find that clusters with identical database @@ -705,17 +705,17 @@ psql --username=postgres --file=script.sql postgres - If you are upgrading a pre-PostgreSQL 9.2 cluster + If you are upgrading a pre-PostgreSQL 9.2 cluster that uses a configuration-file-only directory, you must pass the - real data directory location to pg_upgrade, and + real data directory location to pg_upgrade, and pass the configuration directory location to the server, e.g. - -d /real-data-directory -o '-D /configuration-directory'. + -d /real-data-directory -o '-D /configuration-directory'. If using a pre-9.1 old server that is using a non-default Unix-domain socket directory or a default that differs from the default of the - new cluster, set PGHOST to point to the old server's socket + new cluster, set PGHOST to point to the old server's socket location. (This is not relevant on Windows.) @@ -723,13 +723,13 @@ psql --username=postgres --file=script.sql postgres If you want to use link mode and you do not want your old cluster to be modified when the new cluster is started, make a copy of the old cluster and upgrade that in link mode. To make a valid copy - of the old cluster, use rsync to create a dirty + of the old cluster, use rsync to create a dirty copy of the old cluster while the server is running, then shut down - the old server and run rsync --checksum again to update the - copy with any changes to make it consistent. ( @@ -122,7 +122,7 @@ PostgreSQL documentation supported by PostgreSQL are described in . Most of the other command line options are in fact short forms of such a - parameter assignment. can appear multiple times to set multiple parameters. @@ -133,9 +133,9 @@ PostgreSQL documentation Prints the value of the named run-time parameter, and exits. - (See the option above for details.) This can be used on a running server, and returns values from - postgresql.conf, modified by any parameters + postgresql.conf, modified by any parameters supplied in this invocation. It does not reflect parameters supplied when the cluster was started. @@ -157,7 +157,7 @@ PostgreSQL documentation debugging output is written to the server log. Values are from 1 to 5. It is also possible to pass -d 0 for a specific session, which will prevent the - server log level of the parent postgres process from being + server log level of the parent postgres process from being propagated to this session. @@ -179,7 +179,7 @@ PostgreSQL documentation Sets the default date style to European, that is - DMY ordering of input date fields. This also causes + DMY ordering of input date fields. This also causes the day to be printed before the month in certain date output formats. See for more information. 
@@ -206,7 +206,7 @@ PostgreSQL documentation Specifies the IP host name or address on which postgres is to listen for TCP/IP connections from client applications. The value can also be a - comma-separated list of addresses, or * to specify + comma-separated list of addresses, or * to specify listening on all available interfaces. An empty value specifies not listening on any IP addresses, in which case only Unix-domain sockets can be used to connect to the @@ -225,13 +225,13 @@ PostgreSQL documentation Allows remote clients to connect via TCP/IP (Internet domain) connections. Without this option, only local connections are accepted. This option is equivalent to setting - listen_addresses to * in - postgresql.conf or via . This option is deprecated since it does not allow access to the full functionality of . - It's usually better to set listen_addresses directly. + It's usually better to set listen_addresses directly. @@ -291,11 +291,11 @@ PostgreSQL documentation - Spaces within extra-options are + Spaces within extra-options are considered to separate arguments, unless escaped with a backslash - (\); write \\ to represent a literal + (\); write \\ to represent a literal backslash. Multiple arguments can also be specified via multiple - uses of . @@ -340,15 +340,15 @@ PostgreSQL documentation Specifies the amount of memory to be used by internal sorts and hashes before resorting to temporary disk files. See the description of the - work_mem configuration parameter in work_mem configuration parameter in . - - + + Print the postgres version and exit. @@ -361,7 +361,7 @@ PostgreSQL documentation Sets a named run-time parameter; a shorter form of - . @@ -371,15 +371,15 @@ PostgreSQL documentation This option dumps out the server's internal configuration variables, - descriptions, and defaults in tab-delimited COPY format. + descriptions, and defaults in tab-delimited COPY format. It is designed primarily for use by administration tools. - - + + Show help about postgres command line @@ -643,13 +643,13 @@ PostgreSQL documentation Diagnostics - A failure message mentioning semget or - shmget probably indicates you need to configure your + A failure message mentioning semget or + shmget probably indicates you need to configure your kernel to provide adequate shared memory and semaphores. For more discussion see . You might be able to postpone reconfiguring your kernel by decreasing to reduce the shared memory - consumption of PostgreSQL, and/or by reducing + consumption of PostgreSQL, and/or by reducing to reduce the semaphore consumption. @@ -725,7 +725,7 @@ PostgreSQL documentation To cancel a running query, send the SIGINT signal to the process running that command. To terminate a backend process cleanly, send SIGTERM to that process. See - also pg_cancel_backend and pg_terminate_backend + also pg_cancel_backend and pg_terminate_backend in for the SQL-callable equivalents of these two actions. @@ -745,9 +745,9 @@ PostgreSQL documentation Bugs - The @@ -759,17 +759,17 @@ PostgreSQL documentation To start a single-user mode server, use a command like -postgres --single -D /usr/local/pgsql/data other-options my_database +postgres --single -D /usr/local/pgsql/data other-options my_database - Provide the correct path to the database directory with Normally, the single-user mode server treats newline as the command entry terminator; there is no intelligence about semicolons, - as there is in psql. To continue a command + as there is in psql. 
To continue a command across multiple lines, you must type backslash just before each newline except the last one. The backslash and adjacent newline are both dropped from the input command. Note that this will happen even @@ -777,7 +777,7 @@ PostgreSQL documentation - But if you use the command line switch, a single newline does not terminate command entry; instead, the sequence semicolon-newline-newline does. That is, type a semicolon immediately followed by a completely empty line. Backslash-newline is not @@ -794,10 +794,10 @@ PostgreSQL documentation To quit the session, type EOF - (ControlD, usually). + (ControlD, usually). If you've entered any text since the last command entry terminator, then EOF will be taken as a command entry terminator, - and another EOF will be needed to exit. + and another EOF will be needed to exit. @@ -826,7 +826,7 @@ PostgreSQL documentation $ postgres -p 1234 - To connect to this server using psql, specify this port with the -p option: + To connect to this server using psql, specify this port with the -p option: $ psql -p 1234 @@ -844,11 +844,11 @@ PostgreSQL documentation $ postgres --work-mem=1234 Either form overrides whatever setting might exist for - work_mem in postgresql.conf. Notice that + work_mem in postgresql.conf. Notice that underscores in parameter names can be written as either underscore or dash on the command line. Except for short-term experiments, it's probably better practice to edit the setting in - postgresql.conf than to rely on a command-line switch + postgresql.conf than to rely on a command-line switch to set a parameter. diff --git a/doc/src/sgml/ref/postmaster.sgml b/doc/src/sgml/ref/postmaster.sgml index 0a58a63331..ec11ec65f5 100644 --- a/doc/src/sgml/ref/postmaster.sgml +++ b/doc/src/sgml/ref/postmaster.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation postmaster - option + option diff --git a/doc/src/sgml/ref/prepare.sgml b/doc/src/sgml/ref/prepare.sgml index f4e1d54349..bcf188f4b9 100644 --- a/doc/src/sgml/ref/prepare.sgml +++ b/doc/src/sgml/ref/prepare.sgml @@ -48,7 +48,7 @@ PREPARE name [ ( $1, $2, etc. A corresponding list of + $1, $2, etc. A corresponding list of parameter data types can optionally be specified. When a parameter's data type is not specified or is declared as unknown, the type is inferred from the context @@ -115,8 +115,8 @@ PREPARE name [ ( statement - Any SELECT, INSERT, UPDATE, - DELETE, or VALUES statement. + Any SELECT, INSERT, UPDATE, + DELETE, or VALUES statement. @@ -155,9 +155,9 @@ PREPARE name [ ( To examine the query plan PostgreSQL is using for a prepared statement, use , e.g. - EXPLAIN EXECUTE. + EXPLAIN EXECUTE. If a generic plan is in use, it will contain parameter symbols - $n, while a custom plan will have the + $n, while a custom plan will have the supplied parameter values substituted into it. The row estimates in the generic plan reflect the selectivity computed for the parameters. @@ -172,13 +172,13 @@ PREPARE name [ ( Although the main point of a prepared statement is to avoid repeated parse - analysis and planning of the statement, PostgreSQL will + analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes since the previous use of the prepared statement. Also, if the value of changes from one use to the next, the statement will be re-parsed using the new - search_path. (This latter behavior is new as of + search_path. 
(This latter behavior is new as of PostgreSQL 9.3.) These rules make use of a prepared statement semantically almost equivalent to re-submitting the same query text over and over, but with a performance benefit if no object @@ -186,7 +186,7 @@ PREPARE name [ ( search_path, no automatic re-parse will occur + earlier in the search_path, no automatic re-parse will occur since no object used in the statement changed. However, if some other change forces a re-parse, the new table will be referenced in subsequent uses. @@ -222,7 +222,7 @@ EXECUTE usrrptplan(1, current_date); Note that the data type of the second parameter is not specified, - so it is inferred from the context in which $2 is used. + so it is inferred from the context in which $2 is used. diff --git a/doc/src/sgml/ref/prepare_transaction.sgml b/doc/src/sgml/ref/prepare_transaction.sgml index 9a2e38e98c..4f78e6b131 100644 --- a/doc/src/sgml/ref/prepare_transaction.sgml +++ b/doc/src/sgml/ref/prepare_transaction.sgml @@ -47,7 +47,7 @@ PREPARE TRANSACTION transaction_id From the point of view of the issuing session, PREPARE - TRANSACTION is not unlike a ROLLBACK command: + TRANSACTION is not unlike a ROLLBACK command: after executing it, there is no active current transaction, and the effects of the prepared transaction are no longer visible. (The effects will become visible again if the transaction is committed.) @@ -55,7 +55,7 @@ PREPARE TRANSACTION transaction_id If the PREPARE TRANSACTION command fails for any - reason, it becomes a ROLLBACK: the current transaction + reason, it becomes a ROLLBACK: the current transaction is canceled. @@ -69,7 +69,7 @@ PREPARE TRANSACTION transaction_id An arbitrary identifier that later identifies this transaction for - COMMIT PREPARED or ROLLBACK PREPARED. + COMMIT PREPARED or ROLLBACK PREPARED. The identifier must be written as a string literal, and must be less than 200 bytes long. It must not be the same as the identifier used for any currently prepared transaction. @@ -83,12 +83,12 @@ PREPARE TRANSACTION transaction_id Notes - PREPARE TRANSACTION is not intended for use in applications + PREPARE TRANSACTION is not intended for use in applications or interactive sessions. Its purpose is to allow an external transaction manager to perform atomic global transactions across multiple databases or other transactional resources. Unless you're writing a transaction manager, you probably shouldn't be using PREPARE - TRANSACTION. + TRANSACTION. @@ -97,22 +97,22 @@ PREPARE TRANSACTION transaction_id - It is not currently allowed to PREPARE a transaction that + It is not currently allowed to PREPARE a transaction that has executed any operations involving temporary tables, - created any cursors WITH HOLD, or executed - LISTEN or UNLISTEN. + created any cursors WITH HOLD, or executed + LISTEN or UNLISTEN. Those features are too tightly tied to the current session to be useful in a transaction to be prepared. - If the transaction modified any run-time parameters with SET - (without the LOCAL option), - those effects persist after PREPARE TRANSACTION, and will not + If the transaction modified any run-time parameters with SET + (without the LOCAL option), + those effects persist after PREPARE TRANSACTION, and will not be affected by any later COMMIT PREPARED or ROLLBACK PREPARED. Thus, in this one respect - PREPARE TRANSACTION acts more like COMMIT than - ROLLBACK. + PREPARE TRANSACTION acts more like COMMIT than + ROLLBACK. 
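 A minimal sketch of the full two-phase cycle, assuming a database mydb with a
 hypothetical accounts table and a server configured with
 max_prepared_transactions greater than zero:

psql mydb <<'SQL'
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
PREPARE TRANSACTION 'demo_tx';
SQL

# the prepared transaction survives the end of the session above, so it can
# be committed (or rolled back) later, even from a different connection:
psql mydb -c "COMMIT PREPARED 'demo_tx';"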
@@ -124,7 +124,7 @@ PREPARE TRANSACTION transaction_id It is unwise to leave transactions in the prepared state for a long time. - This will interfere with the ability of VACUUM to reclaim + This will interfere with the ability of VACUUM to reclaim storage, and in extreme cases could cause the database to shut down to prevent transaction ID wraparound (see ). Keep in mind also that the transaction @@ -149,7 +149,7 @@ PREPARE TRANSACTION transaction_id Examples Prepare the current transaction for two-phase commit, using - foobar as the transaction identifier: + foobar as the transaction identifier: PREPARE TRANSACTION 'foobar'; diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index e7a3e17c67..8cbe0569cf 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -50,8 +50,8 @@ PostgreSQL documentation - - + + Print all nonempty input lines to standard output as they are read. @@ -63,8 +63,8 @@ PostgreSQL documentation - - + + Switches to unaligned output mode. (The default output mode is @@ -75,8 +75,8 @@ PostgreSQL documentation - - + + Print failed SQL commands to standard error output. This is @@ -87,8 +87,8 @@ PostgreSQL documentation - - + + Specifies that psql is to execute the given @@ -116,14 +116,14 @@ psql -c '\x' -c 'SELECT * FROM foo;' echo '\x \\ SELECT * FROM foo;' | psql - (\\ is the separator meta-command.) + (\\ is the separator meta-command.) Each SQL command string passed to is sent to the server as a single request. Because of this, the server executes it as a single transaction even if the string contains multiple SQL commands, - unless there are explicit BEGIN/COMMIT + unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. (See for more details about how the server handles multi-query strings.) @@ -152,8 +152,8 @@ EOF - - + + Specifies the name of the database to connect to. This is @@ -173,8 +173,8 @@ EOF - - + + Copy all SQL commands sent to the server to standard output as well. @@ -186,21 +186,21 @@ EOF - - + + Echo the actual queries generated by \d and other backslash commands. You can use this to study psql's internal operations. This is equivalent to - setting the variable ECHO_HIDDEN to on. + setting the variable ECHO_HIDDEN to on. - - + + Read commands from the @@ -219,7 +219,7 @@ EOF If filename is - (hyphen), then standard input is read until an EOF indication - or \q meta-command. This can be used to intersperse + or \q meta-command. This can be used to intersperse interactive input with input from files. Note however that Readline is not used in this case (much as if had been specified). @@ -241,8 +241,8 @@ EOF - - + + Use separator as the @@ -253,8 +253,8 @@ EOF - - + + Specifies the host name of the machine on which the @@ -266,8 +266,8 @@ EOF - - + + Turn on HTML tabular output. This is @@ -278,8 +278,8 @@ EOF - - + + List all available databases, then exit. Other non-connection @@ -290,8 +290,8 @@ EOF - - + + Write all query output into file - - + + Do not use Readline for line editing and do @@ -314,8 +314,8 @@ EOF - - + + Put all query output into file - - + + Specifies the TCP port or the local Unix-domain @@ -340,8 +340,8 @@ EOF - - + + Specifies printing options, in the style of @@ -354,8 +354,8 @@ EOF - - + + Specifies that psql should do its work @@ -363,14 +363,14 @@ EOF informational output. If this option is used, none of this happens. This is useful with the option. This is equivalent to setting the variable QUIET - to on. + to on. 
- - + + Use separator as the @@ -381,8 +381,8 @@ EOF - - + + Run in single-step mode. That means the user is prompted before @@ -393,8 +393,8 @@ EOF - - + + Runs in single-line mode where a newline terminates an SQL command, as a @@ -413,8 +413,8 @@ EOF - - + + Turn off printing of column names and result row count footers, @@ -425,8 +425,8 @@ EOF - - + + Specifies options to be placed within the @@ -437,8 +437,8 @@ EOF - - + + Connect to the database as the user - - - + + + Perform a variable assignment, like the \set @@ -466,8 +466,8 @@ EOF - - + + Print the psql version and exit. @@ -476,8 +476,8 @@ EOF - - + + Never issue a password prompt. If the server requires password @@ -496,8 +496,8 @@ EOF - - + + Force psql to prompt for a @@ -509,7 +509,7 @@ EOF will automatically prompt for a password if the server demands password authentication. However, psql will waste a connection attempt finding out that the server wants a - password. In some cases it is worth typing to avoid the extra connection attempt. @@ -522,8 +522,8 @@ EOF - - + + Turn on the expanded table formatting mode. This is equivalent to @@ -533,8 +533,8 @@ EOF - - + + Do not read the start-up file (neither the system-wide @@ -574,8 +574,8 @@ EOF This option can only be used in combination with one or more and/or options. It causes - psql to issue a BEGIN command - before the first such option and a COMMIT command after + psql to issue a BEGIN command + before the first such option and a COMMIT command after the last one, thereby wrapping all the commands into a single transaction. This ensures that either all the commands complete successfully, or no changes are applied. @@ -583,8 +583,8 @@ EOF If the commands themselves - contain BEGIN, COMMIT, - or ROLLBACK, this option will not have the desired + contain BEGIN, COMMIT, + or ROLLBACK, this option will not have the desired effects. Also, if an individual command cannot be executed inside a transaction block, specifying this option will cause the whole transaction to fail. @@ -593,17 +593,17 @@ EOF - - + + Show help about psql and exit. The optional - topic parameter (defaulting + topic parameter (defaulting to options) selects which part of psql is - explained: commands describes psql's - backslash commands; options describes the command-line - options that can be passed to psql; - and variables shows help about psql configuration + explained: commands describes psql's + backslash commands; options describes the command-line + options that can be passed to psql; + and variables shows help about psql configuration variables. @@ -644,8 +644,8 @@ EOF not belong to any option it will be interpreted as the database name (or the user name, if the database name is already given). Not all of these options are required; there are useful defaults. If you omit the host - name, psql will connect via a Unix-domain socket - to a server on the local host, or via TCP/IP to localhost on + name, psql will connect via a Unix-domain socket + to a server on the local host, or via TCP/IP to localhost on machines that don't have Unix-domain sockets. The default port number is determined at compile time. Since the database server uses the same default, you will not have @@ -663,7 +663,7 @@ EOF PGPORT and/or PGUSER to appropriate values. (For additional environment variables, see .) It is also convenient to have a - ~/.pgpass file to avoid regularly having to type in + ~/.pgpass file to avoid regularly having to type in passwords. See for more information. 
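 For example, each ~/.pgpass line uses the format
 hostname:port:database:username:password, and the file must not be group- or
 world-readable (the host, database, user, and password below are
 placeholders):

echo 'db.example.com:5432:mydb:alice:secret' >> ~/.pgpass
chmod 0600 ~/.pgpass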
@@ -777,13 +777,13 @@ testdb=> If an unquoted colon (:) followed by a - psql variable name appears within an argument, it is + psql variable name appears within an argument, it is replaced by the variable's value, as described in . - The forms :'variable_name' and - :"variable_name" described there + The forms :'variable_name' and + :"variable_name" described there work as well. - The :{?variable_name} syntax allows + The :{?variable_name} syntax allows testing whether a variable is defined. It is substituted by TRUE or FALSE. Escaping the colon with a backslash protects it from substitution. @@ -795,15 +795,15 @@ testdb=> shell. The output of the command (with any trailing newline removed) replaces the backquoted text. Within the text enclosed in backquotes, no special quoting or other processing occurs, except that appearances - of :variable_name where - variable_name is a psql variable name + of :variable_name where + variable_name is a psql variable name are replaced by the variable's value. Also, appearances of - :'variable_name' are replaced by the + :'variable_name' are replaced by the variable's value suitably quoted to become a single shell command argument. (The latter form is almost always preferable, unless you are very sure of what is in the variable.) Because carriage return and line feed characters cannot be safely quoted on all platforms, the - :'variable_name' form prints an + :'variable_name' form prints an error message and does not substitute the variable value when such characters appear in the value. @@ -812,13 +812,13 @@ testdb=> Some commands take an SQL identifier (such as a table name) as argument. These arguments follow the syntax rules of SQL: Unquoted letters are forced to - lowercase, while double quotes (") protect letters + lowercase, while double quotes (") protect letters from case conversion and allow incorporation of whitespace into the identifier. Within double quotes, paired double quotes reduce to a single double quote in the resulting name. For example, - FOO"BAR"BAZ is interpreted as fooBARbaz, - and "A weird"" name" becomes A weird" - name. + FOO"BAR"BAZ is interpreted as fooBARbaz, + and "A weird"" name" becomes A weird" + name. @@ -834,7 +834,7 @@ testdb=> - Many of the meta-commands act on the current query buffer. + Many of the meta-commands act on the current query buffer. This is simply a buffer holding whatever SQL command text has been typed but not yet sent to the server for execution. This will include previous input lines as well as any text appearing before the meta-command on the @@ -861,9 +861,9 @@ testdb=> \c or \connect [ -reuse-previous=on|off ] [ dbname [ username ] [ host ] [ port ] | conninfo ] - Establishes a new connection to a PostgreSQL + Establishes a new connection to a PostgreSQL server. The connection parameters to use can be specified either - using a positional syntax, or using conninfo connection + using a positional syntax, or using conninfo connection strings as detailed in . @@ -871,8 +871,8 @@ testdb=> Where the command omits database name, user, host, or port, the new connection can reuse values from the previous connection. By default, values from the previous connection are reused except when processing - a conninfo string. Passing a first argument - of -reuse-previous=on + a conninfo string. Passing a first argument + of -reuse-previous=on or -reuse-previous=off overrides that default. When the command neither specifies nor reuses a particular parameter, the libpq default is used. 
Specifying any @@ -969,7 +969,7 @@ testdb=> - When program is specified, + When program is specified, command is executed by psql and the data passed from or to command is @@ -980,17 +980,17 @@ testdb=> - For \copy ... from stdin, data rows are read from the same + For \copy ... from stdin, data rows are read from the same source that issued the command, continuing until \. - is read or the stream reaches EOF. This option is useful + is read or the stream reaches EOF. This option is useful for populating tables in-line within a SQL script file. - For \copy ... to stdout, output is sent to the same place - as psql command output, and - the COPY count command status is + For \copy ... to stdout, output is sent to the same place + as psql command output, and + the COPY count command status is not printed (since it might be confused with a data row). To read/write psql's standard input or - output regardless of the current command source or \o - option, write from pstdin or to pstdout. + output regardless of the current command source or \o + option, write from pstdin or to pstdout. @@ -998,9 +998,9 @@ testdb=> SQL command. All options other than the data source/destination are as specified for . - Because of this, special parsing rules apply to the \copy + Because of this, special parsing rules apply to the \copy meta-command. Unlike most other meta-commands, the entire remainder - of the line is always taken to be the arguments of \copy, + of the line is always taken to be the arguments of \copy, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -1040,7 +1040,7 @@ testdb=> Executes the current query buffer (like \g) and shows the results in a crosstab grid. The query must return at least three columns. - The output column identified by colV + The output column identified by colV becomes a vertical header and the output column identified by colH becomes a horizontal header. @@ -1068,7 +1068,7 @@ testdb=> The vertical header, displayed as the leftmost column, contains the - values found in column colV, in the + values found in column colV, in the same order as in the query results, but with duplicates removed. @@ -1077,11 +1077,11 @@ testdb=> found in column colH, with duplicates removed. By default, these appear in the same order as in the query results. But if the - optional sortcolH argument is given, + optional sortcolH argument is given, it identifies a column whose values must be integer numbers, and the values from colH will appear in the horizontal header sorted according to the - corresponding sortcolH values. + corresponding sortcolH values. @@ -1094,7 +1094,7 @@ testdb=> the value of colH is x and the value of colV - is y. If there is no such row, the cell is empty. If + is y. If there is no such row, the cell is empty. If there are multiple such rows, an error is reported. @@ -1115,13 +1115,13 @@ testdb=> Associated indexes, constraints, rules, and triggers are also shown. For foreign tables, the associated foreign server is shown as well. - (Matching the pattern is defined in + (Matching the pattern is defined in below.) - For some types of relation, \d shows additional information + For some types of relation, \d shows additional information for each column: column values for sequences, indexed expressions for indexes, and foreign data wrapper options for foreign tables. @@ -1237,9 +1237,9 @@ testdb=> \dd[S] [ pattern ] - Shows the descriptions of objects of type constraint, - operator class, operator family, - rule, and trigger. 
All + Shows the descriptions of objects of type constraint, + operator class, operator family, + rule, and trigger. All other comments may be viewed by the respective backslash commands for those object types. @@ -1318,7 +1318,7 @@ testdb=> respectively. You can specify any or all of these letters, in any order, to obtain a listing of objects - of these types. For example, \dit lists indexes + of these types. For example, \dit lists indexes and tables. If + is appended to the command name, each object is listed with its physical size on disk and its associated description, if any. @@ -1408,11 +1408,11 @@ testdb=> Lists functions, together with their result data types, argument data - types, and function types, which are classified as agg - (aggregate), normal, trigger, or window. + types, and function types, which are classified as agg + (aggregate), normal, trigger, or window. To display only functions - of specific type(s), add the corresponding letters a, - n, t, or w to the command. + of specific type(s), add the corresponding letters a, + n, t, or w to the command. If pattern is specified, only functions whose names match the pattern are shown. @@ -1429,7 +1429,7 @@ testdb=> To look up functions taking arguments or returning values of a specific data type, use your pager's search capability to scroll through the - \df output. + \df output. @@ -1497,8 +1497,8 @@ testdb=> Lists database roles. - (Since the concepts of users and groups have been - unified into roles, this command is now equivalent to + (Since the concepts of users and groups have been + unified into roles, this command is now equivalent to \du.) By default, only user-created roles are shown; supply the S modifier to include system roles. @@ -1624,7 +1624,7 @@ testdb=> role-pattern and database-pattern are used to select specific roles and databases to list, respectively. If omitted, or if - * is specified, all settings are listed, including those + * is specified, all settings are listed, including those not role-specific or database-specific, respectively. @@ -1674,7 +1674,7 @@ testdb=> specified, only types whose names match the pattern are listed. If + is appended to the command name, each type is listed with its internal name and size, its allowed values - if it is an enum type, and its associated permissions. + if it is an enum type, and its associated permissions. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects. @@ -1687,8 +1687,8 @@ testdb=> Lists database roles. - (Since the concepts of users and groups have been - unified into roles, this command is now equivalent to + (Since the concepts of users and groups have been + unified into roles, this command is now equivalent to \dg.) By default, only user-created roles are shown; supply the S modifier to include system roles. @@ -1730,7 +1730,7 @@ testdb=> - \e or \edit filename line_number + \e or \edit filename line_number @@ -1750,8 +1750,8 @@ testdb=> whole buffer as a single line. Any complete queries are immediately executed; that is, if the query buffer contains or ends with a semicolon, everything up to that point is executed. Whatever remains - will wait in the query buffer; type semicolon or \g to - send it, or \r to cancel it by clearing the query buffer. + will wait in the query buffer; type semicolon or \g to + send it, or \r to cancel it by clearing the query buffer. 
Treating the buffer as a single line primarily affects meta-commands: whatever is in the buffer after a meta-command will be taken as argument(s) to the meta-command, even if it spans multiple lines. @@ -1803,27 +1803,27 @@ Tue Oct 26 21:40:57 CEST 1999 - \ef function_description line_number + \ef function_description line_number This command fetches and edits the definition of the named function, - in the form of a CREATE OR REPLACE FUNCTION command. - Editing is done in the same way as for \edit. + in the form of a CREATE OR REPLACE FUNCTION command. + Editing is done in the same way as for \edit. After the editor exits, the updated command waits in the query buffer; - type semicolon or \g to send it, or \r + type semicolon or \g to send it, or \r to cancel. The target function can be specified by name alone, or by name - and arguments, for example foo(integer, text). + and arguments, for example foo(integer, text). The argument types must be given if there is more than one function of the same name. - If no function is specified, a blank CREATE FUNCTION + If no function is specified, a blank CREATE FUNCTION template is presented for editing. @@ -1836,7 +1836,7 @@ Tue Oct 26 21:40:57 CEST 1999 Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \ef, and neither + always taken to be the argument(s) of \ef, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -1871,28 +1871,28 @@ Tue Oct 26 21:40:57 CEST 1999 Repeats the most recent server error message at maximum verbosity, as though VERBOSITY were set - to verbose and SHOW_CONTEXT were - set to always. + to verbose and SHOW_CONTEXT were + set to always. - \ev view_name line_number + \ev view_name line_number This command fetches and edits the definition of the named view, - in the form of a CREATE OR REPLACE VIEW command. - Editing is done in the same way as for \edit. + in the form of a CREATE OR REPLACE VIEW command. + Editing is done in the same way as for \edit. After the editor exits, the updated command waits in the query buffer; - type semicolon or \g to send it, or \r + type semicolon or \g to send it, or \r to cancel. - If no view is specified, a blank CREATE VIEW + If no view is specified, a blank CREATE VIEW template is presented for editing. @@ -1903,7 +1903,7 @@ Tue Oct 26 21:40:57 CEST 1999 Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \ev, and neither + always taken to be the argument(s) of \ev, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -1944,7 +1944,7 @@ Tue Oct 26 21:40:57 CEST 1999 alternative to the \o command. - If the argument begins with |, then the entire remainder + If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are @@ -1982,13 +1982,13 @@ Tue Oct 26 21:40:57 CEST 1999 Sends the current query buffer to the server, then treats each column of each row of the query's output (if any) as a SQL statement to be executed. 
For example, to create an index on each - column of my_table: + column of my_table: -=> SELECT format('create index on my_table(%I)', attname) --> FROM pg_attribute --> WHERE attrelid = 'my_table'::regclass AND attnum > 0 --> ORDER BY attnum --> \gexec +=> SELECT format('create index on my_table(%I)', attname) +-> FROM pg_attribute +-> WHERE attrelid = 'my_table'::regclass AND attnum > 0 +-> ORDER BY attnum +-> \gexec CREATE INDEX CREATE INDEX CREATE INDEX @@ -2001,14 +2001,14 @@ CREATE INDEX are returned, and left-to-right within each row if there is more than one column. NULL fields are ignored. The generated queries are sent literally to the server for processing, so they cannot be - psql meta-commands nor contain psql + psql meta-commands nor contain psql variable references. If any individual query fails, execution of the remaining queries continues unless ON_ERROR_STOP is set. Execution of each query is subject to ECHO processing. (Setting ECHO to all or queries is often advisable when - using \gexec.) Query logging, single-step mode, + using \gexec.) Query logging, single-step mode, timing, and other query execution features apply to each generated query as well. @@ -2026,7 +2026,7 @@ CREATE INDEX Sends the current query buffer to the server and stores the - query's output into psql variables (see psql variables (see ). The query to be executed must return exactly one row. Each column of the row is stored into a separate variable, named the same as the @@ -2092,7 +2092,7 @@ hello 10 Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \help, and neither + always taken to be the argument(s) of \help, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -2133,7 +2133,7 @@ hello 10 If filename is - (hyphen), then standard input is read until an EOF indication - or \q meta-command. This can be used to intersperse + or \q meta-command. This can be used to intersperse interactive input with input from files. Note that Readline behavior will be used only if it is active at the outermost level. @@ -2208,7 +2208,7 @@ hello 10 the same source file. If EOF is reached on the main input file or an \include-ed file before all local \if-blocks have been closed, - then psql will raise an error. + then psql will raise an error. Here is an example: @@ -2241,7 +2241,7 @@ SELECT \ir or \include_relative filename - The \ir command is similar to \i, but resolves + The \ir command is similar to \i, but resolves relative file names differently. When executing in interactive mode, the two commands behave identically. However, when invoked from a script, \ir interprets file names relative to the @@ -2366,7 +2366,7 @@ lo_import 152801 - If the argument begins with |, then the entire remainder + If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are @@ -2409,7 +2409,7 @@ lo_import 152801 Changes the password of the specified user (by default, the current user). This command prompts for the new password, encrypts it, and - sends it to the server as an ALTER ROLE command. This + sends it to the server as an ALTER ROLE command. This makes sure that the new password does not appear in cleartext in the command history, the server log, or elsewhere. @@ -2421,16 +2421,16 @@ lo_import 152801 Prompts the user to supply text, which is assigned to the variable - name. + name. 
An optional prompt string, text, can be specified. (For multiword + class="parameter">text, can be specified. (For multiword prompts, surround the text with single quotes.) - By default, \prompt uses the terminal for input and - output. However, if the @@ -2484,16 +2484,16 @@ lo_import 152801 columns - Sets the target width for the wrapped format, and also + Sets the target width for the wrapped format, and also the width limit for determining whether output is wide enough to require the pager or switch to the vertical display in expanded auto mode. Zero (the default) causes the target width to be controlled by the - environment variable COLUMNS, or the detected screen width - if COLUMNS is not set. - In addition, if columns is zero then the - wrapped format only affects screen output. - If columns is nonzero then file and pipe output is + environment variable COLUMNS, or the detected screen width + if COLUMNS is not set. + In addition, if columns is zero then the + wrapped format only affects screen output. + If columns is nonzero then file and pipe output is wrapped to that width as well. @@ -2552,7 +2552,7 @@ lo_import 152801 If value is specified it must be either on or off which will enable or disable display of the table footer - (the (n rows) count). + (the (n rows) count). If value is omitted the command toggles footer display on or off. @@ -2573,7 +2573,7 @@ lo_import 152801 is enough.) - unaligned format writes all columns of a row on one + unaligned format writes all columns of a row on one line, separated by the currently active field separator. This is useful for creating output that might be intended to be read in by other programs (for example, tab-separated or comma-separated @@ -2584,18 +2584,18 @@ lo_import 152801 nicely formatted text output; this is the default. - wrapped format is like aligned but wraps + wrapped format is like aligned but wraps wide data values across lines to make the output fit in the target column width. The target width is determined as described under - the columns option. Note that psql will + the columns option. Note that psql will not attempt to wrap column header titles; therefore, - wrapped format behaves the same as aligned + wrapped format behaves the same as aligned if the total width needed for column headers exceeds the target. - The html, asciidoc, latex, - latex-longtable, and troff-ms + The html, asciidoc, latex, + latex-longtable, and troff-ms formats put out tables that are intended to be included in documents using the respective mark-up language. They are not complete documents! This might not be @@ -2603,7 +2603,7 @@ lo_import 152801 LaTeX you must have a complete document wrapper. latex-longtable also requires the LaTeX - longtable and booktabs packages. + longtable and booktabs packages. @@ -2617,9 +2617,9 @@ lo_import 152801 or unicode. Unique abbreviations are allowed. (That would mean one letter is enough.) - The default setting is ascii. - This option only affects the aligned and - wrapped output formats. + The default setting is ascii. + This option only affects the aligned and + wrapped output formats. ascii style uses plain ASCII @@ -2627,17 +2627,17 @@ lo_import 152801 a + symbol in the right-hand margin. When the wrapped format wraps data from one line to the next without a newline character, a dot - (.) is shown in the right-hand margin of the first line, + (.) is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line. 
- old-ascii style uses plain ASCII + old-ascii style uses plain ASCII characters, using the formatting style used in PostgreSQL 8.4 and earlier. Newlines in data are shown using a : symbol in place of the left-hand column separator. When the data is wrapped from one line - to the next without a newline character, a ; + to the next without a newline character, a ; symbol is used in place of the left-hand column separator. @@ -2650,7 +2650,7 @@ lo_import 152801 - When the border setting is greater than zero, + When the border setting is greater than zero, the linestyle option also determines the characters with which the border lines are drawn. Plain ASCII characters work everywhere, but @@ -2689,7 +2689,7 @@ lo_import 152801 pager - Controls use of a pager program for query and psql + Controls use of a pager program for query and psql help output. If the environment variable PSQL_PAGER or PAGER is set, the output is piped to the specified program. Otherwise a platform-dependent default program @@ -2697,13 +2697,13 @@ lo_import 152801 - When the pager option is off, the pager - program is not used. When the pager option is - on, the pager is used when appropriate, i.e., when the + When the pager option is off, the pager + program is not used. When the pager option is + on, the pager is used when appropriate, i.e., when the output is to a terminal and will not fit on the screen. - The pager option can also be set to always, + The pager option can also be set to always, which causes the pager to be used for all terminal output regardless - of whether it fits on the screen. \pset pager + of whether it fits on the screen. \pset pager without a value toggles pager use on and off. @@ -2714,7 +2714,7 @@ lo_import 152801 pager_min_lines - If pager_min_lines is set to a number greater than the + If pager_min_lines is set to a number greater than the page height, the pager program will not be called unless there are at least this many lines of output to show. The default setting is 0. @@ -2760,7 +2760,7 @@ lo_import 152801 In latex-longtable format, this controls the proportional width of each column containing a left-aligned data type. It is specified as a whitespace-separated list of values, - e.g. '0.2 0.2 0.6'. Unspecified output columns + e.g. '0.2 0.2 0.6'. Unspecified output columns use the last specified value. @@ -2902,7 +2902,7 @@ lo_import 152801 - Sets the psql variable psql variable name to value, or if more than one value is given, to the concatenation of all of them. If only one @@ -2910,8 +2910,8 @@ lo_import 152801 unset a variable, use the \unset command. - \set without any arguments displays the names and values - of all currently-set psql variables. + \set without any arguments displays the names and values + of all currently-set psql variables. @@ -2958,19 +2958,19 @@ testdb=> \setenv LESS -imx4F - \sf[+] function_description + \sf[+] function_description This command fetches and shows the definition of the named function, - in the form of a CREATE OR REPLACE FUNCTION command. + in the form of a CREATE OR REPLACE FUNCTION command. The definition is printed to the current query output channel, as set by \o. The target function can be specified by name alone, or by name - and arguments, for example foo(integer, text). + and arguments, for example foo(integer, text). The argument types must be given if there is more than one function of the same name. 
@@ -2983,7 +2983,7 @@ testdb=> \setenv LESS -imx4F Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \sf, and neither + always taken to be the argument(s) of \sf, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -2992,12 +2992,12 @@ testdb=> \setenv LESS -imx4F - \sv[+] view_name + \sv[+] view_name This command fetches and shows the definition of the named view, - in the form of a CREATE OR REPLACE VIEW command. + in the form of a CREATE OR REPLACE VIEW command. The definition is printed to the current query output channel, as set by \o. @@ -3009,7 +3009,7 @@ testdb=> \setenv LESS -imx4F Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \sv, and neither + always taken to be the argument(s) of \sv, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -3062,13 +3062,13 @@ testdb=> \setenv LESS -imx4F - Unsets (deletes) the psql variable psql variable name. Most variables that control psql's behavior - cannot be unset; instead, an \unset command is interpreted + cannot be unset; instead, an \unset command is interpreted as setting them to their default values. See , below. @@ -3079,7 +3079,7 @@ testdb=> \setenv LESS -imx4F \w or \write filename - \w or \write |command + \w or \write |command Writes the current query buffer to the file \setenv LESS -imx4F - If the argument begins with |, then the entire remainder + If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are @@ -3105,10 +3105,10 @@ testdb=> \setenv LESS -imx4F \watch [ seconds ] - Repeatedly execute the current query buffer (as \g does) + Repeatedly execute the current query buffer (as \g does) until interrupted or the query fails. Wait the specified number of seconds (default 2) between executions. Each query result is - displayed with a header that includes the \pset title + displayed with a header that includes the \pset title string (if any), the time as of query start, and the delay interval. @@ -3153,14 +3153,14 @@ testdb=> \setenv LESS -imx4F \! [ command ] - With no argument, escapes to a sub-shell; psql + With no argument, escapes to a sub-shell; psql resumes when the sub-shell exits. With an argument, executes the shell command command. Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \!, and neither + always taken to be the argument(s) of \!, and neither variable interpolation nor backquote expansion are performed in the arguments. The rest of the line is simply passed literally to the shell. @@ -3170,16 +3170,16 @@ testdb=> \setenv LESS -imx4F - \? [ topic ] + \? [ topic ] Shows help information. The optional - topic parameter - (defaulting to commands) selects which part of psql is - explained: commands describes psql's - backslash commands; options describes the command-line - options that can be passed to psql; - and variables shows help about psql configuration + topic parameter + (defaulting to commands) selects which part of psql is + explained: commands describes psql's + backslash commands; options describes the command-line + options that can be passed to psql; + and variables shows help about psql configuration variables. 
@@ -3196,7 +3196,7 @@ testdb=> \setenv LESS -imx4F - Normally, psql will dispatch a SQL command to the + Normally, psql will dispatch a SQL command to the server as soon as it reaches the command-ending semicolon, even if more input remains on the current line. Thus for example entering @@ -3205,7 +3205,7 @@ select 1; select 2; select 3; will result in the three SQL commands being individually sent to the server, with each one's results being displayed before continuing to the next command. However, a semicolon entered - as \; will not trigger command processing, so that the + as \; will not trigger command processing, so that the command before it and the one after are effectively combined and sent to the server in one request. So for example @@ -3214,14 +3214,14 @@ select 1\; select 2\; select 3; results in sending the three SQL commands to the server in a single request, when the non-backslashed semicolon is reached. The server executes such a request as a single transaction, - unless there are explicit BEGIN/COMMIT + unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. (See for more details about how the server handles multi-query strings.) psql prints only the last query result it receives for each request; in this example, although all - three SELECTs are indeed executed, psql - only prints the 3. + three SELECTs are indeed executed, psql + only prints the 3. @@ -3238,54 +3238,54 @@ select 1\; select 2\; select 3; - The various \d commands accept a \d commands accept a pattern parameter to specify the object name(s) to be displayed. In the simplest case, a pattern is just the exact name of the object. The characters within a pattern are normally folded to lower case, just as in SQL names; - for example, \dt FOO will display the table named - foo. As in SQL names, placing double quotes around + for example, \dt FOO will display the table named + foo. As in SQL names, placing double quotes around a pattern stops folding to lower case. Should you need to include an actual double quote character in a pattern, write it as a pair of double quotes within a double-quote sequence; again this is in accord with the rules for SQL quoted identifiers. For example, - \dt "FOO""BAR" will display the table named - FOO"BAR (not foo"bar). Unlike the normal + \dt "FOO""BAR" will display the table named + FOO"BAR (not foo"bar). Unlike the normal rules for SQL names, you can put double quotes around just part - of a pattern, for instance \dt FOO"FOO"BAR will display - the table named fooFOObar. + of a pattern, for instance \dt FOO"FOO"BAR will display + the table named fooFOObar. Whenever the pattern parameter - is omitted completely, the \d commands display all objects + is omitted completely, the \d commands display all objects that are visible in the current schema search path — this is - equivalent to using * as the pattern. - (An object is said to be visible if its + equivalent to using * as the pattern. + (An object is said to be visible if its containing schema is in the search path and no object of the same kind and name appears earlier in the search path. This is equivalent to the statement that the object can be referenced by name without explicit schema qualification.) To see all objects in the database regardless of visibility, - use *.* as the pattern. + use *.* as the pattern. - Within a pattern, * matches any sequence of characters - (including no characters) and ? matches any single character. 
+ Within a pattern, * matches any sequence of characters + (including no characters) and ? matches any single character. (This notation is comparable to Unix shell file name patterns.) - For example, \dt int* displays tables whose names - begin with int. But within double quotes, * - and ? lose these special meanings and are just matched + For example, \dt int* displays tables whose names + begin with int. But within double quotes, * + and ? lose these special meanings and are just matched literally. - A pattern that contains a dot (.) is interpreted as a schema + A pattern that contains a dot (.) is interpreted as a schema name pattern followed by an object name pattern. For example, - \dt foo*.*bar* displays all tables whose table name - includes bar that are in schemas whose schema name - starts with foo. When no dot appears, then the pattern + \dt foo*.*bar* displays all tables whose table name + includes bar that are in schemas whose schema name + starts with foo. When no dot appears, then the pattern matches only objects that are visible in the current schema search path. Again, a dot within double quotes loses its special meaning and is matched literally. @@ -3293,28 +3293,28 @@ select 1\; select 2\; select 3; Advanced users can use regular-expression notations such as character - classes, for example [0-9] to match any digit. All regular + classes, for example [0-9] to match any digit. All regular expression special characters work as specified in - , except for . which - is taken as a separator as mentioned above, * which is - translated to the regular-expression notation .*, - ? which is translated to ., and - $ which is matched literally. You can emulate + , except for . which + is taken as a separator as mentioned above, * which is + translated to the regular-expression notation .*, + ? which is translated to ., and + $ which is matched literally. You can emulate these pattern characters at need by writing - ? for ., + ? for ., (R+|) for R*, or (R|) for R?. - $ is not needed as a regular-expression character since + $ is not needed as a regular-expression character since the pattern must match the whole name, unlike the usual - interpretation of regular expressions (in other words, $ - is automatically appended to your pattern). Write * at the + interpretation of regular expressions (in other words, $ + is automatically appended to your pattern). Write * at the beginning and/or end if you don't wish the pattern to be anchored. Note that within double quotes, all regular expression special characters lose their special meanings and are matched literally. Also, the regular expression special characters are matched literally in operator name - patterns (i.e., the argument of \do). + patterns (i.e., the argument of \do). @@ -3387,14 +3387,14 @@ bar Variables that control psql's behavior - generally cannot be unset or set to invalid values. An \unset + generally cannot be unset or set to invalid values. An \unset command is allowed but is interpreted as setting the variable to its - default value. A \set command without a second argument is - interpreted as setting the variable to on, for control + default value. A \set command without a second argument is + interpreted as setting the variable to on, for control variables that accept that value, and is rejected for others. Also, - control variables that accept the values on - and off will also accept other common spellings of Boolean - values, such as true and false. 
+ control variables that accept the values on + and off will also accept other common spellings of Boolean + values, such as true and false. @@ -3412,23 +3412,23 @@ bar - When on (the default), each SQL command is automatically + When on (the default), each SQL command is automatically committed upon successful completion. To postpone commit in this - mode, you must enter a BEGIN or START - TRANSACTION SQL command. When off or unset, SQL + mode, you must enter a BEGIN or START + TRANSACTION SQL command. When off or unset, SQL commands are not committed until you explicitly issue - COMMIT or END. The autocommit-off - mode works by issuing an implicit BEGIN for you, just + COMMIT or END. The autocommit-off + mode works by issuing an implicit BEGIN for you, just before any command that is not already in a transaction block and - is not itself a BEGIN or other transaction-control + is not itself a BEGIN or other transaction-control command, nor a command that cannot be executed inside a transaction - block (such as VACUUM). + block (such as VACUUM). In autocommit-off mode, you must explicitly abandon any failed - transaction by entering ABORT or ROLLBACK. + transaction by entering ABORT or ROLLBACK. Also keep in mind that if you exit the session without committing, your work will be lost. @@ -3436,7 +3436,7 @@ bar - The autocommit-on mode is PostgreSQL's traditional + The autocommit-on mode is PostgreSQL's traditional behavior, but autocommit-off is closer to the SQL spec. If you prefer autocommit-off, you might wish to set it in the system-wide psqlrc file or your @@ -3496,7 +3496,7 @@ bar ECHO_HIDDEN - When this variable is set to on and a backslash command + When this variable is set to on and a backslash command queries the database, the query is first shown. This feature helps you to study PostgreSQL internals and provide @@ -3504,7 +3504,7 @@ bar on program start-up, use the switch .) If you set this variable to the value noexec, the queries are just shown but are not actually sent to the server and executed. - The default value is off. + The default value is off. @@ -3516,7 +3516,7 @@ bar The current client character set encoding. This is set every time you connect to a database (including program start-up), and when you change the encoding - with \encoding, but it can be changed or unset. + with \encoding, but it can be changed or unset. @@ -3525,8 +3525,8 @@ bar ERROR - true if the last SQL query failed, false if - it succeeded. See also SQLSTATE. + true if the last SQL query failed, false if + it succeeded. See also SQLSTATE. @@ -3550,7 +3550,7 @@ bar Although you can use any output format with this feature, - the default aligned format tends to look bad + the default aligned format tends to look bad because each group of FETCH_COUNT rows will be formatted separately, leading to varying column widths across the row groups. The other output formats work better. @@ -3637,11 +3637,11 @@ bar IGNOREEOF - If set to 1 or less, sending an EOF character (usually - ControlD) + If set to 1 or less, sending an EOF character (usually + ControlD) to an interactive session of psql will terminate the application. If set to a larger numeric value, - that many consecutive EOF characters must be typed to + that many consecutive EOF characters must be typed to make an interactive session terminate. If the variable is set to a non-numeric value, it is interpreted as 10. The default is 0. 
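A sketch of autocommit-off behavior, assuming a hypothetical accounts table:

testdb=> \set AUTOCOMMIT off
testdb=> UPDATE accounts SET balance = balance - 100 WHERE id = 1;
testdb=> ROLLBACK;

Because an implicit BEGIN was issued before the UPDATE, the ROLLBACK discards it; nothing was committed.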
@@ -3673,8 +3673,8 @@ bar The primary error message and associated SQLSTATE code for the most - recent failed query in the current psql session, or - an empty string and 00000 if no error has occurred in + recent failed query in the current psql session, or + an empty string and 00000 if no error has occurred in the current session. @@ -3690,14 +3690,14 @@ bar - When set to on, if a statement in a transaction block + When set to on, if a statement in a transaction block generates an error, the error is ignored and the transaction - continues. When set to interactive, such errors are only + continues. When set to interactive, such errors are only ignored in interactive sessions, and not when reading script - files. When set to off (the default), a statement in a + files. When set to off (the default), a statement in a transaction block that generates an error aborts the entire transaction. The error rollback mode works by issuing an - implicit SAVEPOINT for you, just before each command + implicit SAVEPOINT for you, just before each command that is in a transaction block, and then rolling back to the savepoint if the command fails. @@ -3709,7 +3709,7 @@ bar By default, command processing continues after an error. When this - variable is set to on, processing will instead stop + variable is set to on, processing will instead stop immediately. In interactive mode, psql will return to the command prompt; otherwise, psql will exit, returning @@ -3752,7 +3752,7 @@ bar QUIET - Setting this variable to on is equivalent to the command + Setting this variable to on is equivalent to the command line option . It is probably not too useful in interactive mode. @@ -3775,9 +3775,9 @@ bar The server's version number as a string, for - example 9.6.2, 10.1 or 11beta1, + example 9.6.2, 10.1 or 11beta1, and in numeric form, for - example 90602 or 100001. + example 90602 or 100001. These are set every time you connect to a database (including program start-up), but can be changed or unset. @@ -3789,13 +3789,13 @@ bar This variable can be set to the - values never, errors, or always - to control whether CONTEXT fields are displayed in - messages from the server. The default is errors (meaning + values never, errors, or always + to control whether CONTEXT fields are displayed in + messages from the server. The default is errors (meaning that context will be shown in error messages, but not in notice or warning messages). This setting has no effect - when VERBOSITY is set to terse. - (See also \errverbose, for use when you want a verbose + when VERBOSITY is set to terse. + (See also \errverbose, for use when you want a verbose version of the error you just got.) @@ -3805,7 +3805,7 @@ bar SINGLELINE - Setting this variable to on is equivalent to the command + Setting this variable to on is equivalent to the command line option . @@ -3815,7 +3815,7 @@ bar SINGLESTEP - Setting this variable to on is equivalent to the command + Setting this variable to on is equivalent to the command line option . @@ -3826,7 +3826,7 @@ bar The error code (see ) associated - with the last SQL query's failure, or 00000 if it + with the last SQL query's failure, or 00000 if it succeeded. @@ -3847,10 +3847,10 @@ bar VERBOSITY - This variable can be set to the values default, - verbose, or terse to control the verbosity + This variable can be set to the values default, + verbose, or terse to control the verbosity of error reports. 
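A hypothetical session illustrating ON_ERROR_ROLLBACK set to on: the failing statement is rolled back to an implicit savepoint and the transaction remains usable:

testdb=> \set ON_ERROR_ROLLBACK on
testdb=> BEGIN;
testdb=> SELECT 1/0;
ERROR:  division by zero
testdb=> SELECT 'transaction still open';
testdb=> COMMIT;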
- (See also \errverbose, for use when you want a verbose + (See also \errverbose, for use when you want a verbose version of the error you just got.) @@ -3863,10 +3863,10 @@ bar These variables are set at program start-up to reflect - psql's version, respectively as a verbose string, - a short string (e.g., 9.6.2, 10.1, - or 11beta1), and a number (e.g., 90602 - or 100001). They can be changed or unset. + psql's version, respectively as a verbose string, + a short string (e.g., 9.6.2, 10.1, + or 11beta1), and a number (e.g., 90602 + or 100001). They can be changed or unset. @@ -3916,7 +3916,7 @@ testdb=> SELECT * FROM :"foo"; Variable interpolation will not be performed within quoted SQL literals and identifiers. Therefore, a - construction such as ':foo' doesn't work to produce a quoted + construction such as ':foo' doesn't work to produce a quoted literal from a variable's value (and it would be unsafe if it did work, since it wouldn't correctly handle quotes embedded in the value). @@ -3943,7 +3943,7 @@ testdb=> INSERT INTO my_table VALUES (:'content'); - The :{?name} special syntax returns TRUE + The :{?name} special syntax returns TRUE or FALSE depending on whether the variable exists or not, and is thus always substituted, unless the colon is backslash-escaped. @@ -4086,8 +4086,8 @@ testdb=> INSERT INTO my_table VALUES (:'content'); Transaction status: an empty string when not in a transaction - block, or * when in a transaction block, or - ! when in a failed transaction block, or ? + block, or * when in a transaction block, or + ! when in a failed transaction block, or ? when the transaction state is indeterminate (for example, because there is no connection). @@ -4098,7 +4098,7 @@ testdb=> INSERT INTO my_table VALUES (:'content'); %l - The line number inside the current statement, starting from 1. + The line number inside the current statement, starting from 1. @@ -4186,7 +4186,7 @@ testdb=> \set PROMPT1 '%[%033[1;33;40m%]%n@%/%R%[%033[0m%]%# ' supported, although the completion logic makes no claim to be an SQL parser. The queries generated by tab-completion can also interfere with other SQL commands, e.g. SET - TRANSACTION ISOLATION LEVEL. + TRANSACTION ISOLATION LEVEL. If for some reason you do not like the tab completion, you can turn it off by putting this in a file named .inputrc in your home directory: @@ -4214,8 +4214,8 @@ $endif - If \pset columns is zero, controls the - width for the wrapped format and width for determining + If \pset columns is zero, controls the + width for the wrapped format and width for determining if wide output requires the pager or should be switched to the vertical format in expanded auto mode. @@ -4261,8 +4261,8 @@ $endif \ev is used with a line number argument, this variable specifies the command-line argument used to pass the starting line number to - the user's editor. For editors such as Emacs or - vi, this is a plus sign. Include a trailing + the user's editor. For editors such as Emacs or + vi, this is a plus sign. Include a trailing space in the value of the variable if there needs to be space between the option name and the line number. Examples: @@ -4304,8 +4304,8 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' pager-related options of the \pset command. These variables are examined in the order listed; the first that is set is used. - If none of them is set, the default is to use more on most - platforms, but less on Cygwin. + If none of them is set, the default is to use more on most + platforms, but less on Cygwin. 
@@ -4344,8 +4344,8 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -4371,9 +4371,9 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' The system-wide startup file is named psqlrc and is - sought in the installation's system configuration directory, + sought in the installation's system configuration directory, which is most reliably identified by running pg_config - --sysconfdir. By default this directory will be ../etc/ + --sysconfdir. By default this directory will be ../etc/ relative to the directory containing the PostgreSQL executables. The name of this directory can be set explicitly via the PGSYSCONFDIR @@ -4410,7 +4410,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' The location of the history file can be set explicitly via - the HISTFILE psql variable or + the HISTFILE psql variable or the PSQL_HISTORY environment variable. @@ -4426,10 +4426,10 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' psql works best with servers of the same or an older major version. Backslash commands are particularly likely - to fail if the server is of a newer version than psql - itself. However, backslash commands of the \d family should + to fail if the server is of a newer version than psql + itself. However, backslash commands of the \d family should work with servers of versions back to 7.4, though not necessarily with - servers newer than psql itself. The general + servers newer than psql itself. The general functionality of running SQL commands and displaying query results should also work with servers of a newer major version, but this cannot be guaranteed in all cases. @@ -4449,7 +4449,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' Before PostgreSQL 9.6, the option implied - (); this is no longer the case. @@ -4471,7 +4471,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' psql is built as a console - application. Since the Windows console windows use a different + application. Since the Windows console windows use a different encoding than the rest of the system, you must take special care when using 8-bit characters within psql. If psql detects a problematic @@ -4490,7 +4490,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' - Set the console font to Lucida Console, because the + Set the console font to Lucida Console, because the raster font does not work with the ANSI code page. diff --git a/doc/src/sgml/ref/reassign_owned.sgml b/doc/src/sgml/ref/reassign_owned.sgml index c1751e7f47..2bbd6b8f07 100644 --- a/doc/src/sgml/ref/reassign_owned.sgml +++ b/doc/src/sgml/ref/reassign_owned.sgml @@ -88,7 +88,7 @@ REASSIGN OWNED BY { old_role | CURR The REASSIGN OWNED command does not affect any - privileges granted to the old_roles for + privileges granted to the old_roles for objects that are not owned by them. Use DROP OWNED to revoke such privileges. diff --git a/doc/src/sgml/ref/refresh_materialized_view.sgml b/doc/src/sgml/ref/refresh_materialized_view.sgml index e56e542eb5..0135d15cec 100644 --- a/doc/src/sgml/ref/refresh_materialized_view.sgml +++ b/doc/src/sgml/ref/refresh_materialized_view.sgml @@ -94,9 +94,9 @@ REFRESH MATERIALIZED VIEW [ CONCURRENTLY ] name While the default index for future - operations is retained, REFRESH MATERIALIZED VIEW does not + operations is retained, REFRESH MATERIALIZED VIEW does not order the generated rows based on this property. 
If you want the data - to be ordered upon generation, you must use an ORDER BY + to be ordered upon generation, you must use an ORDER BY clause in the backing query. diff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml index 09fc61d15b..3dc2608f76 100644 --- a/doc/src/sgml/ref/reindex.sgml +++ b/doc/src/sgml/ref/reindex.sgml @@ -46,7 +46,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } - An index has become bloated, that is it contains many + An index has become bloated, that is it contains many empty or nearly-empty pages. This can occur with B-tree indexes in PostgreSQL under certain uncommon access patterns. REINDEX provides a way to reduce @@ -65,12 +65,12 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } - An index build with the CONCURRENTLY option failed, leaving - an invalid index. Such indexes are useless but it can be - convenient to use REINDEX to rebuild them. Note that - REINDEX will not perform a concurrent build. To build the + An index build with the CONCURRENTLY option failed, leaving + an invalid index. Such indexes are useless but it can be + convenient to use REINDEX to rebuild them. Note that + REINDEX will not perform a concurrent build. To build the index without interfering with production you should drop the index and - reissue the CREATE INDEX CONCURRENTLY command. + reissue the CREATE INDEX CONCURRENTLY command. @@ -95,7 +95,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } Recreate all indexes of the specified table. If the table has a - secondary TOAST table, that is reindexed as well. + secondary TOAST table, that is reindexed as well. @@ -105,7 +105,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } Recreate all indexes of the specified schema. If a table of this - schema has a secondary TOAST table, that is reindexed as + schema has a secondary TOAST table, that is reindexed as well. Indexes on shared system catalogs are also processed. This form of REINDEX cannot be executed inside a transaction block. @@ -144,7 +144,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } The name of the specific index, table, or database to be reindexed. Index and table names can be schema-qualified. - Presently, REINDEX DATABASE and REINDEX SYSTEM + Presently, REINDEX DATABASE and REINDEX SYSTEM can only reindex the current database, so their parameter must match the current database's name. @@ -186,10 +186,10 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } PostgreSQL
server with the option included on its command line. - Then, REINDEX DATABASE, REINDEX SYSTEM, - REINDEX TABLE, or REINDEX INDEX can be + Then, REINDEX DATABASE, REINDEX SYSTEM, + REINDEX TABLE, or REINDEX INDEX can be issued, depending on how much you want to reconstruct. If in - doubt, use REINDEX SYSTEM to select + doubt, use REINDEX SYSTEM to select reconstruction of all system indexes in the database. Then quit the single-user server session and restart the regular server. See the reference page for more @@ -201,8 +201,8 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } -P included in its command line options. The method for doing this varies across clients, but in all - libpq-based clients, it is possible to set - the PGOPTIONS environment variable to -P + libpq-based clients, it is possible to set + the PGOPTIONS environment variable to -P before starting the client. Note that while this method does not require locking out other clients, it might still be wise to prevent other users from connecting to the damaged database until repairs @@ -212,12 +212,12 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } REINDEX is similar to a drop and recreate of the index in that the index contents are rebuilt from scratch. However, the locking - considerations are rather different. REINDEX locks out writes + considerations are rather different. REINDEX locks out writes but not reads of the index's parent table. It also takes an exclusive lock on the specific index being processed, which will block reads that attempt - to use that index. In contrast, DROP INDEX momentarily takes + to use that index. In contrast, DROP INDEX momentarily takes an exclusive lock on the parent table, blocking both writes and reads. The - subsequent CREATE INDEX locks out writes but not reads; since + subsequent CREATE INDEX locks out writes but not reads; since the index is not there, no read will attempt to use it, meaning that there will be no blocking but reads might be forced into expensive sequential scans. diff --git a/doc/src/sgml/ref/reindexdb.sgml b/doc/src/sgml/ref/reindexdb.sgml index e4721d8113..627be6a0ad 100644 --- a/doc/src/sgml/ref/reindexdb.sgml +++ b/doc/src/sgml/ref/reindexdb.sgml @@ -109,8 +109,8 @@ PostgreSQL documentation - - + + Reindex all databases. @@ -119,8 +119,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to be reindexed. @@ -134,8 +134,8 @@ PostgreSQL documentation - - + + Echo the commands that reindexdb generates @@ -145,20 +145,20 @@ PostgreSQL documentation - - + + Recreate index only. Multiple indexes can be recreated by writing multiple - switches. - - + + Do not display progress messages. @@ -167,8 +167,8 @@ PostgreSQL documentation - - + + Reindex database's system catalogs. @@ -177,32 +177,32 @@ PostgreSQL documentation - - + + Reindex schema only. Multiple schemas can be reindexed by writing multiple - switches. - - + + Reindex table only. Multiple tables can be reindexed by writing multiple - switches. - - + + Print detailed information during processing. @@ -211,8 +211,8 @@ PostgreSQL documentation - - + + Print the reindexdb version and exit. 
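For instance (object names hypothetical), a single damaged index or all indexes of a table can be rebuilt with:

REINDEX INDEX my_index;
REINDEX TABLE my_table;
REINDEX (VERBOSE) TABLE my_table;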
@@ -221,8 +221,8 @@ PostgreSQL documentation - - + + Show help about reindexdb command line @@ -241,8 +241,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the server is @@ -253,8 +253,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -265,8 +265,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -275,8 +275,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -290,8 +290,8 @@ PostgreSQL documentation - - + + Force reindexdb to prompt for a @@ -304,14 +304,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, reindexdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. - + Specifies the name of the database to connect to discover what other @@ -345,8 +345,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -376,7 +376,7 @@ PostgreSQL documentation reindexdb might need to connect several times to the PostgreSQL server, asking for a password each time. It is convenient to have a - ~/.pgpass file in such cases. See ~/.pgpass file in such cases. See for more information. diff --git a/doc/src/sgml/ref/release_savepoint.sgml b/doc/src/sgml/ref/release_savepoint.sgml index b331b7226b..2e8dcc0746 100644 --- a/doc/src/sgml/ref/release_savepoint.sgml +++ b/doc/src/sgml/ref/release_savepoint.sgml @@ -109,7 +109,7 @@ COMMIT; Compatibility - This command conforms to the SQL standard. The standard + This command conforms to the SQL standard. The standard specifies that the key word SAVEPOINT is mandatory, but PostgreSQL allows it to be omitted. diff --git a/doc/src/sgml/ref/reset.sgml b/doc/src/sgml/ref/reset.sgml index 98c3207831..b434ad10c2 100644 --- a/doc/src/sgml/ref/reset.sgml +++ b/doc/src/sgml/ref/reset.sgml @@ -42,19 +42,19 @@ SET configuration_parameter TO DEFA The default value is defined as the value that the parameter would - have had, if no SET had ever been issued for it in the + have had, if no SET had ever been issued for it in the current session. The actual source of this value might be a compiled-in default, the configuration file, command-line options, or per-database or per-user default settings. This is subtly different from defining it as the value that the parameter had at session - start, because if the value came from the configuration file, it + start, because if the value came from the configuration file, it will be reset to whatever is specified by the configuration file now. See for details. - The transactional behavior of RESET is the same as - SET: its effects will be undone by transaction rollback. + The transactional behavior of RESET is the same as + SET: its effects will be undone by transaction rollback. 
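A brief sketch of this transactional behavior:

SET timezone TO 'UTC';
BEGIN;
RESET timezone;
ROLLBACK;

After the ROLLBACK, the RESET is undone and the time zone is 'UTC' again.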
@@ -88,7 +88,7 @@ SET configuration_parameter TO DEFA Examples - Set the timezone configuration variable to its default value: + Set the timezone configuration variable to its default value: RESET timezone; diff --git a/doc/src/sgml/ref/revoke.sgml b/doc/src/sgml/ref/revoke.sgml index 91f69af9ee..c893666e83 100644 --- a/doc/src/sgml/ref/revoke.sgml +++ b/doc/src/sgml/ref/revoke.sgml @@ -130,13 +130,13 @@ REVOKE [ ADMIN OPTION FOR ] Note that any particular role will have the sum of privileges granted directly to it, privileges granted to any role it is presently a member of, and privileges granted to - PUBLIC. Thus, for example, revoking SELECT privilege + PUBLIC. Thus, for example, revoking SELECT privilege from PUBLIC does not necessarily mean that all roles - have lost SELECT privilege on the object: those who have it granted + have lost SELECT privilege on the object: those who have it granted directly or via another role will still have it. Similarly, revoking - SELECT from a user might not prevent that user from using - SELECT if PUBLIC or another membership - role still has SELECT rights. + SELECT from a user might not prevent that user from using + SELECT if PUBLIC or another membership + role still has SELECT rights. @@ -167,10 +167,10 @@ REVOKE [ ADMIN OPTION FOR ] - When revoking membership in a role, GRANT OPTION is instead - called ADMIN OPTION, but the behavior is similar. + When revoking membership in a role, GRANT OPTION is instead + called ADMIN OPTION, but the behavior is similar. Note also that this form of the command does not - allow the noise word GROUP. + allow the noise word GROUP. @@ -181,7 +181,7 @@ REVOKE [ ADMIN OPTION FOR ] Use 's \dp command to display the privileges granted on existing tables and columns. See for information about the - format. For non-table objects there are other \d commands + format. For non-table objects there are other \d commands that can display their privileges. @@ -198,12 +198,12 @@ REVOKE [ ADMIN OPTION FOR ] - When a non-owner of an object attempts to REVOKE privileges + When a non-owner of an object attempts to REVOKE privileges on the object, the command will fail outright if the user has no privileges whatsoever on the object. As long as some privilege is available, the command will proceed, but it will revoke only those privileges for which the user has grant options. The REVOKE ALL - PRIVILEGES forms will issue a warning message if no grant options are + PRIVILEGES forms will issue a warning message if no grant options are held, while the other forms will issue a warning if grant options for any of the privileges specifically named in the command are not held. (In principle these statements apply to the object owner as well, but @@ -212,7 +212,7 @@ REVOKE [ ADMIN OPTION FOR ] - If a superuser chooses to issue a GRANT or REVOKE + If a superuser chooses to issue a GRANT or REVOKE command, the command is performed as though it were issued by the owner of the affected object. Since all privileges ultimately come from the object owner (possibly indirectly via chains of grant options), @@ -221,26 +221,26 @@ REVOKE [ ADMIN OPTION FOR ] - REVOKE can also be done by a role + REVOKE can also be done by a role that is not the owner of the affected object, but is a member of the role that owns the object, or is a member of a role that holds privileges WITH GRANT OPTION on the object. 
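For example (table and role names hypothetical), revoking a privilege from PUBLIC does not remove a privilege that was also granted directly:

GRANT SELECT ON accounts TO PUBLIC;
GRANT SELECT ON accounts TO auditor;
REVOKE SELECT ON accounts FROM PUBLIC;

Role auditor can still read accounts through its direct grant.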
In this case the command is performed as though it were issued by the containing role that actually owns the object or holds the privileges WITH GRANT OPTION. For example, if table - t1 is owned by role g1, of which role - u1 is a member, then u1 can revoke privileges - on t1 that are recorded as being granted by g1. - This would include grants made by u1 as well as by other - members of role g1. + t1 is owned by role g1, of which role + u1 is a member, then u1 can revoke privileges + on t1 that are recorded as being granted by g1. + This would include grants made by u1 as well as by other + members of role g1. - If the role executing REVOKE holds privileges + If the role executing REVOKE holds privileges indirectly via more than one role membership path, it is unspecified which containing role will be used to perform the command. In such cases - it is best practice to use SET ROLE to become the specific - role you want to do the REVOKE as. Failure to do so might + it is best practice to use SET ROLE to become the specific + role you want to do the REVOKE as. Failure to do so might lead to revoking privileges other than the ones you intended, or not revoking anything at all. @@ -267,11 +267,11 @@ REVOKE ALL PRIVILEGES ON kinds FROM manuel; Note that this actually means revoke all privileges that I - granted. + granted. - Revoke membership in role admins from user joe: + Revoke membership in role admins from user joe: REVOKE admins FROM joe; @@ -285,7 +285,7 @@ REVOKE admins FROM joe; The compatibility notes of the command apply analogously to REVOKE. The keyword RESTRICT or CASCADE - is required according to the standard, but PostgreSQL + is required according to the standard, but PostgreSQL assumes RESTRICT by default. diff --git a/doc/src/sgml/ref/rollback.sgml b/doc/src/sgml/ref/rollback.sgml index b0b1e8d0e3..1a0e5a0ebc 100644 --- a/doc/src/sgml/ref/rollback.sgml +++ b/doc/src/sgml/ref/rollback.sgml @@ -59,7 +59,7 @@ ROLLBACK [ WORK | TRANSACTION ] - Issuing ROLLBACK outside of a transaction + Issuing ROLLBACK outside of a transaction block emits a warning and otherwise has no effect. diff --git a/doc/src/sgml/ref/rollback_prepared.sgml b/doc/src/sgml/ref/rollback_prepared.sgml index a0ffc65083..6c44049a89 100644 --- a/doc/src/sgml/ref/rollback_prepared.sgml +++ b/doc/src/sgml/ref/rollback_prepared.sgml @@ -75,7 +75,7 @@ ROLLBACK PREPARED transaction_id Examples Roll back the transaction identified by the transaction - identifier foobar: + identifier foobar: ROLLBACK PREPARED 'foobar'; diff --git a/doc/src/sgml/ref/rollback_to.sgml b/doc/src/sgml/ref/rollback_to.sgml index e8072d8974..f1da804f67 100644 --- a/doc/src/sgml/ref/rollback_to.sgml +++ b/doc/src/sgml/ref/rollback_to.sgml @@ -40,7 +40,7 @@ ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_name - ROLLBACK TO SAVEPOINT implicitly destroys all savepoints that + ROLLBACK TO SAVEPOINT implicitly destroys all savepoints that were established after the named savepoint. @@ -50,7 +50,7 @@ ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_name - savepoint_name + savepoint_name The savepoint to roll back to. @@ -77,17 +77,17 @@ ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_nameFETCH or MOVE command inside a + affected by a FETCH or MOVE command inside a savepoint that is later rolled back, the cursor remains at the - position that FETCH left it pointing to (that is, the cursor - motion caused by FETCH is not rolled back). 
+ position that FETCH left it pointing to (that is, the cursor + motion caused by FETCH is not rolled back). Closing a cursor is not undone by rolling back, either. However, other side-effects caused by the cursor's query (such as - side-effects of volatile functions called by the query) are + side-effects of volatile functions called by the query) are rolled back if they occur during a savepoint that is later rolled back. A cursor whose execution causes a transaction to abort is put in a cannot-execute state, so while the transaction can be restored using - ROLLBACK TO SAVEPOINT, the cursor can no longer be used. + ROLLBACK TO SAVEPOINT, the cursor can no longer be used. @@ -133,13 +133,13 @@ COMMIT; Compatibility - The SQL standard specifies that the key word - SAVEPOINT is mandatory, but PostgreSQL - and Oracle allow it to be omitted. SQL allows - only WORK, not TRANSACTION, as a noise word - after ROLLBACK. Also, SQL has an optional clause - AND [ NO ] CHAIN which is not currently supported by - PostgreSQL. Otherwise, this command conforms to + The SQL standard specifies that the key word + SAVEPOINT is mandatory, but PostgreSQL + and Oracle allow it to be omitted. SQL allows + only WORK, not TRANSACTION, as a noise word + after ROLLBACK. Also, SQL has an optional clause + AND [ NO ] CHAIN which is not currently supported by + PostgreSQL. Otherwise, this command conforms to the SQL standard. diff --git a/doc/src/sgml/ref/savepoint.sgml b/doc/src/sgml/ref/savepoint.sgml index 5b944a2561..6d40f4da42 100644 --- a/doc/src/sgml/ref/savepoint.sgml +++ b/doc/src/sgml/ref/savepoint.sgml @@ -114,11 +114,11 @@ COMMIT; SQL requires a savepoint to be destroyed automatically when another savepoint with the same name is established. In - PostgreSQL, the old savepoint is kept, though only the more + PostgreSQL, the old savepoint is kept, though only the more recent one will be used when rolling back or releasing. (Releasing the - newer savepoint with RELEASE SAVEPOINT will cause the older one - to again become accessible to ROLLBACK TO SAVEPOINT and - RELEASE SAVEPOINT.) Otherwise, SAVEPOINT is + newer savepoint with RELEASE SAVEPOINT will cause the older one + to again become accessible to ROLLBACK TO SAVEPOINT and + RELEASE SAVEPOINT.) Otherwise, SAVEPOINT is fully SQL conforming. diff --git a/doc/src/sgml/ref/security_label.sgml b/doc/src/sgml/ref/security_label.sgml index 971b928a02..999f9c80cd 100644 --- a/doc/src/sgml/ref/security_label.sgml +++ b/doc/src/sgml/ref/security_label.sgml @@ -60,12 +60,12 @@ SECURITY LABEL [ FOR provider ] ON object. An arbitrary number of security labels, one per label provider, can be associated with a given database object. Label providers are loadable modules which register themselves by using the function - register_label_provider. + register_label_provider. - register_label_provider is not an SQL function; it can + register_label_provider is not an SQL function; it can only be called from C code loaded into the backend. @@ -74,11 +74,11 @@ SECURITY LABEL [ FOR provider ] ON The label provider determines whether a given label is valid and whether it is permissible to assign that label to a given object. The meaning of a given label is likewise at the discretion of the label provider. - PostgreSQL places no restrictions on whether or how a + PostgreSQL places no restrictions on whether or how a label provider must interpret security labels; it merely provides a mechanism for storing them. 
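An illustrative sketch, assuming the SE-Linux label provider (sepgsql) is installed; the table name and label string are only examples:

SECURITY LABEL FOR selinux
    ON TABLE my_table
    IS 'system_u:object_r:sepgsql_table_t:s0';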
In practice, this facility is intended to allow integration with label-based mandatory access control (MAC) systems such as - SE-Linux. Such systems make all access control decisions + SE-Linux. Such systems make all access control decisions based on object labels, rather than traditional discretionary access control (DAC) concepts such as users and groups. @@ -120,14 +120,14 @@ SECURITY LABEL [ FOR provider ] ON The mode of a function or aggregate - argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. + argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that SECURITY LABEL does not actually - pay any attention to OUT arguments, since only the input + pay any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. + So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. @@ -178,7 +178,7 @@ SECURITY LABEL [ FOR provider ] ON label - The new security label, written as a string literal; or NULL + The new security label, written as a string literal; or NULL to drop the security label. diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml index 57f11e66fb..7355e790f6 100644 --- a/doc/src/sgml/ref/select.sgml +++ b/doc/src/sgml/ref/select.sgml @@ -163,10 +163,10 @@ TABLE [ ONLY ] table_name [ * ] operator returns the rows that are in the first result set but not in the second. In all three cases, duplicate rows are eliminated unless ALL is specified. The noise - word DISTINCT can be added to explicitly specify - eliminating duplicate rows. Notice that DISTINCT is + word DISTINCT can be added to explicitly specify + eliminating duplicate rows. Notice that DISTINCT is the default behavior here, even though ALL is - the default for SELECT itself. (See + the default for SELECT itself. (See , , and below.) @@ -194,7 +194,7 @@ TABLE [ ONLY ] table_name [ * ] - If FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE + If FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE or FOR KEY SHARE is specified, the SELECT statement locks the selected rows @@ -207,7 +207,7 @@ TABLE [ ONLY ] table_name [ * ] You must have SELECT privilege on each column used - in a SELECT command. The use of FOR NO KEY UPDATE, + in a SELECT command. The use of FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE or FOR KEY SHARE requires UPDATE privilege as well (for at least one column @@ -226,15 +226,15 @@ TABLE [ ONLY ] table_name [ * ] subqueries that can be referenced by name in the primary query. The subqueries effectively act as temporary tables or views for the duration of the primary query. - Each subquery can be a SELECT, TABLE, VALUES, + Each subquery can be a SELECT, TABLE, VALUES, INSERT, UPDATE or DELETE statement. When writing a data-modifying statement (INSERT, UPDATE or DELETE) in - WITH, it is usual to include a RETURNING clause. - It is the output of RETURNING, not the underlying + WITH, it is usual to include a RETURNING clause. + It is the output of RETURNING, not the underlying table that the statement modifies, that forms the temporary table that is - read by the primary query. If RETURNING is omitted, the + read by the primary query. If RETURNING is omitted, the statement is still executed, but it produces no output so it cannot be referenced as a table by the primary query. 
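To make the role of RETURNING in a data-modifying WITH concrete, here is a small sketch; the tables old_orders and archived_orders are hypothetical:

WITH moved AS (
    DELETE FROM old_orders
    WHERE order_date < '2016-01-01'
    RETURNING *        -- this output, not old_orders itself, is what the primary query reads
)
INSERT INTO archived_orders
SELECT * FROM moved;

If the RETURNING clause were omitted, the DELETE would still run, but moved could not be referenced by the INSERT.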
@@ -254,7 +254,7 @@ TABLE [ ONLY ] table_name [ * ] non_recursive_term UNION [ ALL | DISTINCT ] recursive_term where the recursive self-reference must appear on the right-hand - side of the UNION. Only one recursive self-reference + side of the UNION. Only one recursive self-reference is permitted per query. Recursive data-modifying statements are not supported, but you can use the results of a recursive SELECT query in @@ -285,7 +285,7 @@ TABLE [ ONLY ] table_name [ * ] The primary query and the WITH queries are all (notionally) executed at the same time. This implies that the effects of a data-modifying statement in WITH cannot be seen from - other parts of the query, other than by reading its RETURNING + other parts of the query, other than by reading its RETURNING output. If two such data-modifying statements attempt to modify the same row, the results are unspecified. @@ -303,7 +303,7 @@ TABLE [ ONLY ] table_name [ * ] tables for the SELECT. If multiple sources are specified, the result is the Cartesian product (cross join) of all the sources. But usually qualification conditions are added (via - WHERE) to restrict the returned rows to a small subset of the + WHERE) to restrict the returned rows to a small subset of the Cartesian product. @@ -317,10 +317,10 @@ TABLE [ ONLY ] table_name [ * ] The name (optionally schema-qualified) of an existing table or view. - If ONLY is specified before the table name, only that - table is scanned. If ONLY is not specified, the table + If ONLY is specified before the table name, only that + table is scanned. If ONLY is not specified, the table and all its descendant tables (if any) are scanned. Optionally, - * can be specified after the table name to explicitly + * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -330,14 +330,14 @@ TABLE [ ONLY ] table_name [ * ] alias - A substitute name for the FROM item containing the + A substitute name for the FROM item containing the alias. An alias is used for brevity or to eliminate ambiguity for self-joins (where the same table is scanned multiple times). When an alias is provided, it completely hides the actual name of the table or function; for example given - FROM foo AS f, the remainder of the - SELECT must refer to this FROM - item as f not foo. If an alias is + FROM foo AS f, the remainder of the + SELECT must refer to this FROM + item as f not foo. If an alias is written, a column alias list can also be written to provide substitute names for one or more columns of the table. @@ -348,12 +348,12 @@ TABLE [ ONLY ] table_name [ * ] TABLESAMPLE sampling_method ( argument [, ...] ) [ REPEATABLE ( seed ) ] - A TABLESAMPLE clause after - a table_name indicates that the + A TABLESAMPLE clause after + a table_name indicates that the specified sampling_method should be used to retrieve a subset of the rows in that table. This sampling precedes the application of any other filters such - as WHERE clauses. + as WHERE clauses. The standard PostgreSQL distribution includes two sampling methods, BERNOULLI and SYSTEM, and other sampling methods can be @@ -361,11 +361,11 @@ TABLE [ ONLY ] table_name [ * ] - The BERNOULLI and SYSTEM sampling methods - each accept a single argument + The BERNOULLI and SYSTEM sampling methods + each accept a single argument which is the fraction of the table to sample, expressed as a percentage between 0 and 100. This argument can be - any real-valued expression. (Other sampling methods might + any real-valued expression. 
(Other sampling methods might accept more or different arguments.) These two methods each return a randomly-chosen sample of the table that will contain approximately the specified percentage of the table's rows. @@ -383,10 +383,10 @@ TABLE [ ONLY ] table_name [ * ] The optional REPEATABLE clause specifies - a seed number or expression to use + a seed number or expression to use for generating random numbers within the sampling method. The seed value can be any non-null floating-point value. Two queries that - specify the same seed and argument + specify the same seed and argument values will select the same sample of the table, if the table has not been changed meanwhile. But different seed values will usually produce different samples. @@ -420,9 +420,9 @@ TABLE [ ONLY ] table_name [ * ] with_query_name - A WITH query is referenced by writing its name, + A WITH query is referenced by writing its name, just as though the query's name were a table name. (In fact, - the WITH query hides any real table of the same name + the WITH query hides any real table of the same name for the purposes of the primary query. If necessary, you can refer to a real table of the same name by schema-qualifying the table's name.) @@ -456,8 +456,8 @@ TABLE [ ONLY ] table_name [ * ] Multiple function calls can be combined into a - single FROM-clause item by surrounding them - with ROWS FROM( ... ). The output of such an item is the + single FROM-clause item by surrounding them + with ROWS FROM( ... ). The output of such an item is the concatenation of the first row from each function, then the second row from each function, etc. If some of the functions produce fewer rows than others, null values are substituted for the missing data, so @@ -467,28 +467,28 @@ TABLE [ ONLY ] table_name [ * ] If the function has been defined as returning the - record data type, then an alias or the key word - AS must be present, followed by a column + record data type, then an alias or the key word + AS must be present, followed by a column definition list in the form ( column_name data_type , ... - ). The column definition list must match the +
)
. The column definition list must match the actual number and types of columns returned by the function. - When using the ROWS FROM( ... ) syntax, if one of the + When using the ROWS FROM( ... ) syntax, if one of the functions requires a column definition list, it's preferred to put the column definition list after the function call inside - ROWS FROM( ... ). A column definition list can be placed - after the ROWS FROM( ... ) construct only if there's just - a single function and no WITH ORDINALITY clause. + ROWS FROM( ... ). A column definition list can be placed + after the ROWS FROM( ... ) construct only if there's just + a single function and no WITH ORDINALITY clause. To use ORDINALITY together with a column definition - list, you must use the ROWS FROM( ... ) syntax and put the - column definition list inside ROWS FROM( ... ). + list, you must use the ROWS FROM( ... ) syntax and put the + column definition list inside ROWS FROM( ... ). @@ -516,9 +516,9 @@ TABLE [ ONLY ] table_name [ * ] - For the INNER and OUTER join types, a + For the INNER and OUTER join types, a join condition must be specified, namely exactly one of - NATURAL, ON NATURAL, ON join_condition, or USING (join_column [, ...]). @@ -527,46 +527,46 @@ TABLE [ ONLY ] table_name [ * ] - A JOIN clause combines two FROM - items, which for convenience we will refer to as tables, - though in reality they can be any type of FROM item. + A JOIN clause combines two FROM + items, which for convenience we will refer to as tables, + though in reality they can be any type of FROM item. Use parentheses if necessary to determine the order of nesting. In the absence of parentheses, JOINs nest left-to-right. In any case JOIN binds more - tightly than the commas separating FROM-list items. + tightly than the commas separating FROM-list items. - CROSS JOIN and INNER JOIN + CROSS JOIN and INNER JOIN produce a simple Cartesian product, the same result as you get from - listing the two tables at the top level of FROM, + listing the two tables at the top level of FROM, but restricted by the join condition (if any). - CROSS JOIN is equivalent to INNER JOIN ON - (TRUE), that is, no rows are removed by qualification. + CROSS JOIN is equivalent to INNER JOIN ON + (TRUE), that is, no rows are removed by qualification. These join types are just a notational convenience, since they - do nothing you couldn't do with plain FROM and - WHERE. + do nothing you couldn't do with plain FROM and + WHERE. - LEFT OUTER JOIN returns all rows in the qualified + LEFT OUTER JOIN returns all rows in the qualified Cartesian product (i.e., all combined rows that pass its join condition), plus one copy of each row in the left-hand table for which there was no right-hand row that passed the join condition. This left-hand row is extended to the full width of the joined table by inserting null values for the - right-hand columns. Note that only the JOIN + right-hand columns. Note that only the JOIN clause's own condition is considered while deciding which rows have matches. Outer conditions are applied afterwards. - Conversely, RIGHT OUTER JOIN returns all the + Conversely, RIGHT OUTER JOIN returns all the joined rows, plus one row for each unmatched right-hand row (extended with nulls on the left). This is just a notational convenience, since you could convert it to a LEFT - OUTER JOIN by switching the left and right tables. + OUTER JOIN by switching the left and right tables. 
- FULL OUTER JOIN returns all the joined rows, plus + FULL OUTER JOIN returns all the joined rows, plus one row for each unmatched left-hand row (extended with nulls on the right), plus one row for each unmatched right-hand row (extended with nulls on the left). @@ -593,7 +593,7 @@ TABLE [ ONLY ] table_name [ * ] A clause of the form USING ( a, b, ... ) is shorthand for ON left_table.a = right_table.a AND left_table.b = right_table.b .... Also, - USING implies that only one of each pair of + USING implies that only one of each pair of equivalent columns will be included in the join output, not both. @@ -605,10 +605,10 @@ TABLE [ ONLY ] table_name [ * ] NATURAL is shorthand for a - USING list that mentions all columns in the two + USING list that mentions all columns in the two tables that have matching names. If there are no common column names, NATURAL is equivalent - to ON TRUE. + to ON TRUE. @@ -618,32 +618,32 @@ TABLE [ ONLY ] table_name [ * ] The LATERAL key word can precede a - sub-SELECT FROM item. This allows the - sub-SELECT to refer to columns of FROM - items that appear before it in the FROM list. (Without + sub-SELECT FROM item. This allows the + sub-SELECT to refer to columns of FROM + items that appear before it in the FROM list. (Without LATERAL, each sub-SELECT is evaluated independently and so cannot cross-reference any other - FROM item.) + FROM item.) LATERAL can also precede a function-call - FROM item, but in this case it is a noise word, because - the function expression can refer to earlier FROM items + FROM item, but in this case it is a noise word, because + the function expression can refer to earlier FROM items in any case. A LATERAL item can appear at top level in the - FROM list, or within a JOIN tree. In the + FROM list, or within a JOIN tree. In the latter case it can also refer to any items that are on the left-hand - side of a JOIN that it is on the right-hand side of. + side of a JOIN that it is on the right-hand side of. - When a FROM item contains LATERAL + When a FROM item contains LATERAL cross-references, evaluation proceeds as follows: for each row of the - FROM item providing the cross-referenced column(s), or - set of rows of multiple FROM items providing the + FROM item providing the cross-referenced column(s), or + set of rows of multiple FROM items providing the columns, the LATERAL item is evaluated using that row or row set's values of the columns. The resulting row(s) are joined as usual with the rows they were computed from. This is @@ -651,14 +651,14 @@ TABLE [ ONLY ] table_name [ * ] - The column source table(s) must be INNER or - LEFT joined to the LATERAL item, else + The column source table(s) must be INNER or + LEFT joined to the LATERAL item, else there would not be a well-defined set of rows from which to compute each set of rows for the LATERAL item. Thus, - although a construct such as X RIGHT JOIN - LATERAL Y is syntactically valid, it is - not actually allowed for Y to reference - X. + although a construct such as X RIGHT JOIN + LATERAL Y is syntactically valid, it is + not actually allowed for Y to reference + X. @@ -707,13 +707,13 @@ GROUP BY grouping_element [, ...] - If any of GROUPING SETS, ROLLUP or - CUBE are present as grouping elements, then the - GROUP BY clause as a whole defines some number of - independent grouping sets. 
The effect of this is - equivalent to constructing a UNION ALL between + If any of GROUPING SETS, ROLLUP or + CUBE are present as grouping elements, then the + GROUP BY clause as a whole defines some number of + independent grouping sets. The effect of this is + equivalent to constructing a UNION ALL between subqueries with the individual grouping sets as their - GROUP BY clauses. For further details on the handling + GROUP BY clauses. For further details on the handling of grouping sets see . @@ -744,15 +744,15 @@ GROUP BY grouping_element [, ...] Keep in mind that all aggregate functions are evaluated before - evaluating any scalar expressions in the HAVING - clause or SELECT list. This means that, for example, - a CASE expression cannot be used to skip evaluation of + evaluating any scalar expressions in the HAVING + clause or SELECT list. This means that, for example, + a CASE expression cannot be used to skip evaluation of an aggregate function; see . - Currently, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE and FOR KEY SHARE cannot be + Currently, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE and FOR KEY SHARE cannot be specified with GROUP BY. @@ -784,9 +784,9 @@ HAVING condition The presence of HAVING turns a query into a grouped - query even if there is no GROUP BY clause. This is the + query even if there is no GROUP BY clause. This is the same as what happens when the query contains aggregate functions but - no GROUP BY clause. All the selected rows are considered to + no GROUP BY clause. All the selected rows are considered to form a single group, and the SELECT list and HAVING clause can only reference table columns from within aggregate functions. Such a query will emit a single row if the @@ -794,8 +794,8 @@ HAVING condition - Currently, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE and FOR KEY SHARE cannot be + Currently, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE and FOR KEY SHARE cannot be specified with HAVING. @@ -809,7 +809,7 @@ HAVING condition WINDOW window_name AS ( window_definition ) [, ...] where window_name is - a name that can be referenced from OVER clauses or + a name that can be referenced from OVER clauses or subsequent window definitions, and window_definition is @@ -822,29 +822,29 @@ WINDOW window_name AS ( If an existing_window_name - is specified it must refer to an earlier entry in the WINDOW + is specified it must refer to an earlier entry in the WINDOW list; the new window copies its partitioning clause from that entry, as well as its ordering clause if any. In this case the new window cannot - specify its own PARTITION BY clause, and it can specify - ORDER BY only if the copied window does not have one. + specify its own PARTITION BY clause, and it can specify + ORDER BY only if the copied window does not have one. The new window always uses its own frame clause; the copied window must not specify a frame clause. - The elements of the PARTITION BY list are interpreted in + The elements of the PARTITION BY list are interpreted in much the same fashion as elements of a , except that they are always simple expressions and never the name or number of an output column. Another difference is that these expressions can contain aggregate - function calls, which are not allowed in a regular GROUP BY + function calls, which are not allowed in a regular GROUP BY clause. They are allowed here because windowing occurs after grouping and aggregation. 
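As a sketch of the WINDOW clause described above, one named window definition can be shared by several window functions; the empsalary table is assumed purely for illustration:

SELECT depname, empno, salary,
       rank()         OVER w,
       percent_rank() OVER w
FROM empsalary
WINDOW w AS (PARTITION BY depname ORDER BY salary DESC);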
- Similarly, the elements of the ORDER BY list are interpreted + Similarly, the elements of the ORDER BY list are interpreted in much the same fashion as elements of an , except that the expressions are always taken as simple expressions and never the name @@ -852,18 +852,18 @@ WINDOW window_name AS ( - The optional frame_clause defines - the window frame for window functions that depend on the + The optional frame_clause defines + the window frame for window functions that depend on the frame (not all do). The window frame is a set of related rows for - each row of the query (called the current row). - The frame_clause can be one of + each row of the query (called the current row). + The frame_clause can be one of -{ RANGE | ROWS } frame_start -{ RANGE | ROWS } BETWEEN frame_start AND frame_end +{ RANGE | ROWS } frame_start +{ RANGE | ROWS } BETWEEN frame_start AND frame_end - where frame_start and frame_end can be + where frame_start and frame_end can be one of @@ -874,34 +874,34 @@ CURRENT ROW UNBOUNDED FOLLOWING - If frame_end is omitted it defaults to CURRENT - ROW. Restrictions are that - frame_start cannot be UNBOUNDED FOLLOWING, - frame_end cannot be UNBOUNDED PRECEDING, - and the frame_end choice cannot appear earlier in the - above list than the frame_start choice — for example - RANGE BETWEEN CURRENT ROW AND value + If frame_end is omitted it defaults to CURRENT + ROW. Restrictions are that + frame_start cannot be UNBOUNDED FOLLOWING, + frame_end cannot be UNBOUNDED PRECEDING, + and the frame_end choice cannot appear earlier in the + above list than the frame_start choice — for example + RANGE BETWEEN CURRENT ROW AND value PRECEDING is not allowed. - The default framing option is RANGE UNBOUNDED PRECEDING, + The default framing option is RANGE UNBOUNDED PRECEDING, which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND - CURRENT ROW; it sets the frame to be all rows from the partition start + CURRENT ROW; it sets the frame to be all rows from the partition start up through the current row's last peer (a row that ORDER - BY considers equivalent to the current row, or all rows if there - is no ORDER BY). - In general, UNBOUNDED PRECEDING means that the frame + BY considers equivalent to the current row, or all rows if there + is no ORDER BY). + In general, UNBOUNDED PRECEDING means that the frame starts with the first row of the partition, and similarly - UNBOUNDED FOLLOWING means that the frame ends with the last - row of the partition (regardless of RANGE or ROWS - mode). In ROWS mode, CURRENT ROW + UNBOUNDED FOLLOWING means that the frame ends with the last + row of the partition (regardless of RANGE or ROWS + mode). In ROWS mode, CURRENT ROW means that the frame starts or ends with the current row; but in - RANGE mode it means that the frame starts or ends with - the current row's first or last peer in the ORDER BY ordering. - The value PRECEDING and - value FOLLOWING cases are currently only - allowed in ROWS mode. They indicate that the frame starts + RANGE mode it means that the frame starts or ends with + the current row's first or last peer in the ORDER BY ordering. + The value PRECEDING and + value FOLLOWING cases are currently only + allowed in ROWS mode. They indicate that the frame starts or ends with the row that many rows before or after the current row. value must be an integer expression not containing any variables, aggregate functions, or window functions. 
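To ground the frame-clause rules above, a minimal sketch of a ROWS-mode frame computing a rolling sum; the payments table and its columns are hypothetical:

SELECT paid_at, amount,
       sum(amount) OVER (ORDER BY paid_at
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS rolling_sum
FROM payments;

Each output row's frame is the current row plus at most the two rows preceding it in the paid_at ordering.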
@@ -910,32 +910,32 @@ UNBOUNDED FOLLOWING - Beware that the ROWS options can produce unpredictable - results if the ORDER BY ordering does not order the rows - uniquely. The RANGE options are designed to ensure that - rows that are peers in the ORDER BY ordering are treated + Beware that the ROWS options can produce unpredictable + results if the ORDER BY ordering does not order the rows + uniquely. The RANGE options are designed to ensure that + rows that are peers in the ORDER BY ordering are treated alike; all peer rows will be in the same frame. The purpose of a WINDOW clause is to specify the - behavior of window functions appearing in the query's + behavior of window functions appearing in the query's or . These functions can reference the WINDOW clause entries by name - in their OVER clauses. A WINDOW clause + in their OVER clauses. A WINDOW clause entry does not have to be referenced anywhere, however; if it is not used in the query it is simply ignored. It is possible to use window functions without any WINDOW clause at all, since a window function call can specify its window definition directly in - its OVER clause. However, the WINDOW + its OVER clause. However, the WINDOW clause saves typing when the same window definition is needed for more than one window function. - Currently, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE and FOR KEY SHARE cannot be + Currently, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE and FOR KEY SHARE cannot be specified with WINDOW. @@ -952,20 +952,20 @@ UNBOUNDED FOLLOWING The SELECT list (between the key words - SELECT and FROM) specifies expressions + SELECT and FROM) specifies expressions that form the output rows of the SELECT statement. The expressions can (and usually do) refer to columns - computed in the FROM clause. + computed in the FROM clause. Just as in a table, every output column of a SELECT has a name. In a simple SELECT this name is just - used to label the column for display, but when the SELECT + used to label the column for display, but when the SELECT is a sub-query of a larger query, the name is seen by the larger query as the column name of the virtual table produced by the sub-query. To specify the name to use for an output column, write - AS output_name + AS output_name after the column's expression. (You can omit AS, but only if the desired output name does not match any PostgreSQL keyword (see An output column's name can be used to refer to the column's value in - ORDER BY and GROUP BY clauses, but not in the - WHERE or HAVING clauses; there you must write + ORDER BY and GROUP BY clauses, but not in the + WHERE or HAVING clauses; there you must write out the expression instead. @@ -993,7 +993,7 @@ UNBOUNDED FOLLOWING rows. Also, you can write table_name.* as a shorthand for the columns coming from just that table. In these - cases it is not possible to specify new names with AS; + cases it is not possible to specify new names with AS; the output column names will be the same as the table columns' names. @@ -1008,11 +1008,11 @@ UNBOUNDED FOLLOWING contains any volatile or expensive functions. With that behavior, the order of function evaluations is more intuitive and there will not be evaluations corresponding to rows that never appear in the output. - PostgreSQL will effectively evaluate output expressions + PostgreSQL will effectively evaluate output expressions after sorting and limiting, so long as those expressions are not referenced in DISTINCT, ORDER BY or GROUP BY. 
(As a counterexample, SELECT - f(x) FROM tab ORDER BY 1 clearly must evaluate f(x) + f(x) FROM tab ORDER BY 1 clearly must evaluate f(x) before sorting.) Output expressions that contain set-returning functions are effectively evaluated after sorting and before limiting, so that LIMIT will act to cut off the output from a @@ -1021,7 +1021,7 @@ UNBOUNDED FOLLOWING - PostgreSQL versions before 9.6 did not provide any + PostgreSQL versions before 9.6 did not provide any guarantees about the timing of evaluation of output expressions versus sorting and limiting; it depended on the form of the chosen query plan. @@ -1032,9 +1032,9 @@ UNBOUNDED FOLLOWING <literal>DISTINCT</literal> Clause - If SELECT DISTINCT is specified, all duplicate rows are + If SELECT DISTINCT is specified, all duplicate rows are removed from the result set (one row is kept from each group of - duplicates). SELECT ALL specifies the opposite: all rows are + duplicates). SELECT ALL specifies the opposite: all rows are kept; that is the default. @@ -1044,9 +1044,9 @@ UNBOUNDED FOLLOWING keeps only the first row of each set of rows where the given expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for - ORDER BY (see above). Note that the first + ORDER BY (see above). Note that the first row of each set is unpredictable unless ORDER - BY is used to ensure that the desired row appears first. For + BY is used to ensure that the desired row appears first. For example: SELECT DISTINCT ON (location) location, time, report @@ -1054,21 +1054,21 @@ SELECT DISTINCT ON (location) location, time, report ORDER BY location, time DESC; retrieves the most recent weather report for each location. But - if we had not used ORDER BY to force descending order + if we had not used ORDER BY to force descending order of time values for each location, we'd have gotten a report from an unpredictable time for each location. - The DISTINCT ON expression(s) must match the leftmost - ORDER BY expression(s). The ORDER BY clause + The DISTINCT ON expression(s) must match the leftmost + ORDER BY expression(s). The ORDER BY clause will normally contain additional expression(s) that determine the - desired precedence of rows within each DISTINCT ON group. + desired precedence of rows within each DISTINCT ON group. - Currently, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE and FOR KEY SHARE cannot be + Currently, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE and FOR KEY SHARE cannot be specified with DISTINCT. @@ -1082,9 +1082,9 @@ SELECT DISTINCT ON (location) location, time, report select_statement UNION [ ALL | DISTINCT ] select_statement select_statement is any SELECT statement without an ORDER - BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, + BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE, or FOR KEY SHARE clause. - (ORDER BY and LIMIT can be attached to a + (ORDER BY and LIMIT can be attached to a subexpression if it is enclosed in parentheses. Without parentheses, these clauses will be taken to apply to the result of the UNION, not to its right-hand input @@ -1103,26 +1103,26 @@ SELECT DISTINCT ON (location) location, time, report - The result of UNION does not contain any duplicate - rows unless the ALL option is specified. - ALL prevents elimination of duplicates. (Therefore, - UNION ALL is usually significantly quicker than - UNION; use ALL when you can.) - DISTINCT can be written to explicitly specify the + The result of UNION does not contain any duplicate + rows unless the ALL option is specified. 
+ ALL prevents elimination of duplicates. (Therefore, + UNION ALL is usually significantly quicker than + UNION; use ALL when you can.) + DISTINCT can be written to explicitly specify the default behavior of eliminating duplicate rows. - Multiple UNION operators in the same + Multiple UNION operators in the same SELECT statement are evaluated left to right, unless otherwise indicated by parentheses. - Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and - FOR KEY SHARE cannot be - specified either for a UNION result or for any input of a - UNION. + Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and + FOR KEY SHARE cannot be + specified either for a UNION result or for any input of a + UNION. @@ -1135,8 +1135,8 @@ SELECT DISTINCT ON (location) location, time, report select_statement INTERSECT [ ALL | DISTINCT ] select_statement select_statement is any SELECT statement without an ORDER - BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE, or FOR KEY SHARE clause. + BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE, or FOR KEY SHARE clause. @@ -1148,11 +1148,11 @@ SELECT DISTINCT ON (location) location, time, report The result of INTERSECT does not contain any - duplicate rows unless the ALL option is specified. - With ALL, a row that has m duplicates in the - left table and n duplicates in the right table will appear - min(m,n) times in the result set. - DISTINCT can be written to explicitly specify the + duplicate rows unless the ALL option is specified. + With ALL, a row that has m duplicates in the + left table and n duplicates in the right table will appear + min(m,n) times in the result set. + DISTINCT can be written to explicitly specify the default behavior of eliminating duplicate rows. @@ -1167,10 +1167,10 @@ SELECT DISTINCT ON (location) location, time, report - Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and - FOR KEY SHARE cannot be - specified either for an INTERSECT result or for any input of - an INTERSECT. + Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and + FOR KEY SHARE cannot be + specified either for an INTERSECT result or for any input of + an INTERSECT. @@ -1183,8 +1183,8 @@ SELECT DISTINCT ON (location) location, time, report select_statement EXCEPT [ ALL | DISTINCT ] select_statement select_statement is any SELECT statement without an ORDER - BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE, or FOR KEY SHARE clause. + BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE, or FOR KEY SHARE clause. @@ -1195,26 +1195,26 @@ SELECT DISTINCT ON (location) location, time, report The result of EXCEPT does not contain any - duplicate rows unless the ALL option is specified. - With ALL, a row that has m duplicates in the - left table and n duplicates in the right table will appear - max(m-n,0) times in the result set. - DISTINCT can be written to explicitly specify the + duplicate rows unless the ALL option is specified. + With ALL, a row that has m duplicates in the + left table and n duplicates in the right table will appear + max(m-n,0) times in the result set. + DISTINCT can be written to explicitly specify the default behavior of eliminating duplicate rows. Multiple EXCEPT operators in the same SELECT statement are evaluated left to right, - unless parentheses dictate otherwise. EXCEPT binds at - the same level as UNION. + unless parentheses dictate otherwise. EXCEPT binds at + the same level as UNION. 
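A brief sketch of the duplicate-handling rules for these set operations; table_a and table_b are hypothetical single-column tables:

SELECT id FROM table_a
UNION                    -- duplicates eliminated
SELECT id FROM table_b;

SELECT id FROM table_a
UNION ALL                -- duplicates kept; usually significantly quicker
SELECT id FROM table_b;

SELECT id FROM table_a
EXCEPT ALL               -- a row with m copies on the left and n on the right
SELECT id FROM table_b;  -- appears max(m-n,0) times in the result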
- Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and - FOR KEY SHARE cannot be - specified either for an EXCEPT result or for any input of - an EXCEPT. + Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and + FOR KEY SHARE cannot be + specified either for an EXCEPT result or for any input of + an EXCEPT. @@ -1247,7 +1247,7 @@ ORDER BY expression [ ASC | DESC | ordering on the basis of a column that does not have a unique name. This is never absolutely necessary because it is always possible to assign a name to an output column using the - AS clause. + AS clause. @@ -1258,59 +1258,59 @@ ORDER BY expression [ ASC | DESC | SELECT name FROM distributors ORDER BY code; - A limitation of this feature is that an ORDER BY - clause applying to the result of a UNION, - INTERSECT, or EXCEPT clause can only + A limitation of this feature is that an ORDER BY + clause applying to the result of a UNION, + INTERSECT, or EXCEPT clause can only specify an output column name or number, not an expression. - If an ORDER BY expression is a simple name that + If an ORDER BY expression is a simple name that matches both an output column name and an input column name, - ORDER BY will interpret it as the output column name. - This is the opposite of the choice that GROUP BY will + ORDER BY will interpret it as the output column name. + This is the opposite of the choice that GROUP BY will make in the same situation. This inconsistency is made to be compatible with the SQL standard. - Optionally one can add the key word ASC (ascending) or - DESC (descending) after any expression in the - ORDER BY clause. If not specified, ASC is + Optionally one can add the key word ASC (ascending) or + DESC (descending) after any expression in the + ORDER BY clause. If not specified, ASC is assumed by default. Alternatively, a specific ordering operator - name can be specified in the USING clause. + name can be specified in the USING clause. An ordering operator must be a less-than or greater-than member of some B-tree operator family. - ASC is usually equivalent to USING < and - DESC is usually equivalent to USING >. + ASC is usually equivalent to USING < and + DESC is usually equivalent to USING >. (But the creator of a user-defined data type can define exactly what the default sort ordering is, and it might correspond to operators with other names.) - If NULLS LAST is specified, null values sort after all - non-null values; if NULLS FIRST is specified, null values + If NULLS LAST is specified, null values sort after all + non-null values; if NULLS FIRST is specified, null values sort before all non-null values. If neither is specified, the default - behavior is NULLS LAST when ASC is specified - or implied, and NULLS FIRST when DESC is specified + behavior is NULLS LAST when ASC is specified + or implied, and NULLS FIRST when DESC is specified (thus, the default is to act as though nulls are larger than non-nulls). - When USING is specified, the default nulls ordering depends + When USING is specified, the default nulls ordering depends on whether the operator is a less-than or greater-than operator. Note that ordering options apply only to the expression they follow; - for example ORDER BY x, y DESC does not mean - the same thing as ORDER BY x DESC, y DESC. + for example ORDER BY x, y DESC does not mean + the same thing as ORDER BY x DESC, y DESC. Character-string data is sorted according to the collation that applies to the column being sorted. 
That can be overridden at need by including - a COLLATE clause in the + a COLLATE clause in the expression, for example - ORDER BY mycolumn COLLATE "en_US". + ORDER BY mycolumn COLLATE "en_US". For more information see and . @@ -1337,60 +1337,60 @@ OFFSET start If the count expression - evaluates to NULL, it is treated as LIMIT ALL, i.e., no + evaluates to NULL, it is treated as LIMIT ALL, i.e., no limit. If start evaluates - to NULL, it is treated the same as OFFSET 0. + to NULL, it is treated the same as OFFSET 0. SQL:2008 introduced a different syntax to achieve the same result, - which PostgreSQL also supports. It is: + which PostgreSQL also supports. It is: OFFSET start { ROW | ROWS } FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY In this syntax, to write anything except a simple integer constant for - start or start or count, you must write parentheses around it. - If count is - omitted in a FETCH clause, it defaults to 1. + If count is + omitted in a FETCH clause, it defaults to 1. ROW and ROWS as well as FIRST and NEXT are noise words that don't influence the effects of these clauses. According to the standard, the OFFSET clause must come before the FETCH clause if both are present; but - PostgreSQL is laxer and allows either order. + PostgreSQL is laxer and allows either order. - When using LIMIT, it is a good idea to use an - ORDER BY clause that constrains the result rows into a + When using LIMIT, it is a good idea to use an + ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows — you might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? You - don't know what ordering unless you specify ORDER BY. + don't know what ordering unless you specify ORDER BY. - The query planner takes LIMIT into account when + The query planner takes LIMIT into account when generating a query plan, so you are very likely to get different plans (yielding different row orders) depending on what you use - for LIMIT and OFFSET. Thus, using - different LIMIT/OFFSET values to select + for LIMIT and OFFSET. Thus, using + different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable - result ordering with ORDER BY. This is not a bug; it + result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless - ORDER BY is used to constrain the order. + ORDER BY is used to constrain the order. - It is even possible for repeated executions of the same LIMIT + It is even possible for repeated executions of the same LIMIT query to return different subsets of the rows of a table, if there - is not an ORDER BY to enforce selection of a deterministic + is not an ORDER BY to enforce selection of a deterministic subset. Again, this is not a bug; determinism of the results is simply not guaranteed in such a case. @@ -1400,9 +1400,9 @@ FETCH { FIRST | NEXT } [ count ] { The Locking Clause - FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE - and FOR KEY SHARE - are locking clauses; they affect how SELECT + FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE + and FOR KEY SHARE + are locking clauses; they affect how SELECT locks rows as they are obtained from the table. @@ -1410,10 +1410,10 @@ FETCH { FIRST | NEXT } [ count ] { The locking clause has the general form -FOR lock_strength [ OF table_name [, ...] 
] [ NOWAIT | SKIP LOCKED ] +FOR lock_strength [ OF table_name [, ...] ] [ NOWAIT | SKIP LOCKED ] - where lock_strength can be one of + where lock_strength can be one of UPDATE @@ -1430,20 +1430,20 @@ KEY SHARE To prevent the operation from waiting for other transactions to commit, - use either the NOWAIT or SKIP LOCKED - option. With NOWAIT, the statement reports an error, rather + use either the NOWAIT or SKIP LOCKED + option. With NOWAIT, the statement reports an error, rather than waiting, if a selected row cannot be locked immediately. With SKIP LOCKED, any selected rows that cannot be immediately locked are skipped. Skipping locked rows provides an inconsistent view of the data, so this is not suitable for general purpose work, but can be used to avoid lock contention with multiple consumers accessing a queue-like table. - Note that NOWAIT and SKIP LOCKED apply only + Note that NOWAIT and SKIP LOCKED apply only to the row-level lock(s) — the required ROW SHARE table-level lock is still taken in the ordinary way (see ). You can use - with the NOWAIT option first, + with the NOWAIT option first, if you need to acquire the table-level lock without waiting. @@ -1457,9 +1457,9 @@ KEY SHARE applied to a view or sub-query, it affects all tables used in the view or sub-query. However, these clauses - do not apply to WITH queries referenced by the primary query. - If you want row locking to occur within a WITH query, specify - a locking clause within the WITH query. + do not apply to WITH queries referenced by the primary query. + If you want row locking to occur within a WITH query, specify + a locking clause within the WITH query. @@ -1469,7 +1469,7 @@ KEY SHARE implicitly affected) by more than one locking clause, then it is processed as if it was only specified by the strongest one. Similarly, a table is processed - as NOWAIT if that is specified in any of the clauses + as NOWAIT if that is specified in any of the clauses affecting it. Otherwise, it is processed as SKIP LOCKED if that is specified in any of the clauses affecting it. @@ -1483,16 +1483,16 @@ KEY SHARE When a locking clause - appears at the top level of a SELECT query, the rows that + appears at the top level of a SELECT query, the rows that are locked are exactly those that are returned by the query; in the case of a join query, the rows locked are those that contribute to returned join rows. In addition, rows that satisfied the query conditions as of the query snapshot will be locked, although they will not be returned if they were updated after the snapshot and no longer satisfy the query conditions. If a - LIMIT is used, locking stops + LIMIT is used, locking stops once enough rows have been returned to satisfy the limit (but note that - rows skipped over by OFFSET will get locked). Similarly, + rows skipped over by OFFSET will get locked). Similarly, if a locking clause is used in a cursor's query, only rows actually fetched or stepped past by the cursor will be locked. @@ -1500,7 +1500,7 @@ KEY SHARE When a locking clause - appears in a sub-SELECT, the rows locked are those + appears in a sub-SELECT, the rows locked are those returned to the outer query by the sub-query. 
This might involve fewer rows than inspection of the sub-query alone would suggest, since conditions from the outer query might be used to optimize @@ -1508,7 +1508,7 @@ KEY SHARE SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss WHERE col1 = 5; - will lock only rows having col1 = 5, even though that + will lock only rows having col1 = 5, even though that condition is not textually within the sub-query. @@ -1522,18 +1522,18 @@ SAVEPOINT s; UPDATE mytable SET ... WHERE key = 1; ROLLBACK TO s; - would fail to preserve the FOR UPDATE lock after the - ROLLBACK TO. This has been fixed in release 9.3. + would fail to preserve the FOR UPDATE lock after the + ROLLBACK TO. This has been fixed in release 9.3. - It is possible for a SELECT command running at the READ + It is possible for a SELECT command running at the READ COMMITTED transaction isolation level and using ORDER BY and a locking clause to return rows out of - order. This is because ORDER BY is applied first. + order. This is because ORDER BY is applied first. The command sorts the result, but might then block trying to obtain a lock - on one or more of the rows. Once the SELECT unblocks, some + on one or more of the rows. Once the SELECT unblocks, some of the ordering column values might have been modified, leading to those rows appearing to be out of order (though they are in order in terms of the original column values). This can be worked around at need by @@ -1542,11 +1542,11 @@ ROLLBACK TO s; SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss ORDER BY column1; - Note that this will result in locking all rows of mytable, - whereas FOR UPDATE at the top level would lock only the + Note that this will result in locking all rows of mytable, + whereas FOR UPDATE at the top level would lock only the actually returned rows. This can make for a significant performance - difference, particularly if the ORDER BY is combined with - LIMIT or other restrictions. So this technique is recommended + difference, particularly if the ORDER BY is combined with + LIMIT or other restrictions. So this technique is recommended only if concurrent updates of the ordering columns are expected and a strictly sorted result is required. @@ -1573,11 +1573,11 @@ TABLE name SELECT * FROM name It can be used as a top-level command or as a space-saving syntax - variant in parts of complex queries. Only the WITH, - UNION, INTERSECT, EXCEPT, - ORDER BY, LIMIT, OFFSET, - FETCH and FOR locking clauses can be used - with TABLE; the WHERE clause and any form of + variant in parts of complex queries. Only the WITH, + UNION, INTERSECT, EXCEPT, + ORDER BY, LIMIT, OFFSET, + FETCH and FOR locking clauses can be used + with TABLE; the WHERE clause and any form of aggregation cannot be used. @@ -1702,7 +1702,7 @@ SELECT actors.name - This example shows how to use a function in the FROM + This example shows how to use a function in the FROM clause, both with and without a column definition list: @@ -1744,7 +1744,7 @@ SELECT * FROM unnest(ARRAY['a','b','c','d','e','f']) WITH ORDINALITY; - This example shows how to use a simple WITH clause: + This example shows how to use a simple WITH clause: WITH t AS ( @@ -1764,7 +1764,7 @@ SELECT * FROM t 0.0735620250925422 - Notice that the WITH query was evaluated only once, + Notice that the WITH query was evaluated only once, so that we got two sets of the same three random values. 
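Returning to the locking clause discussed above, a hedged sketch of the queue-consumer pattern that SKIP LOCKED enables; the jobs table, its columns, and the status values are hypothetical:

BEGIN;
UPDATE jobs
SET status = 'running'
WHERE id = (
    SELECT id
    FROM jobs
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED   -- locking in the sub-SELECT; locked rows are skipped, not waited on
)
RETURNING id, payload;
COMMIT;

Concurrent workers running the same statement each claim a different pending row instead of blocking on one another.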
@@ -1796,9 +1796,9 @@ SELECT distance, employee_name FROM employee_recursive; - This example uses LATERAL to apply a set-returning function - get_product_names() for each row of the - manufacturers table: + This example uses LATERAL to apply a set-returning function + get_product_names() for each row of the + manufacturers table: SELECT m.name AS mname, pname @@ -1866,7 +1866,7 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; This is not valid syntax according to the SQL standard. PostgreSQL allows it to be consistent with allowing zero-column tables. - However, an empty list is not allowed when DISTINCT is used. + However, an empty list is not allowed when DISTINCT is used. @@ -1874,19 +1874,19 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; Omitting the <literal>AS</literal> Key Word - In the SQL standard, the optional key word AS can be + In the SQL standard, the optional key word AS can be omitted before an output column name whenever the new column name is a valid column name (that is, not the same as any reserved keyword). PostgreSQL is slightly more - restrictive: AS is required if the new column name + restrictive: AS is required if the new column name matches any keyword at all, reserved or not. Recommended practice is - to use AS or double-quote output column names, to prevent + to use AS or double-quote output column names, to prevent any possible conflict against future keyword additions. In FROM items, both the standard and - PostgreSQL allow AS to + PostgreSQL allow AS to be omitted before an alias that is an unreserved keyword. But this is impractical for output column names, because of syntactic ambiguities. @@ -1899,12 +1899,12 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; The SQL standard requires parentheses around the table name when writing ONLY, for example SELECT * FROM ONLY - (tab1), ONLY (tab2) WHERE .... PostgreSQL + (tab1), ONLY (tab2) WHERE .... PostgreSQL considers these parentheses to be optional. - PostgreSQL allows a trailing * to be written to + PostgreSQL allows a trailing * to be written to explicitly specify the non-ONLY behavior of including child tables. The standard does not allow this. @@ -1919,9 +1919,9 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; <literal>TABLESAMPLE</literal> Clause Restrictions - The TABLESAMPLE clause is currently accepted only on + The TABLESAMPLE clause is currently accepted only on regular tables and materialized views. According to the SQL standard - it should be possible to apply it to any FROM item. + it should be possible to apply it to any FROM item. @@ -1930,16 +1930,16 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; PostgreSQL allows a function call to be - written directly as a member of the FROM list. In the SQL + written directly as a member of the FROM list. In the SQL standard it would be necessary to wrap such a function call in a sub-SELECT; that is, the syntax - FROM func(...) alias + FROM func(...) alias is approximately equivalent to - FROM LATERAL (SELECT func(...)) alias. - Note that LATERAL is considered to be implicit; this is - because the standard requires LATERAL semantics for an - UNNEST() item in FROM. - PostgreSQL treats UNNEST() the + FROM LATERAL (SELECT func(...)) alias. + Note that LATERAL is considered to be implicit; this is + because the standard requires LATERAL semantics for an + UNNEST() item in FROM. + PostgreSQL treats UNNEST() the same as other set-returning functions. 
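As a concrete sketch of the approximate equivalence just noted, using the built-in generate_series function (the alias g(n) is arbitrary):

SELECT * FROM generate_series(1, 3) AS g(n);

-- approximately equivalent, with LATERAL made explicit:
SELECT * FROM LATERAL (SELECT generate_series(1, 3)) AS g(n);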
@@ -1974,8 +1974,8 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; PostgreSQL recognizes functional dependency - (allowing columns to be omitted from GROUP BY) only when - a table's primary key is included in the GROUP BY list. + (allowing columns to be omitted from GROUP BY) only when + a table's primary key is included in the GROUP BY list. The SQL standard specifies additional conditions that should be recognized. @@ -1986,7 +1986,7 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; The SQL standard provides additional options for the window - frame_clause. + frame_clause. PostgreSQL currently supports only the options listed above. @@ -2011,26 +2011,26 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; - <literal>FOR NO KEY UPDATE</>, <literal>FOR UPDATE</>, <literal>FOR SHARE</>, <literal>FOR KEY SHARE</> + <literal>FOR NO KEY UPDATE</literal>, <literal>FOR UPDATE</literal>, <literal>FOR SHARE</literal>, <literal>FOR KEY SHARE</literal> - Although FOR UPDATE appears in the SQL standard, the - standard allows it only as an option of DECLARE CURSOR. - PostgreSQL allows it in any SELECT - query as well as in sub-SELECTs, but this is an extension. - The FOR NO KEY UPDATE, FOR SHARE and - FOR KEY SHARE variants, as well as the NOWAIT + Although FOR UPDATE appears in the SQL standard, the + standard allows it only as an option of DECLARE CURSOR. + PostgreSQL allows it in any SELECT + query as well as in sub-SELECTs, but this is an extension. + The FOR NO KEY UPDATE, FOR SHARE and + FOR KEY SHARE variants, as well as the NOWAIT and SKIP LOCKED options, do not appear in the standard. - Data-Modifying Statements in <literal>WITH</> + Data-Modifying Statements in <literal>WITH</literal> - PostgreSQL allows INSERT, - UPDATE, and DELETE to be used as WITH + PostgreSQL allows INSERT, + UPDATE, and DELETE to be used as WITH queries. This is not found in the SQL standard. @@ -2044,7 +2044,7 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; - ROWS FROM( ... ) is an extension of the SQL standard. + ROWS FROM( ... ) is an extension of the SQL standard. diff --git a/doc/src/sgml/ref/set.sgml b/doc/src/sgml/ref/set.sgml index 89c0fad195..8c44d0e156 100644 --- a/doc/src/sgml/ref/set.sgml +++ b/doc/src/sgml/ref/set.sgml @@ -66,15 +66,15 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone If SET LOCAL is used within a function that has a - SET option for the same variable (see + SET option for the same variable (see ), the effects of the SET LOCAL command disappear at function exit; that is, the value in effect when the function was called is restored anyway. This allows SET LOCAL to be used for dynamic or repeated changes of a parameter within a function, while still - having the convenience of using the SET option to save and - restore the caller's value. However, a regular SET command - overrides any surrounding function's SET option; its effects + having the convenience of using the SET option to save and + restore the caller's value. However, a regular SET command + overrides any surrounding function's SET option; its effects will persist unless rolled back. @@ -94,22 +94,22 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone - SESSION + SESSION Specifies that the command takes effect for the current session. - (This is the default if neither SESSION nor - LOCAL appears.) + (This is the default if neither SESSION nor + LOCAL appears.) - LOCAL + LOCAL Specifies that the command takes effect for only the current - transaction. 
After COMMIT or ROLLBACK, + transaction. After COMMIT or ROLLBACK, the session-level setting takes effect again. Issuing this outside of a transaction block emits a warning and otherwise has no effect. @@ -136,7 +136,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezoneDEFAULT can be written to specify resetting the parameter to its default value (that is, whatever - value it would have had if no SET had been executed + value it would have had if no SET had been executed in the current session). @@ -153,8 +153,8 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone SCHEMA - SET SCHEMA 'value' is an alias for - SET search_path TO value. Only one + SET SCHEMA 'value' is an alias for + SET search_path TO value. Only one schema can be specified using this syntax. @@ -163,8 +163,8 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone NAMES - SET NAMES value is an alias for - SET client_encoding TO value. + SET NAMES value is an alias for + SET client_encoding TO value. @@ -176,7 +176,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezonerandom). Allowed values are floating-point numbers between -1 and 1, which are then - multiplied by 231-1. + multiplied by 231-1. @@ -191,8 +191,8 @@ SELECT setseed(value); TIME ZONE - SET TIME ZONE value is an alias - for SET timezone TO value. The + SET TIME ZONE value is an alias + for SET timezone TO value. The syntax SET TIME ZONE allows special syntax for the time zone specification. Here are examples of valid values: @@ -238,7 +238,7 @@ SELECT setseed(value); Set the time zone to your local time zone (that is, the - server's default value of timezone). + server's default value of timezone). @@ -248,8 +248,8 @@ SELECT setseed(value); Timezone settings given as numbers or intervals are internally translated to POSIX timezone syntax. For example, after - SET TIME ZONE -7, SHOW TIME ZONE would - report <-07>+07. + SET TIME ZONE -7, SHOW TIME ZONE would + report <-07>+07. @@ -270,7 +270,7 @@ SELECT setseed(value); functionality; see . Also, it is possible to UPDATE the pg_settings - system view to perform the equivalent of SET. + system view to perform the equivalent of SET. @@ -286,7 +286,7 @@ SET search_path TO my_schema, public; Set the style of date to traditional - POSTGRES with day before month + POSTGRES with day before month input convention: SET datestyle TO postgres, dmy; diff --git a/doc/src/sgml/ref/set_constraints.sgml b/doc/src/sgml/ref/set_constraints.sgml index 7c31871b0b..237a0a3988 100644 --- a/doc/src/sgml/ref/set_constraints.sgml +++ b/doc/src/sgml/ref/set_constraints.sgml @@ -67,18 +67,18 @@ SET CONSTRAINTS { ALL | name [, ... - Currently, only UNIQUE, PRIMARY KEY, - REFERENCES (foreign key), and EXCLUDE + Currently, only UNIQUE, PRIMARY KEY, + REFERENCES (foreign key), and EXCLUDE constraints are affected by this setting. - NOT NULL and CHECK constraints are + NOT NULL and CHECK constraints are always checked immediately when a row is inserted or modified - (not at the end of the statement). + (not at the end of the statement). Uniqueness and exclusion constraints that have not been declared - DEFERRABLE are also checked immediately. + DEFERRABLE are also checked immediately. - The firing of triggers that are declared as constraint triggers + The firing of triggers that are declared as constraint triggers is also controlled by this setting — they fire at the same time that the associated constraint should be checked. @@ -111,7 +111,7 @@ SET CONSTRAINTS { ALL | name [, ... 
This command complies with the behavior defined in the SQL standard, except for the limitation that, in PostgreSQL, it does not apply to - NOT NULL and CHECK constraints. + NOT NULL and CHECK constraints. Also, PostgreSQL checks non-deferrable uniqueness constraints immediately, not at end of statement as the standard would suggest. diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml index a97ceabcff..eac4b3405a 100644 --- a/doc/src/sgml/ref/set_role.sgml +++ b/doc/src/sgml/ref/set_role.sgml @@ -35,7 +35,7 @@ RESET ROLE identifier of the current SQL session to be role_name. The role name can be written as either an identifier or a string literal. - After SET ROLE, permissions checking for SQL commands + After SET ROLE, permissions checking for SQL commands is carried out as though the named role were the one that had logged in originally. @@ -47,13 +47,13 @@ RESET ROLE - The SESSION and LOCAL modifiers act the same + The SESSION and LOCAL modifiers act the same as for the regular command. - The NONE and RESET forms reset the current + The NONE and RESET forms reset the current user identifier to be the current session user identifier. These forms can be executed by any user. @@ -64,41 +64,41 @@ RESET ROLE Using this command, it is possible to either add privileges or restrict - one's privileges. If the session user role has the INHERITS + one's privileges. If the session user role has the INHERITS attribute, then it automatically has all the privileges of every role that - it could SET ROLE to; in this case SET ROLE + it could SET ROLE to; in this case SET ROLE effectively drops all the privileges assigned directly to the session user and to the other roles it is a member of, leaving only the privileges available to the named role. On the other hand, if the session user role - has the NOINHERITS attribute, SET ROLE drops the + has the NOINHERITS attribute, SET ROLE drops the privileges assigned directly to the session user and instead acquires the privileges available to the named role. - In particular, when a superuser chooses to SET ROLE to a + In particular, when a superuser chooses to SET ROLE to a non-superuser role, they lose their superuser privileges. - SET ROLE has effects comparable to + SET ROLE has effects comparable to , but the privilege checks involved are quite different. Also, - SET SESSION AUTHORIZATION determines which roles are - allowable for later SET ROLE commands, whereas changing - roles with SET ROLE does not change the set of roles - allowed to a later SET ROLE. + SET SESSION AUTHORIZATION determines which roles are + allowable for later SET ROLE commands, whereas changing + roles with SET ROLE does not change the set of roles + allowed to a later SET ROLE. - SET ROLE does not process session variables as specified by + SET ROLE does not process session variables as specified by the role's settings; this only happens during login. - SET ROLE cannot be used within a - SECURITY DEFINER function. + SET ROLE cannot be used within a + SECURITY DEFINER function. @@ -127,14 +127,14 @@ SELECT SESSION_USER, CURRENT_USER; PostgreSQL - allows identifier syntax ("rolename"), while + allows identifier syntax ("rolename"), while the SQL standard requires the role name to be written as a string literal. SQL does not allow this command during a transaction; PostgreSQL does not make this restriction because there is no reason to. - The SESSION and LOCAL modifiers are a + The SESSION and LOCAL modifiers are a PostgreSQL extension, as is the - RESET syntax. 
+ RESET syntax. diff --git a/doc/src/sgml/ref/set_session_auth.sgml b/doc/src/sgml/ref/set_session_auth.sgml index 96d279aaf9..a8aee6f632 100644 --- a/doc/src/sgml/ref/set_session_auth.sgml +++ b/doc/src/sgml/ref/set_session_auth.sgml @@ -39,7 +39,7 @@ RESET SESSION AUTHORIZATION The session user identifier is initially set to be the (possibly authenticated) user name provided by the client. The current user identifier is normally equal to the session user identifier, but - might change temporarily in the context of SECURITY DEFINER + might change temporarily in the context of SECURITY DEFINER functions and similar mechanisms; it can also be changed by . The current user identifier is relevant for permission checking. @@ -53,13 +53,13 @@ RESET SESSION AUTHORIZATION - The SESSION and LOCAL modifiers act the same + The SESSION and LOCAL modifiers act the same as for the regular command. - The DEFAULT and RESET forms reset the session + The DEFAULT and RESET forms reset the session and current user identifiers to be the originally authenticated user name. These forms can be executed by any user. @@ -69,8 +69,8 @@ RESET SESSION AUTHORIZATION Notes - SET SESSION AUTHORIZATION cannot be used within a - SECURITY DEFINER function. + SET SESSION AUTHORIZATION cannot be used within a + SECURITY DEFINER function. @@ -101,13 +101,13 @@ SELECT SESSION_USER, CURRENT_USER; The SQL standard allows some other expressions to appear in place of the literal user_name, but these options are not important in practice. PostgreSQL - allows identifier syntax ("username"), which SQL + allows identifier syntax ("username"), which SQL does not. SQL does not allow this command during a transaction; PostgreSQL does not make this restriction because there is no reason to. - The SESSION and LOCAL modifiers are a + The SESSION and LOCAL modifiers are a PostgreSQL extension, as is the - RESET syntax. + RESET syntax. diff --git a/doc/src/sgml/ref/set_transaction.sgml b/doc/src/sgml/ref/set_transaction.sgml index 188d2ed92e..f5631372f5 100644 --- a/doc/src/sgml/ref/set_transaction.sgml +++ b/doc/src/sgml/ref/set_transaction.sgml @@ -153,14 +153,14 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa The SET TRANSACTION SNAPSHOT command allows a new - transaction to run with the same snapshot as an existing + transaction to run with the same snapshot as an existing transaction. The pre-existing transaction must have exported its snapshot with the pg_export_snapshot function (see ). That function returns a snapshot identifier, which must be given to SET TRANSACTION SNAPSHOT to specify which snapshot is to be imported. The identifier must be written as a string literal in this command, for example - '000003A1-1'. + '000003A1-1'. SET TRANSACTION SNAPSHOT can only be executed at the start of a transaction, before the first query or data-modification statement (SELECT, @@ -169,7 +169,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa COPY) of the transaction. Furthermore, the transaction must already be set to SERIALIZABLE or REPEATABLE READ isolation level (otherwise, the snapshot - would be discarded immediately, since READ COMMITTED mode takes + would be discarded immediately, since READ COMMITTED mode takes a new snapshot for each command). If the importing transaction uses SERIALIZABLE isolation level, then the transaction that exported the snapshot must also use that isolation level. Also, a @@ -203,9 +203,9 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa , and . 
(In fact SET SESSION CHARACTERISTICS is just a - verbose equivalent for setting these variables with SET.) + verbose equivalent for setting these variables with SET.) This means the defaults can be set in the configuration file, via - ALTER DATABASE, etc. Consult + ALTER DATABASE, etc. Consult for more information. @@ -243,7 +243,7 @@ SET TRANSACTION SNAPSHOT '00000003-0000001B-1'; These commands are defined in the SQL standard, except for the DEFERRABLE transaction mode - and the SET TRANSACTION SNAPSHOT form, which are + and the SET TRANSACTION SNAPSHOT form, which are PostgreSQL extensions. diff --git a/doc/src/sgml/ref/show.sgml b/doc/src/sgml/ref/show.sgml index 7e198e6df8..2a2b2fbb9f 100644 --- a/doc/src/sgml/ref/show.sgml +++ b/doc/src/sgml/ref/show.sgml @@ -35,7 +35,7 @@ SHOW ALL SET statement, by editing the postgresql.conf configuration file, through the PGOPTIONS environmental variable (when using - libpq or a libpq-based + libpq or a libpq-based application), or through command-line flags when starting the postgres server. See for details. diff --git a/doc/src/sgml/ref/start_transaction.sgml b/doc/src/sgml/ref/start_transaction.sgml index 60926f5dfe..8dcf6318d2 100644 --- a/doc/src/sgml/ref/start_transaction.sgml +++ b/doc/src/sgml/ref/start_transaction.sgml @@ -55,12 +55,12 @@ START TRANSACTION [ transaction_modeCompatibility - In the standard, it is not necessary to issue START TRANSACTION + In the standard, it is not necessary to issue START TRANSACTION to start a transaction block: any SQL command implicitly begins a block. PostgreSQL's behavior can be seen as implicitly issuing a COMMIT after each command that does not - follow START TRANSACTION (or BEGIN), - and it is therefore often called autocommit. + follow START TRANSACTION (or BEGIN), + and it is therefore often called autocommit. Other relational database systems might offer an autocommit feature as a convenience. diff --git a/doc/src/sgml/ref/truncate.sgml b/doc/src/sgml/ref/truncate.sgml index fef3315599..80abe67525 100644 --- a/doc/src/sgml/ref/truncate.sgml +++ b/doc/src/sgml/ref/truncate.sgml @@ -48,9 +48,9 @@ TRUNCATE [ TABLE ] [ ONLY ] name [ The name (optionally schema-qualified) of a table to truncate. - If ONLY is specified before the table name, only that table - is truncated. If ONLY is not specified, the table and all - its descendant tables (if any) are truncated. Optionally, * + If ONLY is specified before the table name, only that table + is truncated. If ONLY is not specified, the table and all + its descendant tables (if any) are truncated. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -108,29 +108,29 @@ TRUNCATE [ TABLE ] [ ONLY ] name [ - TRUNCATE acquires an ACCESS EXCLUSIVE lock on each + TRUNCATE acquires an ACCESS EXCLUSIVE lock on each table it operates on, which blocks all other concurrent operations - on the table. When RESTART IDENTITY is specified, any + on the table. When RESTART IDENTITY is specified, any sequences that are to be restarted are likewise locked exclusively. If concurrent access to a table is required, then - the DELETE command should be used instead. + the DELETE command should be used instead. - TRUNCATE cannot be used on a table that has foreign-key + TRUNCATE cannot be used on a table that has foreign-key references from other tables, unless all such tables are also truncated in the same command. Checking validity in such cases would require table - scans, and the whole point is not to do one. 
The CASCADE + scans, and the whole point is not to do one. The CASCADE option can be used to automatically include all dependent tables — but be very careful when using this option, or else you might lose data you did not intend to! - TRUNCATE will not fire any ON DELETE + TRUNCATE will not fire any ON DELETE triggers that might exist for the tables. But it will fire ON TRUNCATE triggers. - If ON TRUNCATE triggers are defined for any of + If ON TRUNCATE triggers are defined for any of the tables, then all BEFORE TRUNCATE triggers are fired before any truncation happens, and all AFTER TRUNCATE triggers are fired after the last truncation is @@ -141,36 +141,36 @@ TRUNCATE [ TABLE ] [ ONLY ] name [ - TRUNCATE is not MVCC-safe. After truncation, the table will + TRUNCATE is not MVCC-safe. After truncation, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the truncation occurred. See for more details. - TRUNCATE is transaction-safe with respect to the data + TRUNCATE is transaction-safe with respect to the data in the tables: the truncation will be safely rolled back if the surrounding transaction does not commit. - When RESTART IDENTITY is specified, the implied - ALTER SEQUENCE RESTART operations are also done + When RESTART IDENTITY is specified, the implied + ALTER SEQUENCE RESTART operations are also done transactionally; that is, they will be rolled back if the surrounding transaction does not commit. This is unlike the normal behavior of - ALTER SEQUENCE RESTART. Be aware that if any additional + ALTER SEQUENCE RESTART. Be aware that if any additional sequence operations are done on the restarted sequences before the transaction rolls back, the effects of these operations on the sequences - will be rolled back, but not their effects on currval(); - that is, after the transaction currval() will continue to + will be rolled back, but not their effects on currval(); + that is, after the transaction currval() will continue to reflect the last sequence value obtained inside the failed transaction, even though the sequence itself may no longer be consistent with that. - This is similar to the usual behavior of currval() after + This is similar to the usual behavior of currval() after a failed transaction. - TRUNCATE is not currently supported for foreign tables. + TRUNCATE is not currently supported for foreign tables. This implies that if a specified table has any descendant tables that are foreign, the command will fail. diff --git a/doc/src/sgml/ref/unlisten.sgml b/doc/src/sgml/ref/unlisten.sgml index 622e1cf154..1ea9aa3a0b 100644 --- a/doc/src/sgml/ref/unlisten.sgml +++ b/doc/src/sgml/ref/unlisten.sgml @@ -104,7 +104,7 @@ Asynchronous notification "virtual" received from server process with PID 8448. - Once UNLISTEN has been executed, further NOTIFY + Once UNLISTEN has been executed, further NOTIFY messages will be ignored: diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml index 9dcbbd0e28..1ede52384f 100644 --- a/doc/src/sgml/ref/update.sgml +++ b/doc/src/sgml/ref/update.sgml @@ -52,13 +52,13 @@ UPDATE [ ONLY ] table_name [ * ] [ - The optional RETURNING clause causes UPDATE + The optional RETURNING clause causes UPDATE to compute and return value(s) based on each row actually updated. Any expression using the table's columns, and/or columns of other tables mentioned in FROM, can be computed. The new (post-update) values of the table's columns are used. 
- The syntax of the RETURNING list is identical to that of the - output list of SELECT. + The syntax of the RETURNING list is identical to that of the + output list of SELECT. @@ -80,7 +80,7 @@ UPDATE [ ONLY ] table_name [ * ] [ The WITH clause allows you to specify one or more - subqueries that can be referenced by name in the UPDATE + subqueries that can be referenced by name in the UPDATE query. See and for details. @@ -92,10 +92,10 @@ UPDATE [ ONLY ] table_name [ * ] [ The name (optionally schema-qualified) of the table to update. - If ONLY is specified before the table name, matching rows - are updated in the named table only. If ONLY is not + If ONLY is specified before the table name, matching rows + are updated in the named table only. If ONLY is not specified, matching rows are also updated in any tables inheriting from - the named table. Optionally, * can be specified after the + the named table. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -107,9 +107,9 @@ UPDATE [ ONLY ] table_name [ * ] [ A substitute name for the target table. When an alias is provided, it completely hides the actual name of the table. For - example, given UPDATE foo AS f, the remainder of the + example, given UPDATE foo AS f, the remainder of the UPDATE statement must refer to this table as - f not foo. + f not foo. @@ -123,7 +123,7 @@ UPDATE [ ONLY ] table_name [ * ] [ The column name can be qualified with a subfield name or array subscript, if needed. Do not include the table's name in the specification of a target column — for example, - UPDATE table_name SET table_name.col = 1 is invalid. + UPDATE table_name SET table_name.col = 1 is invalid. @@ -152,7 +152,7 @@ UPDATE [ ONLY ] table_name [ * ] [ sub-SELECT - A SELECT sub-query that produces as many output columns + A SELECT sub-query that produces as many output columns as are listed in the parenthesized column list preceding it. The sub-query must yield no more than one row when executed. If it yields one row, its column values are assigned to the target columns; @@ -168,13 +168,13 @@ UPDATE [ ONLY ] table_name [ * ] [ A list of table expressions, allowing columns from other tables - to appear in the WHERE condition and the update + to appear in the WHERE condition and the update expressions. This is similar to the list of tables that can be specified in the of a SELECT statement. Note that the target table must not appear in the - from_list, unless you intend a self-join (in which - case it must appear with an alias in the from_list). + from_list, unless you intend a self-join (in which + case it must appear with an alias in the from_list). @@ -184,7 +184,7 @@ UPDATE [ ONLY ] table_name [ * ] [ An expression that returns a value of type boolean. - Only rows for which this expression returns true + Only rows for which this expression returns true will be updated. @@ -194,15 +194,15 @@ UPDATE [ ONLY ] table_name [ * ] [ cursor_name - The name of the cursor to use in a WHERE CURRENT OF + The name of the cursor to use in a WHERE CURRENT OF condition. The row to be updated is the one most recently fetched from this cursor. The cursor must be a non-grouping - query on the UPDATE's target table. - Note that WHERE CURRENT OF cannot be + query on the UPDATE's target table. + Note that WHERE CURRENT OF cannot be specified together with a Boolean condition. See for more information about using cursors with - WHERE CURRENT OF. + WHERE CURRENT OF. 
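To make the cursor_name description above concrete, here is a minimal sketch of the full workflow it assumes; the films table and the c_films cursor name are taken from this page's own example further down, while the DECLARE and FETCH steps (and the FOR UPDATE clause) are added here purely for illustration:

<programlisting>
BEGIN;
-- The cursor must be a simple, non-grouping query on the UPDATE's target
-- table; FOR UPDATE is optional but locks each fetched row immediately.
DECLARE c_films CURSOR FOR
    SELECT * FROM films WHERE kind = 'Drama' FOR UPDATE;
FETCH NEXT FROM c_films;              -- position the cursor on a row
UPDATE films SET kind = 'Dramatic'
    WHERE CURRENT OF c_films;         -- updates only the row just fetched
COMMIT;
</programlisting>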
@@ -211,11 +211,11 @@ UPDATE [ ONLY ] table_name [ * ] [ output_expression - An expression to be computed and returned by the UPDATE + An expression to be computed and returned by the UPDATE command after each row is updated. The expression can use any column names of the table named by table_name - or table(s) listed in FROM. - Write * to return all columns. + or table(s) listed in FROM. + Write * to return all columns. @@ -235,7 +235,7 @@ UPDATE [ ONLY ] table_name [ * ] [ Outputs - On successful completion, an UPDATE command returns a command + On successful completion, an UPDATE command returns a command tag of the form UPDATE count @@ -244,16 +244,16 @@ UPDATE count of rows updated, including matched rows whose values did not change. Note that the number may be less than the number of rows that matched the condition when - updates were suppressed by a BEFORE UPDATE trigger. If + updates were suppressed by a BEFORE UPDATE trigger. If count is 0, no rows were updated by the query (this is not considered an error). - If the UPDATE command contains a RETURNING - clause, the result will be similar to that of a SELECT + If the UPDATE command contains a RETURNING + clause, the result will be similar to that of a SELECT statement containing the columns and values defined in the - RETURNING list, computed over the row(s) updated by the + RETURNING list, computed over the row(s) updated by the command. @@ -262,11 +262,11 @@ UPDATE count Notes - When a FROM clause is present, what essentially happens + When a FROM clause is present, what essentially happens is that the target table is joined to the tables mentioned in the from_list, and each output row of the join represents an update operation for the target table. When using - FROM you should ensure that the join + FROM you should ensure that the join produces at most one output row for each row to be modified. In other words, a target row shouldn't join to more than one row from the other table(s). If it does, then only one of the join rows @@ -293,8 +293,8 @@ UPDATE count Examples - Change the word Drama to Dramatic in the - column kind of the table films: + Change the word Drama to Dramatic in the + column kind of the table films: UPDATE films SET kind = 'Dramatic' WHERE kind = 'Drama'; @@ -364,10 +364,10 @@ UPDATE accounts SET contact_first_name = first_name, FROM salesmen WHERE salesmen.id = accounts.sales_id; However, the second query may give unexpected results - if salesmen.id is not a unique key, whereas + if salesmen.id is not a unique key, whereas the first query is guaranteed to raise an error if there are multiple - id matches. Also, if there is no match for a particular - accounts.sales_id entry, the first query + id matches. Also, if there is no match for a particular + accounts.sales_id entry, the first query will set the corresponding name fields to NULL, whereas the second query will not update that row at all. @@ -400,9 +400,9 @@ COMMIT; - Change the kind column of the table + Change the kind column of the table films in the row on which the cursor - c_films is currently positioned: + c_films is currently positioned: UPDATE films SET kind = 'Dramatic' WHERE CURRENT OF c_films; @@ -413,16 +413,16 @@ UPDATE films SET kind = 'Dramatic' WHERE CURRENT OF c_films; This command conforms to the SQL standard, except - that the FROM and RETURNING clauses + that the FROM and RETURNING clauses are PostgreSQL extensions, as is the ability - to use WITH with UPDATE. + to use WITH with UPDATE. 
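As a hedged illustration of the three PostgreSQL extensions named in the preceding paragraph, the following sketch reuses the accounts and salesmen tables from the examples above; the hire_date column and the accounts.id column are assumed here for illustration and do not appear elsewhere on this page:

<programlisting>
-- A WITH query, a FROM join, and a RETURNING list in a single UPDATE;
-- all three are PostgreSQL extensions to the SQL standard.
WITH recent_salesmen AS (
    SELECT id, first_name, last_name
    FROM salesmen
    WHERE hire_date > CURRENT_DATE - 30   -- hire_date is assumed
)
UPDATE accounts
   SET contact_first_name = rs.first_name,
       contact_last_name  = rs.last_name
  FROM recent_salesmen AS rs
 WHERE accounts.sales_id = rs.id
RETURNING accounts.id, rs.last_name;      -- accounts.id is assumed
</programlisting>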
- Some other database systems offer a FROM option in which - the target table is supposed to be listed again within FROM. + Some other database systems offer a FROM option in which + the target table is supposed to be listed again within FROM. That is not how PostgreSQL interprets - FROM. Be careful when porting applications that use this + FROM. Be careful when porting applications that use this extension. @@ -431,9 +431,9 @@ UPDATE films SET kind = 'Dramatic' WHERE CURRENT OF c_films; target column names can be any row-valued expression yielding the correct number of columns. PostgreSQL only allows the source value to be a row - constructor or a sub-SELECT. An individual column's - updated value can be specified as DEFAULT in the - row-constructor case, but not inside a sub-SELECT. + constructor or a sub-SELECT. An individual column's + updated value can be specified as DEFAULT in the + row-constructor case, but not inside a sub-SELECT. diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml index f5bc87e290..2e205668c1 100644 --- a/doc/src/sgml/ref/vacuum.sgml +++ b/doc/src/sgml/ref/vacuum.sgml @@ -66,7 +66,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ and parameters set to zero. Aggressive freezing is always performed when the - table is rewritten, so this option is redundant when FULL + table is rewritten, so this option is redundant when FULL is specified. @@ -145,8 +145,8 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ visibility map. Pages where + Normally, VACUUM will skip pages based on the visibility map. Pages where all tuples are known to be frozen can always be skipped, and those where all tuples are known to be visible to all transactions may be skipped except when performing an aggressive vacuum. Furthermore, @@ -176,7 +176,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ Outputs - When VERBOSE is specified, VACUUM emits + When VERBOSE is specified, VACUUM emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well. @@ -202,19 +202,19 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ for details. @@ -247,7 +247,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ . diff --git a/doc/src/sgml/ref/vacuumdb.sgml b/doc/src/sgml/ref/vacuumdb.sgml index 4f6fa0d708..277c231687 100644 --- a/doc/src/sgml/ref/vacuumdb.sgml +++ b/doc/src/sgml/ref/vacuumdb.sgml @@ -88,8 +88,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to be cleaned or analyzed. @@ -103,8 +103,8 @@ PostgreSQL documentation - - + + Echo the commands that vacuumdb generates @@ -158,8 +158,8 @@ PostgreSQL documentation - - + + Do not display progress messages. @@ -176,7 +176,7 @@ PostgreSQL documentation Column names can be specified only in conjunction with the or options. Multiple tables can be vacuumed by writing multiple - switches. @@ -198,8 +198,8 @@ PostgreSQL documentation - - + + Print the vacuumdb version and exit. @@ -248,8 +248,8 @@ PostgreSQL documentation - - + + Show help about vacuumdb command line @@ -266,8 +266,8 @@ PostgreSQL documentation the following command-line arguments for connection parameters: - - + + Specifies the host name of the machine on which the server @@ -278,8 +278,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -290,8 +290,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -300,8 +300,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. 
If the server requires @@ -315,8 +315,8 @@ PostgreSQL documentation - - + + Force vacuumdb to prompt for a @@ -329,14 +329,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, vacuumdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. - + Specifies the name of the database to connect to discover what other @@ -370,8 +370,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -401,7 +401,7 @@ PostgreSQL documentation vacuumdb might need to connect several times to the PostgreSQL server, asking for a password each time. It is convenient to have a - ~/.pgpass file in such cases. See ~/.pgpass file in such cases. See for more information. diff --git a/doc/src/sgml/ref/values.sgml b/doc/src/sgml/ref/values.sgml index 9baeade551..75a594725b 100644 --- a/doc/src/sgml/ref/values.sgml +++ b/doc/src/sgml/ref/values.sgml @@ -35,7 +35,7 @@ VALUES ( expression [, ...] ) [, .. VALUES computes a row value or set of row values specified by value expressions. It is most commonly used to generate - a constant table within a larger command, but it can be + a constant table within a larger command, but it can be used on its own. @@ -43,18 +43,18 @@ VALUES ( expression [, ...] ) [, .. When more than one row is specified, all the rows must have the same number of elements. The data types of the resulting table's columns are determined by combining the explicit or inferred types of the expressions - appearing in that column, using the same rules as for UNION + appearing in that column, using the same rules as for UNION (see ). - Within larger commands, VALUES is syntactically allowed - anywhere that SELECT is. Because it is treated like a - SELECT by the grammar, it is possible to use - the ORDER BY, LIMIT (or + Within larger commands, VALUES is syntactically allowed + anywhere that SELECT is. Because it is treated like a + SELECT by the grammar, it is possible to use + the ORDER BY, LIMIT (or equivalently FETCH FIRST), - and OFFSET clauses with a - VALUES command. + and OFFSET clauses with a + VALUES command. @@ -67,12 +67,12 @@ VALUES ( expression [, ...] ) [, .. A constant or expression to compute and insert at the indicated place - in the resulting table (set of rows). In a VALUES list - appearing at the top level of an INSERT, an + in the resulting table (set of rows). In a VALUES list + appearing at the top level of an INSERT, an expression can be replaced by DEFAULT to indicate that the destination column's default value should be inserted. DEFAULT cannot - be used when VALUES appears in other contexts. + be used when VALUES appears in other contexts. @@ -83,7 +83,7 @@ VALUES ( expression [, ...] ) [, .. An expression or integer constant indicating how to sort the result rows. This expression can refer to the columns of the - VALUES result as column1, column2, + VALUES result as column1, column2, etc. For more details see . @@ -127,11 +127,11 @@ VALUES ( expression [, ...] ) [, .. Notes - VALUES lists with very large numbers of rows should be avoided, + VALUES lists with very large numbers of rows should be avoided, as you might encounter out-of-memory failures or poor performance. 
- VALUES appearing within INSERT is a special case - (because the desired column types are known from the INSERT's - target table, and need not be inferred by scanning the VALUES + VALUES appearing within INSERT is a special case + (because the desired column types are known from the INSERT's + target table, and need not be inferred by scanning the VALUES list), so it can handle larger lists than are practical in other contexts. @@ -140,7 +140,7 @@ VALUES ( expression [, ...] ) [, .. Examples - A bare VALUES command: + A bare VALUES command: VALUES (1, 'one'), (2, 'two'), (3, 'three'); @@ -160,8 +160,8 @@ SELECT 3, 'three'; - More usually, VALUES is used within a larger SQL command. - The most common use is in INSERT: + More usually, VALUES is used within a larger SQL command. + The most common use is in INSERT: INSERT INTO films (code, title, did, date_prod, kind) @@ -170,7 +170,7 @@ INSERT INTO films (code, title, did, date_prod, kind) - In the context of INSERT, entries of a VALUES list + In the context of INSERT, entries of a VALUES list can be DEFAULT to indicate that the column default should be used here instead of specifying a value: @@ -182,8 +182,8 @@ INSERT INTO films VALUES - VALUES can also be used where a sub-SELECT might - be written, for example in a FROM clause: + VALUES can also be used where a sub-SELECT might + be written, for example in a FROM clause: SELECT f.* @@ -195,17 +195,17 @@ UPDATE employees SET salary = salary * v.increase WHERE employees.depno = v.depno AND employees.sales >= v.target; - Note that an AS clause is required when VALUES - is used in a FROM clause, just as is true for - SELECT. It is not required that the AS clause + Note that an AS clause is required when VALUES + is used in a FROM clause, just as is true for + SELECT. It is not required that the AS clause specify names for all the columns, but it's good practice to do so. - (The default column names for VALUES are column1, - column2, etc in PostgreSQL, but + (The default column names for VALUES are column1, + column2, etc in PostgreSQL, but these names might be different in other database systems.) - When VALUES is used in INSERT, the values are all + When VALUES is used in INSERT, the values are all automatically coerced to the data type of the corresponding destination column. When it's used in other contexts, it might be necessary to specify the correct data type. If the entries are all quoted literal constants, @@ -218,9 +218,9 @@ WHERE ip_address IN (VALUES('192.168.0.1'::inet), ('192.168.0.10'), ('192.168.1. - For simple IN tests, it's better to rely on the + For simple IN tests, it's better to rely on the list-of-scalars - form of IN than to write a VALUES + form of IN than to write a VALUES query as shown above. The list of scalars method requires less writing and is often more efficient. diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml index 14747e5f3b..e83edf96ec 100644 --- a/doc/src/sgml/regress.sgml +++ b/doc/src/sgml/regress.sgml @@ -53,7 +53,7 @@ make check or otherwise a note about which tests failed. See below before assuming that a - failure represents a serious problem. + failure represents a serious problem. 
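The sort_expression description above mentions the implicit column1, column2, ... names; a minimal standalone sketch (no assumptions beyond the literal values shown) that exercises both the default column names and the ORDER BY and LIMIT clauses would be:

<programlisting>
-- Sort a bare VALUES list by its first column and keep two rows;
-- column1 and column2 are the PostgreSQL default column names.
VALUES (3, 'three'), (1, 'one'), (2, 'two')
ORDER BY column1
LIMIT 2;
</programlisting>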
@@ -66,12 +66,12 @@ make check If you have configured PostgreSQL to install into a location where an older PostgreSQL - installation already exists, and you perform make check + installation already exists, and you perform make check before installing the new version, you might find that the tests fail because the new programs try to use the already-installed shared libraries. (Typical symptoms are complaints about undefined symbols.) If you wish to run the tests before overwriting the old installation, - you'll need to build with configure --disable-rpath. + you'll need to build with configure --disable-rpath. It is not recommended that you use this option for the final installation, however. @@ -80,12 +80,12 @@ make check The parallel regression test starts quite a few processes under your user ID. Presently, the maximum concurrency is twenty parallel test scripts, which means forty processes: there's a server process and a - psql process for each test script. + psql process for each test script. So if your system enforces a per-user limit on the number of processes, make sure this limit is at least fifty or so, else you might get random-seeming failures in the parallel test. If you are not in a position to raise the limit, you can cut down the degree of parallelism - by setting the MAX_CONNECTIONS parameter. For example: + by setting the MAX_CONNECTIONS parameter. For example: make MAX_CONNECTIONS=10 check @@ -110,14 +110,14 @@ make installcheck-parallel The tests will expect to contact the server at the local host and the default port number, unless directed otherwise by PGHOST and PGPORT environment variables. The tests will be run in a - database named regression; any existing database by this name + database named regression; any existing database by this name will be dropped. The tests will also transiently create some cluster-wide objects, such as roles and tablespaces. These objects will have names beginning with - regress_. Beware of using installcheck + regress_. Beware of using installcheck mode in installations that have any actual users or tablespaces named that way. @@ -127,9 +127,9 @@ make installcheck-parallel Additional Test Suites - The make check and make installcheck commands - run only the core regression tests, which test built-in - functionality of the PostgreSQL server. The source + The make check and make installcheck commands + run only the core regression tests, which test built-in + functionality of the PostgreSQL server. The source distribution also contains additional test suites, most of them having to do with add-on functionality such as optional procedural languages. @@ -144,18 +144,18 @@ make installcheck-world These commands run the tests using temporary servers or an already-installed server, respectively, just as previously explained - for make check and make installcheck. Other + for make check and make installcheck. Other considerations are the same as previously explained for each method. - Note that make check-world builds a separate temporary + Note that make check-world builds a separate temporary installation tree for each tested module, so it requires a great deal - more time and disk space than make installcheck-world. + more time and disk space than make installcheck-world. Alternatively, you can run individual test suites by typing - make check or make installcheck in the appropriate + make check or make installcheck in the appropriate subdirectory of the build tree. 
Keep in mind that make - installcheck assumes you've installed the relevant module(s), not + installcheck assumes you've installed the relevant module(s), not only the core server. @@ -167,27 +167,27 @@ make installcheck-world Regression tests for optional procedural languages (other than - PL/pgSQL, which is tested by the core tests). - These are located under src/pl. + PL/pgSQL, which is tested by the core tests). + These are located under src/pl. - Regression tests for contrib modules, - located under contrib. - Not all contrib modules have tests. + Regression tests for contrib modules, + located under contrib. + Not all contrib modules have tests. Regression tests for the ECPG interface library, - located in src/interfaces/ecpg/test. + located in src/interfaces/ecpg/test. Tests stressing behavior of concurrent sessions, - located in src/test/isolation. + located in src/test/isolation. @@ -199,11 +199,11 @@ make installcheck-world - When using installcheck mode, these tests will destroy any - existing databases named pl_regression, - contrib_regression, isolation_regression, - ecpg1_regression, or ecpg2_regression, as well as - regression. + When using installcheck mode, these tests will destroy any + existing databases named pl_regression, + contrib_regression, isolation_regression, + ecpg1_regression, or ecpg2_regression, as well as + regression. @@ -272,7 +272,7 @@ make check EXTRA_TESTS=numeric_big make check EXTRA_TESTS='collate.icu.utf8 collate.linux.utf8' LANG=en_US.utf8 - The collate.linux.utf8 test works only on Linux/glibc + The collate.linux.utf8 test works only on Linux/glibc platforms. The collate.icu.utf8 test only works when support for ICU was built. Both tests will only succeed when run in a database that uses UTF-8 encoding. @@ -294,7 +294,7 @@ make check EXTRA_TESTS='collate.icu.utf8 collate.linux.utf8' LANG=en_US.utf8 To run the Hot Standby tests, first create a database - called regression on the primary: + called regression on the primary: psql -h primary -c "CREATE DATABASE regression" @@ -311,7 +311,7 @@ psql -h primary -f src/test/regress/sql/hs_primary_setup.sql regression Now arrange for the default database connection to be to the standby server under test (for example, by setting the PGHOST and PGPORT environment variables). - Finally, run make standbycheck in the regression directory: + Finally, run make standbycheck in the regression directory: cd src/test/regress make standbycheck @@ -355,7 +355,7 @@ make standbycheck src/test/regress/regression.diffs. (When running a test suite other than the core tests, these files of course appear in the relevant subdirectory, - not src/test/regress.) + not src/test/regress.) @@ -367,7 +367,7 @@ make standbycheck - If for some reason a particular platform generates a failure + If for some reason a particular platform generates a failure for a given test, but inspection of the output convinces you that the result is valid, you can add a new comparison file to silence the failure report in future test runs. See @@ -457,8 +457,8 @@ make check NO_LOCALE=1 Some of the tests involve computing 64-bit floating-point numbers (double precision) from table columns. Differences in results involving mathematical functions of double - precision columns have been observed. The float8 and - geometry tests are particularly prone to small differences + precision columns have been observed. The float8 and + geometry tests are particularly prone to small differences across platforms, or even with different compiler optimization settings. 
Human eyeball comparison is needed to determine the real significance of these differences which are usually 10 places to @@ -466,8 +466,8 @@ make check NO_LOCALE=1 - Some systems display minus zero as -0, while others - just show 0. + Some systems display minus zero as -0, while others + just show 0. @@ -485,23 +485,23 @@ make check NO_LOCALE=1 You might see differences in which the same rows are output in a different order than what appears in the expected file. In most cases this is not, strictly speaking, a bug. Most of the regression test -scripts are not so pedantic as to use an ORDER BY for every single -SELECT, and so their result row orderings are not well-defined +scripts are not so pedantic as to use an ORDER BY for every single +SELECT, and so their result row orderings are not well-defined according to the SQL specification. In practice, since we are looking at the same queries being executed on the same data by the same software, we usually get the same result ordering on all platforms, -so the lack of ORDER BY is not a problem. Some queries do exhibit +so the lack of ORDER BY is not a problem. Some queries do exhibit cross-platform ordering differences, however. When testing against an already-installed server, ordering differences can also be caused by non-C locale settings or non-default parameter settings, such as custom values -of work_mem or the planner cost parameters. +of work_mem or the planner cost parameters. Therefore, if you see an ordering difference, it's not something to -worry about, unless the query does have an ORDER BY that your +worry about, unless the query does have an ORDER BY that your result is violating. However, please report it anyway, so that we can add an -ORDER BY to that particular query to eliminate the bogus +ORDER BY to that particular query to eliminate the bogus failure in future releases. @@ -519,18 +519,18 @@ exclusion of those that don't. If the errors test results in a server crash - at the select infinite_recurse() command, it means that + at the select infinite_recurse() command, it means that the platform's limit on process stack size is smaller than the parameter indicates. This can be fixed by running the server under a higher stack size limit (4MB is recommended with the default value of - max_stack_depth). If you are unable to do that, an - alternative is to reduce the value of max_stack_depth. + max_stack_depth). If you are unable to do that, an + alternative is to reduce the value of max_stack_depth. - On platforms supporting getrlimit(), the server should - automatically choose a safe value of max_stack_depth; + On platforms supporting getrlimit(), the server should + automatically choose a safe value of max_stack_depth; so unless you've manually overridden this setting, a failure of this kind is a reportable bug. @@ -559,7 +559,7 @@ diff results/random.out expected/random.out parameter settings could cause the tests to fail. For example, changing parameters such as enable_seqscan or enable_indexscan could cause plan changes that would - affect the results of tests that use EXPLAIN. + affect the results of tests that use EXPLAIN. @@ -570,7 +570,7 @@ diff results/random.out expected/random.out Since some of the tests inherently produce environment-dependent - results, we have provided ways to specify alternate expected + results, we have provided ways to specify alternate expected result files. Each regression test can have several comparison files showing possible results on different platforms. 
There are two independent mechanisms for determining which comparison file is used @@ -597,7 +597,7 @@ testname:output:platformpattern=comparisonfilename standard regression tests, this is always out. The value corresponds to the file extension of the output file. The platform pattern is a pattern in the style of the Unix - tool expr (that is, a regular expression with an implicit + tool expr (that is, a regular expression with an implicit ^ anchor at the start). It is matched against the platform name as printed by config.guess. The comparison file name is the base name of the substitute result @@ -607,7 +607,7 @@ testname:output:platformpattern=comparisonfilename For example: some systems interpret very small floating-point values as zero, rather than reporting an underflow error. This causes a - few differences in the float8 regression test. + few differences in the float8 regression test. Therefore, we provide a variant comparison file, float8-small-is-zero.out, which includes the results to be expected on these systems. To silence the bogus @@ -619,30 +619,30 @@ float8:out:i.86-.*-openbsd=float8-small-is-zero.out which will trigger on any machine where the output of config.guess matches i.86-.*-openbsd. Other lines - in resultmap select the variant comparison file for other + in resultmap select the variant comparison file for other platforms where it's appropriate. The second selection mechanism for variant comparison files is - much more automatic: it simply uses the best match among + much more automatic: it simply uses the best match among several supplied comparison files. The regression test driver script considers both the standard comparison file for a test, - testname.out, and variant files named - testname_digit.out - (where the digit is any single digit - 0-9). If any such file is an exact match, + testname.out, and variant files named + testname_digit.out + (where the digit is any single digit + 0-9). If any such file is an exact match, the test is considered to pass; otherwise, the one that generates the shortest diff is used to create the failure report. (If resultmap includes an entry for the particular - test, then the base testname is the substitute + test, then the base testname is the substitute name given in resultmap.) For example, for the char test, the comparison file char.out contains results that are expected - in the C and POSIX locales, while + in the C and POSIX locales, while the file char_1.out contains results sorted as they appear in many other locales. @@ -652,7 +652,7 @@ float8:out:i.86-.*-openbsd=float8-small-is-zero.out results, but it can be used in any situation where the test results cannot be predicted easily from the platform name alone. A limitation of this mechanism is that the test driver cannot tell which variant is - actually correct for the current environment; it will just pick + actually correct for the current environment; it will just pick the variant that seems to work best. Therefore it is safest to use this mechanism only for variant results that you are willing to consider equally valid in all contexts. @@ -668,7 +668,7 @@ float8:out:i.86-.*-openbsd=float8-small-is-zero.out under src/bin, use the Perl TAP tools and are run using the Perl testing program prove. 
You can pass command-line options to prove by setting - the make variable PROVE_FLAGS, for example: + the make variable PROVE_FLAGS, for example: make -C src/bin check PROVE_FLAGS='--timer' diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 9ef798183d..116f7224da 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -13,7 +13,7 @@ Overview - Major enhancements in PostgreSQL 10 include: + Major enhancements in PostgreSQL 10 include: @@ -58,14 +58,14 @@ 2017-08-04 [620b49a16] hash: Increase the number of possible overflow bitmaps b --> - Hash indexes must be rebuilt after pg_upgrade-ing - from any previous major PostgreSQL version (Mithun + Hash indexes must be rebuilt after pg_upgrade-ing + from any previous major PostgreSQL version (Mithun Cy, Robert Haas, Amit Kapila) Major hash index improvements necessitated this requirement. - pg_upgrade will create a script to assist with this. + pg_upgrade will create a script to assist with this. @@ -75,9 +75,9 @@ 2017-03-17 [88e66d193] Rename "pg_clog" directory to "pg_xact". --> - Rename write-ahead log directory pg_xlog - to pg_wal, and rename transaction - status directory pg_clog to pg_xact + Rename write-ahead log directory pg_xlog + to pg_wal, and rename transaction + status directory pg_clog to pg_xact (Michael Paquier) @@ -98,17 +98,17 @@ 2017-02-15 [0dfa89ba2] Replace reference to "xlog-method" with "wal-method" in --> - Rename SQL functions, tools, and options that reference - xlog to wal (Robert Haas) + Rename SQL functions, tools, and options that reference + xlog to wal (Robert Haas) - For example, pg_switch_xlog() becomes - pg_switch_wal(), pg_receivexlog - becomes pg_receivewal, and @@ -118,8 +118,8 @@ 2017-05-11 [d10c626de] Rename WAL-related functions and views to use "lsn" not --> - Rename WAL-related functions and views to use lsn - instead of location (David Rowley) + Rename WAL-related functions and views to use lsn + instead of location (David Rowley) @@ -136,20 +136,20 @@ --> Change the implementation of set-returning functions appearing in - a query's SELECT list (Andres Freund) + a query's SELECT list (Andres Freund) Set-returning functions are now evaluated before evaluation of scalar - expressions in the SELECT list, much as though they had - been placed in a LATERAL FROM-clause item. This allows + expressions in the SELECT list, much as though they had + been placed in a LATERAL FROM-clause item. This allows saner semantics for cases where multiple set-returning functions are present. If they return different numbers of rows, the shorter results are extended to match the longest result by adding nulls. Previously the results were cycled until they all terminated at the same time, producing a number of rows equal to the least common multiple of the functions' periods. In addition, set-returning functions are now - disallowed within CASE and COALESCE constructs. + disallowed within CASE and COALESCE constructs. For more information see . @@ -160,8 +160,8 @@ 2017-08-04 [c30f1770a] Apply ALTER ... SET NOT NULL recursively in ALTER ... AD --> - When ALTER TABLE ... ADD PRIMARY KEY marks - columns NOT NULL, that change now propagates to + When ALTER TABLE ... 
ADD PRIMARY KEY marks + columns NOT NULL, that change now propagates to inheritance child tables as well (Michael Paquier) @@ -179,9 +179,9 @@ Cases involving writable CTEs updating the same table updated by the containing statement, or by another writable CTE, fired BEFORE - STATEMENT or AFTER STATEMENT triggers more than once. + STATEMENT or AFTER STATEMENT triggers more than once. Also, if there were statement-level triggers on a table affected by a - foreign key enforcement action (such as ON DELETE CASCADE), + foreign key enforcement action (such as ON DELETE CASCADE), they could fire more than once per outer SQL statement. This is contrary to the SQL standard, so change it. @@ -197,20 +197,20 @@ --> Move sequences' metadata fields into a new pg_sequence + linkend="catalog-pg-sequence">pg_sequence system catalog (Peter Eisentraut) A sequence relation now stores only the fields that can be modified - by nextval(), that - is last_value, log_cnt, - and is_called. Other sequence properties, such as + by nextval(), that + is last_value, log_cnt, + and is_called. Other sequence properties, such as the starting value and increment, are kept in a corresponding row of - the pg_sequence catalog. - ALTER SEQUENCE updates are now fully transactional, + the pg_sequence catalog. + ALTER SEQUENCE updates are now fully transactional, implying that the sequence is locked until commit. - The nextval() and setval() functions + The nextval() and setval() functions remain nontransactional. @@ -218,14 +218,14 @@ The main incompatibility introduced by this change is that selecting from a sequence relation now returns only the three fields named above. To obtain the sequence's other properties, applications must - look into pg_sequence. The new system - view pg_sequences + look into pg_sequence. The new system + view pg_sequences can also be used for this purpose; it provides column names that are more compatible with existing code. - The output of psql's \d command for a + The output of psql's \d command for a sequence has been redesigned, too. @@ -235,17 +235,17 @@ 2017-01-04 [9a4d51077] Make wal streaming the default mode for pg_basebackup --> - Make stream the - WAL needed to restore the backup by default (Magnus + Make stream the + WAL needed to restore the backup by default (Magnus Hagander) - This changes pg_basebackup's - @@ -275,13 +275,13 @@ 2017-01-14 [05cd12ed5] pg_ctl: Change default to wait for all actions --> - Make all actions wait + Make all actions wait for completion by default (Peter Eisentraut) - Previously some pg_ctl actions didn't wait for - completion, and required the use of to do so. @@ -291,7 +291,7 @@ --> Change the default value of the - server parameter from pg_log to log + server parameter from pg_log to log (Andreas Karlsson) @@ -307,7 +307,7 @@ This replaces the hardcoded, undocumented file - name dh1024.pem. Note that dh1024.pem is + name dh1024.pem. Note that dh1024.pem is no longer examined by default; you must set this option if you want to use custom DH parameters. @@ -345,14 +345,14 @@ The server parameter - no longer supports off or plain. - The UNENCRYPTED option is no longer supported in - CREATE/ALTER USER ... PASSSWORD. Similarly, the - @@ -367,7 +367,7 @@ - These replace min_parallel_relation_size, which was + These replace min_parallel_relation_size, which was found to be too generic. @@ -394,14 +394,14 @@ 2016-12-23 [e13486eba] Remove sql_inheritance GUC. 
--> - Remove sql_inheritance server parameter (Robert Haas) + Remove sql_inheritance server parameter (Robert Haas) Changing this setting from the default value caused queries referencing - parent tables to not include child tables. The SQL + parent tables to not include child tables. The SQL standard requires them to be included, however, and this has been the - default since PostgreSQL 7.1. + default since PostgreSQL 7.1. @@ -420,10 +420,10 @@ This feature requires a backwards-incompatible change to the handling of arrays of composite types in PL/Python. Previously, you could return an array of composite values by writing, e.g., [[col1, - col2], [col1, col2]]; but now that is interpreted as a + col2], [col1, col2]]; but now that is interpreted as a two-dimensional array. Composite types in arrays must now be written as Python tuples, not lists, to resolve the ambiguity; that is, - write [(col1, col2), (col1, col2)] instead. + write [(col1, col2), (col1, col2)] instead. @@ -432,7 +432,7 @@ 2017-02-27 [817f2a586] Remove PL/Tcl's "module" facility. --> - Remove PL/Tcl's module auto-loading facility (Tom Lane) + Remove PL/Tcl's module auto-loading facility (Tom Lane) @@ -448,13 +448,13 @@ 2016-10-12 [64f3524e2] Remove pg_dump/pg_dumpall support for dumping from pre-8 --> - Remove pg_dump/pg_dumpall support + Remove pg_dump/pg_dumpall support for dumping from pre-8.0 servers (Tom Lane) Users needing to dump from pre-8.0 servers will need to use dump - programs from PostgreSQL 9.6 or earlier. The + programs from PostgreSQL 9.6 or earlier. The resulting output should still load successfully into newer servers. @@ -468,9 +468,9 @@ - This removes configure's option. Floating-point timestamps have few advantages and have not - been the default since PostgreSQL 8.3. + been the default since PostgreSQL 8.3. @@ -484,7 +484,7 @@ This protocol hasn't had client support - since PostgreSQL 6.3. + since PostgreSQL 6.3. @@ -493,12 +493,12 @@ 2017-02-13 [7ada2d31f] Remove contrib/tsearch2. --> - Remove contrib/tsearch2 module (Robert Haas) + Remove contrib/tsearch2 module (Robert Haas) This module provided compatibility with the version of full text - search that shipped in pre-8.3 PostgreSQL releases. + search that shipped in pre-8.3 PostgreSQL releases. @@ -507,14 +507,14 @@ 2017-03-23 [50c956add] Remove createlang and droplang --> - Remove createlang and droplang + Remove createlang and droplang command-line applications (Peter Eisentraut) - These had been deprecated since PostgreSQL 9.1. - Instead, use CREATE EXTENSION and DROP - EXTENSION directly. + These had been deprecated since PostgreSQL 9.1. + Instead, use CREATE EXTENSION and DROP + EXTENSION directly. @@ -686,8 +686,8 @@ 2016-08-23 [77e290682] Create an SP-GiST opclass for inet/cidr. --> - Add SP-GiST index support for INET and - CIDR data types (Emre Hasegeli) + Add SP-GiST index support for INET and + CIDR data types (Emre Hasegeli) @@ -696,14 +696,14 @@ 2017-04-01 [7526e1022] BRIN auto-summarization --> - Add option to allow BRIN index summarization to happen + Add option to allow BRIN index summarization to happen more aggressively (Álvaro Herrera) A new CREATE - INDEX option enables auto-summarization of the - previous BRIN page range when a new page + INDEX option enables auto-summarization of the + previous BRIN page range when a new page range is created. 
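The release note above does not spell out the new CREATE INDEX option itself; as a hedged sketch, the storage parameter is assumed to be autosummarize (per the BRIN documentation), and the measurements table and log_time column are hypothetical:

<programlisting>
-- Enable auto-summarization of the previous page range whenever a new
-- page range is created; pages_per_range = 128 is simply the default,
-- shown here to make the page-range granularity explicit.
CREATE INDEX measurements_log_time_brin
    ON measurements USING brin (log_time)
    WITH (autosummarize = on, pages_per_range = 128);
</programlisting>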
@@ -713,18 +713,18 @@ 2017-04-01 [c655899ba] BRIN de-summarization --> - Add functions to remove and re-add BRIN - summarization for BRIN index ranges (Álvaro + Add functions to remove and re-add BRIN + summarization for BRIN index ranges (Álvaro Herrera) - The new SQL function brin_summarize_range() - updates BRIN index summarization for a specified - range and brin_desummarize_range() removes it. + The new SQL function brin_summarize_range() + updates BRIN index summarization for a specified + range and brin_desummarize_range() removes it. This is helpful to update summarization of a range that is now - smaller due to UPDATEs and DELETEs. + smaller due to UPDATEs and DELETEs. @@ -733,7 +733,7 @@ 2017-04-06 [7e534adcd] Fix BRIN cost estimation --> - Improve accuracy in determining if a BRIN index scan + Improve accuracy in determining if a BRIN index scan is beneficial (David Rowley, Emre Hasegeli) @@ -743,7 +743,7 @@ 2016-09-09 [b1328d78f] Invent PageIndexTupleOverwrite, and teach BRIN and GiST --> - Allow faster GiST inserts and updates by reusing + Allow faster GiST inserts and updates by reusing index space more efficiently (Andrey Borodin) @@ -753,7 +753,7 @@ 2017-03-23 [218f51584] Reduce page locking in GIN vacuum --> - Reduce page locking during vacuuming of GIN indexes + Reduce page locking during vacuuming of GIN indexes (Andrey Borodin) @@ -825,9 +825,9 @@ New commands are CREATE STATISTICS, - ALTER STATISTICS, and - DROP STATISTICS. + linkend="SQL-CREATESTATISTICS">CREATE STATISTICS, + ALTER STATISTICS, and + DROP STATISTICS. This feature is helpful in estimating query memory usage and when combining the statistics from individual columns. @@ -864,9 +864,9 @@ --> Speed up aggregate functions that calculate a running sum - using numeric-type arithmetic, including some variants - of SUM(), AVG(), - and STDDEV() (Heikki Linnakangas) + using numeric-type arithmetic, including some variants + of SUM(), AVG(), + and STDDEV() (Heikki Linnakangas) @@ -950,14 +950,14 @@ --> Allow explicit control - over EXPLAIN's display + over EXPLAIN's display of planning and execution time (Ashutosh Bapat) By default planning and execution time are displayed by - EXPLAIN ANALYZE and are not displayed in other cases. - The new EXPLAIN option SUMMARY allows + EXPLAIN ANALYZE and are not displayed in other cases. + The new EXPLAIN option SUMMARY allows explicit control of this. @@ -971,8 +971,8 @@ - New roles pg_monitor, pg_read_all_settings, - pg_read_all_stats, and pg_stat_scan_tables + New roles pg_monitor, pg_read_all_settings, + pg_read_all_stats, and pg_stat_scan_tables allow simplified permission configuration. @@ -984,7 +984,7 @@ Properly update the statistics collector during REFRESH MATERIALIZED - VIEW (Jim Mlodgenski) + VIEW (Jim Mlodgenski) @@ -1015,14 +1015,14 @@ 2017-03-16 [befd73c50] Add pg_ls_logdir() and pg_ls_waldir() functions. --> - Add functions to return the log and WAL directory + Add functions to return the log and WAL directory contents (Dave Page) The new functions - are pg_ls_logdir() - and pg_ls_waldir() + are pg_ls_logdir() + and pg_ls_waldir() and can be executed by non-superusers with the proper permissions. @@ -1034,7 +1034,7 @@ --> Add function pg_current_logfile() + linkend="functions-info-session-table">pg_current_logfile() to read logging collector's current stderr and csvlog output file names (Gilles Darold) @@ -1066,7 +1066,7 @@ - These are now DEBUG1-level messages. + These are now DEBUG1-level messages. 
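To illustrate the new directory-listing and current-log-file functions mentioned in the items above, here is a hedged sketch; the name, size, and modification columns are those documented for pg_ls_waldir(), and the pg_monitor grant mentioned in the comment is one way, not the only one, to run these as a non-superuser:

<programlisting>
-- Report the logging collector's current stderr log file, if any.
SELECT pg_current_logfile();

-- List the five most recently modified WAL files; requires superuser
-- or, e.g., membership in the pg_monitor role.
SELECT name, size, modification
  FROM pg_ls_waldir()
 ORDER BY modification DESC
 LIMIT 5;
</programlisting>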
@@ -1091,7 +1091,7 @@ - <link linkend="pg-stat-activity-view"><structname>pg_stat_activity</></link> + <link linkend="pg-stat-activity-view"><structname>pg_stat_activity</structname></link> @@ -1101,7 +1101,7 @@ 2017-03-18 [249cf070e] Create and use wait events for read, write, and fsync op --> - Add pg_stat_activity reporting of low-level wait + Add pg_stat_activity reporting of low-level wait states (Michael Paquier, Robert Haas, Rushabh Lathia) @@ -1119,13 +1119,13 @@ --> Show auxiliary processes, background workers, and walsender - processes in pg_stat_activity (Kuntal Ghosh, + processes in pg_stat_activity (Kuntal Ghosh, Michael Paquier) This simplifies monitoring. A new - column backend_type identifies the process type. + column backend_type identifies the process type. @@ -1134,7 +1134,7 @@ 2017-02-22 [4c728f382] Pass the source text for a parallel query to the workers --> - Allow pg_stat_activity to show the SQL query + Allow pg_stat_activity to show the SQL query being executed by parallel workers (Rafia Sabih) @@ -1145,9 +1145,9 @@ --> Rename - pg_stat_activity.wait_event_type - values LWLockTranche and - LWLockNamed to LWLock (Robert Haas) + pg_stat_activity.wait_event_type + values LWLockTranche and + LWLockNamed to LWLock (Robert Haas) @@ -1161,7 +1161,7 @@ - <acronym>Authentication</> + <acronym>Authentication</acronym> @@ -1173,13 +1173,13 @@ 2017-04-18 [c727f120f] Rename "scram" to "scram-sha-256" in pg_hba.conf and pas --> - Add SCRAM-SHA-256 + Add SCRAM-SHA-256 support for password negotiation and storage (Michael Paquier, Heikki Linnakangas) - This provides better security than the existing md5 + This provides better security than the existing md5 negotiation and storage method. @@ -1190,7 +1190,7 @@ --> Change the server parameter - from boolean to enum (Michael Paquier) + from boolean to enum (Michael Paquier) @@ -1204,8 +1204,8 @@ --> Add view pg_hba_file_rules - to display the contents of pg_hba.conf (Haribabu + linkend="view-pg-hba-file-rules">pg_hba_file_rules + to display the contents of pg_hba.conf (Haribabu Kommi) @@ -1219,11 +1219,11 @@ 2017-03-22 [6b76f1bb5] Support multiple RADIUS servers --> - Support multiple RADIUS servers (Magnus Hagander) + Support multiple RADIUS servers (Magnus Hagander) - All the RADIUS related parameters are now plural and + All the RADIUS related parameters are now plural and support a comma-separated list of servers. @@ -1244,16 +1244,16 @@ 2017-01-04 [6667d9a6d] Re-allow SSL passphrase prompt at server start, but not --> - Allow SSL configuration to be updated during + Allow SSL configuration to be updated during configuration reload (Andreas Karlsson, Tom Lane) - This allows SSL to be reconfigured without a server - restart, by using pg_ctl reload, SELECT - pg_reload_conf(), or sending a SIGHUP signal. - However, reloading the SSL configuration does not work - if the server's SSL key requires a passphrase, as there + This allows SSL to be reconfigured without a server + restart, by using pg_ctl reload, SELECT + pg_reload_conf(), or sending a SIGHUP signal. + However, reloading the SSL configuration does not work + if the server's SSL key requires a passphrase, as there is no way to re-prompt for the passphrase. The original configuration will apply for the life of the postmaster in that case. @@ -1297,7 +1297,7 @@ - <link linkend="wal">Write-Ahead Log</> (<acronym>WAL</>) + <link linkend="wal">Write-Ahead Log</link> (<acronym>WAL</acronym>) @@ -1306,7 +1306,7 @@ 2016-12-22 [6ef2eba3f] Skip checkpoints, archiving on idle systems. 
--> - Prevent unnecessary checkpoints and WAL archiving on + Prevent unnecessary checkpoints and WAL archiving on otherwise-idle systems (Michael Paquier) @@ -1318,7 +1318,7 @@ --> Add server parameter - to add details to WAL that can be sanity-checked on + to add details to WAL that can be sanity-checked on the standby (Kuntal Ghosh, Robert Haas) @@ -1332,14 +1332,14 @@ 2017-04-05 [00b6b6feb] Allow -\-with-wal-segsize=n up to n=1024MB --> - Increase the maximum configurable WAL segment size + Increase the maximum configurable WAL segment size to one gigabyte (Beena Emerson) - A larger WAL segment size allows for fewer + A larger WAL segment size allows for fewer invocations and fewer - WAL files to manage. + WAL files to manage. @@ -1364,13 +1364,13 @@ --> Add the ability to logically - replicate tables to standby servers (Petr Jelinek) + replicate tables to standby servers (Petr Jelinek) Logical replication allows more flexibility than physical replication does, including replication between different major - versions of PostgreSQL and selective + versions of PostgreSQL and selective replication. @@ -1387,8 +1387,8 @@ Previously the server always waited for the active standbys that - appeared first in synchronous_standby_names. The new - synchronous_standby_names keyword ANY allows + appeared first in synchronous_standby_names. The new + synchronous_standby_names keyword ANY allows waiting for any number of standbys irrespective of their ordering. This is known as quorum commit. @@ -1419,14 +1419,14 @@ --> Enable replication from localhost connections by default in - pg_hba.conf + pg_hba.conf (Michael Paquier) - Previously pg_hba.conf's replication connection + Previously pg_hba.conf's replication connection lines were commented out by default. This is particularly useful for - . + . @@ -1436,13 +1436,13 @@ --> Add columns to pg_stat_replication + linkend="monitoring-stats-views-table">pg_stat_replication to report replication delay times (Thomas Munro) - The new columns are write_lag, - flush_lag, and replay_lag. + The new columns are write_lag, + flush_lag, and replay_lag. @@ -1452,8 +1452,8 @@ --> Allow specification of the recovery stopping point by Log Sequence - Number (LSN) in - recovery.conf + Number (LSN) in + recovery.conf (Michael Paquier) @@ -1470,12 +1470,12 @@ --> Allow users to disable pg_stop_backup()'s - waiting for all WAL to be archived (David Steele) + linkend="functions-admin">pg_stop_backup()'s + waiting for all WAL to be archived (David Steele) - An optional second argument to pg_stop_backup() + An optional second argument to pg_stop_backup() controls that behavior. @@ -1486,7 +1486,7 @@ --> Allow creation of temporary replication slots + linkend="functions-replication-table">temporary replication slots (Petr Jelinek) @@ -1530,8 +1530,8 @@ --> Add XMLTABLE - function that converts XML-formatted data into a row set + linkend="functions-xml-processing-xmltable">XMLTABLE + function that converts XML-formatted data into a row set (Pavel Stehule, Álvaro Herrera) @@ -1542,17 +1542,17 @@ --> Allow standard row constructor syntax in UPDATE ... SET - (column_list) = row_constructor + (column_list) = row_constructor (Tom Lane) - The row_constructor can now begin with the - keyword ROW; previously that had to be omitted. Also, - an occurrence of table_name.* - within the row_constructor is now expanded into + The row_constructor can now begin with the + keyword ROW; previously that had to be omitted. 
Also, + an occurrence of table_name.* + within the row_constructor is now expanded into multiple columns, as in other uses - of row_constructors. + of row_constructors. @@ -1562,13 +1562,13 @@ --> Fix regular expressions' character class handling for large character - codes, particularly Unicode characters above U+7FF + codes, particularly Unicode characters above U+7FF (Tom Lane) Previously, such characters were never recognized as belonging to - locale-dependent character classes such as [[:alpha:]]. + locale-dependent character classes such as [[:alpha:]]. @@ -1587,7 +1587,7 @@ --> Add table partitioning - syntax that automatically creates partition constraints and + syntax that automatically creates partition constraints and handles routing of tuple insertions and updates (Amit Langote) @@ -1603,7 +1603,7 @@ 2017-03-31 [597027163] Add transition table support to plpgsql. --> - Add AFTER trigger + Add AFTER trigger transition tables to record changed rows (Kevin Grittner, Thomas Munro) @@ -1620,7 +1620,7 @@ --> Allow restrictive row-level - security policies (Stephen Frost) + security policies (Stephen Frost) @@ -1636,16 +1636,16 @@ --> When creating a foreign-key constraint, check - for REFERENCES permission on only the referenced table + for REFERENCES permission on only the referenced table (Tom Lane) - Previously REFERENCES permission on the referencing + Previously REFERENCES permission on the referencing table was also required. This appears to have stemmed from a misreading of the SQL standard. Since creating a foreign key (or any other type of) constraint requires ownership privilege on the - constrained table, additionally requiring REFERENCES + constrained table, additionally requiring REFERENCES permission seems rather pointless. @@ -1656,11 +1656,11 @@ --> Allow default - permissions on schemas (Matheus Oliveira) + permissions on schemas (Matheus Oliveira) - This is done using the ALTER DEFAULT PRIVILEGES command. + This is done using the ALTER DEFAULT PRIVILEGES command. @@ -1670,7 +1670,7 @@ --> Add CREATE SEQUENCE - AS command to create a sequence matching an integer data type + AS command to create a sequence matching an integer data type (Peter Eisentraut) @@ -1685,13 +1685,13 @@ 2016-11-10 [279c439c7] Support "COPY view FROM" for views with INSTEAD OF INSER --> - Allow COPY view - FROM source on views with INSTEAD - INSERT triggers (Haribabu Kommi) + Allow COPY view + FROM source on views with INSTEAD + INSERT triggers (Haribabu Kommi) - The triggers are fed the data rows read by COPY. + The triggers are fed the data rows read by COPY. @@ -1701,14 +1701,14 @@ --> Allow the specification of a function name without arguments in - DDL commands, if it is unique (Peter Eisentraut) + DDL commands, if it is unique (Peter Eisentraut) For example, allow DROP - FUNCTION on a function name without arguments if there + FUNCTION on a function name without arguments if there is only one function with that name. This behavior is required by the - SQL standard. + SQL standard. 
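A brief SQL sketch of the argument-less form (add_one is a hypothetical function):

    CREATE FUNCTION add_one(i integer) RETURNS integer
        LANGUAGE sql AS 'SELECT i + 1';

    -- Accepted in PostgreSQL 10 because only one function named add_one exists:
    DROP FUNCTION add_one;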
@@ -1718,7 +1718,7 @@ --> Allow multiple functions, operators, and aggregates to be dropped - with a single DROP command (Peter Eisentraut) + with a single DROP command (Peter Eisentraut) @@ -1728,10 +1728,10 @@ 2017-03-20 [b6fb534f1] Add IF NOT EXISTS for CREATE SERVER and CREATE USER MAPP --> - Support IF NOT EXISTS - in CREATE SERVER, - CREATE USER MAPPING, - and CREATE COLLATION + Support IF NOT EXISTS + in CREATE SERVER, + CREATE USER MAPPING, + and CREATE COLLATION (Anastasia Lubennikova, Peter Eisentraut) @@ -1742,7 +1742,7 @@ 2017-03-03 [9eb344faf] Allow vacuums to report oldestxmin --> - Make VACUUM VERBOSE report + Make VACUUM VERBOSE report the number of skipped frozen pages and oldest xmin (Masahiko Sawada, Simon Riggs) @@ -1758,7 +1758,7 @@ 2017-01-23 [7e26e02ee] Prefetch blocks during lazy vacuum's truncation scan --> - Improve speed of VACUUM's removal of trailing empty + Improve speed of VACUUM's removal of trailing empty heap pages (Claudio Freire, Álvaro Herrera) @@ -1777,13 +1777,13 @@ 2017-03-31 [e306df7f9] Full Text Search support for JSON and JSONB --> - Add full text search support for JSON and JSONB + Add full text search support for JSON and JSONB (Dmitry Dolgov) - The functions ts_headline() and - to_tsvector() can now be used on these data types. + The functions ts_headline() and + to_tsvector() can now be used on these data types. @@ -1792,15 +1792,15 @@ 2017-03-15 [c7a9fa399] Add support for EUI-64 MAC addresses as macaddr8 --> - Add support for EUI-64 MAC addresses, as a - new data type macaddr8 + Add support for EUI-64 MAC addresses, as a + new data type macaddr8 (Haribabu Kommi) This complements the existing support - for EUI-48 MAC addresses - (type macaddr). + for EUI-48 MAC addresses + (type macaddr). @@ -1809,13 +1809,13 @@ 2017-04-06 [321732705] Identity columns --> - Add identity columns for + Add identity columns for assigning a numeric value to columns on insert (Peter Eisentraut) - These are similar to SERIAL columns, but are - SQL standard compliant. + These are similar to SERIAL columns, but are + SQL standard compliant. @@ -1824,13 +1824,13 @@ 2016-09-07 [0ab9c56d0] Support renaming an existing value of an enum type. --> - Allow ENUM values to be + Allow ENUM values to be renamed (Dagfinn Ilmari Mannsåker) This uses the syntax ALTER - TYPE ... RENAME VALUE. + TYPE ... RENAME VALUE. @@ -1840,14 +1840,14 @@ --> Properly treat array pseudotypes - (anyarray) as arrays in to_json() - and to_jsonb() (Andrew Dunstan) + (anyarray) as arrays in to_json() + and to_jsonb() (Andrew Dunstan) - Previously columns declared as anyarray (particularly those - in the pg_stats view) were converted to JSON + Previously columns declared as anyarray (particularly those + in the pg_stats view) were converted to JSON strings rather than arrays. @@ -1858,16 +1858,16 @@ --> Add operators for multiplication and division - of money values - with int8 values (Peter Eisentraut) + of money values + with int8 values (Peter Eisentraut) - Previously such cases would result in converting the int8 - values to float8 and then using - the money-and-float8 operators. The new behavior + Previously such cases would result in converting the int8 + values to float8 and then using + the money-and-float8 operators. The new behavior avoids possible precision loss. But note that division - of money by int8 now truncates the quotient, like + of money by int8 now truncates the quotient, like other integer-division cases, while the previous behavior would have rounded. 
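A small illustration of the behavior change (assuming the default lc_monetary formatting):

    SELECT '2.00'::money / 3::int8;
    -- PostgreSQL 10 truncates the quotient to $0.66;
    -- earlier releases went through float8 and would have rounded to $0.67.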
@@ -1878,7 +1878,7 @@ 2016-09-14 [656df624c] Add overflow checks to money type input function --> - Check for overflow in the money type's input function + Check for overflow in the money type's input function (Peter Eisentraut) @@ -1898,12 +1898,12 @@ --> Add simplified regexp_match() + linkend="functions-posix-regexp">regexp_match() function (Emre Hasegeli) - This is similar to regexp_matches(), but it only + This is similar to regexp_matches(), but it only returns results from the first match so it does not need to return a set, making it easier to use for simple cases. @@ -1914,8 +1914,8 @@ 2017-01-18 [d00ca333c] Implement array version of jsonb_delete and operator --> - Add a version of jsonb's delete operator that takes + Add a version of jsonb's delete operator that takes an array of keys to delete (Magnus Hagander) @@ -1925,7 +1925,7 @@ 2017-04-06 [cf35346e8] Make json_populate_record and friends operate recursivel --> - Make json_populate_record() + Make json_populate_record() and related functions process JSON arrays and objects recursively (Nikita Glukhov) @@ -1935,7 +1935,7 @@ properly converted from JSON arrays, and composite-type fields are properly converted from JSON objects. Previously, such cases would fail because the text representation of the JSON value would be fed - to array_in() or record_in(), and its + to array_in() or record_in(), and its syntax would not match what those input functions expect. @@ -1946,14 +1946,14 @@ --> Add function txid_current_if_assigned() - to return the current transaction ID or NULL if no + linkend="functions-txid-snapshot">txid_current_if_assigned() + to return the current transaction ID or NULL if no transaction ID has been assigned (Craig Ringer) This is different from txid_current(), + linkend="functions-txid-snapshot">txid_current(), which always returns a transaction ID, assigning one if necessary. Unlike that function, this function can be run on standby servers. @@ -1965,7 +1965,7 @@ --> Add function txid_status() + linkend="functions-txid-snapshot">txid_status() to check if a transaction was committed (Craig Ringer) @@ -1982,8 +1982,8 @@ --> Allow make_date() - to interpret negative years as BC years (Álvaro + linkend="functions-formatting-table">make_date() + to interpret negative years as BC years (Álvaro Herrera) @@ -1993,14 +1993,14 @@ 2016-09-28 [d3cd36a13] Make to_timestamp() and to_date() range-check fields of --> - Make to_timestamp() and to_date() reject + Make to_timestamp() and to_date() reject out-of-range input fields (Artur Zakirov) For example, - previously to_date('2009-06-40','YYYY-MM-DD') was - accepted and returned 2009-07-10. It will now generate + previously to_date('2009-06-40','YYYY-MM-DD') was + accepted and returned 2009-07-10. It will now generate an error. 
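The example above, written as runnable SQL:

    SELECT to_date('2009-06-40', 'YYYY-MM-DD');
    -- PostgreSQL 10 reports an out-of-range field error;
    -- earlier releases silently returned 2009-07-10.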
@@ -2019,7 +2019,7 @@ 2017-03-27 [70ec3f1f8] PL/Python: Add cursor and execute methods to plan object --> - Allow PL/Python's cursor() and execute() + Allow PL/Python's cursor() and execute() functions to be called as methods of their plan-object arguments (Peter Eisentraut) @@ -2034,7 +2034,7 @@ 2016-12-13 [55caaaeba] Improve handling of array elements as getdiag_targets an --> - Allow PL/pgSQL's GET DIAGNOSTICS statement to retrieve + Allow PL/pgSQL's GET DIAGNOSTICS statement to retrieve values into array elements (Tom Lane) @@ -2047,7 +2047,7 @@ - <link linkend="pltcl">PL/Tcl</> + <link linkend="pltcl">PL/Tcl</link> @@ -2104,7 +2104,7 @@ --> Allow specification of multiple - host names or addresses in libpq connection strings and URIs + host names or addresses in libpq connection strings and URIs (Robert Haas, Heikki Linnakangas) @@ -2119,7 +2119,7 @@ --> Allow libpq connection strings and URIs to request a read/write host, + linkend="libpq-connect-target-session-attrs">read/write host, that is a master server rather than a standby server (Victor Wagner, Mithun Cy) @@ -2127,7 +2127,7 @@ This is useful when multiple host names are specified. It is controlled by libpq connection parameter - . @@ -2136,7 +2136,7 @@ 2017-01-24 [ba005f193] Allow password file name to be specified as a libpq conn --> - Allow the password file name + Allow the password file name to be specified as a libpq connection parameter (Julian Markwort) @@ -2151,17 +2151,17 @@ --> Add function PQencryptPasswordConn() + linkend="libpq-pqencryptpasswordconn">PQencryptPasswordConn() to allow creation of more types of encrypted passwords on the client side (Michael Paquier, Heikki Linnakangas) - Previously only MD5-encrypted passwords could be created + Previously only MD5-encrypted passwords could be created using PQencryptPassword(). + linkend="libpq-pqencryptpassword">PQencryptPassword(). This new function can also create SCRAM-SHA-256-encrypted + linkend="auth-pg-hba-conf">SCRAM-SHA-256-encrypted passwords. @@ -2171,13 +2171,13 @@ 2016-08-16 [a7b5573d6] Remove separate version numbering for ecpg preprocessor. --> - Change ecpg preprocessor version from 4.12 to 10 + Change ecpg preprocessor version from 4.12 to 10 (Tom Lane) - Henceforth the ecpg version will match - the PostgreSQL distribution version number. + Henceforth the ecpg version will match + the PostgreSQL distribution version number. @@ -2200,14 +2200,14 @@ 2017-04-02 [68dba97a4] Document psql's behavior of recalling the previously exe --> - Add conditional branch support to psql (Corey + Add conditional branch support to psql (Corey Huinker) - This feature adds psql - meta-commands \if, \elif, \else, - and \endif. This is primarily helpful for scripting. + This feature adds psql + meta-commands \if, \elif, \else, + and \endif. This is primarily helpful for scripting. @@ -2216,8 +2216,8 @@ 2017-03-07 [b2678efd4] psql: Add \gx command --> - Add psql \gx meta-command to execute - (\g) a query in expanded mode (\x) + Add psql \gx meta-command to execute + (\g) a query in expanded mode (\x) (Christoph Berg) @@ -2227,12 +2227,12 @@ 2017-04-01 [f833c847b] Allow psql variable substitution to occur in backtick co --> - Expand psql variable references in + Expand psql variable references in backtick-executed strings (Tom Lane) - This is particularly useful in the new psql + This is particularly useful in the new psql conditional branch commands. 
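A minimal psql sketch of the new conditional branches (the version check is just an example):

    SELECT current_setting('server_version_num')::int >= 100000 AS is_v10 \gset
    \if :is_v10
        \echo 'running on PostgreSQL 10 or later'
    \else
        \echo 'running on an older release'
    \endif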
@@ -2244,23 +2244,23 @@ 2017-02-02 [fd6cd6980] Clean up psql's behavior for a few more control variable --> - Prevent psql's special variables from being set to + Prevent psql's special variables from being set to invalid values (Daniel Vérité, Tom Lane) - Previously, setting one of psql's special variables + Previously, setting one of psql's special variables to an invalid value silently resulted in the default behavior. - \set on a special variable now fails if the proposed - new value is invalid. As a special exception, \set + \set on a special variable now fails if the proposed + new value is invalid. As a special exception, \set with an empty or omitted new value, on a boolean-valued special variable, still has the effect of setting the variable - to on; but now it actually acquires that value rather - than an empty string. \unset on a special variable now + to on; but now it actually acquires that value rather + than an empty string. \unset on a special variable now explicitly sets the variable to its default value, which is also the value it acquires at startup. In sum, a control variable now always has a displayable value that reflects - what psql is actually doing. + what psql is actually doing. @@ -2269,7 +2269,7 @@ 2017-09-06 [a6c678f01] Add psql variables showing server version and psql versi --> - Add variables showing server version and psql version + Add variables showing server version and psql version (Fabien Coelho) @@ -2279,14 +2279,14 @@ 2016-11-03 [a0f357e57] psql: Split up "Modifiers" column in \d and \dD --> - Improve psql's \d (display relation) - and \dD (display domain) commands to show collation, + Improve psql's \d (display relation) + and \dD (display domain) commands to show collation, nullable, and default properties in separate columns (Peter Eisentraut) - Previously they were shown in a single Modifiers column. + Previously they were shown in a single Modifiers column. @@ -2295,7 +2295,7 @@ 2017-07-27 [77cb4a1d6] Standardize describe.c's behavior for no-matching-object --> - Make the various \d commands handle no-matching-object + Make the various \d commands handle no-matching-object cases more consistently (Daniel Gustafsson) @@ -2319,7 +2319,7 @@ 2017-03-16 [d7d77f382] psql: Add completion for \help DROP|ALTER --> - Improve psql's tab completion (Jeff Janes, + Improve psql's tab completion (Jeff Janes, Ian Barwick, Andreas Karlsson, Sehrope Sarkuni, Thomas Munro, Kevin Grittner, Dagfinn Ilmari Mannsåker) @@ -2339,7 +2339,7 @@ 2016-11-09 [41124a91e] pgbench: Allow the transaction log file prefix to be cha --> - Add pgbench option to control the log file prefix (Masahiko Sawada) @@ -2349,7 +2349,7 @@ 2017-01-20 [cdc2a7047] Allow backslash line continuations in pgbench's meta com --> - Allow pgbench's meta-commands to span multiple + Allow pgbench's meta-commands to span multiple lines (Fabien Coelho) @@ -2364,7 +2364,7 @@ 2017-08-11 [796818442] Remove pgbench's restriction on placement of -M switch. --> - Remove restriction on placement of option relative to other command line options (Tom Lane) @@ -2386,8 +2386,8 @@ --> Add pg_receivewal - option / to specify compression (Michael Paquier) @@ -2398,12 +2398,12 @@ --> Add pg_recvlogical option - to specify the ending position (Craig Ringer) - This complements the existing option. 
@@ -2412,9 +2412,9 @@ 2016-10-19 [5d58c07a4] initdb pg_basebackup: Rename -\-noxxx options to -\-no-x --> - Rename initdb - options and to be spelled + and (Vik Fearing, Peter Eisentraut) @@ -2426,9 +2426,9 @@ - <link linkend="APP-PGDUMP"><application>pg_dump</></>, - <link linkend="APP-PG-DUMPALL"><application>pg_dumpall</></>, - <link linkend="APP-PGRESTORE"><application>pg_restore</></> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link>, + <link linkend="APP-PG-DUMPALL"><application>pg_dumpall</application></link>, + <link linkend="APP-PGRESTORE"><application>pg_restore</application></link> @@ -2437,11 +2437,11 @@ 2016-09-20 [46b55e7f8] pg_restore: Add -N option to exclude schemas --> - Allow pg_restore to exclude schemas (Michael Banck) + Allow pg_restore to exclude schemas (Michael Banck) - This adds a new / option. @@ -2450,8 +2450,8 @@ 2016-11-29 [4fafa579b] Add -\-no-blobs option to pg_dump --> - Add @@ -2464,13 +2464,13 @@ 2017-03-07 [9a83d56b3] Allow pg_dumpall to dump roles w/o user passwords --> - Add pg_dumpall option - to omit role passwords (Robins Tharakan, Simon Riggs) - This allows use of pg_dumpall by non-superusers; + This allows use of pg_dumpall by non-superusers; without this option, it fails due to inability to read passwords. @@ -2490,15 +2490,15 @@ 2017-03-22 [96a7128b7] Sync pg_dump and pg_dumpall output --> - Issue fsync() on the output files generated by - pg_dump and - pg_dumpall (Michael Paquier) + Issue fsync() on the output files generated by + pg_dump and + pg_dumpall (Michael Paquier) This provides more security that the output is safely stored on disk before the program exits. This can be disabled with - the new option. @@ -2518,12 +2518,12 @@ 2016-12-21 [ecbdc4c55] Forbid invalid combination of options in pg_basebackup. --> - Allow pg_basebackup to stream write-ahead log in + Allow pg_basebackup to stream write-ahead log in tar mode (Magnus Hagander) - The WAL will be stored in a separate tar file from + The WAL will be stored in a separate tar file from the base backup. @@ -2533,13 +2533,13 @@ 2017-01-16 [e7b020f78] Make pg_basebackup use temporary replication slots --> - Make pg_basebackup use temporary replication slots + Make pg_basebackup use temporary replication slots (Magnus Hagander) Temporary replication slots will be used by default when - pg_basebackup uses WAL streaming with default + pg_basebackup uses WAL streaming with default options. 
@@ -2550,8 +2550,8 @@ --> Be more careful about fsync'ing in all required places - in pg_basebackup and - pg_receivewal (Michael Paquier) + in pg_basebackup and + pg_receivewal (Michael Paquier) @@ -2561,7 +2561,7 @@ 2016-10-19 [5d58c07a4] initdb pg_basebackup: Rename -\-noxxx options to -\-no-x --> - Add pg_basebackup option to disable fsync (Michael Paquier) @@ -2571,7 +2571,7 @@ 2016-09-28 [6ad8ac602] Exclude additional directories in pg_basebackup --> - Improve pg_basebackup's handling of which + Improve pg_basebackup's handling of which directories to skip (David Steele) @@ -2581,7 +2581,7 @@ - <application><xref linkend="app-pg-ctl"></> + <application><xref linkend="app-pg-ctl"></application> @@ -2590,7 +2590,7 @@ 2016-09-21 [e7010ce47] pg_ctl: Add wait option to promote action --> - Add wait option for 's + Add wait option for 's promote operation (Peter Eisentraut) @@ -2600,8 +2600,8 @@ 2016-10-19 [0be22457d] pg_ctl: Add long options for -w and -W --> - Add long options for pg_ctl wait () + and no-wait () (Vik Fearing) @@ -2610,8 +2610,8 @@ 2016-10-19 [caf936b09] pg_ctl: Add long option for -o --> - Add long option for pg_ctl server options - () (Peter Eisentraut) @@ -2620,14 +2620,14 @@ 2017-06-28 [f13ea95f9] Change pg_ctl to detect server-ready by watching status --> - Make pg_ctl start --wait detect server-ready by - watching postmaster.pid, not by attempting connections + Make pg_ctl start --wait detect server-ready by + watching postmaster.pid, not by attempting connections (Tom Lane) The postmaster has been changed to report its ready-for-connections - status in postmaster.pid, and pg_ctl + status in postmaster.pid, and pg_ctl now examines that file to detect whether startup is complete. This is more efficient and reliable than the old method, and it eliminates postmaster log entries about rejected connection @@ -2640,12 +2640,12 @@ 2017-06-26 [c61559ec3] Reduce pg_ctl's reaction time when waiting for postmaste --> - Reduce pg_ctl's reaction time when waiting for + Reduce pg_ctl's reaction time when waiting for postmaster start/stop (Tom Lane) - pg_ctl now probes ten times per second when waiting + pg_ctl now probes ten times per second when waiting for a postmaster state change, rather than once per second. @@ -2655,14 +2655,14 @@ 2017-07-05 [1bac5f552] pg_ctl: Make failure to complete operation a nonzero exi --> - Ensure that pg_ctl exits with nonzero status if an + Ensure that pg_ctl exits with nonzero status if an operation being waited for does not complete within the timeout (Peter Eisentraut) - The start and promote operations now return - exit status 1, not 0, in such cases. The stop operation + The start and promote operations now return + exit status 1, not 0, in such cases. The stop operation has always done that. @@ -2687,14 +2687,14 @@ - Release numbers will now have two parts (e.g., 10.1) - rather than three (e.g., 9.6.3). + Release numbers will now have two parts (e.g., 10.1) + rather than three (e.g., 9.6.3). Major versions will now increase just the first number, and minor releases will increase just the second number. Release branches will be referred to by single numbers - (e.g., 10 rather than 9.6). + (e.g., 10 rather than 9.6). This change is intended to reduce user confusion about what is a - major or minor release of PostgreSQL. + major or minor release of PostgreSQL. @@ -2708,12 +2708,12 @@ 2017-06-21 [81f056c72] Remove entab and associated detritus. 
--> - Improve behavior of pgindent + Improve behavior of pgindent (Piotr Stefaniak, Tom Lane) - We have switched to a new version of pg_bsd_indent + We have switched to a new version of pg_bsd_indent based on recent improvements made by the FreeBSD project. This fixes numerous small bugs that led to odd C code formatting decisions. Most notably, lines within parentheses (such as in a @@ -2728,14 +2728,14 @@ 2017-03-23 [eccfef81e] ICU support --> - Allow the ICU library to + Allow the ICU library to optionally be used for collation support (Peter Eisentraut) - The ICU library has versioning that allows detection + The ICU library has versioning that allows detection of collation changes between versions. It is enabled via configure - option . The default still uses the operating system's native collation library. @@ -2746,14 +2746,14 @@ --> Automatically mark all PG_FUNCTION_INFO_V1 functions - as DLLEXPORT-ed on - Windows (Laurenz Albe) + linkend="xfunc-c">PG_FUNCTION_INFO_V1 functions + as DLLEXPORT-ed on + Windows (Laurenz Albe) - If third-party code is using extern function - declarations, they should also add DLLEXPORT markers + If third-party code is using extern function + declarations, they should also add DLLEXPORT markers to those declarations. @@ -2763,10 +2763,10 @@ 2016-11-08 [1833f1a1c] Simplify code by getting rid of SPI_push, SPI_pop, SPI_r --> - Remove SPI functions SPI_push(), - SPI_pop(), SPI_push_conditional(), - SPI_pop_conditional(), - and SPI_restore_connection() as unnecessary (Tom Lane) + Remove SPI functions SPI_push(), + SPI_pop(), SPI_push_conditional(), + SPI_pop_conditional(), + and SPI_restore_connection() as unnecessary (Tom Lane) @@ -2776,9 +2776,9 @@ - A side effect of this change is that SPI_palloc() and + A side effect of this change is that SPI_palloc() and allied functions now require an active SPI connection; they do not - degenerate to simple palloc() if there is none. That + degenerate to simple palloc() if there is none. That previous behavior was not very useful and posed risks of unexpected memory leaks. @@ -2811,9 +2811,9 @@ 2016-10-09 [ecb0d20a9] Use unnamed POSIX semaphores, if available, on Linux and --> - Use POSIX semaphores rather than SysV semaphores - on Linux and FreeBSD (Tom Lane) + Use POSIX semaphores rather than SysV semaphores + on Linux and FreeBSD (Tom Lane) @@ -2835,7 +2835,7 @@ 2017-03-10 [f8f1430ae] Enable 64 bit atomics on ARM64. --> - Enable 64-bit atomic operations on ARM64 (Roman + Enable 64-bit atomic operations on ARM64 (Roman Shaposhnik) @@ -2845,13 +2845,13 @@ 2017-01-02 [1d63f7d2d] Use clock_gettime(), if available, in instr_time measure --> - Switch to using clock_gettime(), if available, for + Switch to using clock_gettime(), if available, for duration measurements (Tom Lane) - gettimeofday() is still used - if clock_gettime() is not available. + gettimeofday() is still used + if clock_gettime() is not available. 
@@ -2868,9 +2868,9 @@ If no strong random number generator can be - found, configure will fail unless - the @@ -2880,7 +2880,7 @@ 2017-08-15 [d7ab908fb] Distinguish wait-for-connection from wait-for-write-read --> - Allow WaitLatchOrSocket() to wait for socket + Allow WaitLatchOrSocket() to wait for socket connection on Windows (Andres Freund) @@ -2890,7 +2890,7 @@ 2017-04-06 [3f902354b] Clean up after insufficiently-researched optimization of --> - tupconvert.c functions no longer convert tuples just to + tupconvert.c functions no longer convert tuples just to embed a different composite-type OID in them (Ashutosh Bapat, Tom Lane) @@ -2906,8 +2906,8 @@ 2016-10-11 [2b860f52e] Remove "sco" and "unixware" ports. --> - Remove SCO and Unixware ports (Tom Lane) + Remove SCO and Unixware ports (Tom Lane) @@ -2918,7 +2918,7 @@ --> Overhaul documentation build - process (Alexander Lakhin) + process (Alexander Lakhin) @@ -2927,13 +2927,13 @@ 2017-04-06 [510074f9f] Remove use of Jade and DSSSL --> - Use XSLT to build the PostgreSQL + Use XSLT to build the PostgreSQL documentation (Peter Eisentraut) - Previously Jade, DSSSL, and - JadeTex were used. + Previously Jade, DSSSL, and + JadeTex were used. @@ -2942,7 +2942,7 @@ 2016-11-15 [e36ddab11] Build HTML documentation using XSLT stylesheets by defau --> - Build HTML documentation using XSLT + Build HTML documentation using XSLT stylesheets by default (Peter Eisentraut) @@ -2961,7 +2961,7 @@ 2016-09-29 [8e91e12bc] Allow contrib/file_fdw to read from a program, like COPY --> - Allow file_fdw to read + Allow file_fdw to read from program output as well as files (Corey Huinker, Adam Gomaa) @@ -2971,7 +2971,7 @@ 2016-10-21 [7012b132d] postgres_fdw: Push down aggregates to remote servers. --> - In postgres_fdw, + In postgres_fdw, push aggregate functions to the remote server, when possible (Jeevan Chalke, Ashutosh Bapat) @@ -2988,7 +2988,7 @@ 2017-04-24 [332bec1e6] postgres_fdw: Fix join push down with extensions --> - In postgres_fdw, push joins to the remote server in + In postgres_fdw, push joins to the remote server in more cases (David Rowley, Ashutosh Bapat, Etsuro Fujita) @@ -2998,12 +2998,12 @@ 2016-08-26 [ae025a159] Support OID system column in postgres_fdw. --> - Properly support OID columns in - postgres_fdw tables (Etsuro Fujita) + Properly support OID columns in + postgres_fdw tables (Etsuro Fujita) - Previously OID columns always returned zeros. + Previously OID columns always returned zeros. @@ -3012,8 +3012,8 @@ 2017-03-21 [f7946a92b] Add btree_gist support for enum types. --> - Allow btree_gist - and btree_gin to + Allow btree_gist + and btree_gin to index enum types (Andrew Dunstan) @@ -3027,8 +3027,8 @@ 2016-11-29 [11da83a0e] Add uuid to the set of types supported by contrib/btree_ --> - Add indexing support to btree_gist for the - UUID data type (Paul Jungwirth) + Add indexing support to btree_gist for the + UUID data type (Paul Jungwirth) @@ -3037,7 +3037,7 @@ 2017-03-09 [3717dc149] Add amcheck extension to contrib. --> - Add amcheck which can + Add amcheck which can check the validity of B-tree indexes (Peter Geoghegan) @@ -3047,10 +3047,10 @@ 2017-03-27 [a6f22e835] Show ignored constants as "$N" rather than "?" in pg_sta --> - Show ignored constants as $N rather than ? + Show ignored constants as $N rather than ? 
in pg_stat_statements + linkend="pgstatstatements">pg_stat_statements (Lukas Fittl) @@ -3060,13 +3060,13 @@ 2016-09-27 [f31a931fa] Improve contrib/cube's handling of zero-D cubes, infinit --> - Improve cube's handling + Improve cube's handling of zero-dimensional cubes (Tom Lane) - This also improves handling of infinite and - NaN values. + This also improves handling of infinite and + NaN values. @@ -3076,7 +3076,7 @@ --> Allow pg_buffercache to run + linkend="pgbuffercache">pg_buffercache to run with fewer locks (Ivan Kartyshov) @@ -3090,8 +3090,8 @@ 2017-02-03 [e759854a0] pgstattuple: Add pgstathashindex. --> - Add pgstattuple - function pgstathashindex() to view hash index + Add pgstattuple + function pgstathashindex() to view hash index statistics (Ashutosh Sharma) @@ -3101,8 +3101,8 @@ 2016-09-29 [fd321a1df] Remove superuser checks in pgstattuple --> - Use GRANT permissions to - control pgstattuple function usage (Stephen Frost) + Use GRANT permissions to + control pgstattuple function usage (Stephen Frost) @@ -3115,7 +3115,7 @@ 2016-10-28 [d4b5d4cad] pgstattuple: Don't take heavyweight locks when examining --> - Reduce locking when pgstattuple examines hash + Reduce locking when pgstattuple examines hash indexes (Amit Kapila) @@ -3125,8 +3125,8 @@ 2017-03-17 [fef2bcdcb] pageinspect: Add page_checksum function --> - Add pageinspect - function page_checksum() to show a page's checksum + Add pageinspect + function page_checksum() to show a page's checksum (Tomas Vondra) @@ -3136,8 +3136,8 @@ 2017-04-04 [193f5f9e9] pageinspect: Add bt_page_items function with bytea argum --> - Add pageinspect - function bt_page_items() to print page items from a + Add pageinspect + function bt_page_items() to print page items from a page image (Tomas Vondra) @@ -3147,7 +3147,7 @@ 2017-02-02 [08bf6e529] pageinspect: Support hash indexes. --> - Add hash index support to pageinspect (Jesper + Add hash index support to pageinspect (Jesper Pedersen, Ashutosh Sharma) diff --git a/doc/src/sgml/release-7.4.sgml b/doc/src/sgml/release-7.4.sgml index bc4f4e18d0..bdbfe8e006 100644 --- a/doc/src/sgml/release-7.4.sgml +++ b/doc/src/sgml/release-7.4.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 7.4.X series. Users are encouraged to update to a newer release branch soon. @@ -47,7 +47,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. 
Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -76,7 +76,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -97,7 +97,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -111,7 +111,7 @@ - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -119,7 +119,7 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) @@ -150,7 +150,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 7.4.X release series in July 2010. Users are encouraged to update to a newer release branch soon. @@ -173,19 +173,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -194,19 +194,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -219,10 +219,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. 
Now, the ALTER will only remove the parameters that the user has permission to change. @@ -230,7 +230,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -242,7 +242,7 @@ - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -255,7 +255,7 @@ - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -263,7 +263,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -294,7 +294,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 7.4.X release series in July 2010. Users are encouraged to update to a newer release branch soon. @@ -317,7 +317,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -332,8 +332,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -351,17 +351,17 @@ - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -369,7 +369,7 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) @@ -381,14 +381,14 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) @@ -460,14 +460,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. 
@@ -486,7 +486,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -498,7 +498,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -507,7 +507,7 @@ - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -537,8 +537,8 @@ A dump/restore is not required for those running 7.4.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 7.4.26. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 7.4.26. Also, if you are upgrading from a version earlier than 7.4.11, see . @@ -552,14 +552,14 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) @@ -573,21 +573,21 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -604,7 +604,7 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. Japan (Itagaki Takahiro) @@ -612,7 +612,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -631,8 +631,8 @@ - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -687,7 +687,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. 
The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -698,7 +698,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -711,14 +711,14 @@ - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -760,13 +760,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -781,30 +781,30 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) - Fix uninitialized variables in contrib/tsearch2's - get_covers() function (Teodor) + Fix uninitialized variables in contrib/tsearch2's + get_covers() function (Teodor) - Fix bug in to_char()'s handling of TH + Fix bug in to_char()'s handling of TH format codes (Andreas Scherbaum) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) @@ -852,7 +852,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -868,14 +868,14 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -889,7 +889,7 @@ - Fix ecpg's parsing of CREATE USER (Michael) + Fix ecpg's parsing of CREATE USER (Michael) @@ -944,27 +944,27 @@ Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) @@ -1006,18 +1006,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. 
Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -1061,7 +1061,7 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) @@ -1076,7 +1076,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -1084,36 +1084,36 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). - This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). + This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix DatumGetBool macro to not fail with gcc + Fix DatumGetBool macro to not fail with gcc 4.3 (Tom) - This problem affects old style (V0) C functions that + This problem affects old style (V0) C functions that return boolean. The fix is already in 8.3, but the need to back-patch it was not realized at the time. @@ -1121,21 +1121,21 @@ - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -1144,8 +1144,8 @@ - Fix display of constant expressions in ORDER BY - and GROUP BY (Tom) + Fix display of constant expressions in ORDER BY + and GROUP BY (Tom) @@ -1157,7 +1157,7 @@ - Fix libpq to handle NOTICE messages correctly + Fix libpq to handle NOTICE messages correctly during COPY OUT (Tom) @@ -1207,7 +1207,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -1218,18 +1218,18 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) 
But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) @@ -1249,13 +1249,13 @@ - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 7.4.18 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) @@ -1263,13 +1263,13 @@ Fix planner failure in some cases of WHERE false AND var IN - (SELECT ...) (Tom) + (SELECT ...) (Tom) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) @@ -1282,42 +1282,42 @@ - ecpg parser fixes (Michael) + ecpg parser fixes (Michael) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Fix tsvector and tsquery output routines to + Fix tsvector and tsquery output routines to escape backslashes correctly (Teodor, Bruce) - Fix crash of to_tsvector() on huge input strings (Teodor) + Fix crash of to_tsvector() on huge input strings (Teodor) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. @@ -1360,40 +1360,40 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... 
DEFAULT NULL work properly (Tom) - Fix excessive logging of SSL error messages (Tom) + Fix excessive logging of SSL error messages (Tom) - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) - Prevent CLUSTER from failing + Prevent CLUSTER from failing due to attempting to process temporary tables of other sessions (Alvaro) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) @@ -1437,28 +1437,28 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. + See CREATE FUNCTION for more information. - /contrib/tsearch2 crash fixes (Teodor) + /contrib/tsearch2 crash fixes (Teodor) - Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -1529,7 +1529,7 @@ - Fix for rare Assert() crash triggered by UNION (Tom) + Fix for rare Assert() crash triggered by UNION (Tom) @@ -1577,7 +1577,7 @@ - Improve handling of getaddrinfo() on AIX (Tom) + Improve handling of getaddrinfo() on AIX (Tom) @@ -1588,8 +1588,8 @@ - Fix failed to re-find parent key errors in - VACUUM (Tom) + Fix failed to re-find parent key errors in + VACUUM (Tom) @@ -1601,20 +1601,20 @@ - Fix error when constructing an ARRAY[] made up of multiple + Fix error when constructing an ARRAY[] made up of multiple empty elements (Tom) - to_number() and to_char(numeric) - are now STABLE, not IMMUTABLE, for - new initdb installs (Tom) + to_number() and to_char(numeric) + are now STABLE, not IMMUTABLE, for + new initdb installs (Tom) - This is because lc_numeric can potentially + This is because lc_numeric can potentially change the output of these functions. @@ -1625,7 +1625,7 @@ - This improves psql \d performance also. + This improves psql \d performance also. @@ -1665,12 +1665,12 @@ Fix core dump when an untyped literal is taken as ANYARRAY -Fix string_to_array() to handle overlapping +Fix string_to_array() to handle overlapping matches for the separator string -For example, string_to_array('123xx456xxx789', 'xx'). +For example, string_to_array('123xx456xxx789', 'xx'). Fix corner cases in pattern matching for - psql's \d commands + psql's \d commands Fix index-corrupting bugs in /contrib/ltree (Teodor) Fix backslash escaping in /contrib/dbmirror @@ -1712,9 +1712,9 @@ ANYARRAY into SQL commands, you should examine them as soon as possible to ensure that they are using recommended escaping techniques. In most cases, applications should be using subroutines provided by - libraries or drivers (such as libpq's - PQescapeStringConn()) to perform string escaping, - rather than relying on ad hoc code to do it. + libraries or drivers (such as libpq's + PQescapeStringConn()) to perform string escaping, + rather than relying on ad hoc code to do it. 
@@ -1724,48 +1724,48 @@ ANYARRAY Change the server to reject invalidly-encoded multibyte characters in all cases (Tatsuo, Tom) -While PostgreSQL has been moving in this direction for +While PostgreSQL has been moving in this direction for some time, the checks are now applied uniformly to all encodings and all textual input, and are now always errors not merely warnings. This change defends against SQL-injection attacks of the type described in CVE-2006-2313. -Reject unsafe uses of \' in string literals +Reject unsafe uses of \' in string literals As a server-side defense against SQL-injection attacks of the type -described in CVE-2006-2314, the server now only accepts '' and not -\' as a representation of ASCII single quote in SQL string -literals. By default, \' is rejected only when -client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, +described in CVE-2006-2314, the server now only accepts '' and not +\' as a representation of ASCII single quote in SQL string +literals. By default, \' is rejected only when +client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, GB18030, or UHC), which is the scenario in which SQL injection is possible. -A new configuration parameter backslash_quote is available to +A new configuration parameter backslash_quote is available to adjust this behavior when needed. Note that full security against CVE-2006-2314 might require client-side changes; the purpose of -backslash_quote is in part to make it obvious that insecure +backslash_quote is in part to make it obvious that insecure clients are insecure. -Modify libpq's string-escaping routines to be +Modify libpq's string-escaping routines to be aware of encoding considerations and -standard_conforming_strings -This fixes libpq-using applications for the security +standard_conforming_strings +This fixes libpq-using applications for the security issues described in CVE-2006-2313 and CVE-2006-2314, and also future-proofs them against the planned changeover to SQL-standard string literal syntax. -Applications that use multiple PostgreSQL connections -concurrently should migrate to PQescapeStringConn() and -PQescapeByteaConn() to ensure that escaping is done correctly +Applications that use multiple PostgreSQL connections +concurrently should migrate to PQescapeStringConn() and +PQescapeByteaConn() to ensure that escaping is done correctly for the settings in use in each database connection. Applications that -do string escaping by hand should be modified to rely on library +do string escaping by hand should be modified to rely on library routines instead. Fix some incorrect encoding conversion functions -win1251_to_iso, alt_to_iso, -euc_tw_to_big5, euc_tw_to_mic, -mic_to_euc_tw were all broken to varying +win1251_to_iso, alt_to_iso, +euc_tw_to_big5, euc_tw_to_mic, +mic_to_euc_tw were all broken to varying extents. 
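To make the quoting rules described a few items above concrete, here is a minimal sketch; the session setting shown is only illustrative and assumes the non-standard-conforming string behavior of that era:

-- The doubled-quote form is accepted regardless of client encoding or configuration:
SELECT 'It''s portable';

-- The backslash form is what CVE-2006-2314 concerns.  With the era's default of
-- standard_conforming_strings off, setting
SET backslash_quote TO off;
-- causes the following to be rejected outright; by default it is rejected only
-- under the affected client-only encodings (SJIS, BIG5, GBK, GB18030, UHC):
-- SELECT 'It\'s rejected';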
-Clean up stray remaining uses of \' in strings +Clean up stray remaining uses of \' in strings (Bruce, Jan) Fix bug that sometimes caused OR'd index scans to @@ -1774,8 +1774,8 @@ miss rows they should have returned Fix WAL replay for case where a btree index has been truncated -Fix SIMILAR TO for patterns involving -| (Tom) +Fix SIMILAR TO for patterns involving +| (Tom) Fix server to use custom DH SSL parameters correctly (Michael Fuhr) @@ -1818,7 +1818,7 @@ Fuhr) Fix potential crash in SET -SESSION AUTHORIZATION (CVE-2006-0553) +SESSION AUTHORIZATION (CVE-2006-0553) An unprivileged user could crash the server process, resulting in momentary denial of service to other users, if the server has been compiled with Asserts enabled (which is not the default). @@ -1833,18 +1833,18 @@ created in 7.4.9 and 7.3.11 releases. Fix race condition that could lead to file already -exists errors during pg_clog file creation +exists errors during pg_clog file creation (Tom) -Properly check DOMAIN constraints for -UNKNOWN parameters in prepared statements +Properly check DOMAIN constraints for +UNKNOWN parameters in prepared statements (Neil) Fix to allow restoring dumps that have cross-schema references to custom operators (Tom) -Portability fix for testing presence of finite -and isinf during configure (Tom) +Portability fix for testing presence of finite +and isinf during configure (Tom) @@ -1872,9 +1872,9 @@ and isinf during configure (Tom) A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.8, see . - Also, you might need to REINDEX indexes on textual + Also, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or - plperl issues described below. + plperl issues described below. @@ -1888,28 +1888,28 @@ outside a transaction or in a failed transaction (Tom) Fix character string comparison for locales that consider different character combinations as equal, such as Hungarian (Tom) -This might require REINDEX to fix existing indexes on +This might require REINDEX to fix existing indexes on textual columns. Set locale environment variables during postmaster startup -to ensure that plperl won't change the locale later -This fixes a problem that occurred if the postmaster was +to ensure that plperl won't change the locale later +This fixes a problem that occurred if the postmaster was started with environment variables specifying a different locale than what -initdb had been told. Under these conditions, any use of -plperl was likely to lead to corrupt indexes. You might need -REINDEX to fix existing indexes on +initdb had been told. Under these conditions, any use of +plperl was likely to lead to corrupt indexes. You might need +REINDEX to fix existing indexes on textual columns if this has happened to you. Fix longstanding bug in strpos() and regular expression handling in certain rarely used Asian multi-byte character sets (Tatsuo) -Fix bug in /contrib/pgcrypto gen_salt, +Fix bug in /contrib/pgcrypto gen_salt, which caused it not to use all available salt space for MD5 and XDES algorithms (Marko Kreen, Solar Designer) Salts for Blowfish and standard DES are unaffected. -Fix /contrib/dblink to throw an error, +Fix /contrib/dblink to throw an error, rather than crashing, when the number of columns specified is different from what's actually returned by the query (Joe) @@ -1956,15 +1956,15 @@ corruption. 
Prevent failure if client sends Bind protocol message when current transaction is already aborted -/contrib/ltree fixes (Teodor) +/contrib/ltree fixes (Teodor) AIX and HPUX compile fixes (Tom) Fix longstanding planning error for outer joins This bug sometimes caused a bogus error RIGHT JOIN is -only supported with merge-joinable join conditions. +only supported with merge-joinable join conditions. -Prevent core dump in pg_autovacuum when a +Prevent core dump in pg_autovacuum when a table has been dropped @@ -1999,41 +1999,41 @@ table has been dropped Changes -Fix error that allowed VACUUM to remove -ctid chains too soon, and add more checking in code that follows -ctid links +Fix error that allowed VACUUM to remove +ctid chains too soon, and add more checking in code that follows +ctid links This fixes a long-standing problem that could cause crashes in very rare circumstances. -Fix CHAR() to properly pad spaces to the specified +Fix CHAR() to properly pad spaces to the specified length when using a multiple-byte character set (Yoshiyuki Asaba) -In prior releases, the padding of CHAR() was incorrect +In prior releases, the padding of CHAR() was incorrect because it only padded to the specified number of bytes without considering how many characters were stored. Fix the sense of the test for read-only transaction -in COPY -The code formerly prohibited COPY TO, where it should -prohibit COPY FROM. +in COPY +The code formerly prohibited COPY TO, where it should +prohibit COPY FROM. Fix planning problem with outer-join ON clauses that reference only the inner-side relation -Further fixes for x FULL JOIN y ON true corner +Further fixes for x FULL JOIN y ON true corner cases -Make array_in and array_recv more +Make array_in and array_recv more paranoid about validating their OID parameter Fix missing rows in queries like UPDATE a=... WHERE -a... with GiST index on column a +a... with GiST index on column a Improve robustness of datetime parsing Improve checking for partially-written WAL pages Improve robustness of signal handling when SSL is enabled -Don't try to open more than max_files_per_process +Don't try to open more than max_files_per_process files during postmaster startup Various memory leakage fixes Various portability improvements -Fix PL/pgSQL to handle var := var correctly when +Fix PL/pgSQL to handle var := var correctly when the variable is of pass-by-reference type -Update contrib/tsearch2 to use current Snowball +Update contrib/tsearch2 to use current Snowball code @@ -2077,10 +2077,10 @@ code - The lesser problem is that the contrib/tsearch2 module + The lesser problem is that the contrib/tsearch2 module creates several functions that are misdeclared to return - internal when they do not accept internal arguments. - This breaks type safety for all functions using internal + internal when they do not accept internal arguments. + This breaks type safety for all functions using internal arguments. @@ -2106,7 +2106,7 @@ WHERE pronamespace = 11 AND pronargs = 5 COMMIT; - Next, if you have installed contrib/tsearch2, do: + Next, if you have installed contrib/tsearch2, do: BEGIN; @@ -2124,22 +2124,22 @@ COMMIT; If this command fails with a message like function - "dex_init(text)" does not exist, then either tsearch2 + "dex_init(text)" does not exist, then either tsearch2 is not installed in this database, or you already did the update. - The above procedures must be carried out in each database - of an installation, including template1, and ideally - including template0 as well. 
If you do not fix the + The above procedures must be carried out in each database + of an installation, including template1, and ideally + including template0 as well. If you do not fix the template databases then any subsequently created databases will contain - the same errors. template1 can be fixed in the same way - as any other database, but fixing template0 requires + the same errors. template1 can be fixed in the same way + as any other database, but fixing template0 requires additional steps. First, from any database issue: UPDATE pg_database SET datallowconn = true WHERE datname = 'template0'; - Next connect to template0 and perform the above repair + Next connect to template0 and perform the above repair procedures. Finally, do: -- re-freeze template0: @@ -2156,8 +2156,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Change encoding function signature to prevent misuse -Change contrib/tsearch2 to avoid unsafe use of -INTERNAL function results +Change contrib/tsearch2 to avoid unsafe use of +INTERNAL function results Repair ancient race condition that allowed a transaction to be seen as committed for some purposes (eg SELECT FOR UPDATE) slightly sooner than for other purposes @@ -2169,56 +2169,56 @@ VACUUM freshly-inserted data, although the scenario seems of very low probability. There are no known cases of it having caused more than an Assert failure. -Fix comparisons of TIME WITH TIME ZONE values +Fix comparisons of TIME WITH TIME ZONE values The comparison code was wrong in the case where the ---enable-integer-datetimes configuration switch had been used. -NOTE: if you have an index on a TIME WITH TIME ZONE column, -it will need to be REINDEXed after installing this update, because +--enable-integer-datetimes configuration switch had been used. +NOTE: if you have an index on a TIME WITH TIME ZONE column, +it will need to be REINDEXed after installing this update, because the fix corrects the sort order of column values. -Fix EXTRACT(EPOCH) for -TIME WITH TIME ZONE values +Fix EXTRACT(EPOCH) for +TIME WITH TIME ZONE values Fix mis-display of negative fractional seconds in -INTERVAL values +INTERVAL values This error only occurred when the ---enable-integer-datetimes configuration switch had been used. +--enable-integer-datetimes configuration switch had been used. Ensure operations done during backend shutdown are counted by statistics collector -This is expected to resolve reports of pg_autovacuum +This is expected to resolve reports of pg_autovacuum not vacuuming the system catalogs often enough — it was not being told about catalog deletions caused by temporary table removal during backend exit. 
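Pulling the template0 steps of the repair procedure described a little above into one place, the sequence is roughly as follows (a sketch only; the catalog repair itself must be run while connected to template0, and VACUUM FREEZE is assumed to be the re-freeze step referred to above):

-- Allow connections to template0 (run from any database, as superuser):
UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';

-- Connect to template0, run the same repair statements used for the other
-- databases, then re-freeze and lock it down again:
VACUUM FREEZE;   -- assumed wording of the re-freeze step elided by the diff context
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';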
Additional buffer overrun checks in plpgsql (Neil) -Fix pg_dump to dump trigger names containing % +Fix pg_dump to dump trigger names containing % correctly (Neil) -Fix contrib/pgcrypto for newer OpenSSL builds +Fix contrib/pgcrypto for newer OpenSSL builds (Marko Kreen) Still more 64-bit fixes for -contrib/intagg +contrib/intagg Prevent incorrect optimization of functions returning -RECORD -Prevent to_char(interval) from dumping core for +RECORD +Prevent to_char(interval) from dumping core for month-related formats -Prevent crash on COALESCE(NULL,NULL) -Fix array_map to call PL functions correctly -Fix permission checking in ALTER DATABASE RENAME -Fix ALTER LANGUAGE RENAME -Make RemoveFromWaitQueue clean up after itself +Prevent crash on COALESCE(NULL,NULL) +Fix array_map to call PL functions correctly +Fix permission checking in ALTER DATABASE RENAME +Fix ALTER LANGUAGE RENAME +Make RemoveFromWaitQueue clean up after itself This fixes a lock management error that would only be visible if a transaction was kicked out of a wait for a lock (typically by query cancel) and then the holder of the lock released it within a very narrow window. Fix problem with untyped parameter appearing in -INSERT ... SELECT -Fix CLUSTER failure after -ALTER TABLE SET WITHOUT OIDS +INSERT ... SELECT +Fix CLUSTER failure after +ALTER TABLE SET WITHOUT OIDS @@ -2251,11 +2251,11 @@ holder of the lock released it within a very narrow window. Changes -Disallow LOAD to non-superusers +Disallow LOAD to non-superusers On platforms that will automatically execute initialization functions of a shared library (this includes at least Windows and ELF-based Unixen), -LOAD can be used to make the server execute arbitrary code. +LOAD can be used to make the server execute arbitrary code. Thanks to NGS Software for reporting this. Check that creator of an aggregate function has the right to execute the specified transition functions @@ -2314,7 +2314,7 @@ GMT Repair possible failure to update hint bits on disk Under rare circumstances this oversight could lead to -could not access transaction status failures, which qualifies +could not access transaction status failures, which qualifies it as a potential-data-loss bug. Ensure that hashed outer join does not miss tuples @@ -2322,11 +2322,11 @@ it as a potential-data-loss bug. Very large left joins using a hash join plan could fail to output unmatched left-side rows given just the right data distribution. -Disallow running pg_ctl as root +Disallow running pg_ctl as root This is to guard against any possible security issues. -Avoid using temp files in /tmp in make_oidjoins_check +Avoid using temp files in /tmp in make_oidjoins_check This has been reported as a security issue, though it's hardly worthy of concern since there is no reason for non-developers to use this script anyway. @@ -2343,13 +2343,13 @@ This could lead to misbehavior in some of the system-statistics views. Fix small memory leak in postmaster Fix expected both swapped tables to have TOAST -tables bug +tables bug This could arise in cases such as CLUSTER after ALTER TABLE DROP COLUMN. 
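As an illustration of the scenario named just above (the "expected both swapped tables to have TOAST tables" error), the failing sequence looked roughly like the following; the names are hypothetical and modern CLUSTER syntax is used:

CREATE TABLE docs (id int PRIMARY KEY, body text, note text);  -- text columns give the table a TOAST table
ALTER TABLE docs DROP COLUMN note;
CLUSTER docs USING docs_pkey;   -- formerly could hit the TOAST-table mismatch error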
-Prevent pg_ctl restart from adding -D multiple times +Prevent pg_ctl restart from adding -D multiple times Fix problem with NULL values in GiST indexes -:: is no longer interpreted as a variable in an +:: is no longer interpreted as a variable in an ECPG prepare statement @@ -2435,8 +2435,8 @@ aggregate plan Fix hashed crosstab for zero-rows case (Joe) Force cache update after renaming a column in a foreign key Pretty-print UNION queries correctly -Make psql handle \r\n newlines properly in COPY IN -pg_dump handled ACLs with grant options incorrectly +Make psql handle \r\n newlines properly in COPY IN +pg_dump handled ACLs with grant options incorrectly Fix thread support for macOS and Solaris Updated JDBC driver (build 215) with various fixes ECPG fixes @@ -2492,7 +2492,7 @@ large tables, unsigned oids, stability, temp tables, and debug mode Select-list aliases within the sub-select will now take precedence over names from outer query levels. -Do not generate NATURAL CROSS JOIN when decompiling rules (Tom) +Do not generate NATURAL CROSS JOIN when decompiling rules (Tom) Add checks for invalid field length in binary COPY (Tom) This fixes a difficult-to-exploit security hole. @@ -2531,29 +2531,29 @@ names from outer query levels. - The more severe of the two errors is that data type anyarray + The more severe of the two errors is that data type anyarray has the wrong alignment label; this is a problem because the - pg_statistic system catalog uses anyarray + pg_statistic system catalog uses anyarray columns. The mislabeling can cause planner misestimations and even - crashes when planning queries that involve WHERE clauses on - double-aligned columns (such as float8 and timestamp). + crashes when planning queries that involve WHERE clauses on + double-aligned columns (such as float8 and timestamp). It is strongly recommended that all installations repair this error, either by initdb or by following the manual repair procedure given below. - The lesser error is that the system view pg_settings + The lesser error is that the system view pg_settings ought to be marked as having public update access, to allow - UPDATE pg_settings to be used as a substitute for - SET. This can also be fixed either by initdb or manually, + UPDATE pg_settings to be used as a substitute for + SET. This can also be fixed either by initdb or manually, but it is not necessary to fix unless you want to use UPDATE - pg_settings. + pg_settings. If you wish not to do an initdb, the following procedure will work - for fixing pg_statistic. As the database superuser, + for fixing pg_statistic. As the database superuser, do: @@ -2573,28 +2573,28 @@ ANALYZE; This can be done in a live database, but beware that all backends running in the altered database must be restarted before it is safe to - repopulate pg_statistic. + repopulate pg_statistic. - To repair the pg_settings error, simply do: + To repair the pg_settings error, simply do: GRANT SELECT, UPDATE ON pg_settings TO PUBLIC; - The above procedures must be carried out in each database - of an installation, including template1, and ideally - including template0 as well. If you do not fix the + The above procedures must be carried out in each database + of an installation, including template1, and ideally + including template0 as well. If you do not fix the template databases then any subsequently created databases will contain - the same errors. template1 can be fixed in the same way - as any other database, but fixing template0 requires + the same errors. 
template1 can be fixed in the same way + as any other database, but fixing template0 requires additional steps. First, from any database issue: UPDATE pg_database SET datallowconn = true WHERE datname = 'template0'; - Next connect to template0 and perform the above repair + Next connect to template0 and perform the above repair procedures. Finally, do: -- re-freeze template0: @@ -2614,28 +2614,28 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; -Fix pg_statistic alignment bug that could crash optimizer +Fix pg_statistic alignment bug that could crash optimizer See above for details about this problem. -Allow non-super users to update pg_settings +Allow non-super users to update pg_settings Fix several optimizer bugs, most of which led to -variable not found in subplan target lists errors +variable not found in subplan target lists errors Avoid out-of-memory failure during startup of large multiple index scan Fix multibyte problem that could lead to out of -memory error during COPY IN -Fix problems with SELECT INTO / CREATE -TABLE AS from tables without OIDs -Fix problems with alter_table regression test +memory error during COPY IN +Fix problems with SELECT INTO / CREATE +TABLE AS from tables without OIDs +Fix problems with alter_table regression test during parallel testing Fix problems with hitting open file limit, especially on macOS (Tom) Partial fix for Turkish-locale issues initdb will succeed now in Turkish locale, but there are still some -inconveniences associated with the i/I problem. +inconveniences associated with the i/I problem. Make pg_dump set client encoding on restore Other minor pg_dump fixes Allow ecpg to again use C keywords as column names (Michael) -Added ecpg WHENEVER NOT_FOUND to -SELECT/INSERT/UPDATE/DELETE (Michael) +Added ecpg WHENEVER NOT_FOUND to +SELECT/INSERT/UPDATE/DELETE (Michael) Fix ecpg crash for queries calling set-returning functions (Michael) Various other ecpg fixes (Michael) Fixes for Borland compiler @@ -2810,7 +2810,7 @@ DROP SCHEMA information_schema CASCADE; without sorting, by accumulating results into a hash table with one entry per group. It will still use the sort technique, however, if the hash table is estimated to be too - large to fit in sort_mem. + large to fit in sort_mem. @@ -3125,16 +3125,16 @@ DROP SCHEMA information_schema CASCADE; Trailing spaces are now trimmed when converting from type - char(n) to - varchar(n) or text. + char(n) to + varchar(n) or text. This is what most people always expected to happen anyway. - The data type float(p) now - measures p in binary digits, not decimal + The data type float(p) now + measures p in binary digits, not decimal digits. The new behavior follows the SQL standard. @@ -3143,11 +3143,11 @@ DROP SCHEMA information_schema CASCADE; Ambiguous date values now must match the ordering specified by the datestyle setting. In prior releases, a - date specification of 10/20/03 was interpreted as a - date in October even if datestyle specified that + date specification of 10/20/03 was interpreted as a + date in October even if datestyle specified that the day should be first. 7.4 will throw an error if a date specification is invalid for the current setting of - datestyle. + datestyle. @@ -3167,28 +3167,28 @@ DROP SCHEMA information_schema CASCADE; no longer work as expected in column default expressions; they now cause the time of the table creation to be the default, not the time of the insertion. 
Functions such as - now(), current_timestamp, or + now(), current_timestamp, or current_date should be used instead. In previous releases, there was special code so that strings such as 'now' were interpreted at - INSERT time and not at table creation time, but + INSERT time and not at table creation time, but this work around didn't cover all cases. Release 7.4 now requires that defaults be defined properly using functions such - as now() or current_timestamp. These + as now() or current_timestamp. These will work in all situations. - The dollar sign ($) is no longer allowed in + The dollar sign ($) is no longer allowed in operator names. It can instead be a non-first character in identifiers. This was done to improve compatibility with other database systems, and to avoid syntax problems when parameter - placeholders ($n) are written + placeholders ($n) are written adjacent to operators. @@ -3333,14 +3333,14 @@ DROP SCHEMA information_schema CASCADE; - Allow IN/NOT IN to be handled via hash + Allow IN/NOT IN to be handled via hash tables (Tom) - Improve NOT IN (subquery) + Improve NOT IN (subquery) performance (Tom) @@ -3490,19 +3490,19 @@ DROP SCHEMA information_schema CASCADE; - Rename server parameter server_min_messages to log_min_messages (Bruce) + Rename server parameter server_min_messages to log_min_messages (Bruce) This was done so most parameters that control the server logs - begin with log_. + begin with log_. - Rename show_*_stats to log_*_stats (Bruce) - Rename show_source_port to log_source_port (Bruce) - Rename hostname_lookup to log_hostname (Bruce) + Rename show_*_stats to log_*_stats (Bruce) + Rename show_source_port to log_source_port (Bruce) + Rename hostname_lookup to log_hostname (Bruce) - Add checkpoint_warning to warn of excessive checkpointing (Bruce) + Add checkpoint_warning to warn of excessive checkpointing (Bruce) In prior releases, it was difficult to determine if checkpoint was happening too frequently. This feature adds a warning to the @@ -3514,8 +3514,8 @@ DROP SCHEMA information_schema CASCADE; - Change debug server log messages to output as DEBUG - rather than LOG (Bruce) + Change debug server log messages to output as DEBUG + rather than LOG (Bruce) @@ -3529,8 +3529,8 @@ DROP SCHEMA information_schema CASCADE; - log_min_messages/client_min_messages now - controls debug_* output (Bruce) + log_min_messages/client_min_messages now + controls debug_* output (Bruce) This centralizes client debug information so all debug output @@ -3589,15 +3589,15 @@ DROP SCHEMA information_schema CASCADE; Add new columns in pg_settings: - context, type, source, - min_val, max_val (Joe) + context, type, source, + min_val, max_val (Joe) - Make default shared_buffers 1000 and - max_connections 100, if possible (Tom) + Make default shared_buffers 1000 and + max_connections 100, if possible (Tom) Prior versions defaulted to 64 shared buffers so PostgreSQL @@ -3612,7 +3612,7 @@ DROP SCHEMA information_schema CASCADE; New pg_hba.conf record type - hostnossl to prevent SSL connections (Jon + hostnossl to prevent SSL connections (Jon Jensen) @@ -3675,7 +3675,7 @@ DROP SCHEMA information_schema CASCADE; Add option to prevent auto-addition of tables referenced in query (Nigel J. Andrews) By default, tables mentioned in the query are automatically - added to the FROM clause if they are not already + added to the FROM clause if they are not already there. This is compatible with historic POSTGRES behavior but is contrary to the SQL standard. 
This option allows selecting @@ -3692,9 +3692,9 @@ DROP SCHEMA information_schema CASCADE; - Allow expressions to be used in LIMIT/OFFSET (Tom) + Allow expressions to be used in LIMIT/OFFSET (Tom) - In prior releases, LIMIT/OFFSET could + In prior releases, LIMIT/OFFSET could only use constants, not expressions. @@ -3780,7 +3780,7 @@ DROP SCHEMA information_schema CASCADE; Improve automatic type casting for domains (Rod, Tom) Allow dollar signs in identifiers, except as first character (Tom) - Disallow dollar signs in operator names, so x=$1 works (Tom) + Disallow dollar signs in operator names, so x=$1 works (Tom) @@ -3863,9 +3863,9 @@ DROP SCHEMA information_schema CASCADE; - Implement SQL-compatible options FIRST, - LAST, ABSOLUTE n, - RELATIVE n for + Implement SQL-compatible options FIRST, + LAST, ABSOLUTE n, + RELATIVE n for FETCH and MOVE (Tom) @@ -3888,18 +3888,18 @@ DROP SCHEMA information_schema CASCADE; Prevent CLUSTER on partial indexes (Tom) - Allow DOS and Mac line-endings in COPY files (Bruce) + Allow DOS and Mac line-endings in COPY files (Bruce) Disallow literal carriage return as a data value, - backslash-carriage-return and \r are still allowed + backslash-carriage-return and \r are still allowed (Bruce) - COPY changes (binary, \.) (Tom) + COPY changes (binary, \.) (Tom) @@ -3965,7 +3965,7 @@ DROP SCHEMA information_schema CASCADE; - Improve reliability of LISTEN/NOTIFY (Tom) + Improve reliability of LISTEN/NOTIFY (Tom) @@ -3976,8 +3976,8 @@ DROP SCHEMA information_schema CASCADE; requirement of a standalone session, which was necessary in previous releases. The only tables that now require a standalone session for reindexing are the global system tables - pg_database, pg_shadow, and - pg_group. + pg_database, pg_shadow, and + pg_group. @@ -4003,14 +4003,14 @@ DROP SCHEMA information_schema CASCADE; - Remove rarely used functions oidrand, - oidsrand, and userfntest functions + Remove rarely used functions oidrand, + oidsrand, and userfntest functions (Neil) - Add md5() function to main server, already in contrib/pgcrypto (Joe) + Add md5() function to main server, already in contrib/pgcrypto (Joe) An MD5 function was frequently requested. 
For more complex encryption capabilities, use @@ -4067,8 +4067,8 @@ DROP SCHEMA information_schema CASCADE; Allow WHERE qualification - expr op ANY/SOME/ALL - (array_expr) (Joe) + expr op ANY/SOME/ALL + (array_expr) (Joe) This allows arrays to behave like a list of values, for purposes @@ -4079,10 +4079,10 @@ DROP SCHEMA information_schema CASCADE; - New array functions array_append, - array_cat, array_lower, - array_prepend, array_to_string, - array_upper, string_to_array (Joe) + New array functions array_append, + array_cat, array_lower, + array_prepend, array_to_string, + array_upper, string_to_array (Joe) @@ -4107,14 +4107,14 @@ DROP SCHEMA information_schema CASCADE; Trim trailing spaces when char is cast to - varchar or text (Tom) + varchar or text (Tom) - Make float(p) measure the precision - p in binary digits, not decimal digits + Make float(p) measure the precision + p in binary digits, not decimal digits (Tom) @@ -4164,9 +4164,9 @@ DROP SCHEMA information_schema CASCADE; - Add new datestyle values MDY, - DMY, and YMD to set input field order; - honor US and European for backward + Add new datestyle values MDY, + DMY, and YMD to set input field order; + honor US and European for backward compatibility (Tom) @@ -4182,10 +4182,10 @@ DROP SCHEMA information_schema CASCADE; - Treat NaN as larger than any other value in min()/max() (Tom) + Treat NaN as larger than any other value in min()/max() (Tom) NaN was already sorted after ordinary numeric values for most - purposes, but min() and max() didn't + purposes, but min() and max() didn't get this right. @@ -4203,7 +4203,7 @@ DROP SCHEMA information_schema CASCADE; - Allow time to be specified as 040506 or 0405 (Tom) + Allow time to be specified as 040506 or 0405 (Tom) @@ -4275,7 +4275,7 @@ DROP SCHEMA information_schema CASCADE; - Add new parameter $0 in PL/pgSQL representing the + Add new parameter $0 in PL/pgSQL representing the function's actual return type (Joe) @@ -4310,12 +4310,12 @@ DROP SCHEMA information_schema CASCADE; Improve tab completion (Rod, Ross Reedstrom, Ian Barwick) - Reorder \? help into groupings (Harald Armin Massa, Bruce) + Reorder \? help into groupings (Harald Armin Massa, Bruce) Add backslash commands for listing schemas, casts, and conversions (Christopher) - \encoding now changes based on the server parameter + \encoding now changes based on the server parameter client_encoding (Tom) @@ -4328,7 +4328,7 @@ DROP SCHEMA information_schema CASCADE; Save editor buffer into readline history (Ross) - When \e is used to edit a query, the result is saved + When \e is used to edit a query, the result is saved in the readline history for retrieval using the up arrow. 
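For the datestyle field-order values added above (MDY, DMY, YMD), a brief illustration; the literals are only examples:

SET datestyle TO 'SQL, DMY';
SELECT '20/10/2003'::date;   -- read as 20 October 2003 under DMY
SET datestyle TO 'SQL, MDY';
SELECT '10/20/2003'::date;   -- read as October 20, 2003 under MDY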
@@ -4373,14 +4373,14 @@ DROP SCHEMA information_schema CASCADE; - Have pg_dumpall use GRANT/REVOKE to dump database-level privileges (Tom) + Have pg_dumpall use GRANT/REVOKE to dump database-level privileges (Tom) - Allow pg_dumpall to support the options , + , of pg_dump (Tom) @@ -4565,7 +4565,7 @@ DROP SCHEMA information_schema CASCADE; Allow libpq to compile with Borland C++ compiler (Lester Godwin, Karl Waclawek) Use our own version of getopt_long() if needed (Peter) Convert administration scripts to C (Peter) - Bison >= 1.85 is now required to build the PostgreSQL grammar, if building from CVS + Bison >= 1.85 is now required to build the PostgreSQL grammar, if building from CVS Merge documentation into one book (Peter) Add Windows compatibility functions (Bruce) Allow client interfaces to compile under MinGW (Bruce) @@ -4605,16 +4605,16 @@ DROP SCHEMA information_schema CASCADE; Update btree_gist (Oleg) New tsearch2 full-text search module (Oleg, Teodor) Add hash-based crosstab function to tablefuncs (Joe) - Add serial column to order connectby() siblings in tablefuncs (Nabil Sayegh,Joe) + Add serial column to order connectby() siblings in tablefuncs (Nabil Sayegh,Joe) Add named persistent connections to dblink (Shridhar Daithanka) New pg_autovacuum allows automatic VACUUM (Matthew T. O'Connor) - Make pgbench honor environment variables PGHOST, PGPORT, PGUSER (Tatsuo) + Make pgbench honor environment variables PGHOST, PGPORT, PGUSER (Tatsuo) Improve intarray (Teodor Sigaev) Improve pgstattuple (Rod) Fix bug in metaphone() in fuzzystrmatch Improve adddepend (Rod) Update spi/timetravel (Böjthe Zoltán) - Fix dbase + Fix dbase option and improve non-ASCII handling (Thomas Behr, Márcio Smiderle) Remove array module because features now included by default (Joe) diff --git a/doc/src/sgml/release-8.0.sgml b/doc/src/sgml/release-8.0.sgml index 0f43e24b1d..46ca87e93a 100644 --- a/doc/src/sgml/release-8.0.sgml +++ b/doc/src/sgml/release-8.0.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.0.X series. Users are encouraged to update to a newer release branch soon. @@ -47,7 +47,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -76,7 +76,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -104,7 +104,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -130,7 +130,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -138,28 +138,28 @@ Fix possible data corruption in ALTER TABLE ... 
SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -167,13 +167,13 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) @@ -187,7 +187,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -220,7 +220,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.0.X release series in July 2010. Users are encouraged to update to a newer release branch soon. @@ -243,19 +243,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -264,19 +264,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -289,10 +289,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... 
RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. @@ -300,7 +300,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -312,7 +312,7 @@ - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -325,14 +325,14 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -340,7 +340,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -353,7 +353,7 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. @@ -380,7 +380,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.0.X release series in July 2010. Users are encouraged to update to a newer release branch soon. @@ -403,7 +403,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -432,8 +432,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -459,7 +459,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -467,17 +467,17 @@ - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. 
@@ -485,7 +485,7 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) @@ -499,7 +499,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -511,28 +511,28 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. @@ -604,14 +604,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. @@ -630,7 +630,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -649,7 +649,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -664,20 +664,20 @@ - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -685,7 +685,7 @@ - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -716,8 +716,8 @@ A dump/restore is not required for those running 8.0.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 8.0.22. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 8.0.22. Also, if you are upgrading from a version earlier than 8.0.6, see . 
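For the note above about hash indexes on interval columns, the REINDEX step looks like this; the table and index names are hypothetical:

-- An existing hash index on an interval column, for example:
--   CREATE INDEX jobs_duration_hash ON jobs USING hash (duration);
-- After updating to a release containing the interval hash fix:
REINDEX INDEX jobs_duration_hash;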
@@ -731,14 +731,14 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) @@ -752,32 +752,32 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -794,7 +794,7 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. Japan (Itagaki Takahiro) @@ -802,7 +802,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -821,22 +821,22 @@ - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -849,7 +849,7 @@ - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Jordan, Pakistan, Argentina/San_Luis, Cuba, Jordan (historical correction only), Mauritius, Morocco, Palestine, Syria, Tunisia. @@ -900,7 +900,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. 
The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -911,7 +911,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -924,14 +924,14 @@ - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -973,13 +973,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -994,30 +994,30 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) - Fix uninitialized variables in contrib/tsearch2's - get_covers() function (Teodor) + Fix uninitialized variables in contrib/tsearch2's + get_covers() function (Teodor) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) - Update time zone data files to tzdata release 2009a (for + Update time zone data files to tzdata release 2009a (for Kathmandu and historical DST corrections in Switzerland, Cuba) @@ -1065,7 +1065,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -1095,14 +1095,14 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -1116,19 +1116,19 @@ - Fix ecpg's parsing of CREATE USER (Michael) + Fix ecpg's parsing of CREATE USER (Michael) - Fix recent breakage of pg_ctl restart (Tom) + Fix recent breakage of pg_ctl restart (Tom) - Update time zone data files to tzdata release 2008i (for + Update time zone data files to tzdata release 2008i (for DST law changes in Argentina, Brazil, Mauritius, Syria) @@ -1176,19 +1176,19 @@ This responds to reports that the counters could overflow in sufficiently long transactions, leading to unexpected lock is - already held errors. + already held errors. Add checks in executor startup to ensure that the tuples produced by an - INSERT or UPDATE will match the target table's + INSERT or UPDATE will match the target table's current rowtype (Tom) - ALTER COLUMN TYPE, followed by re-use of a previously + ALTER COLUMN TYPE, followed by re-use of a previously cached plan, could produce this type of situation. The check protects against data corruption and/or crashes that could ensue. 
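A sketch of the cached-plan situation described above, using hypothetical names; on current servers the plan is simply rebuilt, and the executor-startup check guards against a stale plan producing tuples of the wrong rowtype:

CREATE TABLE events (id int, payload varchar(10));
PREPARE add_event (int, text) AS INSERT INTO events VALUES ($1, $2);
ALTER TABLE events ALTER COLUMN payload TYPE text;   -- changes the table's rowtype
EXECUTE add_event (1, 'hello');   -- executor startup now verifies the tuples still match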
@@ -1210,21 +1210,21 @@ Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. @@ -1247,21 +1247,21 @@ - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) - Fix pg_ctl to properly preserve postmaster - command-line arguments across a restart (Bruce) + Fix pg_ctl to properly preserve postmaster + command-line arguments across a restart (Bruce) - Update time zone data files to tzdata release 2008f (for + Update time zone data files to tzdata release 2008f (for DST law changes in Argentina, Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and Paraguay) @@ -1304,18 +1304,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -1358,7 +1358,7 @@ - Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new + Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new column is correctly checked to see if it's been initialized to all non-nulls (Brendan Jurd) @@ -1370,8 +1370,8 @@ - Fix possible CREATE TABLE failure when inheriting the - same constraint from multiple parent relations that + Fix possible CREATE TABLE failure when inheriting the + same constraint from multiple parent relations that inherited that constraint from a common ancestor (Tom) @@ -1379,7 +1379,7 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) @@ -1394,7 +1394,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -1402,24 +1402,24 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). - This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). 
+ This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). - Update time zone data files to tzdata release 2008c (for + Update time zone data files to tzdata release 2008c (for DST law changes in Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis, and Chile) @@ -1427,34 +1427,34 @@ - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix core dump in contrib/xml2's - xpath_table() function when the input query returns a + Fix core dump in contrib/xml2's + xpath_table() function when the input query returns a NULL value (Tom) - Fix contrib/xml2's makefile to not override - CFLAGS (Tom) + Fix contrib/xml2's makefile to not override + CFLAGS (Tom) - Fix DatumGetBool macro to not fail with gcc + Fix DatumGetBool macro to not fail with gcc 4.3 (Tom) - This problem affects old style (V0) C functions that + This problem affects old style (V0) C functions that return boolean. The fix is already in 8.3, but the need to back-patch it was not realized at the time. @@ -1462,21 +1462,21 @@ - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -1502,19 +1502,19 @@ - Fix unrecognized node type error in some variants of - ALTER OWNER (Tom) + Fix unrecognized node type error in some variants of + ALTER OWNER (Tom) - Fix pg_ctl to correctly extract the postmaster's port + Fix pg_ctl to correctly extract the postmaster's port number from command-line options (Itagaki Takahiro, Tom) - Previously, pg_ctl start -w could try to contact the + Previously, pg_ctl start -w could try to contact the postmaster on the wrong port, leading to bogus reports of startup failure. @@ -1522,20 +1522,20 @@ - Use - This is known to be necessary when building PostgreSQL - with gcc 4.3 or later. + This is known to be necessary when building PostgreSQL + with gcc 4.3 or later. - Fix display of constant expressions in ORDER BY - and GROUP BY (Tom) + Fix display of constant expressions in ORDER BY + and GROUP BY (Tom) @@ -1547,7 +1547,7 @@ - Fix libpq to handle NOTICE messages correctly + Fix libpq to handle NOTICE messages correctly during COPY OUT (Tom) @@ -1579,8 +1579,8 @@ - This is the last 8.0.X release for which the PostgreSQL - community will produce binary packages for Windows. + This is the last 8.0.X release for which the PostgreSQL + community will produce binary packages for Windows. Windows users are encouraged to move to 8.2.X or later, since there are Windows-specific fixes in 8.2.X that are impractical to back-port. 
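For the substring() corner case described earlier in this group of entries, a minimal illustration of the corrected behavior:

    SELECT substring('foobar' from 'foo(bar)?');  -- 'bar': the parenthesized subexpression
    SELECT substring('foo'    from 'foo(bar)?');  -- NULL: the pattern matches, but (bar) does not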
8.0.X will continue to @@ -1606,7 +1606,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -1617,18 +1617,18 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) @@ -1648,20 +1648,20 @@ - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 8.0.14 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) - Update time zone data files to tzdata release 2007k + Update time zone data files to tzdata release 2007k (in particular, recent Argentina changes) (Tom) @@ -1669,14 +1669,14 @@ Fix planner failure in some cases of WHERE false AND var IN - (SELECT ...) (Tom) + (SELECT ...) (Tom) Preserve the tablespace of indexes that are - rebuilt by ALTER TABLE ... ALTER COLUMN TYPE (Tom) + rebuilt by ALTER TABLE ... ALTER COLUMN TYPE (Tom) @@ -1695,27 +1695,27 @@ - Make VACUUM not use all of maintenance_work_mem + Make VACUUM not use all of maintenance_work_mem when the table is too small for it to be useful (Alvaro) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) - Fix PL/Perl to cope when platform's Perl defines type bool - as int rather than char (Tom) + Fix PL/Perl to cope when platform's Perl defines type bool + as int rather than char (Tom) While this could theoretically happen anywhere, no standard build of - Perl did things this way ... until macOS 10.5. + Perl did things this way ... until macOS 10.5. 
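As a reminder of what the translate() function mentioned above does (the fix concerns a crash under multibyte encodings, not a behavior change), a small sketch:

    SELECT translate('12345', '143', 'ax');  -- 'a2x5': 1 becomes a, 4 becomes x, 3 is removed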
@@ -1727,49 +1727,49 @@ - Fix pg_dump to correctly handle inheritance child tables + Fix pg_dump to correctly handle inheritance child tables that have default expressions different from their parent's (Tom) - ecpg parser fixes (Michael) + ecpg parser fixes (Michael) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Fix tsvector and tsquery output routines to + Fix tsvector and tsquery output routines to escape backslashes correctly (Teodor, Bruce) - Fix crash of to_tsvector() on huge input strings (Teodor) + Fix crash of to_tsvector() on huge input strings (Teodor) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. @@ -1812,20 +1812,20 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) - Fix excessive logging of SSL error messages (Tom) + Fix excessive logging of SSL error messages (Tom) @@ -1838,7 +1838,7 @@ - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) @@ -1851,7 +1851,7 @@ - Prevent CLUSTER from failing + Prevent CLUSTER from failing due to attempting to process temporary tables of other sessions (Alvaro) @@ -1870,14 +1870,14 @@ - Suppress timezone name (%Z) in log timestamps on Windows + Suppress timezone name (%Z) in log timestamps on Windows because of possible encoding mismatches (Tom) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) @@ -1921,28 +1921,28 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. + See CREATE FUNCTION for more information. 
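A minimal sketch of the usage this enables (the schema, table, and function names are hypothetical): a SECURITY DEFINER function can now pin a search_path in which the temporary-table schema is named explicitly and placed last, so temporary objects cannot capture unqualified names:

    CREATE FUNCTION admin.bump_counter() RETURNS void AS $$
    BEGIN
        -- pg_temp is listed explicitly, and last
        PERFORM set_config('search_path', 'admin, pg_temp', true);
        UPDATE admin.counters SET n = n + 1;
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;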
- /contrib/tsearch2 crash fixes (Teodor) + /contrib/tsearch2 crash fixes (Teodor) - Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -2061,7 +2061,7 @@ - Fix for rare Assert() crash triggered by UNION (Tom) + Fix for rare Assert() crash triggered by UNION (Tom) @@ -2109,7 +2109,7 @@ - Improve handling of getaddrinfo() on AIX (Tom) + Improve handling of getaddrinfo() on AIX (Tom) @@ -2120,15 +2120,15 @@ - Fix failed to re-find parent key errors in - VACUUM (Tom) + Fix failed to re-find parent key errors in + VACUUM (Tom) Fix race condition for truncation of a large relation across a - gigabyte boundary by VACUUM (Tom) + gigabyte boundary by VACUUM (Tom) @@ -2146,7 +2146,7 @@ - Fix error when constructing an ARRAY[] made up of multiple + Fix error when constructing an ARRAY[] made up of multiple empty elements (Tom) @@ -2159,13 +2159,13 @@ - to_number() and to_char(numeric) - are now STABLE, not IMMUTABLE, for - new initdb installs (Tom) + to_number() and to_char(numeric) + are now STABLE, not IMMUTABLE, for + new initdb installs (Tom) - This is because lc_numeric can potentially + This is because lc_numeric can potentially change the output of these functions. @@ -2176,7 +2176,7 @@ - This improves psql \d performance also. + This improves psql \d performance also. @@ -2225,28 +2225,28 @@ Changes -Fix crash when referencing NEW row +Fix crash when referencing NEW row values in rule WHERE expressions (Tom) Fix core dump when an untyped literal is taken as ANYARRAY Fix mishandling of AFTER triggers when query contains a SQL function returning multiple rows (Tom) -Fix ALTER TABLE ... TYPE to recheck -NOT NULL for USING clause (Tom) -Fix string_to_array() to handle overlapping +Fix ALTER TABLE ... TYPE to recheck +NOT NULL for USING clause (Tom) +Fix string_to_array() to handle overlapping matches for the separator string -For example, string_to_array('123xx456xxx789', 'xx'). +For example, string_to_array('123xx456xxx789', 'xx'). Fix corner cases in pattern matching for - psql's \d commands + psql's \d commands Fix index-corrupting bugs in /contrib/ltree (Teodor) -Numerous robustness fixes in ecpg (Joachim +Numerous robustness fixes in ecpg (Joachim Wieland) Fix backslash escaping in /contrib/dbmirror Fix instability of statistics collection on Win32 (Tom, Andrew) -Fixes for AIX and -Intel compilers (Tom) +Fixes for AIX and +Intel compilers (Tom) @@ -2283,9 +2283,9 @@ Wieland) into SQL commands, you should examine them as soon as possible to ensure that they are using recommended escaping techniques. In most cases, applications should be using subroutines provided by - libraries or drivers (such as libpq's - PQescapeStringConn()) to perform string escaping, - rather than relying on ad hoc code to do it. + libraries or drivers (such as libpq's + PQescapeStringConn()) to perform string escaping, + rather than relying on ad hoc code to do it. @@ -2295,48 +2295,48 @@ Wieland) Change the server to reject invalidly-encoded multibyte characters in all cases (Tatsuo, Tom) -While PostgreSQL has been moving in this direction for +While PostgreSQL has been moving in this direction for some time, the checks are now applied uniformly to all encodings and all textual input, and are now always errors not merely warnings. This change defends against SQL-injection attacks of the type described in CVE-2006-2313. 
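For the string_to_array() entry above, the example given there would, with left-to-right non-overlapping matching of the separator, now come out as:

    SELECT string_to_array('123xx456xxx789', 'xx');  -- {123,456,x789}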
-Reject unsafe uses of \' in string literals +Reject unsafe uses of \' in string literals As a server-side defense against SQL-injection attacks of the type -described in CVE-2006-2314, the server now only accepts '' and not -\' as a representation of ASCII single quote in SQL string -literals. By default, \' is rejected only when -client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, +described in CVE-2006-2314, the server now only accepts '' and not +\' as a representation of ASCII single quote in SQL string +literals. By default, \' is rejected only when +client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, GB18030, or UHC), which is the scenario in which SQL injection is possible. -A new configuration parameter backslash_quote is available to +A new configuration parameter backslash_quote is available to adjust this behavior when needed. Note that full security against CVE-2006-2314 might require client-side changes; the purpose of -backslash_quote is in part to make it obvious that insecure +backslash_quote is in part to make it obvious that insecure clients are insecure. -Modify libpq's string-escaping routines to be +Modify libpq's string-escaping routines to be aware of encoding considerations and -standard_conforming_strings -This fixes libpq-using applications for the security +standard_conforming_strings +This fixes libpq-using applications for the security issues described in CVE-2006-2313 and CVE-2006-2314, and also future-proofs them against the planned changeover to SQL-standard string literal syntax. -Applications that use multiple PostgreSQL connections -concurrently should migrate to PQescapeStringConn() and -PQescapeByteaConn() to ensure that escaping is done correctly +Applications that use multiple PostgreSQL connections +concurrently should migrate to PQescapeStringConn() and +PQescapeByteaConn() to ensure that escaping is done correctly for the settings in use in each database connection. Applications that -do string escaping by hand should be modified to rely on library +do string escaping by hand should be modified to rely on library routines instead. Fix some incorrect encoding conversion functions -win1251_to_iso, alt_to_iso, -euc_tw_to_big5, euc_tw_to_mic, -mic_to_euc_tw were all broken to varying +win1251_to_iso, alt_to_iso, +euc_tw_to_big5, euc_tw_to_mic, +mic_to_euc_tw were all broken to varying extents. -Clean up stray remaining uses of \' in strings +Clean up stray remaining uses of \' in strings (Bruce, Jan) Fix bug that sometimes caused OR'd index scans to @@ -2345,10 +2345,10 @@ miss rows they should have returned Fix WAL replay for case where a btree index has been truncated -Fix SIMILAR TO for patterns involving -| (Tom) +Fix SIMILAR TO for patterns involving +| (Tom) -Fix SELECT INTO and CREATE TABLE AS to +Fix SELECT INTO and CREATE TABLE AS to create tables in the default tablespace, not the base directory (Kris Jurka) @@ -2396,7 +2396,7 @@ Fuhr) Fix potential crash in SET -SESSION AUTHORIZATION (CVE-2006-0553) +SESSION AUTHORIZATION (CVE-2006-0553) An unprivileged user could crash the server process, resulting in momentary denial of service to other users, if the server has been compiled with Asserts enabled (which is not the default). @@ -2411,44 +2411,44 @@ created in 8.0.4, 7.4.9, and 7.3.11 releases. 
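A small SQL illustration of the quoting rules discussed above; backslash_quote is the server parameter named in these notes:

    SELECT 'O''Reilly';       -- portable form: write the single quote twice
    -- SELECT 'O\'Reilly';    -- backslash form: this is what is now rejected when
    --                        -- client_encoding is a client-only encoding
    SHOW backslash_quote;     -- safe_encoding by default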
Fix race condition that could lead to file already -exists errors during pg_clog and pg_subtrans file creation +exists errors during pg_clog and pg_subtrans file creation (Tom) Fix cases that could lead to crashes if a cache-invalidation message arrives at just the wrong time (Tom) -Properly check DOMAIN constraints for -UNKNOWN parameters in prepared statements +Properly check DOMAIN constraints for +UNKNOWN parameters in prepared statements (Neil) -Ensure ALTER COLUMN TYPE will process -FOREIGN KEY, UNIQUE, and PRIMARY KEY +Ensure ALTER COLUMN TYPE will process +FOREIGN KEY, UNIQUE, and PRIMARY KEY constraints in the proper order (Nakano Yoshihisa) Fixes to allow restoring dumps that have cross-schema references to custom operators or operator classes (Tom) -Allow pg_restore to continue properly after a -COPY failure; formerly it tried to treat the remaining -COPY data as SQL commands (Stephen Frost) +Allow pg_restore to continue properly after a +COPY failure; formerly it tried to treat the remaining +COPY data as SQL commands (Stephen Frost) -Fix pg_ctl unregister crash +Fix pg_ctl unregister crash when the data directory is not specified (Magnus) -Fix ecpg crash on AMD64 and PPC +Fix ecpg crash on AMD64 and PPC (Neil) Recover properly if error occurs during argument passing -in PL/Python (Neil) +in PL/Python (Neil) -Fix PL/Perl's handling of locales on +Fix PL/Perl's handling of locales on Win32 to match the backend (Andrew) -Fix crash when log_min_messages is set to -DEBUG3 or above in postgresql.conf on Win32 +Fix crash when log_min_messages is set to +DEBUG3 or above in postgresql.conf on Win32 (Bruce) -Fix pgxs -L library path +Fix pgxs -L library path specification for Win32, Cygwin, macOS, AIX (Bruce) Check that SID is enabled while checking for Win32 admin @@ -2457,8 +2457,8 @@ privileges (Magnus) Properly reject out-of-range date inputs (Kris Jurka) -Portability fix for testing presence of finite -and isinf during configure (Tom) +Portability fix for testing presence of finite +and isinf during configure (Tom) @@ -2486,9 +2486,9 @@ and isinf during configure (Tom) A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.3, see . - Also, you might need to REINDEX indexes on textual + Also, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or - plperl issues described below. + plperl issues described below. @@ -2501,7 +2501,7 @@ and isinf during configure (Tom) than exit if there is no more room in ShmemBackendArray (Magnus) The previous behavior could lead to a denial-of-service situation if too many connection requests arrive close together. This applies -only to the Windows port. +only to the Windows port. Fix bug introduced in 8.0 that could allow ReadBuffer to return an already-used page as new, potentially causing loss of @@ -2512,16 +2512,16 @@ outside a transaction or in a failed transaction (Tom) Fix character string comparison for locales that consider different character combinations as equal, such as Hungarian (Tom) -This might require REINDEX to fix existing indexes on +This might require REINDEX to fix existing indexes on textual columns. 
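For the out-of-range date entry above, a one-line check (exact error wording can vary by version):

    SELECT date '2005-02-30';  -- now rejected with a date/time field value out of range error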
Set locale environment variables during postmaster startup -to ensure that plperl won't change the locale later -This fixes a problem that occurred if the postmaster was +to ensure that plperl won't change the locale later +This fixes a problem that occurred if the postmaster was started with environment variables specifying a different locale than what -initdb had been told. Under these conditions, any use of -plperl was likely to lead to corrupt indexes. You might need -REINDEX to fix existing indexes on +initdb had been told. Under these conditions, any use of +plperl was likely to lead to corrupt indexes. You might need +REINDEX to fix existing indexes on textual columns if this has happened to you. Allow more flexible relocation of installation @@ -2533,15 +2533,15 @@ directory paths were the same except for the last component. handling in certain rarely used Asian multi-byte character sets (Tatsuo) -Various fixes for functions returning RECORDs +Various fixes for functions returning RECORDs (Tom) -Fix bug in /contrib/pgcrypto gen_salt, +Fix bug in /contrib/pgcrypto gen_salt, which caused it not to use all available salt space for MD5 and XDES algorithms (Marko Kreen, Solar Designer) Salts for Blowfish and standard DES are unaffected. -Fix /contrib/dblink to throw an error, +Fix /contrib/dblink to throw an error, rather than crashing, when the number of columns specified is different from what's actually returned by the query (Joe) @@ -2597,35 +2597,35 @@ later VACUUM commands. Prevent failure if client sends Bind protocol message when current transaction is already aborted -/contrib/ltree fixes (Teodor) +/contrib/ltree fixes (Teodor) AIX and HPUX compile fixes (Tom) Retry file reads and writes after Windows NO_SYSTEM_RESOURCES error (Qingqing Zhou) -Fix intermittent failure when log_line_prefix -includes %i +Fix intermittent failure when log_line_prefix +includes %i -Fix psql performance issue with long scripts +Fix psql performance issue with long scripts on Windows (Merlin Moncure) -Fix missing updates of pg_group flat +Fix missing updates of pg_group flat file Fix longstanding planning error for outer joins This bug sometimes caused a bogus error RIGHT JOIN is -only supported with merge-joinable join conditions. +only supported with merge-joinable join conditions. Postpone timezone initialization until after -postmaster.pid is created +postmaster.pid is created This avoids confusing startup scripts that expect the pid file to appear quickly. -Prevent core dump in pg_autovacuum when a +Prevent core dump in pg_autovacuum when a table has been dropped -Fix problems with whole-row references (foo.*) +Fix problems with whole-row references (foo.*) to subquery results @@ -2660,69 +2660,69 @@ to subquery results Changes -Fix error that allowed VACUUM to remove -ctid chains too soon, and add more checking in code that follows -ctid links +Fix error that allowed VACUUM to remove +ctid chains too soon, and add more checking in code that follows +ctid links This fixes a long-standing problem that could cause crashes in very rare circumstances. -Fix CHAR() to properly pad spaces to the specified +Fix CHAR() to properly pad spaces to the specified length when using a multiple-byte character set (Yoshiyuki Asaba) -In prior releases, the padding of CHAR() was incorrect +In prior releases, the padding of CHAR() was incorrect because it only padded to the specified number of bytes without considering how many characters were stored. 
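For the dblink entry above (the connection string and alias are hypothetical), a mismatched column list now surfaces as an ordinary error rather than a crash:

    SELECT * FROM dblink('dbname=mydb', 'SELECT 1, 2')
             AS t(a integer);   -- column list does not match the result: reported as an error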
Force a checkpoint before committing CREATE -DATABASE -This should fix recent reports of index is not a btree +DATABASE +This should fix recent reports of index is not a btree failures when a crash occurs shortly after CREATE -DATABASE. +DATABASE. Fix the sense of the test for read-only transaction -in COPY -The code formerly prohibited COPY TO, where it should -prohibit COPY FROM. +in COPY +The code formerly prohibited COPY TO, where it should +prohibit COPY FROM. -Handle consecutive embedded newlines in COPY +Handle consecutive embedded newlines in COPY CSV-mode input -Fix date_trunc(week) for dates near year +Fix date_trunc(week) for dates near year end Fix planning problem with outer-join ON clauses that reference only the inner-side relation -Further fixes for x FULL JOIN y ON true corner +Further fixes for x FULL JOIN y ON true corner cases Fix overenthusiastic optimization of x IN (SELECT -DISTINCT ...) and related cases -Fix mis-planning of queries with small LIMIT -values due to poorly thought out fuzzy cost +DISTINCT ...) and related cases +Fix mis-planning of queries with small LIMIT +values due to poorly thought out fuzzy cost comparison -Make array_in and array_recv more +Make array_in and array_recv more paranoid about validating their OID parameter Fix missing rows in queries like UPDATE a=... WHERE -a... with GiST index on column a +a... with GiST index on column a Improve robustness of datetime parsing Improve checking for partially-written WAL pages Improve robustness of signal handling when SSL is enabled Improve MIPS and M68K spinlock code -Don't try to open more than max_files_per_process +Don't try to open more than max_files_per_process files during postmaster startup Various memory leakage fixes Various portability improvements Update timezone data files Improve handling of DLL load failures on Windows Improve random-number generation on Windows -Make psql -f filename return a nonzero exit code +Make psql -f filename return a nonzero exit code when opening the file fails -Change pg_dump to handle inherited check +Change pg_dump to handle inherited check constraints more reliably -Fix password prompting in pg_restore on +Fix password prompting in pg_restore on Windows -Fix PL/pgSQL to handle var := var correctly when +Fix PL/pgSQL to handle var := var correctly when the variable is of pass-by-reference type -Fix PL/Perl %_SHARED so it's actually +Fix PL/Perl %_SHARED so it's actually shared -Fix contrib/pg_autovacuum to allow sleep +Fix contrib/pg_autovacuum to allow sleep intervals over 2000 sec -Update contrib/tsearch2 to use current Snowball +Update contrib/tsearch2 to use current Snowball code @@ -2766,10 +2766,10 @@ code - The lesser problem is that the contrib/tsearch2 module + The lesser problem is that the contrib/tsearch2 module creates several functions that are improperly declared to return - internal when they do not accept internal arguments. - This breaks type safety for all functions using internal + internal when they do not accept internal arguments. + This breaks type safety for all functions using internal arguments. 
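The date_trunc('week') entry above can be checked with a date that falls in the last ISO week of the previous year:

    SELECT date_trunc('week', date '2004-01-01');  -- 2003-12-29 00:00:00, the Monday of that ISO week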
@@ -2794,10 +2794,10 @@ code Change encoding function signature to prevent misuse -Change contrib/tsearch2 to avoid unsafe use of -INTERNAL function results +Change contrib/tsearch2 to avoid unsafe use of +INTERNAL function results Guard against incorrect second parameter to -record_out +record_out Repair ancient race condition that allowed a transaction to be seen as committed for some purposes (eg SELECT FOR UPDATE) slightly sooner than for other purposes @@ -2809,36 +2809,36 @@ VACUUM freshly-inserted data, although the scenario seems of very low probability. There are no known cases of it having caused more than an Assert failure. -Fix comparisons of TIME WITH TIME ZONE values +Fix comparisons of TIME WITH TIME ZONE values The comparison code was wrong in the case where the ---enable-integer-datetimes configuration switch had been used. -NOTE: if you have an index on a TIME WITH TIME ZONE column, -it will need to be REINDEXed after installing this update, because +--enable-integer-datetimes configuration switch had been used. +NOTE: if you have an index on a TIME WITH TIME ZONE column, +it will need to be REINDEXed after installing this update, because the fix corrects the sort order of column values. -Fix EXTRACT(EPOCH) for -TIME WITH TIME ZONE values +Fix EXTRACT(EPOCH) for +TIME WITH TIME ZONE values Fix mis-display of negative fractional seconds in -INTERVAL values +INTERVAL values This error only occurred when the ---enable-integer-datetimes configuration switch had been used. +--enable-integer-datetimes configuration switch had been used. -Fix pg_dump to dump trigger names containing % +Fix pg_dump to dump trigger names containing % correctly (Neil) Still more 64-bit fixes for -contrib/intagg +contrib/intagg Prevent incorrect optimization of functions returning -RECORD -Prevent crash on COALESCE(NULL,NULL) +RECORD +Prevent crash on COALESCE(NULL,NULL) Fix Borland makefile for libpq -Fix contrib/btree_gist for timetz type +Fix contrib/btree_gist for timetz type (Teodor) -Make pg_ctl check the PID found in -postmaster.pid to see if it is still a live +Make pg_ctl check the PID found in +postmaster.pid to see if it is still a live process -Fix pg_dump/pg_restore problems caused +Fix pg_dump/pg_restore problems caused by addition of dump timestamps Fix interaction between materializing holdable cursors and firing deferred triggers during transaction commit @@ -2883,51 +2883,51 @@ data types libraries (Bruce) This should have been done in 8.0.0. It is required so 7.4.X versions -of PostgreSQL client applications, like psql, +of PostgreSQL client applications, like psql, can be used on the same machine as 8.0.X applications. This might require re-linking user applications that use these libraries. -Add Windows-only wal_sync_method setting of - +Add Windows-only wal_sync_method setting of + (Magnus, Bruce) This setting causes PostgreSQL to write through any disk-drive write cache when writing to WAL. -This behavior was formerly called , but was +renamed because it acts quite differently from on other platforms. -Enable the wal_sync_method setting of - -Formerly the array would remain NULL, but now it becomes a +Formerly the array would remain NULL, but now it becomes a single-element array. The main SQL engine was changed to handle -UPDATE of a null array value this way in 8.0, but the similar +UPDATE of a null array value this way in 8.0, but the similar case in plpgsql was overlooked. 
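A minimal sketch of the plpgsql case described above (the function name is hypothetical):

    CREATE FUNCTION null_array_demo() RETURNS integer[] AS $$
    DECLARE
        a integer[];        -- starts out NULL
    BEGIN
        a[1] := 42;         -- now yields '{42}' rather than leaving a NULL
        RETURN a;
    END;
    $$ LANGUAGE plpgsql;

    SELECT null_array_demo();   -- {42}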
-Convert \r\n and \r to \n +Convert \r\n and \r to \n in plpython function bodies (Michael Fuhr) This prevents syntax errors when plpython code is written on a Windows or @@ -2935,72 +2935,72 @@ in plpython function bodies (Michael Fuhr) Allow SPI cursors to handle utility commands that return rows, -such as EXPLAIN (Tom) -Fix CLUSTER failure after ALTER TABLE -SET WITHOUT OIDS (Tom) -Reduce memory usage of ALTER TABLE ADD COLUMN +such as EXPLAIN (Tom) +Fix CLUSTER failure after ALTER TABLE +SET WITHOUT OIDS (Tom) +Reduce memory usage of ALTER TABLE ADD COLUMN (Neil) -Fix ALTER LANGUAGE RENAME (Tom) -Document the Windows-only register and -unregister options of pg_ctl (Magnus) +Fix ALTER LANGUAGE RENAME (Tom) +Document the Windows-only register and +unregister options of pg_ctl (Magnus) Ensure operations done during backend shutdown are counted by statistics collector -This is expected to resolve reports of pg_autovacuum +This is expected to resolve reports of pg_autovacuum not vacuuming the system catalogs often enough — it was not being told about catalog deletions caused by temporary table removal during backend exit. Change the Windows default for configuration parameter -log_destination to +log_destination to (Magnus) By default, a server running on Windows will now send log output to the Windows event logger rather than standard error. Make Kerberos authentication work on Windows (Magnus) -Allow ALTER DATABASE RENAME by superusers +Allow ALTER DATABASE RENAME by superusers who aren't flagged as having CREATEDB privilege (Tom) -Modify WAL log entries for CREATE and -DROP DATABASE to not specify absolute paths (Tom) +Modify WAL log entries for CREATE and +DROP DATABASE to not specify absolute paths (Tom) This allows point-in-time recovery on a different machine with possibly -different database location. Note that CREATE TABLESPACE still +different database location. Note that CREATE TABLESPACE still poses a hazard in such situations. Fix crash from a backend exiting with an open transaction that created a table and opened a cursor on it (Tom) -Fix array_map() so it can call PL functions +Fix array_map() so it can call PL functions (Tom) -Several contrib/tsearch2 and -contrib/btree_gist fixes (Teodor) +Several contrib/tsearch2 and +contrib/btree_gist fixes (Teodor) -Fix crash of some contrib/pgcrypto +Fix crash of some contrib/pgcrypto functions on some platforms (Marko Kreen) -Fix contrib/intagg for 64-bit platforms +Fix contrib/intagg for 64-bit platforms (Tom) -Fix ecpg bugs in parsing of CREATE statement +Fix ecpg bugs in parsing of CREATE statement (Michael) Work around gcc bug on powerpc and amd64 causing problems in ecpg (Christof Petig) -Do not use locale-aware versions of upper(), -lower(), and initcap() when the locale is -C (Bruce) +Do not use locale-aware versions of upper(), +lower(), and initcap() when the locale is +C (Bruce) This allows these functions to work on platforms that generate errors - for non-7-bit data when the locale is C. + for non-7-bit data when the locale is C. 
-Fix quote_ident() to quote names that match keywords (Tom) -Fix to_date() to behave reasonably when -CC and YY fields are both used (Karel) -Prevent to_char(interval) from failing +Fix quote_ident() to quote names that match keywords (Tom) +Fix to_date() to behave reasonably when +CC and YY fields are both used (Karel) +Prevent to_char(interval) from failing when given a zero-month interval (Tom) -Fix wrong week returned by date_trunc('week') +Fix wrong week returned by date_trunc('week') (Bruce) -date_trunc('week') +date_trunc('week') returned the wrong year for the first few days of January in some years. -Use the correct default mask length for class D -addresses in INET data types (Tom) +Use the correct default mask length for class D +addresses in INET data types (Tom) @@ -3033,11 +3033,11 @@ addresses in INET data types (Tom) Changes -Disallow LOAD to non-superusers +Disallow LOAD to non-superusers On platforms that will automatically execute initialization functions of a shared library (this includes at least Windows and ELF-based Unixen), -LOAD can be used to make the server execute arbitrary code. +LOAD can be used to make the server execute arbitrary code. Thanks to NGS Software for reporting this. Check that creator of an aggregate function has the right to execute the specified transition functions @@ -3050,7 +3050,7 @@ contrib/intagg Jurka) Avoid buffer overrun when plpgsql cursor declaration has too many parameters (Neil) -Make ALTER TABLE ADD COLUMN enforce domain +Make ALTER TABLE ADD COLUMN enforce domain constraints in all cases Fix planning error for FULL and RIGHT outer joins @@ -3059,7 +3059,7 @@ left input. This could not only deliver mis-sorted output to the user, but in case of nested merge joins could give outright wrong answers. Improve planning of grouped aggregate queries -ROLLBACK TO savepoint +ROLLBACK TO savepoint closes cursors created since the savepoint Fix inadequate backend stack size on Windows Avoid SHGetSpecialFolderPath() on Windows @@ -3099,17 +3099,17 @@ typedefs (Michael) This is the first PostgreSQL release - to run natively on Microsoft Windows as - a server. It can run as a Windows service. This + to run natively on Microsoft Windows as + a server. It can run as a Windows service. This release supports NT-based Windows releases like - Windows 2000 SP4, Windows XP, and - Windows 2003. Older releases like - Windows 95, Windows 98, and - Windows ME are not supported because these operating + Windows 2000 SP4, Windows XP, and + Windows 2003. Older releases like + Windows 95, Windows 98, and + Windows ME are not supported because these operating systems do not have the infrastructure to support PostgreSQL. A separate installer project has been created to ease installation on - Windows — see Windows — see . @@ -3123,7 +3123,7 @@ typedefs (Michael) Previous releases required the Unix emulation toolkit - Cygwin in order to run the server on Windows + Cygwin in order to run the server on Windows operating systems. PostgreSQL has supported native clients on Windows for many years. @@ -3174,7 +3174,7 @@ typedefs (Michael) Tablespaces allow administrators to select different file systems for storage of individual tables, indexes, and databases. This improves performance and control over disk space - usage. Prior releases used initlocation and + usage. Prior releases used initlocation and manual symlink management for such tasks. 
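The quote_ident() fix noted at the start of this group is easy to see directly:

    SELECT quote_ident('select'),      -- "select"    (keyword, so it is quoted)
           quote_ident('MixedCase'),   -- "MixedCase" (needs quotes to keep its case)
           quote_ident('plain');       -- plain       (no quoting required)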
@@ -3216,7 +3216,7 @@ typedefs (Michael) - A new version of the plperl server-side language now + A new version of the plperl server-side language now supports a persistent shared storage area, triggers, returning records and arrays of records, and SPI calls to access the database. @@ -3257,7 +3257,7 @@ typedefs (Michael) - In serialization mode, volatile functions now see the results of concurrent transactions committed up to the beginning of each statement within the function, rather than up to the beginning of the interactive command that called the function. @@ -3266,18 +3266,18 @@ typedefs (Michael) - Functions declared or always use the snapshot of the calling query, and therefore do not see the effects of actions taken after the calling query starts, whether in their own transaction or other transactions. Such a function must be read-only, too, meaning that it cannot use any SQL commands other than - SELECT. + SELECT. - Nondeferred triggers are now fired immediately after completion of the triggering query, rather than upon finishing the current interactive command. This makes a difference when the triggering query occurred within a function: @@ -3288,19 +3288,19 @@ typedefs (Michael) - Server configuration parameters virtual_host and - tcpip_socket have been replaced with a more general - parameter listen_addresses. Also, the server now listens on - localhost by default, which eliminates the need for the - -i postmaster switch in many scenarios. + Server configuration parameters virtual_host and + tcpip_socket have been replaced with a more general + parameter listen_addresses. Also, the server now listens on + localhost by default, which eliminates the need for the + -i postmaster switch in many scenarios. - Server configuration parameters SortMem and - VacuumMem have been renamed to work_mem - and maintenance_work_mem to better reflect their + Server configuration parameters SortMem and + VacuumMem have been renamed to work_mem + and maintenance_work_mem to better reflect their use. The original names are still supported in SET and SHOW. @@ -3308,34 +3308,34 @@ typedefs (Michael) - Server configuration parameters log_pid, - log_timestamp, and log_source_port have been - replaced with a more general parameter log_line_prefix. + Server configuration parameters log_pid, + log_timestamp, and log_source_port have been + replaced with a more general parameter log_line_prefix. - Server configuration parameter syslog has been - replaced with a more logical log_destination variable to + Server configuration parameter syslog has been + replaced with a more logical log_destination variable to control the log output destination. - Server configuration parameter log_statement has been + Server configuration parameter log_statement has been changed so it can selectively log just database modification or data definition statements. Server configuration parameter - log_duration now prints only when log_statement + log_duration now prints only when log_statement prints the query. - Server configuration parameter max_expr_depth parameter has - been replaced with max_stack_depth which measures the + Server configuration parameter max_expr_depth parameter has + been replaced with max_stack_depth which measures the physical stack size rather than the expression nesting depth. This helps prevent session termination due to stack overflow caused by recursive functions. @@ -3344,14 +3344,14 @@ typedefs (Michael) - The length() function no longer counts trailing spaces in - CHAR(n) values. 
+ The length() function no longer counts trailing spaces in + CHAR(n) values. - Casting an integer to BIT(N) selects the rightmost N bits of the + Casting an integer to BIT(N) selects the rightmost N bits of the integer, not the leftmost N bits as before. @@ -3369,7 +3369,7 @@ typedefs (Michael) Syntax checking of array input values has been tightened up considerably. Junk that was previously allowed in odd places with odd results now causes an error. Empty-string element values - must now be written as "", rather than writing nothing. + must now be written as "", rather than writing nothing. Also changed behavior with respect to whitespace surrounding array elements: trailing whitespace is now ignored, for symmetry with leading whitespace (which has always been ignored). @@ -3386,14 +3386,14 @@ typedefs (Michael) The arithmetic operators associated with the single-byte - "char" data type have been removed. + "char" data type have been removed. - The extract() function (also called - date_part) now returns the proper year for BC dates. + The extract() function (also called + date_part) now returns the proper year for BC dates. It previously returned one less than the correct year. The function now also returns the proper values for millennium and century. @@ -3402,9 +3402,9 @@ typedefs (Michael) - CIDR values now must have their nonmasked bits be zero. + CIDR values now must have their nonmasked bits be zero. For example, we no longer allow - 204.248.199.1/31 as a CIDR value. Such + 204.248.199.1/31 as a CIDR value. Such values should never have been accepted by PostgreSQL and will now be rejected. @@ -3419,11 +3419,11 @@ typedefs (Michael) - psql's \copy command now reads or - writes to the query's stdin/stdout, rather than - psql's stdin/stdout. The previous + psql's \copy command now reads or + writes to the query's stdin/stdout, rather than + psql's stdin/stdout. The previous behavior can be accessed via new - / parameters. @@ -3449,14 +3449,14 @@ typedefs (Michael) one supplied by the operating system. This will provide consistent behavior across all platforms. In most cases, there should be little noticeable difference in time zone behavior, except that - the time zone names used by SET/SHOW - TimeZone might be different from what your platform provides. + the time zone names used by SET/SHOW + TimeZone might be different from what your platform provides. - Configure's threading option no longer requires + Configure's threading option no longer requires users to run tests or edit configuration files; threading options are now detected automatically. @@ -3465,7 +3465,7 @@ typedefs (Michael) Now that tablespaces have been implemented, - initlocation has been removed. + initlocation has been removed. @@ -3495,7 +3495,7 @@ typedefs (Michael) - The 8.1 release will remove the to_char() function + The 8.1 release will remove the to_char() function for intervals. @@ -3513,12 +3513,12 @@ typedefs (Michael) By default, tables in PostgreSQL 8.0 - and earlier are created with OIDs. In the next release, + and earlier are created with OIDs. In the next release, this will not be the case: to create a table - that contains OIDs, the clause must be specified or the default_with_oids configuration parameter must be set. Users are encouraged to - explicitly specify if their tables require OIDs for compatibility with future releases of PostgreSQL. @@ -3581,7 +3581,7 @@ typedefs (Michael) hurt performance. 
The new code uses a background writer to trickle disk writes at a steady pace so checkpoints have far fewer dirty pages to write to disk. Also, the new code does not issue a global - sync() call, but instead fsync()s just + sync() call, but instead fsync()s just the files written since the last checkpoint. This should improve performance and minimize degradation during checkpoints. @@ -3629,13 +3629,13 @@ typedefs (Michael) - Improved index usage with OR clauses (Tom) + Improved index usage with OR clauses (Tom) This allows the optimizer to use indexes in statements with many OR clauses that would not have been indexed in the past. It can also use multi-column indexes where the first column is specified and the second - column is part of an OR clause. + column is part of an OR clause. @@ -3645,7 +3645,7 @@ typedefs (Michael) The server is now smarter about using partial indexes in queries - involving complex clauses. @@ -3754,7 +3754,7 @@ typedefs (Michael) It is now possible to log server messages conveniently without - relying on either syslog or an external log + relying on either syslog or an external log rotation program. @@ -3762,56 +3762,56 @@ typedefs (Michael) Add new read-only server configuration parameters to show server - compile-time settings: block_size, - integer_datetimes, max_function_args, - max_identifier_length, max_index_keys (Joe) + compile-time settings: block_size, + integer_datetimes, max_function_args, + max_identifier_length, max_index_keys (Joe) - Make quoting of sameuser, samegroup, and - all remove special meaning of these terms in - pg_hba.conf (Andrew) + Make quoting of sameuser, samegroup, and + all remove special meaning of these terms in + pg_hba.conf (Andrew) - Use clearer IPv6 name ::1/128 for - localhost in default pg_hba.conf (Andrew) + Use clearer IPv6 name ::1/128 for + localhost in default pg_hba.conf (Andrew) - Use CIDR format in pg_hba.conf examples (Andrew) + Use CIDR format in pg_hba.conf examples (Andrew) - Rename server configuration parameters SortMem and - VacuumMem to work_mem and - maintenance_work_mem (Old names still supported) (Tom) + Rename server configuration parameters SortMem and + VacuumMem to work_mem and + maintenance_work_mem (Old names still supported) (Tom) This change was made to clarify that bulk operations such as index and - foreign key creation use maintenance_work_mem, while - work_mem is for workspaces used during query execution. + foreign key creation use maintenance_work_mem, while + work_mem is for workspaces used during query execution. Allow logging of session disconnections using server configuration - log_disconnections (Andrew) + log_disconnections (Andrew) - Add new server configuration parameter log_line_prefix to + Add new server configuration parameter log_line_prefix to allow control of information emitted in each log line (Andrew) @@ -3822,21 +3822,21 @@ typedefs (Michael) - Remove server configuration parameters log_pid, - log_timestamp, log_source_port; functionality - superseded by log_line_prefix (Andrew) + Remove server configuration parameters log_pid, + log_timestamp, log_source_port; functionality + superseded by log_line_prefix (Andrew) - Replace the virtual_host and tcpip_socket - parameters with a unified listen_addresses parameter + Replace the virtual_host and tcpip_socket + parameters with a unified listen_addresses parameter (Andrew, Tom) - virtual_host could only specify a single IP address to - listen on. 
listen_addresses allows multiple addresses + virtual_host could only specify a single IP address to + listen on. listen_addresses allows multiple addresses to be specified. @@ -3844,10 +3844,10 @@ typedefs (Michael) Listen on localhost by default, which eliminates the need for the - postmaster switch in many scenarios (Andrew) - Listening on localhost (127.0.0.1) opens no new + Listening on localhost (127.0.0.1) opens no new security holes but allows configurations like Windows and JDBC, which do not support local sockets, to work without special adjustments. @@ -3856,17 +3856,17 @@ typedefs (Michael) - Remove syslog server configuration parameter, and add more - logical log_destination variable to control log output + Remove syslog server configuration parameter, and add more + logical log_destination variable to control log output location (Magnus) - Change server configuration parameter log_statement to take - values all, mod, ddl, or - none to select which queries are logged (Bruce) + Change server configuration parameter log_statement to take + values all, mod, ddl, or + none to select which queries are logged (Bruce) This allows administrators to log only data definition changes or @@ -3877,12 +3877,12 @@ typedefs (Michael) Some logging-related configuration parameters could formerly be adjusted - by ordinary users, but only in the more verbose direction. + by ordinary users, but only in the more verbose direction. They are now treated more strictly: only superusers can set them. - However, a superuser can use ALTER USER to provide per-user + However, a superuser can use ALTER USER to provide per-user settings of these values for non-superusers. Also, it is now possible for superusers to set values of superuser-only configuration parameters - via PGOPTIONS. + via PGOPTIONS. @@ -3921,8 +3921,8 @@ typedefs (Michael) It is now useful to issue DECLARE CURSOR in a - Parse message with parameters. The parameter values - sent at Bind time will be substituted into the + Parse message with parameters. The parameter values + sent at Bind time will be substituted into the execution of the cursor's query. @@ -3942,7 +3942,7 @@ typedefs (Michael) - Make log_duration print only when log_statement + Make log_duration print only when log_statement prints the query (Ed L.) @@ -4007,10 +4007,10 @@ typedefs (Michael) - Make CASE val WHEN compval1 THEN ... evaluate val only once (Tom) + Make CASE val WHEN compval1 THEN ... evaluate val only once (Tom) - no longer evaluates the tested expression multiple times. This has benefits when the expression is complex or is volatile. @@ -4018,20 +4018,20 @@ typedefs (Michael) - Test before computing target list of an aggregate query (Tom) Fixes improper failure of cases such as SELECT SUM(win)/SUM(lose) - ... GROUP BY ... HAVING SUM(lose) > 0. This should work but formerly + ... GROUP BY ... HAVING SUM(lose) > 0. This should work but formerly could fail with divide-by-zero. - Replace max_expr_depth parameter with - max_stack_depth parameter, measured in kilobytes of stack + Replace max_expr_depth parameter with + max_stack_depth parameter, measured in kilobytes of stack size (Tom) @@ -4054,7 +4054,7 @@ typedefs (Michael) - Allow / to be used as the operator in row and subselect comparisons (Fabien Coelho) @@ -4065,8 +4065,8 @@ typedefs (Michael) identifiers and keywords (Tom) - This solves the Turkish problem with mangling of words - containing I and i. Folding of characters + This solves the Turkish problem with mangling of words + containing I and i. 
Folding of characters outside the 7-bit-ASCII set is still locale-aware. @@ -4094,7 +4094,7 @@ typedefs (Michael) - Avoid emitting in rule listings (Tom) Such a clause makes no logical sense, but in some cases the rule @@ -4112,36 +4112,36 @@ typedefs (Michael) - Add COMMENT ON for casts, conversions, languages, + Add COMMENT ON for casts, conversions, languages, operator classes, and large objects (Christopher) - Add new server configuration parameter default_with_oids to - control whether tables are created with OIDs by default (Neil) + Add new server configuration parameter default_with_oids to + control whether tables are created with OIDs by default (Neil) This allows administrators to control whether CREATE - TABLE commands create tables with or without OID + TABLE commands create tables with or without OID columns by default. (Note: the current factory default setting for - default_with_oids is TRUE, but the default - will become FALSE in future releases.) + default_with_oids is TRUE, but the default + will become FALSE in future releases.) - Add / clause to CREATE TABLE AS (Neil) - Allow ALTER TABLE DROP COLUMN to drop an OID - column (ALTER TABLE SET WITHOUT OIDS still works) + Allow ALTER TABLE DROP COLUMN to drop an OID + column (ALTER TABLE SET WITHOUT OIDS still works) (Tom) @@ -4154,11 +4154,11 @@ typedefs (Michael) - Allow ALTER ... ADD COLUMN with defaults and - constraints; works per SQL spec (Rod) - It is now possible for to create a column that is not initially filled with NULLs, but with a specified default value. @@ -4166,7 +4166,7 @@ typedefs (Michael) - Add ALTER COLUMN TYPE to change column's type (Rod) + Add ALTER COLUMN TYPE to change column's type (Rod) It is now possible to alter a column's data type without dropping @@ -4176,14 +4176,14 @@ typedefs (Michael) - Allow multiple ALTER actions in a single ALTER + Allow multiple ALTER actions in a single ALTER TABLE command (Rod) - This is particularly useful for ALTER commands that - rewrite the table (which include @@ -4213,13 +4213,13 @@ typedefs (Michael) Allow temporary object creation to be limited to functions (Sean Chittenden) - Add (Christopher) Prior to this release, there was no way to clear an auto-cluster @@ -4229,8 +4229,8 @@ typedefs (Michael) - Constraint/Index/SERIAL names are now - table_column_type + Constraint/Index/SERIAL names are now + table_column_type with numbers appended to guarantee uniqueness within the schema (Tom) @@ -4242,11 +4242,11 @@ typedefs (Michael) - Add pg_get_serial_sequence() to return a - SERIAL column's sequence name (Christopher) + Add pg_get_serial_sequence() to return a + SERIAL column's sequence name (Christopher) - This allows automated scripts to reliably find the SERIAL + This allows automated scripts to reliably find the SERIAL sequence name. @@ -4259,14 +4259,14 @@ typedefs (Michael) - New ALTER INDEX command to allow moving of indexes + New ALTER INDEX command to allow moving of indexes between tablespaces (Gavin) - Make ALTER TABLE OWNER change dependent sequence + Make ALTER TABLE OWNER change dependent sequence ownership too (Alvaro) @@ -4289,18 +4289,18 @@ typedefs (Michael) - Add keyword to CREATE RULE (Fabien Coelho) - This allows to be added to rule creation to contrast it with + rules. 
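For the pg_get_serial_sequence() addition above, a short sketch (the table name is hypothetical):

    CREATE TABLE orders (id serial PRIMARY KEY, note text);
    SELECT pg_get_serial_sequence('orders', 'id');   -- typically public.orders_id_seq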
- Add option to LOCK (Tatsuo) This allows the LOCK command to fail if it @@ -4336,7 +4336,7 @@ typedefs (Michael) In 7.3 and 7.4, a long-running B-tree index build could block concurrent - CHECKPOINTs from completing, thereby causing WAL bloat because the + CHECKPOINTs from completing, thereby causing WAL bloat because the WAL log could not be recycled. @@ -4384,11 +4384,11 @@ typedefs (Michael) - New pg_ctl option for Windows (Andrew) - Windows does not have a kill command to send signals to - backends so this capability was added to pg_ctl. + Windows does not have a kill command to send signals to + backends so this capability was added to pg_ctl. @@ -4400,7 +4400,7 @@ typedefs (Michael) - Add option to initdb so the initial password can be set by GUI tools (Magnus) @@ -4415,7 +4415,7 @@ typedefs (Michael) - Add @@ -4443,7 +4443,7 @@ typedefs (Michael) Reject nonrectangular array values as erroneous (Joe) - Formerly, array_in would silently build a + Formerly, array_in would silently build a surprising result. @@ -4457,13 +4457,13 @@ typedefs (Michael) The arithmetic operators associated with the single-byte - "char" data type have been removed. + "char" data type have been removed. Formerly, the parser would select these operators in many situations - where an unable to select an operator error would be more - appropriate, such as null * null. If you actually want - to do arithmetic on a "char" column, you can cast it to + where an unable to select an operator error would be more + appropriate, such as null * null. If you actually want + to do arithmetic on a "char" column, you can cast it to integer explicitly. @@ -4474,7 +4474,7 @@ typedefs (Michael) Junk that was previously allowed in odd places with odd results - now causes an ERROR, for example, non-whitespace + now causes an ERROR, for example, non-whitespace after the closing right brace. @@ -4482,7 +4482,7 @@ typedefs (Michael) Empty-string array element values must now be written as - "", rather than writing nothing (Joe) + "", rather than writing nothing (Joe) Formerly, both ways of writing an empty-string element value were @@ -4512,13 +4512,13 @@ typedefs (Michael) - Accept YYYY-monthname-DD as a date string (Tom) + Accept YYYY-monthname-DD as a date string (Tom) - Make netmask and hostmask functions + Make netmask and hostmask functions return maximum-length mask length (Tom) @@ -4535,27 +4535,27 @@ typedefs (Michael) - to_char/to_date() date conversion + to_char/to_date() date conversion improvements (Kurt Roeckx, Fabien Coelho) - Make length() disregard trailing spaces in - CHAR(n) (Gavin) + Make length() disregard trailing spaces in + CHAR(n) (Gavin) This change was made to improve consistency: trailing spaces are - semantically insignificant in CHAR(n) data, so they - should not be counted by length(). + semantically insignificant in CHAR(n) data, so they + should not be counted by length(). Warn about empty string being passed to - OID/float4/float8 data types (Neil) + OID/float4/float8 data types (Neil) 8.1 will throw an error instead. 
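Two of the new 8.0 functions listed above, shown with the values they return:

    SELECT * FROM generate_series(1, 10, 2);      -- 1, 3, 5, 7, 9
    SELECT width_bucket(5.35, 0.024, 10.06, 5);   -- 3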
@@ -4565,7 +4565,7 @@ typedefs (Michael) Allow leading or trailing whitespace in - int2/int4/int8/float4/float8 + int2/int4/int8/float4/float8 input routines (Neil) @@ -4573,7 +4573,7 @@ typedefs (Michael) - Better support for IEEE Infinity and NaN + Better support for IEEE Infinity and NaN values in float4/float8 (Neil) @@ -4584,27 +4584,27 @@ typedefs (Michael) - Add - Fix to_char for 1 BC - (previously it returned 1 AD) (Bruce) + Fix to_char for 1 BC + (previously it returned 1 AD) (Bruce) - Fix date_part(year) for BC dates (previously it + Fix date_part(year) for BC dates (previously it returned one less than the correct year) (Bruce) - Fix date_part() to return the proper millennium and + Fix date_part() to return the proper millennium and century (Fabien Coelho) @@ -4616,44 +4616,44 @@ typedefs (Michael) - Add ceiling() as an alias for ceil(), - and power() as an alias for pow() for + Add ceiling() as an alias for ceil(), + and power() as an alias for pow() for standards compliance (Neil) - Change ln(), log(), - power(), and sqrt() to emit the correct - SQLSTATE error codes for certain error conditions, as + Change ln(), log(), + power(), and sqrt() to emit the correct + SQLSTATE error codes for certain error conditions, as specified by SQL:2003 (Neil) - Add width_bucket() function as defined by SQL:2003 (Neil) + Add width_bucket() function as defined by SQL:2003 (Neil) - Add generate_series() functions to simplify working + Add generate_series() functions to simplify working with numeric sets (Joe) - Fix upper/lower/initcap() functions to work with + Fix upper/lower/initcap() functions to work with multibyte encodings (Tom) - Add boolean and bitwise integer / aggregates (Fabien Coelho) @@ -4679,17 +4679,17 @@ typedefs (Michael) - Add interval plus datetime operators (Tom) + Add interval plus datetime operators (Tom) - The reverse ordering, datetime plus interval, + The reverse ordering, datetime plus interval, was already supported, but both are required by the SQL standard. - Casting an integer to BIT(N) selects the rightmost N bits + Casting an integer to BIT(N) selects the rightmost N bits of the integer (Tom) @@ -4702,7 +4702,7 @@ typedefs (Michael) - Require CIDR values to have all nonmasked bits be zero + Require CIDR values to have all nonmasked bits be zero (Kevin Brintnall) @@ -4717,7 +4717,7 @@ typedefs (Michael) - In READ COMMITTED serialization mode, volatile functions + In READ COMMITTED serialization mode, volatile functions now see the results of concurrent transactions committed up to the beginning of each statement within the function, rather than up to the beginning of the interactive command that called the function. @@ -4726,20 +4726,20 @@ typedefs (Michael) - Functions declared STABLE or IMMUTABLE always + Functions declared STABLE or IMMUTABLE always use the snapshot of the calling query, and therefore do not see the effects of actions taken after the calling query starts, whether in their own transaction or other transactions. Such a function must be read-only, too, meaning that it cannot use any SQL commands other than - SELECT. There is a considerable performance gain from - declaring a function STABLE or IMMUTABLE - rather than VOLATILE. + SELECT. There is a considerable performance gain from + declaring a function STABLE or IMMUTABLE + rather than VOLATILE. - Nondeferred triggers are now fired immediately after completion of the triggering query, rather than upon finishing the current interactive command. 
This makes a difference when the triggering query occurred within a function: the trigger @@ -4801,8 +4801,8 @@ typedefs (Michael) Improve parsing of PL/pgSQL FOR loops (Tom) - Parsing is now driven by presence of ".." rather than - data type of variable. This makes no difference for correct functions, but should result in more understandable error messages when a mistake is made. @@ -4818,18 +4818,18 @@ typedefs (Michael) In PL/Tcl, SPI commands are now run in subtransactions. If an error occurs, the subtransaction is cleaned up and the error is reported - as an ordinary Tcl error, which can be trapped with catch. + as an ordinary Tcl error, which can be trapped with catch. Formerly, it was not possible to catch such errors. - Accept ELSEIF in PL/pgSQL (Neil) + Accept ELSEIF in PL/pgSQL (Neil) - Previously PL/pgSQL only allowed ELSIF, but many people - are accustomed to spelling this keyword ELSEIF. + Previously PL/pgSQL only allowed ELSIF, but many people + are accustomed to spelling this keyword ELSEIF. @@ -4838,47 +4838,47 @@ typedefs (Michael) - <application>psql</> Changes + <application>psql</application> Changes - Improve psql information display about database + Improve psql information display about database objects (Christopher) - Allow psql to display group membership in - \du and \dg (Markus Bertheau) + Allow psql to display group membership in + \du and \dg (Markus Bertheau) - Prevent psql \dn from showing + Prevent psql \dn from showing temporary schemas (Bruce) - Allow psql to handle tilde user expansion for file + Allow psql to handle tilde user expansion for file names (Zach Irmen) - Allow psql to display fancy prompts, including - color, via readline (Reece Hart, Chet Ramey) + Allow psql to display fancy prompts, including + color, via readline (Reece Hart, Chet Ramey) - Make psql \copy match COPY command syntax + Make psql \copy match COPY command syntax fully (Tom) @@ -4891,55 +4891,55 @@ typedefs (Michael) - Add CLUSTER information to psql - \d display + Add CLUSTER information to psql + \d display (Bruce) - Change psql \copy stdin/stdout to read + Change psql \copy stdin/stdout to read from command input/output (Bruce) - Add - Add global psql configuration file, psqlrc.sample + Add global psql configuration file, psqlrc.sample (Bruce) - This allows a central file where global psql startup commands can + This allows a central file where global psql startup commands can be stored. 
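The ELSEIF spelling noted above can be used wherever ELSIF is accepted in PL/pgSQL; a minimal sketch (the function name is made up for illustration):

    CREATE FUNCTION sign_label(x integer) RETURNS text AS $$
    BEGIN
        IF x > 0 THEN
            RETURN 'positive';
        ELSEIF x < 0 THEN          -- alternative spelling of ELSIF
            RETURN 'negative';
        ELSE
            RETURN 'zero';
        END IF;
    END;
    $$ LANGUAGE plpgsql;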
- Have psql \d+ indicate if the table - has an OID column (Neil) + Have psql \d+ indicate if the table + has an OID column (Neil) - On Windows, use binary mode in psql when reading files so control-Z + On Windows, use binary mode in psql when reading files so control-Z is not seen as end-of-file - Have \dn+ show permissions and description for schemas (Dennis + Have \dn+ show permissions and description for schemas (Dennis Björklund) @@ -4961,13 +4961,13 @@ typedefs (Michael) - <application>pg_dump</> Changes + <application>pg_dump</application> Changes Use dependency information to improve the reliability of - pg_dump (Tom) + pg_dump (Tom) This should solve the longstanding problems with related objects @@ -4977,7 +4977,7 @@ typedefs (Michael) - Have pg_dump output objects in alphabetical order if possible (Tom) + Have pg_dump output objects in alphabetical order if possible (Tom) This should make it easier to identify changes between @@ -4987,12 +4987,12 @@ typedefs (Michael) - Allow pg_restore to ignore some SQL errors (Fabien Coelho) + Allow pg_restore to ignore some SQL errors (Fabien Coelho) - This makes pg_restore's behavior similar to the - results of feeding a pg_dump output script to - psql. In most cases, ignoring errors and plowing + This makes pg_restore's behavior similar to the + results of feeding a pg_dump output script to + psql. In most cases, ignoring errors and plowing ahead is the most useful thing to do. Also added was a pg_restore option to give the old behavior of exiting on an error. @@ -5000,36 +5000,36 @@ typedefs (Michael) - pg_restore display now includes objects' schema names - New begin/end markers in pg_dump text output (Bruce) + New begin/end markers in pg_dump text output (Bruce) Add start/stop times for - pg_dump/pg_dumpall in verbose mode + pg_dump/pg_dumpall in verbose mode (Bruce) - Allow most pg_dump options in - pg_dumpall (Christopher) + Allow most pg_dump options in + pg_dumpall (Christopher) - Have pg_dump use ALTER OWNER rather - than SET SESSION AUTHORIZATION by default + Have pg_dump use ALTER OWNER rather + than SET SESSION AUTHORIZATION by default (Christopher) @@ -5044,42 +5044,42 @@ typedefs (Michael) - Make libpq's handling thread-safe (Bruce) - Add PQmbdsplen() which returns the display length + Add PQmbdsplen() which returns the display length of a character (Tatsuo) - Add thread locking to SSL and - Kerberos connections (Manfred Spraul) + Add thread locking to SSL and + Kerberos connections (Manfred Spraul) - Allow PQoidValue(), PQcmdTuples(), and - PQoidStatus() to work on EXECUTE + Allow PQoidValue(), PQcmdTuples(), and + PQoidStatus() to work on EXECUTE commands (Neil) - Add PQserverVersion() to provide more convenient + Add PQserverVersion() to provide more convenient access to the server version number (Greg Sabino Mullane) - Add PQprepare/PQsendPrepared() functions to support + Add PQprepare/PQsendPrepared() functions to support preparing statements without necessarily specifying the data types of their parameters (Abhijit Menon-Sen) @@ -5087,7 +5087,7 @@ typedefs (Michael) - Many ECPG improvements, including SET DESCRIPTOR (Michael) + Many ECPG improvements, including SET DESCRIPTOR (Michael) @@ -5127,7 +5127,7 @@ typedefs (Michael) Directory paths for installed files (such as the - /share directory) are now computed relative to the + /share directory) are now computed relative to the actual location of the executables, so that an installation tree can be moved to another place without reconfiguring and rebuilding. 
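To illustrate the pg_dump ownership change noted above, the dump output now leans on ALTER ... OWNER TO statements; roughly as follows (table and role names are hypothetical):

    -- style of statement now emitted by default
    ALTER TABLE public.accounts OWNER TO alice;
    -- instead of switching identities around each object:
    SET SESSION AUTHORIZATION alice;
    -- (object creation would go here)
    RESET SESSION AUTHORIZATION;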
@@ -5136,31 +5136,31 @@ typedefs (Michael) - Use to choose installation location of documentation; also + allow (Peter) - Add to prevent installation of documentation (Peter) - Upgrade to DocBook V4.2 SGML (Peter) + Upgrade to DocBook V4.2 SGML (Peter) - New PostgreSQL CVS tag (Marc) + New PostgreSQL CVS tag (Marc) This was done to make it easier for organizations to manage their own copies of the PostgreSQL - CVS repository. File version stamps from the master + CVS repository. File version stamps from the master repository will not get munged by checking into or out of a copied repository. @@ -5186,7 +5186,7 @@ typedefs (Michael) - Add inlined test-and-set code on PA-RISC for gcc + Add inlined test-and-set code on PA-RISC for gcc (ViSolve, Tom) @@ -5200,7 +5200,7 @@ typedefs (Michael) Clean up spinlock assembly code to avoid warnings from newer - gcc releases (Tom) + gcc releases (Tom) @@ -5230,7 +5230,7 @@ typedefs (Michael) - New fsync() test program (Bruce) + New fsync() test program (Bruce) @@ -5268,7 +5268,7 @@ typedefs (Michael) - Use Olson's public domain timezone library (Magnus) + Use Olson's public domain timezone library (Magnus) @@ -5285,7 +5285,7 @@ typedefs (Michael) - psql now uses a flex-generated + psql now uses a flex-generated lexical analyzer to process command strings @@ -5322,7 +5322,7 @@ typedefs (Michael) - New pgevent for Windows logging + New pgevent for Windows logging @@ -5342,19 +5342,19 @@ typedefs (Michael) - Overhaul of contrib/dblink (Joe) + Overhaul of contrib/dblink (Joe) - contrib/dbmirror improvements (Steven Singer) + contrib/dbmirror improvements (Steven Singer) - New contrib/xml2 (John Gray, Torchbox) + New contrib/xml2 (John Gray, Torchbox) @@ -5366,51 +5366,51 @@ typedefs (Michael) - New version of contrib/btree_gist (Teodor) + New version of contrib/btree_gist (Teodor) - New contrib/trgm, trigram matching for + New contrib/trgm, trigram matching for PostgreSQL (Teodor) - Many contrib/tsearch2 improvements (Teodor) + Many contrib/tsearch2 improvements (Teodor) - Add double metaphone to contrib/fuzzystrmatch (Andrew) + Add double metaphone to contrib/fuzzystrmatch (Andrew) - Allow contrib/pg_autovacuum to run as a Windows service (Dave Page) + Allow contrib/pg_autovacuum to run as a Windows service (Dave Page) - Add functions to contrib/dbsize (Andreas Pflug) + Add functions to contrib/dbsize (Andreas Pflug) - Removed contrib/pg_logger: obsoleted by integrated logging + Removed contrib/pg_logger: obsoleted by integrated logging subprocess - Removed contrib/rserv: obsoleted by various separate projects + Removed contrib/rserv: obsoleted by various separate projects diff --git a/doc/src/sgml/release-8.1.sgml b/doc/src/sgml/release-8.1.sgml index d48bccd17d..6827afd7e0 100644 --- a/doc/src/sgml/release-8.1.sgml +++ b/doc/src/sgml/release-8.1.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.1.X series. Users are encouraged to update to a newer release branch soon. @@ -40,17 +40,17 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. 
This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. @@ -63,19 +63,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -91,7 +91,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -101,7 +101,7 @@ - Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -113,14 +113,14 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -132,7 +132,7 @@ - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -143,11 +143,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. @@ -166,29 +166,29 @@ - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -196,20 +196,20 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -235,7 +235,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.1.X release series in November 2010. Users are encouraged to update to a newer release branch soon. 
@@ -266,7 +266,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -295,7 +295,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -337,7 +337,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -363,7 +363,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -371,28 +371,28 @@ Fix possible data corruption in ALTER TABLE ... SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -400,13 +400,13 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) @@ -420,7 +420,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -470,19 +470,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. 
(CVE-2010-1169) @@ -491,19 +491,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -516,10 +516,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. @@ -527,7 +527,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -539,7 +539,7 @@ - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -552,14 +552,14 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -567,7 +567,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -580,7 +580,7 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. 
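The RESET ALL change above affects only which settings get cleared, not the syntax; for reference (role and database names are hypothetical):

    -- issued by a non-superuser, these now clear only the settings that user may change
    ALTER USER alice RESET ALL;
    ALTER DATABASE salesdb RESET ALL;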
@@ -624,7 +624,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -653,8 +653,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -680,7 +680,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -700,17 +700,17 @@ - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -718,14 +718,14 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) - Fix psql's numericlocale option to not + Fix psql's numericlocale option to not format strings it shouldn't in latex and troff output formats (Heikki) @@ -739,7 +739,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -751,28 +751,28 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. @@ -844,14 +844,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. 
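A sketch of the bit-string substring() behavior described above; the expected result is stated per that description rather than verified against any particular server version:

    -- a negative length now means "everything from the start position onward"
    SELECT substring(B'1101011' from 3 for -1);   -- 01011, per the note above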
@@ -870,7 +870,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -896,7 +896,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -906,14 +906,14 @@ Fix processing of ownership dependencies during CREATE OR - REPLACE FUNCTION (Tom) + REPLACE FUNCTION (Tom) Ensure that Perl arrays are properly converted to - PostgreSQL arrays when returned by a set-returning + PostgreSQL arrays when returned by a set-returning PL/Perl function (Andrew Dunstan, Abhijit Menon-Sen) @@ -930,20 +930,20 @@ - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -951,7 +951,7 @@ - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -982,8 +982,8 @@ A dump/restore is not required for those running 8.1.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 8.1.18. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 8.1.18. Also, if you are upgrading from a version earlier than 8.1.15, see . @@ -997,14 +997,14 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) @@ -1018,32 +1018,32 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). 
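A minimal example of the to_char() ordinal-suffix fix above; the expected output follows the note's description:

    SELECT to_char(timestamp '2001-02-16 09:00:00', 'HH12TH');   -- 09TH (formerly 09th)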
- Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -1060,7 +1060,7 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. Japan (Itagaki Takahiro) @@ -1068,7 +1068,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -1087,22 +1087,22 @@ - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -1115,7 +1115,7 @@ - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Jordan, Pakistan, Argentina/San_Luis, Cuba, Jordan (historical correction only), Mauritius, Morocco, Palestine, Syria, Tunisia. @@ -1166,7 +1166,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -1177,7 +1177,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -1190,20 +1190,20 @@ - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) - Fix decompilation of CASE WHEN with an implicit coercion + Fix decompilation of CASE WHEN with an implicit coercion (Tom) This mistake could lead to Assert failures in an Assert-enabled build, - or an unexpected CASE WHEN clause error message in other + or an unexpected CASE WHEN clause error message in other cases, when trying to examine or dump a view. @@ -1214,15 +1214,15 @@ - If CLUSTER or a rewriting variant of ALTER TABLE + If CLUSTER or a rewriting variant of ALTER TABLE were executed by someone other than the table owner, the - pg_type entry for the table's TOAST table would end up + pg_type entry for the table's TOAST table would end up marked as owned by that someone. This caused no immediate problems, since the permissions on the TOAST rowtype aren't examined by any ordinary database operation. However, it could lead to unexpected failures if one later tried to drop the role that issued the command - (in 8.1 or 8.2), or owner of data type appears to be invalid - warnings from pg_dump after having done so (in 8.3). + (in 8.1 or 8.2), or owner of data type appears to be invalid + warnings from pg_dump after having done so (in 8.3). 
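The interval fix above concerns millisecond inputs beyond roughly 2.1 million; for example:

    -- previously overflowed when integer datetimes were in use; now yields 50 minutes
    SELECT INTERVAL '3000000 ms';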
@@ -1240,7 +1240,7 @@ - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -1294,13 +1294,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -1315,7 +1315,7 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) @@ -1337,30 +1337,30 @@ - Fix uninitialized variables in contrib/tsearch2's - get_covers() function (Teodor) + Fix uninitialized variables in contrib/tsearch2's + get_covers() function (Teodor) - Fix configure script to properly report failure when + Fix configure script to properly report failure when unable to obtain linkage information for PL/Perl (Andrew) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) - Update time zone data files to tzdata release 2009a (for + Update time zone data files to tzdata release 2009a (for Kathmandu and historical DST corrections in Switzerland, Cuba) @@ -1391,7 +1391,7 @@ A dump/restore is not required for those running 8.1.X. However, if you are upgrading from a version earlier than 8.1.2, see . Also, if you were running a previous - 8.1.X release, it is recommended to REINDEX all GiST + 8.1.X release, it is recommended to REINDEX all GiST indexes after the upgrade. @@ -1405,13 +1405,13 @@ Fix GiST index corruption due to marking the wrong index entry - dead after a deletion (Teodor) + dead after a deletion (Teodor) This would result in index searches failing to find rows they should have found. Corrupted indexes can be fixed with - REINDEX. + REINDEX. @@ -1423,7 +1423,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -1438,13 +1438,13 @@ - Fix mis-expansion of rule queries when a sub-SELECT appears - in a function call in FROM, a multi-row VALUES - list, or a RETURNING list (Tom) + Fix mis-expansion of rule queries when a sub-SELECT appears + in a function call in FROM, a multi-row VALUES + list, or a RETURNING list (Tom) - The usual symptom of this problem is an unrecognized node type + The usual symptom of this problem is an unrecognized node type error. 
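As the GiST note above says, an affected index can simply be rebuilt (the index name here is hypothetical):

    REINDEX INDEX my_gist_idx;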
@@ -1458,9 +1458,9 @@ - Prevent possible collision of relfilenode numbers + Prevent possible collision of relfilenode numbers when moving a table to another tablespace with ALTER SET - TABLESPACE (Heikki) + TABLESPACE (Heikki) @@ -1479,14 +1479,14 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -1500,19 +1500,19 @@ - Fix ecpg's parsing of CREATE ROLE (Michael) + Fix ecpg's parsing of CREATE ROLE (Michael) - Fix recent breakage of pg_ctl restart (Tom) + Fix recent breakage of pg_ctl restart (Tom) - Update time zone data files to tzdata release 2008i (for + Update time zone data files to tzdata release 2008i (for DST law changes in Argentina, Brazil, Mauritius, Syria) @@ -1560,7 +1560,7 @@ This responds to reports that the counters could overflow in sufficiently long transactions, leading to unexpected lock is - already held errors. + already held errors. @@ -1573,12 +1573,12 @@ Add checks in executor startup to ensure that the tuples produced by an - INSERT or UPDATE will match the target table's + INSERT or UPDATE will match the target table's current rowtype (Tom) - ALTER COLUMN TYPE, followed by re-use of a previously + ALTER COLUMN TYPE, followed by re-use of a previously cached plan, could produce this type of situation. The check protects against data corruption and/or crashes that could ensue. @@ -1586,18 +1586,18 @@ - Fix AT TIME ZONE to first try to interpret its timezone + Fix AT TIME ZONE to first try to interpret its timezone argument as a timezone abbreviation, and only try it as a full timezone name if that fails, rather than the other way around as formerly (Tom) The timestamp input functions have always resolved ambiguous zone names - in this order. Making AT TIME ZONE do so as well improves + in this order. Making AT TIME ZONE do so as well improves consistency, and fixes a compatibility bug introduced in 8.1: in ambiguous cases we now behave the same as 8.0 and before did, - since in the older versions AT TIME ZONE accepted - only abbreviations. + since in the older versions AT TIME ZONE accepted + only abbreviations. @@ -1617,7 +1617,7 @@ Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) @@ -1635,21 +1635,21 @@ - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. 
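A small illustration of the AT TIME ZONE resolution order discussed above: an abbreviation denotes a fixed offset, while a full zone name carries DST rules, so which interpretation wins matters for ambiguous inputs:

    -- abbreviation: fixed UTC-5, no DST
    SELECT timestamp '2008-07-01 12:00' AT TIME ZONE 'EST';
    -- full zone name: follows the zone's DST rules
    SELECT timestamp '2008-07-01 12:00' AT TIME ZONE 'America/New_York';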
- Fix PL/pgSQL to not fail when a FOR loop's target variable + Fix PL/pgSQL to not fail when a FOR loop's target variable is a record containing composite-type fields (Tom) @@ -1673,21 +1673,21 @@ - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) - Fix pg_ctl to properly preserve postmaster - command-line arguments across a restart (Bruce) + Fix pg_ctl to properly preserve postmaster + command-line arguments across a restart (Bruce) - Update time zone data files to tzdata release 2008f (for + Update time zone data files to tzdata release 2008f (for DST law changes in Argentina, Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and Paraguay) @@ -1730,18 +1730,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -1749,13 +1749,13 @@ - Make ALTER AGGREGATE ... OWNER TO update - pg_shdepend (Tom) + Make ALTER AGGREGATE ... OWNER TO update + pg_shdepend (Tom) This oversight could lead to problems if the aggregate was later - involved in a DROP OWNED or REASSIGN OWNED + involved in a DROP OWNED or REASSIGN OWNED operation. @@ -1797,7 +1797,7 @@ - Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new + Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new column is correctly checked to see if it's been initialized to all non-nulls (Brendan Jurd) @@ -1809,8 +1809,8 @@ - Fix possible CREATE TABLE failure when inheriting the - same constraint from multiple parent relations that + Fix possible CREATE TABLE failure when inheriting the + same constraint from multiple parent relations that inherited that constraint from a common ancestor (Tom) @@ -1818,7 +1818,7 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) @@ -1833,7 +1833,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -1841,24 +1841,24 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). - This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). 
+ This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). - Update time zone data files to tzdata release 2008c (for + Update time zone data files to tzdata release 2008c (for DST law changes in Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis, and Chile) @@ -1866,34 +1866,34 @@ - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix core dump in contrib/xml2's - xpath_table() function when the input query returns a + Fix core dump in contrib/xml2's + xpath_table() function when the input query returns a NULL value (Tom) - Fix contrib/xml2's makefile to not override - CFLAGS (Tom) + Fix contrib/xml2's makefile to not override + CFLAGS (Tom) - Fix DatumGetBool macro to not fail with gcc + Fix DatumGetBool macro to not fail with gcc 4.3 (Tom) - This problem affects old style (V0) C functions that + This problem affects old style (V0) C functions that return boolean. The fix is already in 8.3, but the need to back-patch it was not realized at the time. @@ -1901,21 +1901,21 @@ - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -1924,14 +1924,14 @@ - Disallow LISTEN and UNLISTEN within a + Disallow LISTEN and UNLISTEN within a prepared transaction (Tom) This was formerly allowed but trying to do it had various unpleasant consequences, notably that the originating backend could not exit - as long as an UNLISTEN remained uncommitted. + as long as an UNLISTEN remained uncommitted. @@ -1954,19 +1954,19 @@ - Fix unrecognized node type error in some variants of - ALTER OWNER (Tom) + Fix unrecognized node type error in some variants of + ALTER OWNER (Tom) - Fix pg_ctl to correctly extract the postmaster's port + Fix pg_ctl to correctly extract the postmaster's port number from command-line options (Itagaki Takahiro, Tom) - Previously, pg_ctl start -w could try to contact the + Previously, pg_ctl start -w could try to contact the postmaster on the wrong port, leading to bogus reports of startup failure. @@ -1974,20 +1974,20 @@ - Use - This is known to be necessary when building PostgreSQL - with gcc 4.3 or later. + This is known to be necessary when building PostgreSQL + with gcc 4.3 or later. - Fix display of constant expressions in ORDER BY - and GROUP BY (Tom) + Fix display of constant expressions in ORDER BY + and GROUP BY (Tom) @@ -1999,7 +1999,7 @@ - Fix libpq to handle NOTICE messages correctly + Fix libpq to handle NOTICE messages correctly during COPY OUT (Tom) @@ -2031,8 +2031,8 @@ - This is the last 8.1.X release for which the PostgreSQL - community will produce binary packages for Windows. 
+ This is the last 8.1.X release for which the PostgreSQL + community will produce binary packages for Windows. Windows users are encouraged to move to 8.2.X or later, since there are Windows-specific fixes in 8.2.X that are impractical to back-port. 8.1.X will continue to @@ -2058,7 +2058,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -2069,18 +2069,18 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) @@ -2100,20 +2100,20 @@ - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 8.1.10 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) - Update time zone data files to tzdata release 2007k + Update time zone data files to tzdata release 2007k (in particular, recent Argentina changes) (Tom) @@ -2128,14 +2128,14 @@ Fix planner failure in some cases of WHERE false AND var IN - (SELECT ...) (Tom) + (SELECT ...) (Tom) Preserve the tablespace of indexes that are - rebuilt by ALTER TABLE ... ALTER COLUMN TYPE (Tom) + rebuilt by ALTER TABLE ... ALTER COLUMN TYPE (Tom) @@ -2154,21 +2154,21 @@ - Make VACUUM not use all of maintenance_work_mem + Make VACUUM not use all of maintenance_work_mem when the table is too small for it to be useful (Alvaro) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) - Fix overflow in extract(epoch from interval) for intervals + Fix overflow in extract(epoch from interval) for intervals exceeding 68 years (Tom) @@ -2182,13 +2182,13 @@ - Fix PL/Perl to cope when platform's Perl defines type bool - as int rather than char (Tom) + Fix PL/Perl to cope when platform's Perl defines type bool + as int rather than char (Tom) While this could theoretically happen anywhere, no standard build of - Perl did things this way ... until macOS 10.5. + Perl did things this way ... until macOS 10.5. 
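The extract() fix above concerns intervals whose length in seconds exceeds the 32-bit range (about 68 years); for instance:

    -- no longer overflows; the result is a little over three billion seconds
    SELECT extract(epoch from interval '100 years');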
@@ -2200,64 +2200,64 @@ - Fix pg_dump to correctly handle inheritance child tables + Fix pg_dump to correctly handle inheritance child tables that have default expressions different from their parent's (Tom) - Fix libpq crash when PGPASSFILE refers + Fix libpq crash when PGPASSFILE refers to a file that is not a plain file (Martin Pitt) - ecpg parser fixes (Michael) + ecpg parser fixes (Michael) - Make contrib/pgcrypto defend against - OpenSSL libraries that fail on keys longer than 128 + Make contrib/pgcrypto defend against + OpenSSL libraries that fail on keys longer than 128 bits; which is the case at least on some Solaris versions (Marko Kreen) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Fix tsvector and tsquery output routines to + Fix tsvector and tsquery output routines to escape backslashes correctly (Teodor, Bruce) - Fix crash of to_tsvector() on huge input strings (Teodor) + Fix crash of to_tsvector() on huge input strings (Teodor) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. @@ -2300,20 +2300,20 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) - Allow the interval data type to accept input consisting only of + Allow the interval data type to accept input consisting only of milliseconds or microseconds (Neil) @@ -2326,7 +2326,7 @@ - Fix excessive logging of SSL error messages (Tom) + Fix excessive logging of SSL error messages (Tom) @@ -2339,7 +2339,7 @@ - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) @@ -2352,7 +2352,7 @@ - Prevent REINDEX and CLUSTER from failing + Prevent REINDEX and CLUSTER from failing due to attempting to process temporary tables of other sessions (Alvaro) @@ -2371,14 +2371,14 @@ - Suppress timezone name (%Z) in log timestamps on Windows + Suppress timezone name (%Z) in log timestamps on Windows because of possible encoding mismatches (Tom) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) @@ -2422,35 +2422,35 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. 
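The CREATE DOMAIN fix above is easiest to picture with an explicit null default (the domain name is made up):

    -- an explicit DEFAULT NULL clause is now honored correctly
    CREATE DOMAIN optional_note AS text DEFAULT NULL;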
+ See CREATE FUNCTION for more information. - /contrib/tsearch2 crash fixes (Teodor) + /contrib/tsearch2 crash fixes (Teodor) - Require COMMIT PREPARED to be executed in the same + Require COMMIT PREPARED to be executed in the same database as the transaction was prepared in (Heikki) - Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -2576,7 +2576,7 @@ - Improve VACUUM performance for databases with many tables (Tom) + Improve VACUUM performance for databases with many tables (Tom) @@ -2593,7 +2593,7 @@ - Fix for rare Assert() crash triggered by UNION (Tom) + Fix for rare Assert() crash triggered by UNION (Tom) @@ -2606,7 +2606,7 @@ - Fix bogus permission denied failures occurring on Windows + Fix bogus permission denied failures occurring on Windows due to attempts to fsync already-deleted files (Magnus, Tom) @@ -2655,7 +2655,7 @@ - Improve handling of getaddrinfo() on AIX (Tom) + Improve handling of getaddrinfo() on AIX (Tom) @@ -2666,21 +2666,21 @@ - Fix pg_restore to handle a tar-format backup + Fix pg_restore to handle a tar-format backup that contains large objects (blobs) with comments (Tom) - Fix failed to re-find parent key errors in - VACUUM (Tom) + Fix failed to re-find parent key errors in + VACUUM (Tom) - Clean out pg_internal.init cache files during server + Clean out pg_internal.init cache files during server restart (Simon) @@ -2693,7 +2693,7 @@ Fix race condition for truncation of a large relation across a - gigabyte boundary by VACUUM (Tom) + gigabyte boundary by VACUUM (Tom) @@ -2717,7 +2717,7 @@ - Fix error when constructing an ARRAY[] made up of multiple + Fix error when constructing an ARRAY[] made up of multiple empty elements (Tom) @@ -2736,13 +2736,13 @@ - to_number() and to_char(numeric) - are now STABLE, not IMMUTABLE, for - new initdb installs (Tom) + to_number() and to_char(numeric) + are now STABLE, not IMMUTABLE, for + new initdb installs (Tom) - This is because lc_numeric can potentially + This is because lc_numeric can potentially change the output of these functions. @@ -2753,7 +2753,7 @@ - This improves psql \d performance also. + This improves psql \d performance also. @@ -2802,7 +2802,7 @@ Changes -Disallow aggregate functions in UPDATE +Disallow aggregate functions in UPDATE commands, except within sub-SELECTs (Tom) The behavior of such an aggregate was unpredictable, and in 8.1.X could cause a crash, so it has been disabled. The SQL standard does not allow @@ -2810,25 +2810,25 @@ this either. Fix core dump when an untyped literal is taken as ANYARRAY Fix core dump in duration logging for extended query protocol -when a COMMIT or ROLLBACK is +when a COMMIT or ROLLBACK is executed Fix mishandling of AFTER triggers when query contains a SQL function returning multiple rows (Tom) -Fix ALTER TABLE ... TYPE to recheck -NOT NULL for USING clause (Tom) -Fix string_to_array() to handle overlapping +Fix ALTER TABLE ... TYPE to recheck +NOT NULL for USING clause (Tom) +Fix string_to_array() to handle overlapping matches for the separator string -For example, string_to_array('123xx456xxx789', 'xx'). +For example, string_to_array('123xx456xxx789', 'xx'). 
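The search_path change above means the temporary-table schema can be pinned explicitly at the end of the path, which is the piece security-definer code needs (schema names are hypothetical):

    SET search_path = app_schema, public, pg_temp;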
-Fix to_timestamp() for -AM/PM formats (Bruce) +Fix to_timestamp() for +AM/PM formats (Bruce) Fix autovacuum's calculation that decides whether - ANALYZE is needed (Alvaro) + ANALYZE is needed (Alvaro) Fix corner cases in pattern matching for - psql's \d commands + psql's \d commands Fix index-corrupting bugs in /contrib/ltree (Teodor) -Numerous robustness fixes in ecpg (Joachim +Numerous robustness fixes in ecpg (Joachim Wieland) Fix backslash escaping in /contrib/dbmirror Minor fixes in /contrib/dblink and /contrib/tsearch2 @@ -2836,14 +2836,14 @@ Wieland) Efficiency improvements in hash tables and bitmap index scans (Tom) Fix instability of statistics collection on Windows (Tom, Andrew) -Fix statement_timeout to use the proper +Fix statement_timeout to use the proper units on Win32 (Bruce) In previous Win32 8.1.X versions, the delay was off by a factor of 100. -Fixes for MSVC and Borland C++ +Fixes for MSVC and Borland C++ compilers (Hiroshi Saito) -Fixes for AIX and -Intel compilers (Tom) +Fixes for AIX and +Intel compilers (Tom) Fix rare bug in continuous archiving (Tom) @@ -2881,9 +2881,9 @@ compilers (Hiroshi Saito) into SQL commands, you should examine them as soon as possible to ensure that they are using recommended escaping techniques. In most cases, applications should be using subroutines provided by - libraries or drivers (such as libpq's - PQescapeStringConn()) to perform string escaping, - rather than relying on ad hoc code to do it. + libraries or drivers (such as libpq's + PQescapeStringConn()) to perform string escaping, + rather than relying on ad hoc code to do it. @@ -2893,61 +2893,61 @@ compilers (Hiroshi Saito) Change the server to reject invalidly-encoded multibyte characters in all cases (Tatsuo, Tom) -While PostgreSQL has been moving in this direction for +While PostgreSQL has been moving in this direction for some time, the checks are now applied uniformly to all encodings and all textual input, and are now always errors not merely warnings. This change defends against SQL-injection attacks of the type described in CVE-2006-2313. -Reject unsafe uses of \' in string literals +Reject unsafe uses of \' in string literals As a server-side defense against SQL-injection attacks of the type -described in CVE-2006-2314, the server now only accepts '' and not -\' as a representation of ASCII single quote in SQL string -literals. By default, \' is rejected only when -client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, +described in CVE-2006-2314, the server now only accepts '' and not +\' as a representation of ASCII single quote in SQL string +literals. By default, \' is rejected only when +client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, GB18030, or UHC), which is the scenario in which SQL injection is possible. -A new configuration parameter backslash_quote is available to +A new configuration parameter backslash_quote is available to adjust this behavior when needed. Note that full security against CVE-2006-2314 might require client-side changes; the purpose of -backslash_quote is in part to make it obvious that insecure +backslash_quote is in part to make it obvious that insecure clients are insecure. 
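For the string-literal hardening above, the portable spelling of an embedded quote is to double it; backslash_quote controls the legacy backslash form (safe_encoding is the documented default):

    -- standards-conforming escaping of a single quote
    SELECT 'O''Reilly';
    -- default setting: \' is accepted only in encodings where it cannot be abused
    SET backslash_quote = safe_encoding;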
-Modify libpq's string-escaping routines to be +Modify libpq's string-escaping routines to be aware of encoding considerations and -standard_conforming_strings -This fixes libpq-using applications for the security +standard_conforming_strings +This fixes libpq-using applications for the security issues described in CVE-2006-2313 and CVE-2006-2314, and also future-proofs them against the planned changeover to SQL-standard string literal syntax. -Applications that use multiple PostgreSQL connections -concurrently should migrate to PQescapeStringConn() and -PQescapeByteaConn() to ensure that escaping is done correctly +Applications that use multiple PostgreSQL connections +concurrently should migrate to PQescapeStringConn() and +PQescapeByteaConn() to ensure that escaping is done correctly for the settings in use in each database connection. Applications that -do string escaping by hand should be modified to rely on library +do string escaping by hand should be modified to rely on library routines instead. Fix weak key selection in pgcrypto (Marko Kreen) Errors in fortuna PRNG reseeding logic could cause a predictable -session key to be selected by pgp_sym_encrypt() in some cases. +session key to be selected by pgp_sym_encrypt() in some cases. This only affects non-OpenSSL-using builds. Fix some incorrect encoding conversion functions -win1251_to_iso, win866_to_iso, -euc_tw_to_big5, euc_tw_to_mic, -mic_to_euc_tw were all broken to varying +win1251_to_iso, win866_to_iso, +euc_tw_to_big5, euc_tw_to_mic, +mic_to_euc_tw were all broken to varying extents. -Clean up stray remaining uses of \' in strings +Clean up stray remaining uses of \' in strings (Bruce, Jan) -Make autovacuum visible in pg_stat_activity +Make autovacuum visible in pg_stat_activity (Alvaro) -Disable full_page_writes (Tom) -In certain cases, having full_page_writes off would cause +Disable full_page_writes (Tom) +In certain cases, having full_page_writes off would cause crash recovery to fail. A proper fix will appear in 8.2; for now it's just disabled. @@ -2965,10 +2965,10 @@ same transaction Fix WAL replay for case where a B-Tree index has been truncated -Fix SIMILAR TO for patterns involving -| (Tom) +Fix SIMILAR TO for patterns involving +| (Tom) -Fix SELECT INTO and CREATE TABLE AS to +Fix SELECT INTO and CREATE TABLE AS to create tables in the default tablespace, not the base directory (Kris Jurka) @@ -2986,18 +2986,18 @@ Fuhr) Fix problem with password prompting on some Win32 systems (Robert Kinberg) -Improve pg_dump's handling of default values +Improve pg_dump's handling of default values for domains -Fix pg_dumpall to handle identically-named +Fix pg_dumpall to handle identically-named users and groups reasonably (only possible when dumping from a pre-8.1 server) (Tom) The user and group will be merged into a single role with -LOGIN permission. Formerly the merged role wouldn't have -LOGIN permission, making it unusable as a user. +LOGIN permission. Formerly the merged role wouldn't have +LOGIN permission, making it unusable as a user. -Fix pg_restore -n to work as +Fix pg_restore -n to work as documented (Tom) @@ -3035,14 +3035,14 @@ documented (Tom) Fix bug that allowed any logged-in user to SET -ROLE to any other database user id (CVE-2006-0553) +ROLE to any other database user id (CVE-2006-0553) Due to inadequate validity checking, a user could exploit the special -case that SET ROLE normally uses to restore the previous role +case that SET ROLE normally uses to restore the previous role setting after an error. 
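A minimal check of the SIMILAR TO alternation fix mentioned above:

    SELECT 'abc' SIMILAR TO 'abc|def';   -- true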
This allowed ordinary users to acquire superuser status, for example. The escalation-of-privilege risk exists only in 8.1.0-8.1.2. However, in all releases back to 7.3 there is a related bug in SET -SESSION AUTHORIZATION that allows unprivileged users to crash the server, +SESSION AUTHORIZATION that allows unprivileged users to crash the server, if it has been compiled with Asserts enabled (which is not the default). Thanks to Akio Ishida for reporting this problem. @@ -3055,55 +3055,55 @@ created in 8.0.4, 7.4.9, and 7.3.11 releases. Fix race condition that could lead to file already -exists errors during pg_clog and pg_subtrans file creation +exists errors during pg_clog and pg_subtrans file creation (Tom) Fix cases that could lead to crashes if a cache-invalidation message arrives at just the wrong time (Tom) -Properly check DOMAIN constraints for -UNKNOWN parameters in prepared statements +Properly check DOMAIN constraints for +UNKNOWN parameters in prepared statements (Neil) -Ensure ALTER COLUMN TYPE will process -FOREIGN KEY, UNIQUE, and PRIMARY KEY +Ensure ALTER COLUMN TYPE will process +FOREIGN KEY, UNIQUE, and PRIMARY KEY constraints in the proper order (Nakano Yoshihisa) Fixes to allow restoring dumps that have cross-schema references to custom operators or operator classes (Tom) -Allow pg_restore to continue properly after a -COPY failure; formerly it tried to treat the remaining -COPY data as SQL commands (Stephen Frost) +Allow pg_restore to continue properly after a +COPY failure; formerly it tried to treat the remaining +COPY data as SQL commands (Stephen Frost) -Fix pg_ctl unregister crash +Fix pg_ctl unregister crash when the data directory is not specified (Magnus) -Fix libpq PQprint HTML tags +Fix libpq PQprint HTML tags (Christoph Zwerschke) -Fix ecpg crash on AMD64 and PPC +Fix ecpg crash on AMD64 and PPC (Neil) -Allow SETOF and %TYPE to be used +Allow SETOF and %TYPE to be used together in function result type declarations Recover properly if error occurs during argument passing -in PL/Python (Neil) +in PL/Python (Neil) -Fix memory leak in plperl_return_next +Fix memory leak in plperl_return_next (Neil) -Fix PL/Perl's handling of locales on +Fix PL/Perl's handling of locales on Win32 to match the backend (Andrew) Various optimizer fixes (Tom) -Fix crash when log_min_messages is set to -DEBUG3 or above in postgresql.conf on Win32 +Fix crash when log_min_messages is set to +DEBUG3 or above in postgresql.conf on Win32 (Bruce) -Fix pgxs -L library path +Fix pgxs -L library path specification for Win32, Cygwin, macOS, AIX (Bruce) Check that SID is enabled while checking for Win32 admin @@ -3112,13 +3112,13 @@ privileges (Magnus) Properly reject out-of-range date inputs (Kris Jurka) -Portability fix for testing presence of finite -and isinf during configure (Tom) +Portability fix for testing presence of finite +and isinf during configure (Tom) -Improve speed of COPY IN via libpq, by +Improve speed of COPY IN via libpq, by avoiding a kernel call per data line (Alon Goldshuv) -Improve speed of /contrib/tsearch2 index +Improve speed of /contrib/tsearch2 index creation (Tom) @@ -3145,9 +3145,9 @@ creation (Tom) A dump/restore is not required for those running 8.1.X. - However, you might need to REINDEX indexes on textual + However, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or - plperl issues described below. + plperl issues described below. 
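If an installation was affected by the locale or PL/Perl problems above, the damaged indexes can be rebuilt explicitly; the object names here are placeholders:

REINDEX INDEX my_text_col_idx;   -- rebuild one index on a textual column
REINDEX TABLE my_table;          -- or rebuild every index on the table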
@@ -3160,7 +3160,7 @@ creation (Tom) than exit if there is no more room in ShmemBackendArray (Magnus) The previous behavior could lead to a denial-of-service situation if too many connection requests arrive close together. This applies -only to the Windows port. +only to the Windows port. Fix bug introduced in 8.0 that could allow ReadBuffer to return an already-used page as new, potentially causing loss of @@ -3171,16 +3171,16 @@ outside a transaction or in a failed transaction (Tom) Fix character string comparison for locales that consider different character combinations as equal, such as Hungarian (Tom) -This might require REINDEX to fix existing indexes on +This might require REINDEX to fix existing indexes on textual columns. Set locale environment variables during postmaster startup -to ensure that plperl won't change the locale later -This fixes a problem that occurred if the postmaster was +to ensure that plperl won't change the locale later +This fixes a problem that occurred if the postmaster was started with environment variables specifying a different locale than what -initdb had been told. Under these conditions, any use of -plperl was likely to lead to corrupt indexes. You might need -REINDEX to fix existing indexes on +initdb had been told. Under these conditions, any use of +plperl was likely to lead to corrupt indexes. You might need +REINDEX to fix existing indexes on textual columns if this has happened to you. Allow more flexible relocation of installation @@ -3189,7 +3189,7 @@ directories (Tom) directory paths were the same except for the last component. Prevent crashes caused by the use of -ISO-8859-5 and ISO-8859-9 encodings +ISO-8859-5 and ISO-8859-9 encodings (Tatsuo) Fix longstanding bug in strpos() and regular expression @@ -3197,22 +3197,22 @@ handling in certain rarely used Asian multi-byte character sets (Tatsuo) Fix bug where COPY CSV mode considered any -\. to terminate the copy data The new code -requires \. to appear alone on a line, as per +\. to terminate the copy data The new code +requires \. to appear alone on a line, as per documentation. Make COPY CSV mode quote a literal data value of -\. to ensure it cannot be interpreted as the +\. to ensure it cannot be interpreted as the end-of-data marker (Bruce) -Various fixes for functions returning RECORDs +Various fixes for functions returning RECORDs (Tom) -Fix processing of postgresql.conf so a +Fix processing of postgresql.conf so a final line with no newline is processed properly (Tom) -Fix bug in /contrib/pgcrypto gen_salt, +Fix bug in /contrib/pgcrypto gen_salt, which caused it not to use all available salt space for MD5 and XDES algorithms (Marko Kreen, Solar Designer) Salts for Blowfish and standard DES are unaffected. @@ -3220,7 +3220,7 @@ XDES algorithms (Marko Kreen, Solar Designer) Fix autovacuum crash when processing expression indexes -Fix /contrib/dblink to throw an error, +Fix /contrib/dblink to throw an error, rather than crashing, when the number of columns specified is different from what's actually returned by the query (Joe) @@ -3262,7 +3262,7 @@ what's actually returned by the query (Joe) involving sub-selects flattened by the optimizer (Tom) Fix update failures in scenarios involving CHECK constraints, -toasted columns, and indexes (Tom) +toasted columns, and indexes (Tom) Fix bgwriter problems after recovering from errors (Tom) @@ -3276,7 +3276,7 @@ later VACUUM commands. 
Prevent failure if client sends Bind protocol message when current transaction is already aborted -/contrib/tsearch2 and /contrib/ltree +/contrib/tsearch2 and /contrib/ltree fixes (Teodor) Fix problems with translated error messages in @@ -3285,17 +3285,17 @@ unexpected truncation of output strings and wrong display of the smallest possible bigint value (Andrew, Tom) These problems only appeared on platforms that were using our -port/snprintf.c code, which includes BSD variants if ---enable-nls was given, and perhaps others. In addition, +port/snprintf.c code, which includes BSD variants if +--enable-nls was given, and perhaps others. In addition, a different form of the translated-error-message problem could appear -on Windows depending on which version of libintl was used. +on Windows depending on which version of libintl was used. -Re-allow AM/PM, HH, -HH12, and D format specifiers for -to_char(time) and to_char(interval). -(to_char(interval) should probably use -HH24.) (Bruce) +Re-allow AM/PM, HH, +HH12, and D format specifiers for +to_char(time) and to_char(interval). +(to_char(interval) should probably use +HH24.) (Bruce) AIX, HPUX, and MSVC compile fixes (Tom, Hiroshi Saito) @@ -3305,7 +3305,7 @@ Saito) Retry file reads and writes after Windows NO_SYSTEM_RESOURCES error (Qingqing Zhou) -Prevent autovacuum from crashing during +Prevent autovacuum from crashing during ANALYZE of expression index (Alvaro) Fix problems with ON COMMIT DELETE ROWS temp @@ -3315,7 +3315,7 @@ tables DISTINCT query Add 8.1.0 release note item on how to migrate invalid -UTF-8 byte sequences (Paul Lindner) +UTF-8 byte sequences (Paul Lindner) @@ -3365,13 +3365,13 @@ DISTINCT query In previous releases, only a single index could be used to do lookups on a table. With this feature, if a query has - WHERE tab.col1 = 4 and tab.col2 = 9, and there is - no multicolumn index on col1 and col2, - but there is an index on col1 and another on - col2, it is possible to search both indexes and + WHERE tab.col1 = 4 and tab.col2 = 9, and there is + no multicolumn index on col1 and col2, + but there is an index on col1 and another on + col2, it is possible to search both indexes and combine the results in memory, then do heap fetches for only - the rows matching both the col1 and - col2 restrictions. This is very useful in + the rows matching both the col1 and + col2 restrictions. This is very useful in environments that have a lot of unstructured queries where it is impossible to create indexes that match all possible access conditions. Bitmap scans are useful even with a single index, @@ -3394,9 +3394,9 @@ DISTINCT query their transactions (none failed), all transactions can be committed. Even if a machine crashes after a prepare, the prepared transaction can be committed after the machine is - restarted. New syntax includes PREPARE TRANSACTION and - COMMIT/ROLLBACK PREPARED. A new system view - pg_prepared_xacts has also been added. + restarted. New syntax includes PREPARE TRANSACTION and + COMMIT/ROLLBACK PREPARED. A new system view + pg_prepared_xacts has also been added. @@ -3445,12 +3445,12 @@ DISTINCT query Once a user logs into a role, she obtains capabilities of the login role plus any inherited roles, and can use - SET ROLE to switch to other roles she is a member of. + SET ROLE to switch to other roles she is a member of. This feature is a generalization of the SQL standard's concept of roles. - This change also replaces pg_shadow and - pg_group by new role-capable catalogs - pg_authid and pg_auth_members. 
The old + This change also replaces pg_shadow and + pg_group by new role-capable catalogs + pg_authid and pg_auth_members. The old tables are redefined as read-only views on the new role tables. @@ -3458,15 +3458,15 @@ DISTINCT query - Automatically use indexes for MIN() and - MAX() (Tom) + Automatically use indexes for MIN() and + MAX() (Tom) In previous releases, the only way to use an index for - MIN() or MAX() was to rewrite the - query as SELECT col FROM tab ORDER BY col LIMIT 1. + MIN() or MAX() was to rewrite the + query as SELECT col FROM tab ORDER BY col LIMIT 1. Index usage now happens automatically. @@ -3474,7 +3474,7 @@ DISTINCT query - Move /contrib/pg_autovacuum into the main server + Move /contrib/pg_autovacuum into the main server (Alvaro) @@ -3483,21 +3483,21 @@ DISTINCT query Integrating autovacuum into the server allows it to be automatically started and stopped in sync with the database server, and allows autovacuum to be configured from - postgresql.conf. + postgresql.conf. - Add shared row level locks using SELECT ... FOR SHARE + Add shared row level locks using SELECT ... FOR SHARE (Alvaro) While PostgreSQL's MVCC locking - allows SELECT to never be blocked by writers and + allows SELECT to never be blocked by writers and therefore does not need shared row locks for typical operations, shared locks are useful for applications that require shared row locking. In particular this reduces the locking requirements @@ -3516,7 +3516,7 @@ DISTINCT query This extension of the dependency mechanism prevents roles from being dropped while there are still database objects they own. - Formerly it was possible to accidentally orphan objects by + Formerly it was possible to accidentally orphan objects by deleting their owner. While this could be recovered from, it was messy and unpleasant. @@ -3537,7 +3537,7 @@ DISTINCT query This allows for a basic type of table partitioning. If child tables store separate key ranges and this is enforced using appropriate - CHECK constraints, the optimizer will skip child + CHECK constraints, the optimizer will skip child table accesses when the constraint guarantees no matching rows exist in the child table. @@ -3556,9 +3556,9 @@ DISTINCT query - The 8.0 release announced that the to_char() function + The 8.0 release announced that the to_char() function for intervals would be removed in 8.1. However, since no better API - has been suggested, to_char(interval) has been enhanced in + has been suggested, to_char(interval) has been enhanced in 8.1 and will remain in the server. @@ -3570,21 +3570,21 @@ DISTINCT query - add_missing_from is now false by default (Neil) + add_missing_from is now false by default (Neil) By default, we now generate an error if a table is used in a query - without a FROM reference. The old behavior is still + without a FROM reference. The old behavior is still available, but the parameter must be set to 'true' to obtain it. - It might be necessary to set add_missing_from to true + It might be necessary to set add_missing_from to true in order to load an existing dump file, if the dump contains any - views or rules created using the implicit-FROM syntax. + views or rules created using the implicit-FROM syntax. This should be a one-time annoyance, because PostgreSQL 8.1 will convert - such views and rules to standard explicit-FROM syntax. + such views and rules to standard explicit-FROM syntax. Subsequent dumps will therefore not have the problem. 
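A short sketch of the add_missing_from change, using a hypothetical table name:

SELECT mytable.name;            -- now fails by default: missing FROM-clause entry
SET add_missing_from = true;    -- temporarily restores the old implicit-FROM behavior, e.g. while loading an old dump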
@@ -3604,29 +3604,29 @@ DISTINCT query - default_with_oids is now false by default (Neil) + default_with_oids is now false by default (Neil) With this option set to false, user-created tables no longer - have an OID column unless WITH OIDS is specified in - CREATE TABLE. Though OIDs have existed in all - releases of PostgreSQL, their use is limited + have an OID column unless WITH OIDS is specified in + CREATE TABLE. Though OIDs have existed in all + releases of PostgreSQL, their use is limited because they are only four bytes long and the counter is shared across all installed databases. The preferred way of uniquely - identifying rows is via sequences and the SERIAL type, - which have been supported since PostgreSQL 6.4. + identifying rows is via sequences and the SERIAL type, + which have been supported since PostgreSQL 6.4. - Add E'' syntax so eventually ordinary strings can + Add E'' syntax so eventually ordinary strings can treat backslashes literally (Bruce) Currently PostgreSQL processes a backslash in a string literal as introducing a special escape sequence, - e.g. \n or \010. + e.g. \n or \010. While this allows easy entry of special values, it is nonstandard and makes porting of applications from other databases more difficult. For this reason, the @@ -3634,8 +3634,8 @@ DISTINCT query remove the special meaning of backslashes in strings. For backward compatibility and for users who want special backslash processing, a new string syntax has been created. This new string - syntax is formed by writing an E immediately preceding the - single quote that starts the string, e.g. E'hi\n'. While + syntax is formed by writing an E immediately preceding the + single quote that starts the string, e.g. E'hi\n'. While this release does not change the handling of backslashes in strings, it does add new configuration parameters to help users migrate applications for future releases: @@ -3644,14 +3644,14 @@ DISTINCT query - standard_conforming_strings — does this release + standard_conforming_strings — does this release treat backslashes literally in ordinary strings? - escape_string_warning — warn about backslashes in + escape_string_warning — warn about backslashes in ordinary (non-E) strings @@ -3659,36 +3659,36 @@ DISTINCT query - The standard_conforming_strings value is read-only. + The standard_conforming_strings value is read-only. Applications can retrieve the value to know how backslashes are processed. (Presence of the parameter can also be taken as an - indication that E'' string syntax is supported.) - In a future release, standard_conforming_strings + indication that E'' string syntax is supported.) + In a future release, standard_conforming_strings will be true, meaning backslashes will be treated literally in - non-E strings. To prepare for this change, use E'' + non-E strings. To prepare for this change, use E'' strings in places that need special backslash processing, and - turn on escape_string_warning to find additional - strings that need to be converted to use E''. - Also, use two single-quotes ('') to embed a literal + turn on escape_string_warning to find additional + strings that need to be converted to use E''. + Also, use two single-quotes ('') to embed a literal single-quote in a string, rather than the PostgreSQL-supported syntax of - backslash single-quote (\'). The former is + backslash single-quote (\'). The former is standards-conforming and does not require the use of the - E'' string syntax. 
You can also use the - $$ string syntax, which does not treat backslashes + E'' string syntax. You can also use the + $$ string syntax, which does not treat backslashes specially. - Make REINDEX DATABASE reindex all indexes in the + Make REINDEX DATABASE reindex all indexes in the database (Tom) - Formerly, REINDEX DATABASE reindexed only + Formerly, REINDEX DATABASE reindexed only system tables. This new behavior seems more intuitive. A new - command REINDEX SYSTEM provides the old functionality + command REINDEX SYSTEM provides the old functionality of reindexing just the system tables. @@ -3698,13 +3698,13 @@ DISTINCT query Read-only large object descriptors now obey MVCC snapshot semantics - When a large object is opened with INV_READ (and not - INV_WRITE), the data read from the descriptor will now - reflect a snapshot of the large object's state at the + When a large object is opened with INV_READ (and not + INV_WRITE), the data read from the descriptor will now + reflect a snapshot of the large object's state at the time of the transaction snapshot in use by the query that called - lo_open(). To obtain the old behavior of always - returning the latest committed data, include INV_WRITE - in the mode flags for lo_open(). + lo_open(). To obtain the old behavior of always + returning the latest committed data, include INV_WRITE + in the mode flags for lo_open(). @@ -3713,28 +3713,28 @@ DISTINCT query Add proper dependencies for arguments of sequence functions (Tom) - In previous releases, sequence names passed to nextval(), - currval(), and setval() were stored as + In previous releases, sequence names passed to nextval(), + currval(), and setval() were stored as simple text strings, meaning that renaming or dropping a - sequence used in a DEFAULT clause made the clause + sequence used in a DEFAULT clause made the clause invalid. This release stores all newly-created sequence function arguments as internal OIDs, allowing them to track sequence renaming, and adding dependency information that prevents - improper sequence removal. It also makes such DEFAULT + improper sequence removal. It also makes such DEFAULT clauses immune to schema renaming and search path changes. Some applications might rely on the old behavior of run-time lookup for sequence names. This can still be done by - explicitly casting the argument to text, for example - nextval('myseq'::text). + explicitly casting the argument to text, for example + nextval('myseq'::text). Pre-8.1 database dumps loaded into 8.1 will use the old text-based representation and therefore will not have the features of OID-stored arguments. However, it is possible to update a - database containing text-based DEFAULT clauses. - First, save this query into a file, such as fixseq.sql: + database containing text-based DEFAULT clauses. + First, save this query into a file, such as fixseq.sql: SELECT 'ALTER TABLE ' || pg_catalog.quote_ident(n.nspname) || '.' || @@ -3754,11 +3754,11 @@ WHERE n.oid = c.relnamespace AND d.adsrc ~ $$val\(\('[^']*'::text\)::regclass$$; Next, run the query against a database to find what - adjustments are required, like this for database db1: + adjustments are required, like this for database db1: psql -t -f fixseq.sql db1 - This will show the ALTER TABLE commands needed to + This will show the ALTER TABLE commands needed to convert the database to the newer OID-based representation. 
If the commands look reasonable, run this to update the database: @@ -3771,51 +3771,51 @@ psql -t -f fixseq.sql db1 | psql -e db1 In psql, treat unquoted - \{digit}+ sequences as octal (Bruce) + \{digit}+ sequences as octal (Bruce) - In previous releases, \{digit}+ sequences were - treated as decimal, and only \0{digit}+ were treated + In previous releases, \{digit}+ sequences were + treated as decimal, and only \0{digit}+ were treated as octal. This change was made for consistency. - Remove grammar productions for prefix and postfix % - and ^ operators + Remove grammar productions for prefix and postfix % + and ^ operators (Tom) These have never been documented and complicated the use of the - modulus operator (%) with negative numbers. + modulus operator (%) with negative numbers. - Make &< and &> for polygons + Make &< and &> for polygons consistent with the box "over" operators (Tom) - CREATE LANGUAGE can ignore the provided arguments - in favor of information from pg_pltemplate + CREATE LANGUAGE can ignore the provided arguments + in favor of information from pg_pltemplate (Tom) - A new system catalog pg_pltemplate has been defined + A new system catalog pg_pltemplate has been defined to carry information about the preferred definitions of procedural languages (such as whether they have validator functions). When an entry exists in this catalog for the language being created, - CREATE LANGUAGE will ignore all its parameters except the + CREATE LANGUAGE will ignore all its parameters except the language name and instead use the catalog information. This measure was taken because of increasing problems with obsolete language definitions being loaded by old dump files. As of 8.1, - pg_dump will dump procedural language definitions as - just CREATE LANGUAGE name, relying + pg_dump will dump procedural language definitions as + just CREATE LANGUAGE name, relying on a template entry to exist at load time. We expect this will be a more future-proof representation. @@ -3835,11 +3835,11 @@ psql -t -f fixseq.sql db1 | psql -e db1 sequences to be entered into the database, and this release properly accepts only valid UTF-8 sequences. One way to correct a dumpfile is to run the command iconv -c -f UTF-8 -t - UTF-8 -o cleanfile.sql dumpfile.sql. The -c option + UTF-8 -o cleanfile.sql dumpfile.sql. The -c option removes invalid character sequences. A diff of the two files will - show the sequences that are invalid. iconv reads the + show the sequences that are invalid. iconv reads the entire input file into memory so it might be necessary to use - split to break up the dump into multiple smaller + split to break up the dump into multiple smaller files for processing. @@ -3908,17 +3908,17 @@ psql -t -f fixseq.sql db1 | psql -e db1 For example, this allows an index on columns a,b,c to be used in - a query with WHERE a = 4 and c = 10. + a query with WHERE a = 4 and c = 10. - Skip WAL logging for CREATE TABLE AS / - SELECT INTO (Simon) + Skip WAL logging for CREATE TABLE AS / + SELECT INTO (Simon) - Since a crash during CREATE TABLE AS would cause the + Since a crash during CREATE TABLE AS would cause the table to be dropped during recovery, there is no reason to WAL log as the table is loaded. (Logging still happens if WAL archiving is enabled, however.) 
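For example (table and index names are hypothetical), a single three-column index can now serve a query that constrains only the first and last columns:

CREATE INDEX t_abc_idx ON t (a, b, c);
EXPLAIN SELECT * FROM t WHERE a = 4 AND c = 10;   -- the planner may use t_abc_idx even though b is unconstrained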
@@ -3933,7 +3933,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add configuration parameter full_page_writes to + Add configuration parameter full_page_writes to control writing full pages to WAL (Bruce) @@ -3948,22 +3948,22 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Use O_DIRECT if available when using - O_SYNC for wal_sync_method + Use O_DIRECT if available when using + O_SYNC for wal_sync_method (Itagaki Takahiro) - O_DIRECT causes disk writes to bypass the kernel + O_DIRECT causes disk writes to bypass the kernel cache, and for WAL writes, this improves performance. - Improve COPY FROM performance (Alon Goldshuv) + Improve COPY FROM performance (Alon Goldshuv) - This was accomplished by reading COPY input in + This was accomplished by reading COPY input in larger chunks, rather than character by character. @@ -4005,14 +4005,14 @@ psql -t -f fixseq.sql db1 | psql -e db1 Add warning about the need to increase - max_fsm_relations and max_fsm_pages - during VACUUM (Ron Mayer) + max_fsm_relations and max_fsm_pages + during VACUUM (Ron Mayer) - Add temp_buffers configuration parameter to allow + Add temp_buffers configuration parameter to allow users to determine the size of the local buffer area for temporary table access (Tom) @@ -4021,13 +4021,13 @@ psql -t -f fixseq.sql db1 | psql -e db1 Add session start time and client IP address to - pg_stat_activity (Magnus) + pg_stat_activity (Magnus) - Adjust pg_stat views for bitmap scans (Tom) + Adjust pg_stat views for bitmap scans (Tom) The meanings of some of the fields have changed slightly. @@ -4036,27 +4036,27 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Enhance pg_locks view (Tom) + Enhance pg_locks view (Tom) - Log queries for client-side PREPARE and - EXECUTE (Simon) + Log queries for client-side PREPARE and + EXECUTE (Simon) Allow Kerberos name and user name case sensitivity to be - specified in postgresql.conf (Magnus) + specified in postgresql.conf (Magnus) - Add configuration parameter krb_server_hostname so + Add configuration parameter krb_server_hostname so that the server host name can be specified as part of service principal (Todd Kover) @@ -4069,8 +4069,8 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add log_line_prefix options for millisecond - timestamps (%m) and remote host (%h) (Ed + Add log_line_prefix options for millisecond + timestamps (%m) and remote host (%h) (Ed L.) @@ -4086,12 +4086,12 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Remove old *.backup files when we do - pg_stop_backup() (Bruce) + Remove old *.backup files when we do + pg_stop_backup() (Bruce) - This prevents a large number of *.backup files from - existing in pg_xlog/. + This prevents a large number of *.backup files from + existing in pg_xlog/. @@ -4112,7 +4112,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 Add per-user and per-database connection limits (Petr Jelinek) - Using ALTER USER and ALTER DATABASE, + Using ALTER USER and ALTER DATABASE, limits can now be enforced on the maximum number of sessions that can concurrently connect as a specific user or to a specific database. Setting the limit to zero disables user or database connections. 
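For instance, with hypothetical role and database names:

ALTER USER app_user CONNECTION LIMIT 10;     -- at most 10 concurrent sessions for this role
ALTER DATABASE app_db CONNECTION LIMIT 50;   -- at most 50 concurrent sessions in this database
ALTER USER batch_user CONNECTION LIMIT 0;    -- disallows connections for this role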
@@ -4128,7 +4128,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - New system catalog pg_pltemplate allows overriding + New system catalog pg_pltemplate allows overriding obsolete procedural-language definitions in dump files (Tom) @@ -4149,63 +4149,63 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Fix HAVING without any aggregate functions or - GROUP BY so that the query returns a single group (Tom) + Fix HAVING without any aggregate functions or + GROUP BY so that the query returns a single group (Tom) - Previously, such a case would treat the HAVING - clause the same as a WHERE clause. This was not per spec. + Previously, such a case would treat the HAVING + clause the same as a WHERE clause. This was not per spec. - Add USING clause to allow additional tables to be - specified to DELETE (Euler Taveira de Oliveira, Neil) + Add USING clause to allow additional tables to be + specified to DELETE (Euler Taveira de Oliveira, Neil) In prior releases, there was no clear method for specifying - additional tables to be used for joins in a DELETE - statement. UPDATE already has a FROM + additional tables to be used for joins in a DELETE + statement. UPDATE already has a FROM clause for this purpose. - Add support for \x hex escapes in backend and ecpg + Add support for \x hex escapes in backend and ecpg strings (Bruce) - This is just like the standard C \x escape syntax. + This is just like the standard C \x escape syntax. Octal escapes were already supported. - Add BETWEEN SYMMETRIC query syntax (Pavel Stehule) + Add BETWEEN SYMMETRIC query syntax (Pavel Stehule) - This feature allows BETWEEN comparisons without + This feature allows BETWEEN comparisons without requiring the first value to be less than the second. For - example, 2 BETWEEN [ASYMMETRIC] 3 AND 1 returns - false, while 2 BETWEEN SYMMETRIC 3 AND 1 returns - true. BETWEEN ASYMMETRIC was already supported. + example, 2 BETWEEN [ASYMMETRIC] 3 AND 1 returns + false, while 2 BETWEEN SYMMETRIC 3 AND 1 returns + true. BETWEEN ASYMMETRIC was already supported. - Add NOWAIT option to SELECT ... FOR - UPDATE/SHARE (Hans-Juergen Schoenig) + Add NOWAIT option to SELECT ... FOR + UPDATE/SHARE (Hans-Juergen Schoenig) - While the statement_timeout configuration + While the statement_timeout configuration parameter allows a query taking more than a certain amount of - time to be canceled, the NOWAIT option allows a + time to be canceled, the NOWAIT option allows a query to be canceled as soon as a SELECT ... FOR - UPDATE/SHARE command cannot immediately acquire a row lock. + UPDATE/SHARE command cannot immediately acquire a row lock. @@ -4233,7 +4233,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Allow limited ALTER OWNER commands to be performed + Allow limited ALTER OWNER commands to be performed by the object owner (Stephen Frost) @@ -4248,7 +4248,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add ALTER object SET SCHEMA capability + Add ALTER object SET SCHEMA capability for some object types (tables, functions, types) (Bernd Helmle) @@ -4273,54 +4273,54 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Allow TRUNCATE to truncate multiple tables in a + Allow TRUNCATE to truncate multiple tables in a single command (Alvaro) Because of referential integrity checks, it is not allowed to truncate a table that is part of a referential integrity - constraint. Using this new functionality, TRUNCATE + constraint. 
Using this new functionality, TRUNCATE can be used to truncate such tables, if both tables involved in a referential integrity constraint are truncated in a single - TRUNCATE command. + TRUNCATE command. Properly process carriage returns and line feeds in - COPY CSV mode (Andrew) + COPY CSV mode (Andrew) In release 8.0, carriage returns and line feeds in CSV - COPY TO were processed in an inconsistent manner. (This was + COPY TO were processed in an inconsistent manner. (This was documented on the TODO list.) - Add COPY WITH CSV HEADER to allow a header line as - the first line in COPY (Andrew) + Add COPY WITH CSV HEADER to allow a header line as + the first line in COPY (Andrew) - This allows handling of the common CSV usage of + This allows handling of the common CSV usage of placing the column names on the first line of the data file. For - COPY TO, the first line contains the column names, - and for COPY FROM, the first line is ignored. + COPY TO, the first line contains the column names, + and for COPY FROM, the first line is ignored. On Windows, display better sub-second precision in - EXPLAIN ANALYZE (Magnus) + EXPLAIN ANALYZE (Magnus) - Add trigger duration display to EXPLAIN ANALYZE + Add trigger duration display to EXPLAIN ANALYZE (Tom) @@ -4332,7 +4332,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add support for \x hex escapes in COPY + Add support for \x hex escapes in COPY (Sergey Ten) @@ -4342,11 +4342,11 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Make SHOW ALL include variable descriptions + Make SHOW ALL include variable descriptions (Matthias Schmidt) - SHOW varname still only displays the variable's + SHOW varname still only displays the variable's value and does not include the description. @@ -4354,27 +4354,27 @@ psql -t -f fixseq.sql db1 | psql -e db1 Make initdb create a new standard - database called postgres, and convert utilities to - use postgres rather than template1 for + database called postgres, and convert utilities to + use postgres rather than template1 for standard lookups (Dave) - In prior releases, template1 was used both as a + In prior releases, template1 was used both as a default connection for utilities like createuser, and as a template for - new databases. This caused CREATE DATABASE to + new databases. This caused CREATE DATABASE to sometimes fail, because a new database cannot be created if anyone else is in the template database. With this change, the - default connection database is now postgres, + default connection database is now postgres, meaning it is much less likely someone will be using - template1 during CREATE DATABASE. + template1 during CREATE DATABASE. Create new reindexdb command-line - utility by moving /contrib/reindexdb into the + utility by moving /contrib/reindexdb into the server (Euler Taveira de Oliveira) @@ -4389,38 +4389,38 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add MAX() and MIN() aggregates for + Add MAX() and MIN() aggregates for array types (Koju Iijima) - Fix to_date() and to_timestamp() to - behave reasonably when CC and YY fields + Fix to_date() and to_timestamp() to + behave reasonably when CC and YY fields are both used (Karel Zak) - If the format specification contains CC and a year - specification is YYY or longer, ignore the - CC. If the year specification is YY or - shorter, interpret CC as the previous century. + If the format specification contains CC and a year + specification is YYY or longer, ignore the + CC. If the year specification is YY or + shorter, interpret CC as the previous century. 
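A short illustration of that rule (expected results shown per the description above; worth verifying on your server):

SELECT to_date('21 05', 'CC YY');       -- year 05 of the 21st century: 2005-01-01
SELECT to_date('21 2005', 'CC YYYY');   -- CC is ignored when a four-digit year is given: 2005-01-01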
- Add md5(bytea) (Abhijit Menon-Sen) + Add md5(bytea) (Abhijit Menon-Sen) - md5(text) already existed. + md5(text) already existed. - Add support for numeric ^ numeric based on - power(numeric, numeric) + Add support for numeric ^ numeric based on + power(numeric, numeric) The function already existed, but there was no operator assigned @@ -4430,7 +4430,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Fix NUMERIC modulus by properly truncating the quotient + Fix NUMERIC modulus by properly truncating the quotient during computation (Bruce) @@ -4441,29 +4441,29 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add a function lastval() (Dennis Björklund) + Add a function lastval() (Dennis Björklund) - lastval() is a simplified version of - currval(). It automatically determines the proper - sequence name based on the most recent nextval() or - setval() call performed by the current session. + lastval() is a simplified version of + currval(). It automatically determines the proper + sequence name based on the most recent nextval() or + setval() call performed by the current session. - Add to_timestamp(DOUBLE PRECISION) (Michael Glaesemann) + Add to_timestamp(DOUBLE PRECISION) (Michael Glaesemann) Converts Unix seconds since 1970 to a TIMESTAMP WITH - TIMEZONE. + TIMEZONE. - Add pg_postmaster_start_time() function (Euler + Add pg_postmaster_start_time() function (Euler Taveira de Oliveira, Matthias Schmidt) @@ -4471,11 +4471,11 @@ psql -t -f fixseq.sql db1 | psql -e db1 Allow the full use of time zone names in AT TIME - ZONE, not just the short list previously available (Magnus) + ZONE, not just the short list previously available (Magnus) Previously, only a predefined list of time zone names were - supported by AT TIME ZONE. Now any supported time + supported by AT TIME ZONE. Now any supported time zone name can be used, e.g.: SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; @@ -4488,7 +4488,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add GREATEST() and LEAST() variadic + Add GREATEST() and LEAST() variadic functions (Pavel Stehule) @@ -4499,7 +4499,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add pg_column_size() (Mark Kirkwood) + Add pg_column_size() (Mark Kirkwood) This returns storage size of a column, which might be compressed. @@ -4508,7 +4508,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add regexp_replace() (Atsushi Ogawa) + Add regexp_replace() (Atsushi Ogawa) This allows regular expression replacement, like sed. An optional @@ -4523,8 +4523,8 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; Previous versions sometimes returned unjustified results, like - '4 months'::interval / 5 returning '1 mon - -6 days'. + '4 months'::interval / 5 returning '1 mon + -6 days'. @@ -4534,24 +4534,24 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; This fixes some cases in which the seconds field would be shown as - 60 instead of incrementing the higher-order fields. + 60 instead of incrementing the higher-order fields. - Add a separate day field to type interval so a one day + Add a separate day field to type interval so a one day interval can be distinguished from a 24 hour interval (Michael Glaesemann) Days that contain a daylight saving time adjustment are not 24 hours long, but typically 23 or 25 hours. This change creates a - conceptual distinction between intervals of so many days - and intervals of so many hours. 
Adding - 1 day to a timestamp now gives the same local time on + conceptual distinction between intervals of so many days + and intervals of so many hours. Adding + 1 day to a timestamp now gives the same local time on the next day even if a daylight saving time adjustment occurs - between, whereas adding 24 hours will give a different + between, whereas adding 24 hours will give a different local time when this happens. For example, under US DST rules: '2005-04-03 00:00:00-05' + '1 day' = '2005-04-04 00:00:00-04' @@ -4562,7 +4562,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add justify_days() and justify_hours() + Add justify_days() and justify_hours() (Michael Glaesemann) @@ -4574,7 +4574,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Move /contrib/dbsize into the backend, and rename + Move /contrib/dbsize into the backend, and rename some of the functions (Dave Page, Andreas Pflug) @@ -4582,38 +4582,38 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - pg_tablespace_size() + pg_tablespace_size() - pg_database_size() + pg_database_size() - pg_relation_size() + pg_relation_size() - pg_total_relation_size() + pg_total_relation_size() - pg_size_pretty() + pg_size_pretty() - pg_total_relation_size() includes indexes and TOAST + pg_total_relation_size() includes indexes and TOAST tables. @@ -4628,19 +4628,19 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - pg_stat_file() + pg_stat_file() - pg_read_file() + pg_read_file() - pg_ls_dir() + pg_ls_dir() @@ -4650,21 +4650,21 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add pg_reload_conf() to force reloading of the + Add pg_reload_conf() to force reloading of the configuration files (Dave Page, Andreas Pflug) - Add pg_rotate_logfile() to force rotation of the + Add pg_rotate_logfile() to force rotation of the server log file (Dave Page, Andreas Pflug) - Change pg_stat_* views to include TOAST tables (Tom) + Change pg_stat_* views to include TOAST tables (Tom) @@ -4686,25 +4686,25 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - UNICODE is now UTF8 + UNICODE is now UTF8 - ALT is now WIN866 + ALT is now WIN866 - WIN is now WIN1251 + WIN is now WIN1251 - TCVN is now WIN1258 + TCVN is now WIN1258 @@ -4718,17 +4718,17 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add support for WIN1252 encoding (Roland Volkmann) + Add support for WIN1252 encoding (Roland Volkmann) - Add support for four-byte UTF8 characters (John + Add support for four-byte UTF8 characters (John Hansen) - Previously only one, two, and three-byte UTF8 characters + Previously only one, two, and three-byte UTF8 characters were supported. This is particularly important for support for some Chinese character sets. 
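A quick check of the four-byte UTF8 support just mentioned, assuming a UTF8 server encoding; '𠀋' is a CJK Extension B character that requires four bytes:

SELECT char_length('𠀋') AS characters, octet_length('𠀋') AS bytes;   -- 1 character, 4 bytes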
@@ -4736,8 +4736,8 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow direct conversion between EUC_JP and - SJIS to improve performance (Atsushi Ogawa) + Allow direct conversion between EUC_JP and + SJIS to improve performance (Atsushi Ogawa) @@ -4761,14 +4761,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Fix ALTER LANGUAGE RENAME (Sergey Yatskevich) + Fix ALTER LANGUAGE RENAME (Sergey Yatskevich) Allow function characteristics, like strictness and volatility, - to be modified via ALTER FUNCTION (Neil) + to be modified via ALTER FUNCTION (Neil) @@ -4780,14 +4780,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow SQL and PL/pgSQL functions to use OUT and - INOUT parameters (Tom) + Allow SQL and PL/pgSQL functions to use OUT and + INOUT parameters (Tom) - OUT is an alternate way for a function to return - values. Instead of using RETURN, values can be - returned by assigning to parameters declared as OUT or - INOUT. This is notationally simpler in some cases, + OUT is an alternate way for a function to return + values. Instead of using RETURN, values can be + returned by assigning to parameters declared as OUT or + INOUT. This is notationally simpler in some cases, particularly so when multiple values need to be returned. While returning multiple values from a function was possible in previous releases, this greatly simplifies the @@ -4798,7 +4798,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Move language handler functions into the pg_catalog schema + Move language handler functions into the pg_catalog schema This makes it easier to drop the public schema if desired. @@ -4831,7 +4831,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Check function syntax at CREATE FUNCTION time, + Check function syntax at CREATE FUNCTION time, rather than at runtime (Neil) @@ -4842,19 +4842,19 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow OPEN to open non-SELECT queries - like EXPLAIN and SHOW (Tom) + Allow OPEN to open non-SELECT queries + like EXPLAIN and SHOW (Tom) - No longer require functions to issue a RETURN + No longer require functions to issue a RETURN statement (Tom) - This is a byproduct of the newly added OUT and - INOUT functionality. RETURN can + This is a byproduct of the newly added OUT and + INOUT functionality. RETURN can be omitted when it is not needed to provide the function's return value. 
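A minimal sketch of an OUT-parameter function (the names are arbitrary); note that no RETURN statement is required:

CREATE FUNCTION sum_and_product(x int, y int, OUT s int, OUT p int) AS $$
BEGIN
    s := x + y;
    p := x * y;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM sum_and_product(3, 4);   -- returns s = 7, p = 12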
@@ -4862,21 +4862,21 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add support for an optional INTO clause to - PL/pgSQL's EXECUTE statement (Pavel Stehule, Neil) + Add support for an optional INTO clause to + PL/pgSQL's EXECUTE statement (Pavel Stehule, Neil) - Make CREATE TABLE AS set ROW_COUNT (Tom) + Make CREATE TABLE AS set ROW_COUNT (Tom) - Define SQLSTATE and SQLERRM to return - the SQLSTATE and error message of the current + Define SQLSTATE and SQLERRM to return + the SQLSTATE and error message of the current exception (Pavel Stehule, Neil) @@ -4886,14 +4886,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow the parameters to the RAISE statement to be + Allow the parameters to the RAISE statement to be expressions (Pavel Stehule, Neil) - Add a loop CONTINUE statement (Pavel Stehule, Neil) + Add a loop CONTINUE statement (Pavel Stehule, Neil) @@ -4917,7 +4917,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; Menon-Sen) - This allows functions to use return_next() to avoid + This allows functions to use return_next() to avoid building the entire result set in memory. @@ -4927,16 +4927,16 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; Allow one-row-at-a-time retrieval of query results (Abhijit Menon-Sen) - This allows functions to use spi_query() and - spi_fetchrow() to avoid accumulating the entire + This allows functions to use spi_query() and + spi_fetchrow() to avoid accumulating the entire result set in memory. - Force PL/Perl to handle strings as UTF8 if the - server encoding is UTF8 (David Kamholz) + Force PL/Perl to handle strings as UTF8 if the + server encoding is UTF8 (David Kamholz) @@ -4963,14 +4963,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow Perl nonfatal warnings to generate NOTICE + Allow Perl nonfatal warnings to generate NOTICE messages (Andrew) - Allow Perl's strict mode to be enabled (Andrew) + Allow Perl's strict mode to be enabled (Andrew) @@ -4979,12 +4979,12 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - <application>psql</> Changes + <application>psql</application> Changes - Add \set ON_ERROR_ROLLBACK to allow statements in + Add \set ON_ERROR_ROLLBACK to allow statements in a transaction to error without affecting the rest of the transaction (Greg Sabino Mullane) @@ -4996,8 +4996,8 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add support for \x hex strings in - psql variables (Bruce) + Add support for \x hex strings in + psql variables (Bruce) Octal escapes were already supported. @@ -5006,7 +5006,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add support for troff -ms output format (Roger + Add support for troff -ms output format (Roger Leigh) @@ -5014,7 +5014,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; Allow the history file location to be controlled by - HISTFILE (Andreas Seltenreich) + HISTFILE (Andreas Seltenreich) This allows configuration of per-database history storage. 
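One common way to get per-database history is a line like the following in ~/.psqlrc:

\set HISTFILE ~/.psql_history- :DBNAME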
@@ -5023,14 +5023,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Prevent \x (expanded mode) from affecting - the output of \d tablename (Neil) + Prevent \x (expanded mode) from affecting + the output of \d tablename (Neil) - Add option to psql to log sessions (Lorne Sunley) @@ -5041,44 +5041,44 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Make \d show the tablespaces of indexes (Qingqing + Make \d show the tablespaces of indexes (Qingqing Zhou) - Allow psql help (\h) to + Allow psql help (\h) to make a best guess on the proper help information (Greg Sabino Mullane) - This allows the user to just add \h to the front of + This allows the user to just add \h to the front of the syntax error query and get help on the supported syntax. Previously any additional query text beyond the command name - had to be removed to use \h. + had to be removed to use \h. - Add \pset numericlocale to allow numbers to be + Add \pset numericlocale to allow numbers to be output in a locale-aware format (Eugen Nedelcu) - For example, using C locale 100000 would - be output as 100,000.0 while a European locale might - output this value as 100.000,0. + For example, using C locale 100000 would + be output as 100,000.0 while a European locale might + output this value as 100.000,0. Make startup banner show both server version number and - psql's version number, when they are different (Bruce) + psql's version number, when they are different (Bruce) - Also, a warning will be shown if the server and psql + Also, a warning will be shown if the server and psql are from different major releases. @@ -5088,13 +5088,13 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - <application>pg_dump</> Changes + <application>pg_dump</application> Changes - Add This allows just the objects in a specified schema to be restored. @@ -5103,18 +5103,18 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow pg_dump to dump large objects even in + Allow pg_dump to dump large objects even in text mode (Tom) With this change, large objects are now always dumped; the former - switch is a no-op. - Allow pg_dump to dump a consistent snapshot of + Allow pg_dump to dump a consistent snapshot of large objects (Tom) @@ -5127,7 +5127,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add @@ -5139,14 +5139,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Rely on pg_pltemplate for procedural languages (Tom) + Rely on pg_pltemplate for procedural languages (Tom) If the call handler for a procedural language is in the - pg_catalog schema, pg_dump does not + pg_catalog schema, pg_dump does not dump the handler. Instead, it dumps the language using just - CREATE LANGUAGE name, - relying on the pg_pltemplate catalog to provide + CREATE LANGUAGE name, + relying on the pg_pltemplate catalog to provide the language's creation parameters at load time. 
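An 8.1 dump therefore contains just the bare form, for example:

CREATE LANGUAGE plpgsql;   -- creation parameters are filled in from pg_pltemplate at load time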
@@ -5161,15 +5161,15 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add a PGPASSFILE environment variable to specify the + Add a PGPASSFILE environment variable to specify the password file's filename (Andrew) - Add lo_create(), that is similar to - lo_creat() but allows the OID of the large object + Add lo_create(), that is similar to + lo_creat() but allows the OID of the large object to be specified (Tom) @@ -5191,7 +5191,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Fix pgxs to support building against a relocated + Fix pgxs to support building against a relocated installation @@ -5238,10 +5238,10 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow pg_config to be compiled using MSVC (Andrew) + Allow pg_config to be compiled using MSVC (Andrew) - This is required to build DBD::Pg using MSVC. + This is required to build DBD::Pg using MSVC. @@ -5264,15 +5264,15 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Modify postgresql.conf to use documentation defaults - on/off rather than - true/false (Bruce) + Modify postgresql.conf to use documentation defaults + on/off rather than + true/false (Bruce) - Enhance pg_config to be able to report more + Enhance pg_config to be able to report more build-time values (Tom) @@ -5304,11 +5304,11 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - In previous releases, gist.h contained both the + In previous releases, gist.h contained both the public GiST API (intended for use by authors of GiST index implementations) as well as some private declarations used by the implementation of GiST itself. The latter have been moved - to a separate file, gist_private.h. Most GiST + to a separate file, gist_private.h. Most GiST index implementations should be unaffected. @@ -5320,10 +5320,10 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; GiST methods are now always invoked in a short-lived memory - context. Therefore, memory allocated via palloc() + context. Therefore, memory allocated via palloc() will be reclaimed automatically, so GiST index implementations do not need to manually release allocated memory via - pfree(). + pfree(). @@ -5336,7 +5336,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add /contrib/pg_buffercache contrib module (Mark + Add /contrib/pg_buffercache contrib module (Mark Kirkwood) @@ -5347,28 +5347,28 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Remove /contrib/array because it is obsolete (Tom) + Remove /contrib/array because it is obsolete (Tom) - Clean up the /contrib/lo module (Tom) + Clean up the /contrib/lo module (Tom) - Move /contrib/findoidjoins to - /src/tools (Tom) + Move /contrib/findoidjoins to + /src/tools (Tom) - Remove the <<, >>, - &<, and &> operators from - /contrib/cube + Remove the <<, >>, + &<, and &> operators from + /contrib/cube These operators were not useful. 
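Referring back to the /contrib/pg_buffercache module added above, a rough sketch of how it can be queried once installed (the join follows that module's documented example):

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 5;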
@@ -5377,13 +5377,13 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Improve /contrib/btree_gist (Janko Richter) + Improve /contrib/btree_gist (Janko Richter) - Improve /contrib/pgbench (Tomoaki Sato, Tatsuo) + Improve /contrib/pgbench (Tomoaki Sato, Tatsuo) There is now a facility for testing with SQL command scripts given @@ -5393,7 +5393,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Improve /contrib/pgcrypto (Marko Kreen) + Improve /contrib/pgcrypto (Marko Kreen) @@ -5421,16 +5421,16 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Take build parameters (OpenSSL, zlib) from configure result + Take build parameters (OpenSSL, zlib) from configure result - There is no need to edit the Makefile anymore. + There is no need to edit the Makefile anymore. - Remove support for libmhash and libmcrypt + Remove support for libmhash and libmcrypt diff --git a/doc/src/sgml/release-8.2.sgml b/doc/src/sgml/release-8.2.sgml index c00cbd3467..71b50cfb01 100644 --- a/doc/src/sgml/release-8.2.sgml +++ b/doc/src/sgml/release-8.2.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.2.X series. Users are encouraged to update to a newer release branch soon. @@ -30,7 +30,7 @@ However, a longstanding error was discovered in the definition of the - information_schema.referential_constraints view. If you + information_schema.referential_constraints view. If you rely on correct results from that view, you should replace its definition as explained in the first changelog item below. @@ -49,7 +49,7 @@ - Fix bugs in information_schema.referential_constraints view + Fix bugs in information_schema.referential_constraints view (Tom Lane) @@ -62,13 +62,13 @@ - Since the view definition is installed by initdb, + Since the view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can (as a superuser) drop the - information_schema schema then re-create it by sourcing - SHAREDIR/information_schema.sql. - (Run pg_config --sharedir if you're uncertain where - SHAREDIR is.) This must be repeated in each database + information_schema schema then re-create it by sourcing + SHAREDIR/information_schema.sql. + (Run pg_config --sharedir if you're uncertain where + SHAREDIR is.) This must be repeated in each database to be fixed. @@ -76,12 +76,12 @@ Fix TOAST-related data corruption during CREATE TABLE dest AS - SELECT * FROM src or INSERT INTO dest SELECT * FROM src + SELECT * FROM src or INSERT INTO dest SELECT * FROM src (Tom Lane) - If a table has been modified by ALTER TABLE ADD COLUMN, + If a table has been modified by ALTER TABLE ADD COLUMN, attempts to copy its data verbatim to another table could produce corrupt results in certain corner cases. The problem can only manifest in this precise form in 8.4 and later, @@ -98,22 +98,22 @@ The typical symptom was transient errors like missing chunk - number 0 for toast value NNNNN in pg_toast_2619, where the cited + number 0 for toast value NNNNN in pg_toast_2619, where the cited toast table would always belong to a system catalog. 
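Concretely, the repair described above amounts to something like the following, run as a superuser in each affected database; the path is only a placeholder for whatever pg_config --sharedir reports:

DROP SCHEMA information_schema CASCADE;
\i /usr/local/pgsql/share/information_schema.sql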
- Improve locale support in money type's input and output + Improve locale support in money type's input and output (Tom Lane) Aside from not supporting all standard - lc_monetary + lc_monetary formatting options, the input and output functions were inconsistent, - meaning there were locales in which dumped money values could + meaning there were locales in which dumped money values could not be re-read. @@ -121,15 +121,15 @@ Don't let transform_null_equals - affect CASE foo WHEN NULL ... constructs + linkend="guc-transform-null-equals">transform_null_equals + affect CASE foo WHEN NULL ... constructs (Heikki Linnakangas) - transform_null_equals is only supposed to affect - foo = NULL expressions written directly by the user, not - equality checks generated internally by this form of CASE. + transform_null_equals is only supposed to affect + foo = NULL expressions written directly by the user, not + equality checks generated internally by this form of CASE. @@ -141,14 +141,14 @@ For a cascading foreign key that references its own table, a row update - will fire both the ON UPDATE trigger and the - CHECK trigger as one event. The ON UPDATE - trigger must execute first, else the CHECK will check a + will fire both the ON UPDATE trigger and the + CHECK trigger as one event. The ON UPDATE + trigger must execute first, else the CHECK will check a non-final state of the row and possibly throw an inappropriate error. However, the firing order of these triggers is determined by their names, which generally sort in creation order since the triggers have auto-generated names following the convention - RI_ConstraintTrigger_NNNN. A proper fix would require + RI_ConstraintTrigger_NNNN. A proper fix would require modifying that convention, which we will do in 9.2, but it seems risky to change it in existing releases. So this patch just changes the creation order of the triggers. Users encountering this type of error @@ -159,7 +159,7 @@ - Preserve blank lines within commands in psql's command + Preserve blank lines within commands in psql's command history (Robert Haas) @@ -171,7 +171,7 @@ - Use the preferred version of xsubpp to build PL/Perl, + Use the preferred version of xsubpp to build PL/Perl, not necessarily the operating system's main copy (David Wheeler and Alex Hunsaker) @@ -179,7 +179,7 @@ - Honor query cancel interrupts promptly in pgstatindex() + Honor query cancel interrupts promptly in pgstatindex() (Robert Haas) @@ -210,15 +210,15 @@ - Map Central America Standard Time to CST6, not - CST6CDT, because DST is generally not observed anywhere in + Map Central America Standard Time to CST6, not + CST6CDT, because DST is generally not observed anywhere in Central America. - Update time zone data files to tzdata release 2011n + Update time zone data files to tzdata release 2011n for DST law changes in Brazil, Cuba, Fiji, Palestine, Russia, and Samoa; also historical corrections for Alaska and British East Africa. @@ -244,7 +244,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.2.X release series in December 2011. Users are encouraged to update to a newer release branch soon. @@ -279,7 +279,7 @@ - Avoid possibly accessing off the end of memory in ANALYZE + Avoid possibly accessing off the end of memory in ANALYZE (Noah Misch) @@ -297,7 +297,7 @@ There was a window wherein a new backend process could read a stale init file but miss the inval messages that would tell it the data is stale. 
The result would be bizarre failures in catalog accesses, typically - could not read block 0 in file ... later during startup. + could not read block 0 in file ... later during startup. @@ -346,13 +346,13 @@ - Fix dump bug for VALUES in a view (Tom Lane) + Fix dump bug for VALUES in a view (Tom Lane) - Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) + Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) @@ -370,18 +370,18 @@ Fix portability bugs in use of credentials control messages for - peer authentication (Tom Lane) + peer authentication (Tom Lane) - Fix typo in pg_srand48 seed initialization (Andres Freund) + Fix typo in pg_srand48 seed initialization (Andres Freund) This led to failure to use all bits of the provided seed. This function - is not used on most platforms (only those without srandom), + is not used on most platforms (only those without srandom), and the potential security exposure from a less-random-than-expected seed seems minimal in any case. @@ -389,25 +389,25 @@ - Avoid integer overflow when the sum of LIMIT and - OFFSET values exceeds 2^63 (Heikki Linnakangas) + Avoid integer overflow when the sum of LIMIT and + OFFSET values exceeds 2^63 (Heikki Linnakangas) - Add overflow checks to int4 and int8 versions of - generate_series() (Robert Haas) + Add overflow checks to int4 and int8 versions of + generate_series() (Robert Haas) - Fix trailing-zero removal in to_char() (Marti Raudsepp) + Fix trailing-zero removal in to_char() (Marti Raudsepp) - In a format with FM and no digit positions + In a format with FM and no digit positions after the decimal point, zeroes to the left of the decimal point could be removed incorrectly. @@ -415,41 +415,41 @@ - Fix pg_size_pretty() to avoid overflow for inputs close to + Fix pg_size_pretty() to avoid overflow for inputs close to 2^63 (Tom Lane) - Fix psql's counting of script file line numbers during - COPY from a different file (Tom Lane) + Fix psql's counting of script file line numbers during + COPY from a different file (Tom Lane) - Fix pg_restore's direct-to-database mode for - standard_conforming_strings (Tom Lane) + Fix pg_restore's direct-to-database mode for + standard_conforming_strings (Tom Lane) - pg_restore could emit incorrect commands when restoring + pg_restore could emit incorrect commands when restoring directly to a database server from an archive file that had been made - with standard_conforming_strings set to on. + with standard_conforming_strings set to on. - Fix write-past-buffer-end and memory leak in libpq's + Fix write-past-buffer-end and memory leak in libpq's LDAP service lookup code (Albe Laurenz) - In libpq, avoid failures when using nonblocking I/O + In libpq, avoid failures when using nonblocking I/O and an SSL connection (Martin Pihlak, Tom Lane) @@ -461,14 +461,14 @@ - In particular, the response to a server report of fork() + In particular, the response to a server report of fork() failure during SSL connection startup is now saner. - Make ecpglib write double values with 15 digits + Make ecpglib write double values with 15 digits precision (Akira Kurosawa) @@ -480,7 +480,7 @@ - contrib/pg_crypto's blowfish encryption code could give + contrib/pg_crypto's blowfish encryption code could give wrong results on platforms where char is signed (which is most), leading to encrypted passwords being weaker than they should be. 
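A minimal sketch of the distinction drawn in the transform_null_equals item above; results assume an otherwise default configuration.
<programlisting>
SET transform_null_equals = on;
SELECT NULL = NULL;   -- a hand-written "= NULL" is read as IS NULL, so this should yield true
SELECT CASE 1 WHEN NULL THEN 'match' ELSE 'no match' END;
                      -- the internally generated "1 = NULL" is left alone, so this yields 'no match'
</programlisting>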
@@ -488,13 +488,13 @@ - Fix memory leak in contrib/seg (Heikki Linnakangas) + Fix memory leak in contrib/seg (Heikki Linnakangas) - Fix pgstatindex() to give consistent results for empty + Fix pgstatindex() to give consistent results for empty indexes (Tom Lane) @@ -526,7 +526,7 @@ - Update time zone data files to tzdata release 2011i + Update time zone data files to tzdata release 2011i for DST law changes in Canada, Egypt, Russia, Samoa, and South Sudan. @@ -582,15 +582,15 @@ - Fix dangling-pointer problem in BEFORE ROW UPDATE trigger + Fix dangling-pointer problem in BEFORE ROW UPDATE trigger handling when there was a concurrent update to the target tuple (Tom Lane) This bug has been observed to result in intermittent cannot - extract system attribute from virtual tuple failures while trying to - do UPDATE RETURNING ctid. There is a very small probability + extract system attribute from virtual tuple failures while trying to + do UPDATE RETURNING ctid. There is a very small probability of more serious errors, such as generating incorrect index entries for the updated tuple. @@ -598,13 +598,13 @@ - Disallow DROP TABLE when there are pending deferred trigger + Disallow DROP TABLE when there are pending deferred trigger events for the table (Tom Lane) - Formerly the DROP would go through, leading to - could not open relation with OID nnn errors when the + Formerly the DROP would go through, leading to + could not open relation with OID nnn errors when the triggers were eventually fired. @@ -617,7 +617,7 @@ - Fix pg_restore to cope with long lines (over 1KB) in + Fix pg_restore to cope with long lines (over 1KB) in TOC files (Tom Lane) @@ -649,14 +649,14 @@ - Fix path separator used by pg_regress on Cygwin + Fix path separator used by pg_regress on Cygwin (Andrew Dunstan) - Update time zone data files to tzdata release 2011f + Update time zone data files to tzdata release 2011f for DST law changes in Chile, Cuba, Falkland Islands, Morocco, Samoa, and Turkey; also historical corrections for South Australia, Alaska, and Hawaii. @@ -700,15 +700,15 @@ - Avoid failures when EXPLAIN tries to display a simple-form - CASE expression (Tom Lane) + Avoid failures when EXPLAIN tries to display a simple-form + CASE expression (Tom Lane) - If the CASE's test expression was a constant, the planner - could simplify the CASE into a form that confused the + If the CASE's test expression was a constant, the planner + could simplify the CASE into a form that confused the expression-display code, resulting in unexpected CASE WHEN - clause errors. + clause errors. @@ -733,44 +733,44 @@ - The date type supports a wider range of dates than can be - represented by the timestamp types, but the planner assumed it + The date type supports a wider range of dates than can be + represented by the timestamp types, but the planner assumed it could always convert a date to timestamp with impunity. - Fix pg_restore's text output for large objects (BLOBs) - when standard_conforming_strings is on (Tom Lane) + Fix pg_restore's text output for large objects (BLOBs) + when standard_conforming_strings is on (Tom Lane) Although restoring directly to a database worked correctly, string - escaping was incorrect if pg_restore was asked for - SQL text output and standard_conforming_strings had been + escaping was incorrect if pg_restore was asked for + SQL text output and standard_conforming_strings had been enabled in the source database. 
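The DROP TABLE restriction above can be reproduced with a deferred foreign-key constraint; the table names here are hypothetical.
<programlisting>
BEGIN;
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (pid int REFERENCES parent DEFERRABLE INITIALLY DEFERRED);
INSERT INTO parent VALUES (1);
INSERT INTO child  VALUES (1);   -- queues a deferred constraint-trigger event on child
DROP TABLE child;                -- now rejected; formerly the drop succeeded and the
                                 -- commit-time trigger firing then failed
ROLLBACK;
</programlisting>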
- Fix erroneous parsing of tsquery values containing + Fix erroneous parsing of tsquery values containing ... & !(subexpression) | ... (Tom Lane) Queries containing this combination of operators were not executed - correctly. The same error existed in contrib/intarray's - query_int type and contrib/ltree's - ltxtquery type. + correctly. The same error existed in contrib/intarray's + query_int type and contrib/ltree's + ltxtquery type. - Fix buffer overrun in contrib/intarray's input function - for the query_int type (Apple) + Fix buffer overrun in contrib/intarray's input function + for the query_int type (Apple) @@ -782,16 +782,16 @@ - Fix bug in contrib/seg's GiST picksplit algorithm + Fix bug in contrib/seg's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a seg column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a seg column. + If you have such an index, consider REINDEXing it after installing this update. (This is identical to the bug that was fixed in - contrib/cube in the previous update.) + contrib/cube in the previous update.) @@ -833,17 +833,17 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. @@ -853,7 +853,7 @@ - This could result in bad buffer id: 0 failures or + This could result in bad buffer id: 0 failures or corruption of index contents during replication. @@ -867,19 +867,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -895,7 +895,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -905,7 +905,7 @@ - Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -917,14 +917,14 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -936,15 +936,15 @@ - Behave correctly if ORDER BY, LIMIT, - FOR UPDATE, or WITH is attached to the - VALUES part of INSERT ... 
VALUES (Tom Lane) + Behave correctly if ORDER BY, LIMIT, + FOR UPDATE, or WITH is attached to the + VALUES part of INSERT ... VALUES (Tom Lane) - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -955,11 +955,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. @@ -978,14 +978,14 @@ - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix PL/Python's handling of set-returning functions + Fix PL/Python's handling of set-returning functions (Jan Urbanski) @@ -997,22 +997,22 @@ - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -1020,20 +1020,20 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -1084,7 +1084,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. 
Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -1113,7 +1113,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -1135,7 +1135,7 @@ - Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on + Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on Windows (Magnus Hagander) @@ -1149,7 +1149,7 @@ - Fix possible duplicate scans of UNION ALL member relations + Fix possible duplicate scans of UNION ALL member relations (Tom Lane) @@ -1201,7 +1201,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -1227,7 +1227,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -1235,28 +1235,28 @@ Fix possible data corruption in ALTER TABLE ... SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -1264,30 +1264,30 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) - Add hstore(text, text) - function to contrib/hstore (Robert Haas) + Add hstore(text, text) + function to contrib/hstore (Robert Haas) This function is the recommended substitute for the now-deprecated - => operator. It was back-patched so that future-proofed + => operator. It was back-patched so that future-proofed code can be used with older server versions. Note that the patch will - be effective only after contrib/hstore is installed or + be effective only after contrib/hstore is installed or reinstalled in a particular database. Users might prefer to execute - the CREATE FUNCTION command by hand, instead. + the CREATE FUNCTION command by hand, instead. @@ -1300,7 +1300,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -1315,7 +1315,7 @@ - Make Windows' N. Central Asia Standard Time timezone map to + Make Windows' N. 
Central Asia Standard Time timezone map to Asia/Novosibirsk, not Asia/Almaty (Magnus Hagander) @@ -1362,19 +1362,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -1383,19 +1383,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -1419,10 +1419,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. @@ -1430,7 +1430,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -1442,7 +1442,7 @@ - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -1455,15 +1455,15 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Fix psql's \copy to not add spaces around - a dot within \copy (select ...) (Tom) + Fix psql's \copy to not add spaces around + a dot within \copy (select ...) 
(Tom) @@ -1474,7 +1474,7 @@ - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -1482,7 +1482,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -1514,14 +1514,14 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. - Also, add PKST (Pakistan Summer Time) to the default set of + Also, add PKST (Pakistan Summer Time) to the default set of timezone abbreviations. @@ -1563,7 +1563,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -1619,8 +1619,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -1646,7 +1646,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -1668,23 +1668,23 @@ Improve constraint exclusion processing of boolean-variable cases, in particular make it possible to exclude a partition that has a - bool_column = false constraint (Tom) + bool_column = false constraint (Tom) - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -1692,35 +1692,35 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) - Fix possible infinite loop if SSL_read or - SSL_write fails without setting errno (Tom) + Fix possible infinite loop if SSL_read or + SSL_write fails without setting errno (Tom) This is reportedly possible with some Windows versions of - openssl. + openssl. 
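A sketch of the boolean constraint-exclusion case mentioned above, using hypothetical child tables.
<programlisting>
CREATE TABLE orders (id int, shipped boolean);
CREATE TABLE orders_shipped   (CHECK (shipped = true))  INHERITS (orders);
CREATE TABLE orders_unshipped (CHECK (shipped = false)) INHERITS (orders);
SET constraint_exclusion = on;
EXPLAIN SELECT * FROM orders WHERE shipped;
-- the child carrying the "shipped = false" constraint should now be excluded from the plan
</programlisting>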
- Fix psql's numericlocale option to not + Fix psql's numericlocale option to not format strings it shouldn't in latex and troff output formats (Heikki) - Make psql return the correct exit status (3) when - ON_ERROR_STOP and --single-transaction are - both specified and an error occurs during the implied COMMIT + Make psql return the correct exit status (3) when + ON_ERROR_STOP and --single-transaction are + both specified and an error occurs during the implied COMMIT (Bruce) @@ -1741,7 +1741,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -1753,28 +1753,28 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Make building of contrib/xml2 more robust on Windows + Make building of contrib/xml2 more robust on Windows (Andrew) @@ -1785,14 +1785,14 @@ - One known symptom of this bug is that rows in pg_listener + One known symptom of this bug is that rows in pg_listener could be dropped under heavy load. - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. @@ -1864,14 +1864,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. @@ -1890,7 +1890,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -1948,7 +1948,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -1958,13 +1958,13 @@ Fix processing of ownership dependencies during CREATE OR - REPLACE FUNCTION (Tom) + REPLACE FUNCTION (Tom) - Fix bug with calling plperl from plperlu or vice + Fix bug with calling plperl from plperlu or vice versa (Tom) @@ -1984,7 +1984,7 @@ Ensure that Perl arrays are properly converted to - PostgreSQL arrays when returned by a set-returning + PostgreSQL arrays when returned by a set-returning PL/Perl function (Andrew Dunstan, Abhijit Menon-Sen) @@ -2001,20 +2001,20 @@ - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. 
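For context on the numericlocale item above: the option reformats numeric output according to lc_numeric, and the fix keeps it from touching values it shouldn't in the latex and troff formats. A rough illustration:
<programlisting>
\pset numericlocale on
SELECT 1234567.89::numeric AS amount;   -- shown with locale-specific digit grouping,
                                        -- e.g. 1,234,567.89 under an en_US locale
</programlisting>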
- Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -2027,14 +2027,14 @@ - This includes adding IDT and SGT to the default + This includes adding IDT and SGT to the default timezone abbreviation set. - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -2065,8 +2065,8 @@ A dump/restore is not required for those running 8.2.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 8.2.14. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 8.2.14. Also, if you are upgrading from a version earlier than 8.2.11, see . @@ -2080,7 +2080,7 @@ - Force WAL segment switch during pg_start_backup() + Force WAL segment switch during pg_start_backup() (Heikki) @@ -2091,26 +2091,26 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) - Make LOAD of an already-loaded loadable module + Make LOAD of an already-loaded loadable module into a no-op (Tom) - Formerly, LOAD would attempt to unload and re-load the + Formerly, LOAD would attempt to unload and re-load the module, but this is unsafe and not all that useful. @@ -2145,32 +2145,32 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -2187,7 +2187,7 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. 
Japan (Itagaki Takahiro) @@ -2195,7 +2195,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -2228,14 +2228,14 @@ - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Make contrib/hstore throw an error when a key or + Make contrib/hstore throw an error when a key or value is too long to fit in its data structure, rather than silently truncating it (Andrew Gierth) @@ -2243,15 +2243,15 @@ - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -2264,7 +2264,7 @@ - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Jordan, Pakistan, Argentina/San_Luis, Cuba, Jordan (historical correction only), Mauritius, Morocco, Palestine, Syria, Tunisia. @@ -2315,7 +2315,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -2326,7 +2326,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -2339,40 +2339,40 @@ - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) - Fix possible failure in contrib/tsearch2 when C locale is + Fix possible failure in contrib/tsearch2 when C locale is used with a multi-byte encoding (Teodor) - Crashes were possible on platforms where wchar_t is narrower - than int; Windows in particular. + Crashes were possible on platforms where wchar_t is narrower + than int; Windows in particular. - Fix extreme inefficiency in contrib/tsearch2 parser's - handling of an email-like string containing multiple @ + Fix extreme inefficiency in contrib/tsearch2 parser's + handling of an email-like string containing multiple @ characters (Heikki) - Fix decompilation of CASE WHEN with an implicit coercion + Fix decompilation of CASE WHEN with an implicit coercion (Tom) This mistake could lead to Assert failures in an Assert-enabled build, - or an unexpected CASE WHEN clause error message in other + or an unexpected CASE WHEN clause error message in other cases, when trying to examine or dump a view. @@ -2383,24 +2383,24 @@ - If CLUSTER or a rewriting variant of ALTER TABLE + If CLUSTER or a rewriting variant of ALTER TABLE were executed by someone other than the table owner, the - pg_type entry for the table's TOAST table would end up + pg_type entry for the table's TOAST table would end up marked as owned by that someone. 
This caused no immediate problems, since the permissions on the TOAST rowtype aren't examined by any ordinary database operation. However, it could lead to unexpected failures if one later tried to drop the role that issued the command - (in 8.1 or 8.2), or owner of data type appears to be invalid - warnings from pg_dump after having done so (in 8.3). + (in 8.1 or 8.2), or owner of data type appears to be invalid + warnings from pg_dump after having done so (in 8.3). - Fix PL/pgSQL to not treat INTO after INSERT as + Fix PL/pgSQL to not treat INTO after INSERT as an INTO-variables clause anywhere in the string, not only at the start; - in particular, don't fail for INSERT INTO within - CREATE RULE (Tom) + in particular, don't fail for INSERT INTO within + CREATE RULE (Tom) @@ -2418,21 +2418,21 @@ - Retry failed calls to CallNamedPipe() on Windows + Retry failed calls to CallNamedPipe() on Windows (Steve Marshall, Magnus) It appears that this function can sometimes fail transiently; we previously treated any failure as a hard error, which could - confuse LISTEN/NOTIFY as well as other + confuse LISTEN/NOTIFY as well as other operations. - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -2474,13 +2474,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -2497,7 +2497,7 @@ Fix possible Assert failure if a statement executed in PL/pgSQL is rewritten into another kind of statement, for example if an - INSERT is rewritten into an UPDATE (Heikki) + INSERT is rewritten into an UPDATE (Heikki) @@ -2507,7 +2507,7 @@ - This primarily affects domains that are declared with CHECK + This primarily affects domains that are declared with CHECK constraints involving user-defined stable or immutable functions. Such functions typically fail if no snapshot has been set. @@ -2522,14 +2522,14 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) - Fix a problem that made UPDATE RETURNING tableoid + Fix a problem that made UPDATE RETURNING tableoid return zero instead of the correct OID (Tom) @@ -2542,13 +2542,13 @@ This could result in bad plans for queries like - ... from a left join b on a.a1 = b.b1 where a.a1 = 42 ... + ... from a left join b on a.a1 = b.b1 where a.a1 = 42 ... 
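The query shape from the preceding item, spelled out with hypothetical tables a and b.
<programlisting>
CREATE TABLE a (a1 int);
CREATE TABLE b (b1 int);
EXPLAIN SELECT *
  FROM a LEFT JOIN b ON a.a1 = b.b1
 WHERE a.a1 = 42;
-- the constant restriction on a.a1 interacts with the outer-join clause;
-- the fix keeps the planner from choosing a bad plan for this combination
</programlisting>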
- Improve optimizer's handling of long IN lists (Tom) + Improve optimizer's handling of long IN lists (Tom) @@ -2581,37 +2581,37 @@ - Fix contrib/dblink's - dblink_get_result(text,bool) function (Joe) + Fix contrib/dblink's + dblink_get_result(text,bool) function (Joe) - Fix possible garbage output from contrib/sslinfo functions + Fix possible garbage output from contrib/sslinfo functions (Tom) - Fix configure script to properly report failure when + Fix configure script to properly report failure when unable to obtain linkage information for PL/Perl (Andrew) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) - Update time zone data files to tzdata release 2009a (for + Update time zone data files to tzdata release 2009a (for Kathmandu and historical DST corrections in Switzerland, Cuba) @@ -2642,7 +2642,7 @@ A dump/restore is not required for those running 8.2.X. However, if you are upgrading from a version earlier than 8.2.7, see . Also, if you were running a previous - 8.2.X release, it is recommended to REINDEX all GiST + 8.2.X release, it is recommended to REINDEX all GiST indexes after the upgrade. @@ -2656,13 +2656,13 @@ Fix GiST index corruption due to marking the wrong index entry - dead after a deletion (Teodor) + dead after a deletion (Teodor) This would result in index searches failing to find rows they should have found. Corrupted indexes can be fixed with - REINDEX. + REINDEX. @@ -2674,7 +2674,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -2689,8 +2689,8 @@ - Improve optimization of expression IN - (expression-list) queries (Tom, per an idea from Robert + Improve optimization of expression IN + (expression-list) queries (Tom, per an idea from Robert Haas) @@ -2703,13 +2703,13 @@ - Fix mis-expansion of rule queries when a sub-SELECT appears - in a function call in FROM, a multi-row VALUES - list, or a RETURNING list (Tom) + Fix mis-expansion of rule queries when a sub-SELECT appears + in a function call in FROM, a multi-row VALUES + list, or a RETURNING list (Tom) - The usual symptom of this problem is an unrecognized node type + The usual symptom of this problem is an unrecognized node type error. 
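One of the constructs described above, sketched with hypothetical tables and an ON INSERT rule.
<programlisting>
CREATE TABLE t     (x int);
CREATE TABLE t_log (x int);
CREATE RULE t_ins AS ON INSERT TO t DO ALSO
    INSERT INTO t_log VALUES (new.x);
-- a multi-row VALUES list containing a sub-SELECT; rule expansion used to
-- fail on this kind of statement with an "unrecognized node type" error
INSERT INTO t VALUES ((SELECT 1)), (2);
</programlisting>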
@@ -2729,9 +2729,9 @@ - Prevent possible collision of relfilenode numbers + Prevent possible collision of relfilenode numbers when moving a table to another tablespace with ALTER SET - TABLESPACE (Heikki) + TABLESPACE (Heikki) @@ -2750,14 +2750,14 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -2771,31 +2771,31 @@ - Fix ecpg's parsing of CREATE ROLE (Michael) + Fix ecpg's parsing of CREATE ROLE (Michael) - Fix recent breakage of pg_ctl restart (Tom) + Fix recent breakage of pg_ctl restart (Tom) - Ensure pg_control is opened in binary mode + Ensure pg_control is opened in binary mode (Itagaki Takahiro) - pg_controldata and pg_resetxlog + pg_controldata and pg_resetxlog did this incorrectly, and so could fail on Windows. - Update time zone data files to tzdata release 2008i (for + Update time zone data files to tzdata release 2008i (for DST law changes in Argentina, Brazil, Mauritius, Syria) @@ -2847,12 +2847,12 @@ - Fix potential miscalculation of datfrozenxid (Alvaro) + Fix potential miscalculation of datfrozenxid (Alvaro) This error may explain some recent reports of failure to remove old - pg_clog data. + pg_clog data. @@ -2864,7 +2864,7 @@ This responds to reports that the counters could overflow in sufficiently long transactions, leading to unexpected lock is - already held errors. + already held errors. @@ -2877,7 +2877,7 @@ Fix missed permissions checks when a view contains a simple - UNION ALL construct (Heikki) + UNION ALL construct (Heikki) @@ -2889,12 +2889,12 @@ Add checks in executor startup to ensure that the tuples produced by an - INSERT or UPDATE will match the target table's + INSERT or UPDATE will match the target table's current rowtype (Tom) - ALTER COLUMN TYPE, followed by re-use of a previously + ALTER COLUMN TYPE, followed by re-use of a previously cached plan, could produce this type of situation. The check protects against data corruption and/or crashes that could ensue. @@ -2902,29 +2902,29 @@ - Fix possible repeated drops during DROP OWNED (Tom) + Fix possible repeated drops during DROP OWNED (Tom) This would typically result in strange errors such as cache - lookup failed for relation NNN. + lookup failed for relation NNN. - Fix AT TIME ZONE to first try to interpret its timezone + Fix AT TIME ZONE to first try to interpret its timezone argument as a timezone abbreviation, and only try it as a full timezone name if that fails, rather than the other way around as formerly (Tom) The timestamp input functions have always resolved ambiguous zone names - in this order. Making AT TIME ZONE do so as well improves + in this order. Making AT TIME ZONE do so as well improves consistency, and fixes a compatibility bug introduced in 8.1: in ambiguous cases we now behave the same as 8.0 and before did, - since in the older versions AT TIME ZONE accepted - only abbreviations. + since in the older versions AT TIME ZONE accepted + only abbreviations. 
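The resolution order described in the AT TIME ZONE item above, illustrated:
<programlisting>
SELECT TIMESTAMP '2008-07-01 12:00' AT TIME ZONE 'EST';               -- abbreviation, fixed UTC-5
SELECT TIMESTAMP '2008-07-01 12:00' AT TIME ZONE 'America/New_York';  -- full zone name, observes DST
</programlisting>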
@@ -2951,14 +2951,14 @@ Allow spaces in the suffix part of an LDAP URL in - pg_hba.conf (Tom) + pg_hba.conf (Tom) Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) @@ -2976,21 +2976,21 @@ - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. - Fix PL/pgSQL to not fail when a FOR loop's target variable + Fix PL/pgSQL to not fail when a FOR loop's target variable is a record containing composite-type fields (Tom) @@ -3005,28 +3005,28 @@ On Windows, work around a Microsoft bug by preventing - libpq from trying to send more than 64kB per system call + libpq from trying to send more than 64kB per system call (Magnus) - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) - Fix pg_ctl to properly preserve postmaster - command-line arguments across a restart (Bruce) + Fix pg_ctl to properly preserve postmaster + command-line arguments across a restart (Bruce) - Update time zone data files to tzdata release 2008f (for + Update time zone data files to tzdata release 2008f (for DST law changes in Argentina, Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and Paraguay) @@ -3069,18 +3069,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -3088,13 +3088,13 @@ - Make ALTER AGGREGATE ... OWNER TO update - pg_shdepend (Tom) + Make ALTER AGGREGATE ... OWNER TO update + pg_shdepend (Tom) This oversight could lead to problems if the aggregate was later - involved in a DROP OWNED or REASSIGN OWNED + involved in a DROP OWNED or REASSIGN OWNED operation. @@ -3144,7 +3144,7 @@ - Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new + Fix ALTER TABLE ADD COLUMN ... 
PRIMARY KEY so that the new column is correctly checked to see if it's been initialized to all non-nulls (Brendan Jurd) @@ -3156,16 +3156,16 @@ - Fix possible CREATE TABLE failure when inheriting the - same constraint from multiple parent relations that + Fix possible CREATE TABLE failure when inheriting the + same constraint from multiple parent relations that inherited that constraint from a common ancestor (Tom) - Fix pg_get_ruledef() to show the alias, if any, attached - to the target table of an UPDATE or DELETE + Fix pg_get_ruledef() to show the alias, if any, attached + to the target table of an UPDATE or DELETE (Tom) @@ -3200,14 +3200,14 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) - Fix several datatype input functions, notably array_in(), + Fix several datatype input functions, notably array_in(), that were allowing unused bytes in their results to contain uninitialized, unpredictable values (Tom) @@ -3215,7 +3215,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -3223,24 +3223,24 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). - This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). + This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). - Update time zone data files to tzdata release 2008c (for + Update time zone data files to tzdata release 2008c (for DST law changes in Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, and Argentina/San_Luis) @@ -3248,47 +3248,47 @@ - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix broken GiST comparison function for contrib/tsearch2's - tsquery type (Teodor) + Fix broken GiST comparison function for contrib/tsearch2's + tsquery type (Teodor) - Fix possible crashes in contrib/cube functions (Tom) + Fix possible crashes in contrib/cube functions (Tom) - Fix core dump in contrib/xml2's - xpath_table() function when the input query returns a + Fix core dump in contrib/xml2's + xpath_table() function when the input query returns a NULL value (Tom) - Fix contrib/xml2's makefile to not override - CFLAGS (Tom) + Fix contrib/xml2's makefile to not override + CFLAGS (Tom) - Fix DatumGetBool macro to not fail with gcc + Fix DatumGetBool macro to not fail with gcc 4.3 (Tom) - This problem affects old style (V0) C functions that + This problem affects old style (V0) C functions that return boolean. The fix is already in 8.3, but the need to back-patch it was not realized at the time. @@ -3318,7 +3318,7 @@ A dump/restore is not required for those running 8.2.X. - However, you might need to REINDEX indexes on textual + However, you might need to REINDEX indexes on textual columns after updating, if you are affected by the Windows locale issue described below. 
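The regular-expression corner case above, shown directly:
<programlisting>
SELECT substring('foo'    from 'foo(bar)?');   -- NULL: the group (bar) did not participate in the match
SELECT substring('foobar' from 'foo(bar)?');   -- 'bar'
</programlisting>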
@@ -3342,34 +3342,34 @@ over two years ago, but Windows with UTF-8 uses a separate code path that was not updated. If you are using a locale that considers some non-identical strings as equal, you may need to - REINDEX to fix existing indexes on textual columns. + REINDEX to fix existing indexes on textual columns. - Repair potential deadlock between concurrent VACUUM FULL + Repair potential deadlock between concurrent VACUUM FULL operations on different system catalogs (Tom) - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -3378,14 +3378,14 @@ - Disallow LISTEN and UNLISTEN within a + Disallow LISTEN and UNLISTEN within a prepared transaction (Tom) This was formerly allowed but trying to do it had various unpleasant consequences, notably that the originating backend could not exit - as long as an UNLISTEN remained uncommitted. + as long as an UNLISTEN remained uncommitted. @@ -3426,14 +3426,14 @@ - Fix unrecognized node type error in some variants of - ALTER OWNER (Tom) + Fix unrecognized node type error in some variants of + ALTER OWNER (Tom) - Ensure pg_stat_activity.waiting flag + Ensure pg_stat_activity.waiting flag is cleared when a lock wait is aborted (Tom) @@ -3451,20 +3451,20 @@ - Update time zone data files to tzdata release 2008a + Update time zone data files to tzdata release 2008a (in particular, recent Chile changes); adjust timezone abbreviation - VET (Venezuela) to mean UTC-4:30, not UTC-4:00 (Tom) + VET (Venezuela) to mean UTC-4:30, not UTC-4:00 (Tom) - Fix pg_ctl to correctly extract the postmaster's port + Fix pg_ctl to correctly extract the postmaster's port number from command-line options (Itagaki Takahiro, Tom) - Previously, pg_ctl start -w could try to contact the + Previously, pg_ctl start -w could try to contact the postmaster on the wrong port, leading to bogus reports of startup failure. @@ -3472,31 +3472,31 @@ - Use - This is known to be necessary when building PostgreSQL - with gcc 4.3 or later. + This is known to be necessary when building PostgreSQL + with gcc 4.3 or later. - Correctly enforce statement_timeout values longer - than INT_MAX microseconds (about 35 minutes) (Tom) + Correctly enforce statement_timeout values longer + than INT_MAX microseconds (about 35 minutes) (Tom) - This bug affects only builds with . 
- Fix unexpected PARAM_SUBLINK ID planner error when + Fix unexpected PARAM_SUBLINK ID planner error when constant-folding simplifies a sub-select (Tom) @@ -3504,7 +3504,7 @@ Fix logical errors in constraint-exclusion handling of IS - NULL and NOT expressions (Tom) + NULL and NOT expressions (Tom) @@ -3515,7 +3515,7 @@ - Fix another cause of failed to build any N-way joins + Fix another cause of failed to build any N-way joins planner errors (Tom) @@ -3539,8 +3539,8 @@ - Fix display of constant expressions in ORDER BY - and GROUP BY (Tom) + Fix display of constant expressions in ORDER BY + and GROUP BY (Tom) @@ -3552,7 +3552,7 @@ - Fix libpq to handle NOTICE messages correctly + Fix libpq to handle NOTICE messages correctly during COPY OUT (Tom) @@ -3600,7 +3600,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -3611,18 +3611,18 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) @@ -3642,13 +3642,13 @@ - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 8.2.5 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) @@ -3662,13 +3662,13 @@ Fix GIN index build to work properly when - maintenance_work_mem is 4GB or more (Tom) + maintenance_work_mem is 4GB or more (Tom) - Update time zone data files to tzdata release 2007k + Update time zone data files to tzdata release 2007k (in particular, recent Argentina changes) (Tom) @@ -3690,22 +3690,22 @@ Fix planner failure in some cases of WHERE false AND var IN - (SELECT ...) (Tom) + (SELECT ...) (Tom) - Make CREATE TABLE ... SERIAL and - ALTER SEQUENCE ... OWNED BY not change the - currval() state of the sequence (Tom) + Make CREATE TABLE ... SERIAL and + ALTER SEQUENCE ... OWNED BY not change the + currval() state of the sequence (Tom) Preserve the tablespace and storage parameters of indexes that are - rebuilt by ALTER TABLE ... ALTER COLUMN TYPE (Tom) + rebuilt by ALTER TABLE ... 
ALTER COLUMN TYPE (Tom) @@ -3724,28 +3724,28 @@ - Make VACUUM not use all of maintenance_work_mem + Make VACUUM not use all of maintenance_work_mem when the table is too small for it to be useful (Alvaro) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) - Make corr() return the correct result for negative + Make corr() return the correct result for negative correlation values (Neil) - Fix overflow in extract(epoch from interval) for intervals + Fix overflow in extract(epoch from interval) for intervals exceeding 68 years (Tom) @@ -3759,13 +3759,13 @@ - Fix PL/Perl to cope when platform's Perl defines type bool - as int rather than char (Tom) + Fix PL/Perl to cope when platform's Perl defines type bool + as int rather than char (Tom) While this could theoretically happen anywhere, no standard build of - Perl did things this way ... until macOS 10.5. + Perl did things this way ... until macOS 10.5. @@ -3784,73 +3784,73 @@ - Fix pg_dump to correctly handle inheritance child tables + Fix pg_dump to correctly handle inheritance child tables that have default expressions different from their parent's (Tom) - Fix libpq crash when PGPASSFILE refers + Fix libpq crash when PGPASSFILE refers to a file that is not a plain file (Martin Pitt) - ecpg parser fixes (Michael) + ecpg parser fixes (Michael) - Make contrib/pgcrypto defend against - OpenSSL libraries that fail on keys longer than 128 + Make contrib/pgcrypto defend against + OpenSSL libraries that fail on keys longer than 128 bits; which is the case at least on some Solaris versions (Marko Kreen) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Fix tsvector and tsquery output routines to + Fix tsvector and tsquery output routines to escape backslashes correctly (Teodor, Bruce) - Fix crash of to_tsvector() on huge input strings (Teodor) + Fix crash of to_tsvector() on huge input strings (Teodor) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. - Update gettimeofday configuration check so that - PostgreSQL can be built on newer versions of - MinGW (Magnus) + Update gettimeofday configuration check so that + PostgreSQL can be built on newer versions of + MinGW (Magnus) @@ -3890,48 +3890,48 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Fix ALTER DOMAIN ADD CONSTRAINT for cases involving + Fix ALTER DOMAIN ADD CONSTRAINT for cases involving domains over domains (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... 
DEFAULT NULL work properly (Tom) Fix some planner problems with outer joins, notably poor - size estimation for t1 LEFT JOIN t2 WHERE t2.col IS NULL + size estimation for t1 LEFT JOIN t2 WHERE t2.col IS NULL (Tom) - Allow the interval data type to accept input consisting only of + Allow the interval data type to accept input consisting only of milliseconds or microseconds (Neil) - Allow timezone name to appear before the year in timestamp input (Tom) + Allow timezone name to appear before the year in timestamp input (Tom) - Fixes for GIN indexes used by /contrib/tsearch2 (Teodor) + Fixes for GIN indexes used by /contrib/tsearch2 (Teodor) @@ -3943,7 +3943,7 @@ - Fix excessive logging of SSL error messages (Tom) + Fix excessive logging of SSL error messages (Tom) @@ -3956,7 +3956,7 @@ - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) @@ -3969,13 +3969,13 @@ - Fix stddev_pop(numeric) and var_pop(numeric) (Tom) + Fix stddev_pop(numeric) and var_pop(numeric) (Tom) - Prevent REINDEX and CLUSTER from failing + Prevent REINDEX and CLUSTER from failing due to attempting to process temporary tables of other sessions (Alvaro) @@ -3994,39 +3994,39 @@ - Make pg_ctl -w work properly in Windows service mode (Dave Page) + Make pg_ctl -w work properly in Windows service mode (Dave Page) - Fix memory allocation bug when using MIT Kerberos on Windows (Magnus) + Fix memory allocation bug when using MIT Kerberos on Windows (Magnus) - Suppress timezone name (%Z) in log timestamps on Windows + Suppress timezone name (%Z) in log timestamps on Windows because of possible encoding mismatches (Tom) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) - Restrict /contrib/pgstattuple functions to superusers, for security reasons (Tom) + Restrict /contrib/pgstattuple functions to superusers, for security reasons (Tom) - Do not let /contrib/intarray try to make its GIN opclass + Do not let /contrib/intarray try to make its GIN opclass the default (this caused problems at dump/restore) (Tom) @@ -4068,56 +4068,56 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. + See CREATE FUNCTION for more information. 
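The explicit temporary-schema placement above is what lets a security-definer function pin down a trustworthy search path; the schema name here is hypothetical.
<programlisting>
-- listing pg_temp explicitly, and last, keeps temporary objects from
-- capturing unqualified names ahead of trusted schemas
SET search_path = admin, pg_catalog, pg_temp;
</programlisting>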
- Fix shared_preload_libraries for Windows + Fix shared_preload_libraries for Windows by forcing reload in each backend (Korry Douglas) - Fix to_char() so it properly upper/lower cases localized day or month + Fix to_char() so it properly upper/lower cases localized day or month names (Pavel Stehule) - /contrib/tsearch2 crash fixes (Teodor) + /contrib/tsearch2 crash fixes (Teodor) - Require COMMIT PREPARED to be executed in the same + Require COMMIT PREPARED to be executed in the same database as the transaction was prepared in (Heikki) - Allow pg_dump to do binary backups larger than two gigabytes + Allow pg_dump to do binary backups larger than two gigabytes on Windows (Magnus) - New traditional (Taiwan) Chinese FAQ (Zhou Daojing) + New traditional (Taiwan) Chinese FAQ (Zhou Daojing) @@ -4129,8 +4129,8 @@ - Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -4142,8 +4142,8 @@ - Fix pg_dump so it can dump a serial column's sequence - using when not also dumping the owning table (Tom) @@ -4158,7 +4158,7 @@ Fix possible wrong answers or crash when a PL/pgSQL function tries - to RETURN from within an EXCEPTION block + to RETURN from within an EXCEPTION block (Tom) @@ -4286,8 +4286,8 @@ - Properly handle to_char('CC') for years ending in - 00 (Tom) + Properly handle to_char('CC') for years ending in + 00 (Tom) @@ -4297,41 +4297,41 @@ - /contrib/tsearch2 localization improvements (Tatsuo, Teodor) + /contrib/tsearch2 localization improvements (Tatsuo, Teodor) Fix incorrect permission check in - information_schema.key_column_usage view (Tom) + information_schema.key_column_usage view (Tom) - The symptom is relation with OID nnnnn does not exist errors. - To get this fix without using initdb, use CREATE OR - REPLACE VIEW to install the corrected definition found in - share/information_schema.sql. Note you will need to do + The symptom is relation with OID nnnnn does not exist errors. + To get this fix without using initdb, use CREATE OR + REPLACE VIEW to install the corrected definition found in + share/information_schema.sql. Note you will need to do this in each database. - Improve VACUUM performance for databases with many tables (Tom) + Improve VACUUM performance for databases with many tables (Tom) - Fix for rare Assert() crash triggered by UNION (Tom) + Fix for rare Assert() crash triggered by UNION (Tom) Fix potentially incorrect results from index searches using - ROW inequality conditions (Tom) + ROW inequality conditions (Tom) @@ -4344,7 +4344,7 @@ - Fix bogus permission denied failures occurring on Windows + Fix bogus permission denied failures occurring on Windows due to attempts to fsync already-deleted files (Magnus, Tom) @@ -4414,21 +4414,21 @@ - Fix crash with SELECT ... LIMIT ALL (also - LIMIT NULL) (Tom) + Fix crash with SELECT ... 
LIMIT ALL (also + LIMIT NULL) (Tom) - Several /contrib/tsearch2 fixes (Teodor) + Several /contrib/tsearch2 fixes (Teodor) On Windows, make log messages coming from the operating system use - ASCII encoding (Hiroshi Saito) + ASCII encoding (Hiroshi Saito) @@ -4439,8 +4439,8 @@ - Fix Windows linking of pg_dump using - win32.mak + Fix Windows linking of pg_dump using + win32.mak (Hiroshi Saito) @@ -4469,13 +4469,13 @@ - Improve build speed of PDF documentation (Peter) + Improve build speed of PDF documentation (Peter) - Re-add JST (Japan) timezone abbreviation (Tom) + Re-add JST (Japan) timezone abbreviation (Tom) @@ -4487,8 +4487,8 @@ - Have psql print multi-byte combining characters as - before, rather than output as \u (Tom) + Have psql print multi-byte combining characters as + before, rather than output as \u (Tom) @@ -4498,19 +4498,19 @@ - This improves psql \d performance also. + This improves psql \d performance also. - Make pg_dumpall assume that databases have public - CONNECT privilege, when dumping from a pre-8.2 server (Tom) + Make pg_dumpall assume that databases have public + CONNECT privilege, when dumping from a pre-8.2 server (Tom) This preserves the previous behavior that anyone can connect to a - database if allowed by pg_hba.conf. + database if allowed by pg_hba.conf. @@ -4541,14 +4541,14 @@ Query language enhancements including INSERT/UPDATE/DELETE RETURNING, multirow VALUES lists, and optional target-table alias in - UPDATE/DELETE + UPDATE/DELETE Index creation without blocking concurrent - INSERT/UPDATE/DELETE + INSERT/UPDATE/DELETE operations @@ -4659,13 +4659,13 @@ Set escape_string_warning - to on by default (Bruce) + linkend="guc-escape-string-warning">escape_string_warning + to on by default (Bruce) This issues a warning if backslash escapes are used in - non-escape (non-E'') + non-escape (non-E'') strings. @@ -4673,8 +4673,8 @@ Change the row - constructor syntax (ROW(...)) so that - list elements foo.* will be expanded to a list + constructor syntax (ROW(...)) so that + list elements foo.* will be expanded to a list of their member fields, rather than creating a nested row type field as formerly (Tom) @@ -4682,15 +4682,15 @@ The new behavior is substantially more useful since it allows, for example, triggers to check for data changes - with IF row(new.*) IS DISTINCT FROM row(old.*). - The old behavior is still available by omitting .*. + with IF row(new.*) IS DISTINCT FROM row(old.*). + The old behavior is still available by omitting .*. Make row comparisons - follow SQL standard semantics and allow them + follow SQL standard semantics and allow them to be used in index scans (Tom) @@ -4704,13 +4704,13 @@ - Make row IS NOT NULL - tests follow SQL standard semantics (Tom) + Make row IS NOT NULL + tests follow SQL standard semantics (Tom) The former behavior conformed to the standard for simple cases - with IS NULL, but IS NOT NULL would return + with IS NULL, but IS NOT NULL would return true if any row field was non-null, whereas the standard says it should return true only when all fields are non-null. @@ -4719,11 +4719,11 @@ Make SET - CONSTRAINT affect only one constraint (Kris Jurka) + CONSTRAINT affect only one constraint (Kris Jurka) - In previous releases, SET CONSTRAINT modified + In previous releases, SET CONSTRAINT modified all constraints with a matching name. In this release, the schema search path is used to modify only the first matching constraint. 
A schema specification is also @@ -4733,14 +4733,14 @@ - Remove RULE permission for tables, for security reasons + Remove RULE permission for tables, for security reasons (Tom) As of this release, only a table's owner can create or modify rules for the table. For backwards compatibility, - GRANT/REVOKE RULE is still accepted, + GRANT/REVOKE RULE is still accepted, but it does nothing. @@ -4769,14 +4769,14 @@ - Make command-line options of postmaster - and postgres + Make command-line options of postmaster + and postgres identical (Peter) This allows the postmaster to pass arguments to each backend - without using -o. Note that some options are now + without using -o. Note that some options are now only available as long-form options, because there were conflicting single-letter options. @@ -4784,13 +4784,13 @@ - Deprecate use of postmaster symbolic link (Peter) + Deprecate use of postmaster symbolic link (Peter) - postmaster and postgres + postmaster and postgres commands now act identically, with the behavior determined - by command-line options. The postmaster symbolic link is + by command-line options. The postmaster symbolic link is kept for compatibility, but is not really needed. @@ -4798,12 +4798,12 @@ Change log_duration + linkend="guc-log-duration">log_duration to output even if the query is not output (Tom) - In prior releases, log_duration only printed if + In prior releases, log_duration only printed if the query appeared earlier in the log. @@ -4811,15 +4811,15 @@ Make to_char(time) + linkend="functions-formatting">to_char(time) and to_char(interval) - treat HH and HH12 as 12-hour + linkend="functions-formatting">to_char(interval) + treat HH and HH12 as 12-hour intervals - Most applications should use HH24 unless they + Most applications should use HH24 unless they want a 12-hour display. @@ -4827,19 +4827,19 @@ Zero unmasked bits in conversion from INET to CIDR (Tom) + linkend="datatype-inet">INET to CIDR (Tom) This ensures that the converted value is actually valid for - CIDR. + CIDR. - Remove australian_timezones configuration variable + Remove australian_timezones configuration variable (Joachim Wieland) @@ -4857,35 +4857,35 @@ This might eliminate the need to set unrealistically small values of random_page_cost. - If you have been using a very small random_page_cost, + linkend="guc-random-page-cost">random_page_cost. + If you have been using a very small random_page_cost, please recheck your test cases. - Change behavior of pg_dump -n and - -t options. (Greg Sabino Mullane) + Change behavior of pg_dump -n and + -t options. (Greg Sabino Mullane) - See the pg_dump manual page for details. + See the pg_dump manual page for details. - Change libpq - PQdsplen() to return a useful value (Martijn + Change libpq + PQdsplen() to return a useful value (Martijn van Oosterhout) - Declare libpq - PQgetssl() as returning void *, - rather than SSL * (Martijn van Oosterhout) + Declare libpq + PQgetssl() as returning void *, + rather than SSL * (Martijn van Oosterhout) @@ -4897,7 +4897,7 @@ C-language loadable modules must now include a - PG_MODULE_MAGIC + PG_MODULE_MAGIC macro call for version compatibility checking (Martijn van Oosterhout) @@ -4923,12 +4923,12 @@ - In contrib/xml2/, rename xml_valid() to - xml_is_well_formed() (Tom) + In contrib/xml2/, rename xml_valid() to + xml_is_well_formed() (Tom) - xml_valid() will remain for backward compatibility, + xml_valid() will remain for backward compatibility, but its behavior will change to do schema checking in a future release. 
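The renamed contrib/xml2 function behaves like this (the literals are arbitrary):

    SELECT xml_is_well_formed('<doc><title>ok</title></doc>');  -- true
    SELECT xml_is_well_formed('<doc>');                         -- false; xml_valid() remains as a deprecated alias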
@@ -4936,7 +4936,7 @@ - Remove contrib/ora2pg/, now at contrib/ora2pg/, now at @@ -4944,21 +4944,21 @@ Remove contrib modules that have been migrated to PgFoundry: - adddepend, dbase, dbmirror, - fulltextindex, mac, userlock + adddepend, dbase, dbmirror, + fulltextindex, mac, userlock Remove abandoned contrib modules: - mSQL-interface, tips + mSQL-interface, tips - Remove QNX and BEOS ports (Bruce) + Remove QNX and BEOS ports (Bruce) @@ -5002,7 +5002,7 @@ Improve efficiency of IN + linkend="functions-comparisons">IN (list-of-expressions) clauses (Tom) @@ -5022,7 +5022,7 @@ - Add FILLFACTOR to FILLFACTOR to table and index creation (ITAGAKI Takahiro) @@ -5038,8 +5038,8 @@ Increase default values for shared_buffers - and max_fsm_pages + linkend="guc-shared-buffers">shared_buffers + and max_fsm_pages (Andrew) @@ -5074,8 +5074,8 @@ Improve the optimizer's selectivity estimates for LIKE, ILIKE, and + linkend="functions-like">LIKE, ILIKE, and regular expression operations (Tom) @@ -5085,7 +5085,7 @@ Improve planning of joins to inherited tables and UNION - ALL views (Tom) + ALL views (Tom) @@ -5093,18 +5093,18 @@ Allow constraint exclusion to be applied to inherited UPDATE and - DELETE queries (Tom) + linkend="ddl-inherit">inherited UPDATE and + DELETE queries (Tom) - SELECT already honored constraint exclusion. + SELECT already honored constraint exclusion. - Improve planning of constant WHERE clauses, such as + Improve planning of constant WHERE clauses, such as a condition that depends only on variables inherited from an outer query level (Tom) @@ -5113,7 +5113,7 @@ Protocol-level unnamed prepared statements are re-planned - for each set of BIND values (Tom) + for each set of BIND values (Tom) @@ -5132,13 +5132,13 @@ Avoid extra scan of tables without indexes during VACUUM (Greg Stark) + linkend="SQL-VACUUM">VACUUM (Greg Stark) - Improve multicolumn GiST + Improve multicolumn GiST indexing (Oleg, Teodor) @@ -5167,7 +5167,7 @@ This is valuable for keeping warm standby slave servers in sync with the master. Transaction log file switching now also happens automatically during pg_stop_backup(). + linkend="functions-admin">pg_stop_backup(). This ensures that all transaction log files needed for recovery can be archived immediately. @@ -5175,26 +5175,26 @@ - Add WAL informational functions (Simon) + Add WAL informational functions (Simon) Add functions for interrogating the current transaction log insertion - point and determining WAL filenames from the - hex WAL locations displayed by pg_stop_backup() + point and determining WAL filenames from the + hex WAL locations displayed by pg_stop_backup() and related functions. - Improve recovery from a crash during WAL replay (Simon) + Improve recovery from a crash during WAL replay (Simon) - The server now does periodic checkpoints during WAL - recovery, so if there is a crash, future WAL + The server now does periodic checkpoints during WAL + recovery, so if there is a crash, future WAL recovery is shortened. This also eliminates the need for warm standby servers to replay the entire log since the base backup if they crash. 
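The WAL informational functions mentioned a little above can be exercised as follows; this is a sketch using the function names of this release series (they were renamed much later):

    SELECT pg_current_xlog_insert_location();               -- current WAL insertion point, as a hex location
    SELECT pg_xlogfile_name(pg_current_xlog_location());    -- WAL segment file name for that location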
@@ -5203,7 +5203,7 @@ - Improve reliability of long-term WAL replay + Improve reliability of long-term WAL replay (Heikki, Simon, Tom) @@ -5218,7 +5218,7 @@ Add archive_timeout + linkend="guc-archive-timeout">archive_timeout to force transaction log file switches at a given interval (Simon) @@ -5229,46 +5229,46 @@ - Add native LDAP + Add native LDAP authentication (Magnus Hagander) This is particularly useful for platforms that do not - support PAM, such as Windows. + support PAM, such as Windows. Add GRANT - CONNECT ON DATABASE (Gevik Babakhani) + CONNECT ON DATABASE (Gevik Babakhani) This gives SQL-level control over database access. It works as an additional filter on top of the existing - pg_hba.conf + pg_hba.conf controls. - Add support for SSL - Certificate Revocation List (CRL) files + Add support for SSL + Certificate Revocation List (CRL) files (Libor Hohoš) - The server and libpq both recognize CRL + The server and libpq both recognize CRL files now. - GiST indexes are + GiST indexes are now clusterable (Teodor) @@ -5280,7 +5280,7 @@ pg_stat_activity + linkend="monitoring-stats-views-table">pg_stat_activity now shows autovacuum activity. @@ -5304,7 +5304,7 @@ These values now appear in the pg_stat_*_tables + linkend="monitoring-stats-views-table">pg_stat_*_tables system views. @@ -5312,44 +5312,44 @@ Improve performance of statistics monitoring, especially - stats_command_string + stats_command_string (Tom, Bruce) - This release enables stats_command_string by + This release enables stats_command_string by default, now that its overhead is minimal. This means pg_stat_activity + linkend="monitoring-stats-views-table">pg_stat_activity will now show all active queries by default. - Add a waiting column to pg_stat_activity + Add a waiting column to pg_stat_activity (Tom) - This allows pg_stat_activity to show all the - information included in the ps display. + This allows pg_stat_activity to show all the + information included in the ps display. Add configuration parameter update_process_title - to control whether the ps display is updated + linkend="guc-update-process-title">update_process_title + to control whether the ps display is updated for every command (Bruce) - On platforms where it is expensive to update the ps + On platforms where it is expensive to update the ps display, it might be worthwhile to turn this off and rely solely on - pg_stat_activity for status information. + pg_stat_activity for status information. @@ -5361,15 +5361,15 @@ For example, you can now set shared_buffers - to 32MB rather than mentally converting sizes. + linkend="guc-shared-buffers">shared_buffers + to 32MB rather than mentally converting sizes. Add support for include - directives in postgresql.conf (Joachim + directives in postgresql.conf (Joachim Wieland) @@ -5384,21 +5384,21 @@ Such logging now shows statement names, bind parameter values, and the text of the query being executed. Also, the query text is properly included in logged error messages - when enabled by log_min_error_statement. + when enabled by log_min_error_statement. Prevent max_stack_depth + linkend="guc-max-stack-depth">max_stack_depth from being set to unsafe values On platforms where we can determine the actual kernel stack depth limit (which is most), make sure that the initial default value of - max_stack_depth is safe, and reject attempts to set it + max_stack_depth is safe, and reject attempts to set it to unsafely large values. 
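A short illustration of the two configuration changes just described, unit-aware values and the max_stack_depth safety check; the numbers are arbitrary and the second statement is expected to be rejected on a typical kernel stack limit:

    SET maintenance_work_mem = '256MB';   -- sizes can now carry units such as kB, MB
    SET max_stack_depth = '100MB';        -- refused when it exceeds the stack limit detected from the kernel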
@@ -5418,14 +5418,14 @@ - Fix failed to re-find parent key errors in - VACUUM (Tom) + Fix failed to re-find parent key errors in + VACUUM (Tom) - Clean out pg_internal.init cache files during server + Clean out pg_internal.init cache files during server restart (Simon) @@ -5438,7 +5438,7 @@ Fix race condition for truncation of a large relation across a - gigabyte boundary by VACUUM (Tom) + gigabyte boundary by VACUUM (Tom) @@ -5475,15 +5475,15 @@ - Add INSERT/UPDATE/DELETE - RETURNING (Jonah Harris, Tom) + Add INSERT/UPDATE/DELETE + RETURNING (Jonah Harris, Tom) This allows these commands to return values, such as the - computed serial key for a new row. In the UPDATE + computed serial key for a new row. In the UPDATE case, values from the updated version of the row are returned. @@ -5491,23 +5491,23 @@ Add support for multiple-row VALUES clauses, + linkend="queries-values">VALUES clauses, per SQL standard (Joe, Tom) - This allows INSERT to insert multiple rows of + This allows INSERT to insert multiple rows of constants, or queries to generate result sets using constants. For example, INSERT ... VALUES (...), (...), - ...., and SELECT * FROM (VALUES (...), (...), - ....) AS alias(f1, ...). + ...., and SELECT * FROM (VALUES (...), (...), + ....) AS alias(f1, ...). - Allow UPDATE - and DELETE + Allow UPDATE + and DELETE to use an alias for the target table (Atsushi Ogawa) @@ -5519,7 +5519,7 @@ - Allow UPDATE + Allow UPDATE to set multiple columns with a list of values (Susanne Ebrecht) @@ -5527,7 +5527,7 @@ This is basically a short-hand for assigning the columns and values in pairs. The syntax is UPDATE tab - SET (column, ...) = (val, ...). + SET (column, ...) = (val, ...). @@ -5546,12 +5546,12 @@ - Add CASCADE - option to TRUNCATE (Joachim Wieland) + Add CASCADE + option to TRUNCATE (Joachim Wieland) - This causes TRUNCATE to automatically include all tables + This causes TRUNCATE to automatically include all tables that reference the specified table(s) via foreign keys. While convenient, this is a dangerous tool — use with caution! @@ -5559,8 +5559,8 @@ - Support FOR UPDATE and FOR SHARE - in the same SELECT + Support FOR UPDATE and FOR SHARE + in the same SELECT command (Tom) @@ -5568,21 +5568,21 @@ Add IS NOT - DISTINCT FROM (Pavel Stehule) + DISTINCT FROM (Pavel Stehule) - This operator is similar to equality (=), but + This operator is similar to equality (=), but evaluates to true when both left and right operands are - NULL, and to false when just one is, rather than - yielding NULL in these cases. + NULL, and to false when just one is, rather than + yielding NULL in these cases. Improve the length output used by UNION/INTERSECT/EXCEPT + linkend="queries-union">UNION/INTERSECT/EXCEPT (Tom) @@ -5594,13 +5594,13 @@ - Allow ILIKE + Allow ILIKE to work for multi-byte encodings (Tom) - Internally, ILIKE now calls lower() - and then uses LIKE. Locale-specific regular + Internally, ILIKE now calls lower() + and then uses LIKE. Locale-specific regular expression patterns still do not work in these encodings. @@ -5608,39 +5608,39 @@ Enable standard_conforming_strings - to be turned on (Kevin Grittner) + linkend="guc-standard-conforming-strings">standard_conforming_strings + to be turned on (Kevin Grittner) This allows backslash escaping in strings to be disabled, - making PostgreSQL more - standards-compliant. The default is off for backwards - compatibility, but future releases will default this to on. + making PostgreSQL more + standards-compliant. 
The default is off for backwards + compatibility, but future releases will default this to on. - Do not flatten subqueries that contain volatile + Do not flatten subqueries that contain volatile functions in their target lists (Jaime Casanova) This prevents surprising behavior due to multiple evaluation - of a volatile function (such as random() - or nextval()). It might cause performance + of a volatile function (such as random() + or nextval()). It might cause performance degradation in the presence of functions that are unnecessarily - marked as volatile. + marked as volatile. Add system views pg_prepared_statements + linkend="view-pg-prepared-statements">pg_prepared_statements and pg_cursors + linkend="view-pg-cursors">pg_cursors to show prepared statements and open cursors (Joachim Wieland, Neil) @@ -5652,32 +5652,32 @@ Support portal parameters in EXPLAIN and EXECUTE (Tom) + linkend="SQL-EXPLAIN">EXPLAIN and EXECUTE (Tom) - This allows, for example, JDBC ? parameters to + This allows, for example, JDBC ? parameters to work in these commands. - If SQL-level PREPARE parameters + If SQL-level PREPARE parameters are unspecified, infer their types from the content of the query (Neil) - Protocol-level PREPARE already did this. + Protocol-level PREPARE already did this. - Allow LIMIT and OFFSET to exceed + Allow LIMIT and OFFSET to exceed two billion (Dhanaraj M) @@ -5692,8 +5692,8 @@ - Add TABLESPACE clause to CREATE TABLE AS + Add TABLESPACE clause to CREATE TABLE AS (Neil) @@ -5704,8 +5704,8 @@ - Add ON COMMIT clause to CREATE TABLE AS + Add ON COMMIT clause to CREATE TABLE AS (Neil) @@ -5718,13 +5718,13 @@ - Add INCLUDING CONSTRAINTS to CREATE TABLE LIKE + Add INCLUDING CONSTRAINTS to CREATE TABLE LIKE (Greg Stark) - This allows easy copying of CHECK constraints to a new + This allows easy copying of CHECK constraints to a new table. @@ -5740,8 +5740,8 @@ any of the details of the type. Making a shell type is useful because it allows cleaner declaration of the type's input/output functions, which must exist before the type can be defined for - real. The syntax is CREATE TYPE typename. + real. The syntax is CREATE TYPE typename. @@ -5760,8 +5760,8 @@ The new syntax is CREATE AGGREGATE - aggname (input_type) - (parameter_list). This more + aggname (input_type) + (parameter_list). This more naturally supports the new multi-parameter aggregate functionality. The previous syntax is still supported. @@ -5770,26 +5770,26 @@ Add ALTER ROLE PASSWORD NULL + linkend="SQL-ALTERROLE">ALTER ROLE PASSWORD NULL to remove a previously set role password (Peter) - Add DROP object IF EXISTS for many + Add DROP object IF EXISTS for many object types (Andrew) - This allows DROP operations on non-existent + This allows DROP operations on non-existent objects without generating an error. - Add DROP OWNED + Add DROP OWNED to drop all objects owned by a role (Alvaro) @@ -5797,50 +5797,50 @@ Add REASSIGN - OWNED to reassign ownership of all objects owned + OWNED to reassign ownership of all objects owned by a role (Alvaro) - This, and DROP OWNED above, facilitate dropping + This, and DROP OWNED above, facilitate dropping roles. - Add GRANT ON SEQUENCE + Add GRANT ON SEQUENCE syntax (Bruce) This was added for setting sequence-specific permissions. - GRANT ON TABLE for sequences is still supported + GRANT ON TABLE for sequences is still supported for backward compatibility. 
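A sketch of the sequence-specific grant syntax described above (sequence and role names are hypothetical); the USAGE privilege it can confer is covered in the next entry:

    GRANT USAGE ON SEQUENCE orders_id_seq TO app_user;   -- permits currval()/nextval() but not setval()
    GRANT ALL ON orders_id_seq TO app_admin;             -- the older GRANT ON TABLE spelling still works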
- Add USAGE - permission for sequences that allows only currval() - and nextval(), not setval() + Add USAGE + permission for sequences that allows only currval() + and nextval(), not setval() (Bruce) - USAGE permission allows more fine-grained - control over sequence access. Granting USAGE + USAGE permission allows more fine-grained + control over sequence access. Granting USAGE allows users to increment a sequence, but prevents them from setting the sequence to - an arbitrary value using setval(). + an arbitrary value using setval(). Add ALTER TABLE - [ NO ] INHERIT (Greg Stark) + [ NO ] INHERIT (Greg Stark) @@ -5882,7 +5882,7 @@ The new syntax is CREATE - INDEX CONCURRENTLY. The default behavior is + INDEX CONCURRENTLY. The default behavior is still to block table modification while an index is being created. @@ -5902,20 +5902,20 @@ - Allow COPY to - dump a SELECT query (Zoltan Boszormenyi, Karel + Allow COPY to + dump a SELECT query (Zoltan Boszormenyi, Karel Zak) - This allows COPY to dump arbitrary SQL - queries. The syntax is COPY (SELECT ...) TO. + This allows COPY to dump arbitrary SQL + queries. The syntax is COPY (SELECT ...) TO. - Make the COPY + Make the COPY command return a command tag that includes the number of rows copied (Volkan YAZICI) @@ -5923,29 +5923,29 @@ - Allow VACUUM + Allow VACUUM to expire rows without being affected by other concurrent - VACUUM operations (Hannu Krossing, Alvaro, Tom) + VACUUM operations (Hannu Krossing, Alvaro, Tom) - Make initdb + Make initdb detect the operating system locale and set the default - DateStyle accordingly (Peter) + DateStyle accordingly (Peter) This makes it more likely that the installed - postgresql.conf DateStyle value will + postgresql.conf DateStyle value will be as desired. - Reduce number of progress messages displayed by initdb (Tom) + Reduce number of progress messages displayed by initdb (Tom) @@ -5960,13 +5960,13 @@ Allow full timezone names in timestamp input values + linkend="datatype-datetime">timestamp input values (Joachim Wieland) For example, '2006-05-24 21:11 - America/New_York'::timestamptz. + America/New_York'::timestamptz. @@ -5978,16 +5978,16 @@ A desired set of timezone abbreviations can be chosen via the configuration parameter timezone_abbreviations. + linkend="guc-timezone-abbreviations">timezone_abbreviations. Add pg_timezone_abbrevs + linkend="view-pg-timezone-abbrevs">pg_timezone_abbrevs and pg_timezone_names + linkend="view-pg-timezone-names">pg_timezone_names views to show supported timezones (Magnus Hagander) @@ -5995,27 +5995,27 @@ Add clock_timestamp(), + linkend="functions-datetime-table">clock_timestamp(), statement_timestamp(), + linkend="functions-datetime-table">statement_timestamp(), and transaction_timestamp() + linkend="functions-datetime-table">transaction_timestamp() (Bruce) - clock_timestamp() is the current wall-clock time, - statement_timestamp() is the time the current + clock_timestamp() is the current wall-clock time, + statement_timestamp() is the time the current statement arrived at the server, and - transaction_timestamp() is an alias for - now(). + transaction_timestamp() is an alias for + now(). 
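The distinction between the three new timestamp functions can be seen in one statement; the values themselves depend on when it runs:

    SELECT clock_timestamp(),         -- wall-clock time, advances even within a single statement
           statement_timestamp(),     -- time the current statement arrived at the server
           transaction_timestamp();   -- alias for now(): start of the current transaction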
Allow to_char() + linkend="functions-formatting">to_char() to print localized month and day names (Euler Taveira de Oliveira) @@ -6024,23 +6024,23 @@ Allow to_char(time) + linkend="functions-formatting">to_char(time) and to_char(interval) - to output AM/PM specifications + linkend="functions-formatting">to_char(interval) + to output AM/PM specifications (Bruce) Intervals and times are treated as 24-hour periods, e.g. - 25 hours is considered AM. + 25 hours is considered AM. Add new function justify_interval() + linkend="functions-datetime-table">justify_interval() to adjust interval units (Mark Dilger) @@ -6071,7 +6071,7 @@ - Allow arrays to contain NULL elements (Tom) + Allow arrays to contain NULL elements (Tom) @@ -6090,13 +6090,13 @@ New built-in operators - for array-subset comparisons (@>, - <@, &&) (Teodor, Tom) + for array-subset comparisons (@>, + <@, &&) (Teodor, Tom) These operators can be indexed for many data types using - GiST or GIN indexes. + GiST or GIN indexes. @@ -6104,15 +6104,15 @@ Add convenient arithmetic operations on - INET/CIDR values (Stephen R. van den + INET/CIDR values (Stephen R. van den Berg) - The new operators are & (and), | - (or), ~ (not), inet + int8, - inet - int8, and - inet - inet. + The new operators are & (and), | + (or), ~ (not), inet + int8, + inet - int8, and + inet - inet. @@ -6124,12 +6124,12 @@ - The new functions are var_pop(), - var_samp(), stddev_pop(), and - stddev_samp(). var_samp() and - stddev_samp() are merely renamings of the - existing aggregates variance() and - stddev(). The latter names remain available + The new functions are var_pop(), + var_samp(), stddev_pop(), and + stddev_samp(). var_samp() and + stddev_samp() are merely renamings of the + existing aggregates variance() and + stddev(). The latter names remain available for backward compatibility. @@ -6142,13 +6142,13 @@ - New functions: regr_intercept(), - regr_slope(), regr_r2(), - corr(), covar_samp(), - covar_pop(), regr_avgx(), - regr_avgy(), regr_sxy(), - regr_sxx(), regr_syy(), - regr_count(). + New functions: regr_intercept(), + regr_slope(), regr_r2(), + corr(), covar_samp(), + covar_pop(), regr_avgx(), + regr_avgy(), regr_sxy(), + regr_sxx(), regr_syy(), + regr_count(). @@ -6162,7 +6162,7 @@ Properly enforce domain CHECK constraints + linkend="ddl-constraints">CHECK constraints everywhere (Neil, Tom) @@ -6177,24 +6177,24 @@ Fix problems with dumping renamed SERIAL columns + linkend="datatype-serial">SERIAL columns (Tom) - The fix is to dump a SERIAL column by explicitly - specifying its DEFAULT and sequence elements, - and reconstructing the SERIAL column on reload + The fix is to dump a SERIAL column by explicitly + specifying its DEFAULT and sequence elements, + and reconstructing the SERIAL column on reload using a new ALTER - SEQUENCE OWNED BY command. This also allows - dropping a SERIAL column specification. + SEQUENCE OWNED BY command. This also allows + dropping a SERIAL column specification. Add a server-side sleep function pg_sleep() + linkend="functions-datetime-delay">pg_sleep() (Joachim Wieland) @@ -6202,7 +6202,7 @@ Add all comparison operators for the tid (tuple id) data + linkend="datatype-oid">tid (tuple id) data type (Mark Kirkwood, Greg Stark, Tom) @@ -6217,12 +6217,12 @@ - Add TG_table_name and TG_table_schema to + Add TG_table_name and TG_table_schema to trigger parameters (Andrew) - TG_relname is now deprecated. Comparable + TG_relname is now deprecated. Comparable changes have been made in the trigger parameters for the other PLs as well. 
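A minimal PL/pgSQL sketch of the new trigger variables (the function name and notice text are invented):

    CREATE FUNCTION log_modification() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        -- TG_TABLE_SCHEMA and TG_TABLE_NAME replace the now-deprecated TG_RELNAME
        RAISE NOTICE 'row changed in %.%', TG_TABLE_SCHEMA, TG_TABLE_NAME;
        RETURN NEW;
    END;
    $$;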
@@ -6230,29 +6230,29 @@ - Allow FOR statements to return values to scalars + Allow FOR statements to return values to scalars as well as records and row types (Pavel Stehule) - Add a BY clause to the FOR loop, + Add a BY clause to the FOR loop, to control the iteration increment (Jaime Casanova) - Add STRICT to STRICT to SELECT - INTO (Matt Miller) + INTO (Matt Miller) - STRICT mode throws an exception if more or less - than one row is returned by the SELECT, for - Oracle PL/SQL compatibility. + STRICT mode throws an exception if more or less + than one row is returned by the SELECT, for + Oracle PL/SQL compatibility. @@ -6266,7 +6266,7 @@ - Add table_name and table_schema to + Add table_name and table_schema to trigger parameters (Adam Sjøgren) @@ -6279,7 +6279,7 @@ - Make $_TD trigger data a global variable (Andrew) + Make $_TD trigger data a global variable (Andrew) @@ -6312,13 +6312,13 @@ Named parameters are passed as ordinary variables, as well as in the - args[] array (Sven Suursoho) + args[] array (Sven Suursoho) - Add table_name and table_schema to + Add table_name and table_schema to trigger parameters (Andrew) @@ -6331,14 +6331,14 @@ - Return result-set as list, iterator, - or generator (Sven Suursoho) + Return result-set as list, iterator, + or generator (Sven Suursoho) - Allow functions to return void (Neil) + Allow functions to return void (Neil) @@ -6353,40 +6353,40 @@ - <link linkend="APP-PSQL"><application>psql</></link> Changes + <link linkend="APP-PSQL"><application>psql</application></link> Changes - Add new command \password for changing role + Add new command \password for changing role password with client-side password encryption (Peter) - Allow \c to connect to a new host and port + Allow \c to connect to a new host and port number (David, Volkan YAZICI) - Add tablespace display to \l+ (Philip Yarra) + Add tablespace display to \l+ (Philip Yarra) - Improve \df slash command to include the argument - names and modes (OUT or INOUT) of + Improve \df slash command to include the argument + names and modes (OUT or INOUT) of the function (David Fetter) - Support binary COPY (Andreas Pflug) + Support binary COPY (Andreas Pflug) @@ -6397,21 +6397,21 @@ - Use option -1 or --single-transaction. + Use option -1 or --single-transaction. - Support for automatically retrieving SELECT + Support for automatically retrieving SELECT results in batches using a cursor (Chris Mair) This is enabled using \set FETCH_COUNT - n. This + n. This feature allows large result sets to be retrieved in - psql without attempting to buffer the entire + psql without attempting to buffer the entire result set in memory. @@ -6451,8 +6451,8 @@ Report both the returned data and the command status tag - for INSERT/UPDATE/DELETE - RETURNING (Tom) + for INSERT/UPDATE/DELETE + RETURNING (Tom) @@ -6461,31 +6461,31 @@ - <link linkend="APP-PGDUMP"><application>pg_dump</></link> Changes + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> Changes Allow complex selection of objects to be included or excluded - by pg_dump (Greg Sabino Mullane) + by pg_dump (Greg Sabino Mullane) - pg_dump now supports multiple -n - (schema) and -t (table) options, and adds - -N and -T options to exclude objects. + pg_dump now supports multiple -n + (schema) and -t (table) options, and adds + -N and -T options to exclude objects. 
Also, the arguments of these switches can now be wild-card expressions rather than single object names, for example - -t 'foo*', and a schema can be part of - a -t or -T switch, for example - -t schema1.table1. + -t 'foo*', and a schema can be part of + a -t or -T switch, for example + -t schema1.table1. - Add pg_restore - --no-data-for-failed-tables option to suppress + Add pg_restore + --no-data-for-failed-tables option to suppress loading data if table creation failed (i.e., the table already exists) (Martin Pitt) @@ -6493,13 +6493,13 @@ - Add pg_restore + Add pg_restore option to run the entire session in a single transaction (Simon) - Use option -1 or --single-transaction. + Use option -1 or --single-transaction. @@ -6508,27 +6508,27 @@ - <link linkend="libpq"><application>libpq</></link> Changes + <link linkend="libpq"><application>libpq</application></link> Changes Add PQencryptPassword() + linkend="libpq-misc">PQencryptPassword() to encrypt passwords (Tom) This allows passwords to be sent pre-encrypted for commands like ALTER ROLE ... - PASSWORD. + PASSWORD. Add function PQisthreadsafe() + linkend="libpq-threading">PQisthreadsafe() (Bruce) @@ -6541,9 +6541,9 @@ Add PQdescribePrepared(), + linkend="libpq-exec-main">PQdescribePrepared(), PQdescribePortal(), + linkend="libpq-exec-main">PQdescribePortal(), and related functions to return information about previously prepared statements and open cursors (Volkan YAZICI) @@ -6551,9 +6551,9 @@ - Allow LDAP lookups + Allow LDAP lookups from pg_service.conf + linkend="libpq-pgservice">pg_service.conf (Laurenz Albe) @@ -6561,7 +6561,7 @@ Allow a hostname in ~/.pgpass + linkend="libpq-pgpass">~/.pgpass to match the default socket directory (Bruce) @@ -6577,19 +6577,19 @@ - <link linkend="ecpg"><application>ecpg</></link> Changes + <link linkend="ecpg"><application>ecpg</application></link> Changes - Allow SHOW to + Allow SHOW to put its result into a variable (Joachim Wieland) - Add COPY TO STDOUT + Add COPY TO STDOUT (Joachim Wieland) @@ -6611,28 +6611,28 @@ - <application>Windows</> Port + <application>Windows</application> Port - Allow MSVC to compile the PostgreSQL + Allow MSVC to compile the PostgreSQL server (Magnus, Hiroshi Saito) - Add MSVC support for utility commands and pg_dump (Hiroshi + Add MSVC support for utility commands and pg_dump (Hiroshi Saito) - Add support for Windows code pages 1253, - 1254, 1255, and 1257 + Add support for Windows code pages 1253, + 1254, 1255, and 1257 (Kris Jurka) @@ -6670,7 +6670,7 @@ - Add GIN (Generalized + Add GIN (Generalized Inverted iNdex) index access method (Teodor, Oleg) @@ -6682,7 +6682,7 @@ Rtree has been re-implemented using GiST. Among other + linkend="GiST">GiST. Among other differences, this means that rtree indexes now have support for crash recovery via write-ahead logging (WAL). @@ -6698,12 +6698,12 @@ Add a configure flag to allow libedit to be preferred over - GNU readline (Bruce) + GNU readline (Bruce) Use configure --with-libedit-preferred. + linkend="configure">--with-libedit-preferred. 
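For the GIN access method noted a few entries above, typical usage on an array column looks like this; the table and column names are made up:

    CREATE TABLE documents (id serial, tags integer[]);
    CREATE INDEX documents_tags_gin ON documents USING gin (tags);   -- inverted index over the array elements
    SELECT id FROM documents WHERE tags @> ARRAY[42];                -- containment search that can use the GIN index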
@@ -6722,21 +6722,21 @@ - Add support for Solaris x86_64 using the - Solaris compiler (Pierre Girard, Theo + Add support for Solaris x86_64 using the + Solaris compiler (Pierre Girard, Theo Schlossnagle, Bruce) - Add DTrace support (Robert Lor) + Add DTrace support (Robert Lor) - Add PG_VERSION_NUM for use by third-party + Add PG_VERSION_NUM for use by third-party applications wanting to test the backend version in C using > and < comparisons (Bruce) @@ -6744,37 +6744,37 @@ - Add XLOG_BLCKSZ as independent from BLCKSZ + Add XLOG_BLCKSZ as independent from BLCKSZ (Mark Wong) - Add LWLOCK_STATS define to report locking + Add LWLOCK_STATS define to report locking activity (Tom) - Emit warnings for unknown configure options + Emit warnings for unknown configure options (Martijn van Oosterhout) - Add server support for plugin libraries + Add server support for plugin libraries that can be used for add-on tasks such as debugging and performance measurement (Korry Douglas) This consists of two features: a table of rendezvous - variables that allows separately-loaded shared libraries to + variables that allows separately-loaded shared libraries to communicate, and a new configuration parameter local_preload_libraries + linkend="guc-local-preload-libraries">local_preload_libraries that allows libraries to be loaded into specific sessions without explicit cooperation from the client application. This allows external add-ons to implement features such as a PL/pgSQL debugger. @@ -6784,27 +6784,27 @@ Rename existing configuration parameter - preload_libraries to shared_preload_libraries + preload_libraries to shared_preload_libraries (Tom) This was done for clarity in comparison to - local_preload_libraries. + local_preload_libraries. Add new configuration parameter server_version_num + linkend="guc-server-version-num">server_version_num (Greg Sabino Mullane) This is like server_version, but is an - integer, e.g. 80200. This allows applications to + integer, e.g. 80200. This allows applications to make version checks more easily. @@ -6812,7 +6812,7 @@ Add a configuration parameter seq_page_cost + linkend="guc-seq-page-cost">seq_page_cost (Tom) @@ -6839,11 +6839,11 @@ New functions - _PG_init() and _PG_fini() are + _PG_init() and _PG_fini() are called if the library defines such symbols. Hence we no longer need to specify an initialization function in - shared_preload_libraries; we can assume that - the library used the _PG_init() convention + shared_preload_libraries; we can assume that + the library used the _PG_init() convention instead. @@ -6851,7 +6851,7 @@ Add PG_MODULE_MAGIC + linkend="xfunc-c-dynload">PG_MODULE_MAGIC header block to all shared object files (Martijn van Oosterhout) @@ -6870,7 +6870,7 @@ - New XML + New XML documentation section (Bruce) @@ -6892,7 +6892,7 @@ - multibyte encoding support, including UTF8 + multibyte encoding support, including UTF8 @@ -6912,13 +6912,13 @@ - Ispell dictionaries now recognize MySpell - format, used by OpenOffice + Ispell dictionaries now recognize MySpell + format, used by OpenOffice - GIN support + GIN support @@ -6928,13 +6928,13 @@ - Add adminpack module containing Pgadmin administration + Add adminpack module containing Pgadmin administration functions (Dave) These functions provide additional file system access - routines not present in the default PostgreSQL + routines not present in the default PostgreSQL server. 
@@ -6945,7 +6945,7 @@ - Reports information about the current connection's SSL + Reports information about the current connection's SSL certificate. @@ -6972,9 +6972,9 @@ - This new implementation supports EAN13, UPC, - ISBN (books), ISMN (music), and - ISSN (serials). + This new implementation supports EAN13, UPC, + ISBN (books), ISMN (music), and + ISSN (serials). @@ -7034,9 +7034,9 @@ - New functions are cube(float[]), - cube(float[], float[]), and - cube_subset(cube, int4[]). + New functions are cube(float[]), + cube(float[], float[]), and + cube_subset(cube, int4[]). @@ -7049,8 +7049,8 @@ - New operators for array-subset comparisons (@>, - <@, &&) (Tom) + New operators for array-subset comparisons (@>, + <@, &&) (Tom) diff --git a/doc/src/sgml/release-8.3.sgml b/doc/src/sgml/release-8.3.sgml index a82410d057..45ecf9c054 100644 --- a/doc/src/sgml/release-8.3.sgml +++ b/doc/src/sgml/release-8.3.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.3.X series. Users are encouraged to update to a newer release branch soon. @@ -42,7 +42,7 @@ - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -63,19 +63,19 @@ Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. - Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -86,13 +86,13 @@ - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -110,26 +110,26 @@ - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? command when not connected to a database (Meng Qingzhong) - Fix one-byte buffer overrun in libpq's - PQprintTuples (Xi Wang) + Fix one-byte buffer overrun in libpq's + PQprintTuples (Xi Wang) This ancient function is not used anywhere by - PostgreSQL itself, but it might still be used by some + PostgreSQL itself, but it might still be used by some client code. @@ -149,15 +149,15 @@ - Make pgxs build executables with the right - .exe suffix when cross-compiling for Windows + Make pgxs build executables with the right + .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi) - Add new timezone abbreviation FET (Tom Lane) + Add new timezone abbreviation FET (Tom Lane) @@ -185,7 +185,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.3.X release series in February 2013. Users are encouraged to update to a newer release branch soon. @@ -212,13 +212,13 @@ Fix multiple bugs associated with CREATE INDEX - CONCURRENTLY (Andres Freund, Tom Lane) + CONCURRENTLY (Andres Freund, Tom Lane) - Fix CREATE INDEX CONCURRENTLY to use + Fix CREATE INDEX CONCURRENTLY to use in-place updates when changing the state of an index's - pg_index row. This prevents race conditions that could + pg_index row. 
This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus resulting in corrupt concurrently-created indexes. @@ -226,8 +226,8 @@ Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX - CONCURRENTLY command. The most important of these is - VACUUM, because an auto-vacuum could easily be launched + CONCURRENTLY command. The most important of these is + VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index. @@ -249,8 +249,8 @@ The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. @@ -268,10 +268,10 @@ - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -279,7 +279,7 @@ Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) @@ -292,14 +292,14 @@ - Fix REASSIGN OWNED to handle grants on tablespaces + Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera) - Ignore incorrect pg_attribute entries for system + Ignore incorrect pg_attribute entries for system columns for views (Tom Lane) @@ -313,7 +313,7 @@ - Fix rule printing to dump INSERT INTO table + Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane) @@ -321,7 +321,7 @@ Guard against stack overflow when there are too many - UNION/INTERSECT/EXCEPT clauses + UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane) @@ -349,7 +349,7 @@ Formerly, this would result in something quite unhelpful, such as - Non-recoverable failure in name resolution. + Non-recoverable failure in name resolution. @@ -362,8 +362,8 @@ - Make pg_ctl more robust about reading the - postmaster.pid file (Heikki Linnakangas) + Make pg_ctl more robust about reading the + postmaster.pid file (Heikki Linnakangas) @@ -373,33 +373,33 @@ - Fix possible crash in psql if incorrectly-encoded data - is presented and the client_encoding setting is a + Fix possible crash in psql if incorrectly-encoded data + is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing) - Fix bugs in the restore.sql script emitted by - pg_dump in tar output format (Tom Lane) + Fix bugs in the restore.sql script emitted by + pg_dump in tar output format (Tom Lane) The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring - data in mode as well as the regular COPY mode. - Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. 
This patch updates previous branches so that they will accept both the @@ -410,41 +410,41 @@ - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -455,7 +455,7 @@ - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -481,7 +481,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.3.X release series in February 2013. Users are encouraged to update to a newer release branch soon. @@ -524,22 +524,22 @@ - If we revoke a grant option from some role X, but - X still holds that option via a grant from someone + If we revoke a grant option from some role X, but + X still holds that option via a grant from someone else, we should not recursively revoke the corresponding privilege - from role(s) Y that X had granted it + from role(s) Y that X had granted it to. - Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) + Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) - Perl resets the process's SIGFPE handler to - SIG_IGN, which could result in crashes later on. Restore + Perl resets the process's SIGFPE handler to + SIG_IGN, which could result in crashes later on. Restore the normal Postgres signal handler after initializing PL/Perl. @@ -558,7 +558,7 @@ Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. @@ -566,7 +566,7 @@ - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -591,7 +591,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.3.X release series in February 2013. Users are encouraged to update to a newer release branch soon. @@ -622,7 +622,7 @@ - xml_parse() would attempt to fetch external files or + xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. 
While the external data @@ -635,22 +635,22 @@ - Prevent access to external files/URLs via contrib/xml2's - xslt_process() (Peter Eisentraut) + Prevent access to external files/URLs via contrib/xml2's + xslt_process() (Peter Eisentraut) - libxslt offers the ability to read and write both + libxslt offers the ability to read and write both files and URLs through stylesheet commands, thus allowing unprivileged database users to both read and write data with the privileges of the database server. Disable that through proper use - of libxslt's security options. (CVE-2012-3488) + of libxslt's security options. (CVE-2012-3488) - Also, remove xslt_process()'s ability to fetch documents + Also, remove xslt_process()'s ability to fetch documents and stylesheets from external files/URLs. While this was a - documented feature, it was long regarded as a bad idea. + documented feature, it was long regarded as a bad idea. The fix for CVE-2012-3489 broke that capability, and rather than expend effort on trying to fix it, we're just going to summarily remove it. @@ -678,22 +678,22 @@ - If ALTER SEQUENCE was executed on a freshly created or - reset sequence, and then precisely one nextval() call + If ALTER SEQUENCE was executed on a freshly created or + reset sequence, and then precisely one nextval() call was made on it, and then the server crashed, WAL replay would restore the sequence to a state in which it appeared that no - nextval() had been done, thus allowing the first + nextval() had been done, thus allowing the first sequence value to be returned again by the next - nextval() call. In particular this could manifest for - serial columns, since creation of a serial column's sequence - includes an ALTER SEQUENCE OWNED BY step. + nextval() call. In particular this could manifest for + serial columns, since creation of a serial column's sequence + includes an ALTER SEQUENCE OWNED BY step. - Ensure the backup_label file is fsync'd after - pg_start_backup() (Dave Kerr) + Ensure the backup_label file is fsync'd after + pg_start_backup() (Dave Kerr) @@ -718,7 +718,7 @@ The original coding could allow inconsistent behavior in some cases; in particular, an autovacuum could get canceled after less than - deadlock_timeout grace period. + deadlock_timeout grace period. @@ -730,7 +730,7 @@ - Fix log collector so that log_truncate_on_rotation works + Fix log collector so that log_truncate_on_rotation works during the very first log rotation after server start (Tom Lane) @@ -738,24 +738,24 @@ Ensure that a whole-row reference to a subquery doesn't include any - extra GROUP BY or ORDER BY columns (Tom Lane) + extra GROUP BY or ORDER BY columns (Tom Lane) - Disallow copying whole-row references in CHECK - constraints and index definitions during CREATE TABLE + Disallow copying whole-row references in CHECK + constraints and index definitions during CREATE TABLE (Tom Lane) - This situation can arise in CREATE TABLE with - LIKE or INHERITS. The copied whole-row + This situation can arise in CREATE TABLE with + LIKE or INHERITS. The copied whole-row variable was incorrectly labeled with the row type of the original table not the new one. Rejecting the case seems reasonable for - LIKE, since the row types might well diverge later. For - INHERITS we should ideally allow it, with an implicit + LIKE, since the row types might well diverge later. For + INHERITS we should ideally allow it, with an implicit coercion to the parent table's row type; but that will require more work than seems safe to back-patch. 
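For the sequence WAL-replay fix described earlier in this list, the affected pattern is simply a freshly created serial column followed by a single nextval() before a crash; the table name is invented:

    CREATE TABLE orders (id serial PRIMARY KEY);   -- implicitly performs ALTER SEQUENCE ... OWNED BY
    SELECT nextval('orders_id_seq');               -- with the fix, this value is not handed out again after crash recovery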
@@ -763,7 +763,7 @@ - Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki + Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki Linnakangas, Tom Lane) @@ -775,21 +775,21 @@ The code could get confused by quantified parenthesized - subexpressions, such as ^(foo)?bar. This would lead to + subexpressions, such as ^(foo)?bar. This would lead to incorrect index optimization of searches for such patterns. - Report errors properly in contrib/xml2's - xslt_process() (Tom Lane) + Report errors properly in contrib/xml2's + xslt_process() (Tom Lane) - Update time zone data files to tzdata release 2012e + Update time zone data files to tzdata release 2012e for DST law changes in Morocco and Tokelau @@ -835,12 +835,12 @@ Fix incorrect password transformation in - contrib/pgcrypto's DES crypt() function + contrib/pgcrypto's DES crypt() function (Solar Designer) - If a password string contained the byte value 0x80, the + If a password string contained the byte value 0x80, the remainder of the password was ignored, causing the password to be much weaker than it appeared. With this fix, the rest of the string is properly included in the DES hash. Any stored password values that are @@ -851,7 +851,7 @@ - Ignore SECURITY DEFINER and SET attributes for + Ignore SECURITY DEFINER and SET attributes for a procedural language's call handler (Tom Lane) @@ -863,7 +863,7 @@ - Allow numeric timezone offsets in timestamp input to be up to + Allow numeric timezone offsets in timestamp input to be up to 16 hours away from UTC (Tom Lane) @@ -889,7 +889,7 @@ - Fix text to name and char to name + Fix text to name and char to name casts to perform string truncation correctly in multibyte encodings (Karl Schnaitter) @@ -897,19 +897,19 @@ - Fix memory copying bug in to_tsquery() (Heikki Linnakangas) + Fix memory copying bug in to_tsquery() (Heikki Linnakangas) - Fix slow session startup when pg_attribute is very large + Fix slow session startup when pg_attribute is very large (Tom Lane) - If pg_attribute exceeds one-fourth of - shared_buffers, cache rebuilding code that is sometimes + If pg_attribute exceeds one-fourth of + shared_buffers, cache rebuilding code that is sometimes needed during session start would trigger the synchronized-scan logic, causing it to take many times longer than normal. The problem was particularly acute if many new sessions were starting at once. @@ -930,8 +930,8 @@ - Ensure the Windows implementation of PGSemaphoreLock() - clears ImmediateInterruptOK before returning (Tom Lane) + Ensure the Windows implementation of PGSemaphoreLock() + clears ImmediateInterruptOK before returning (Tom Lane) @@ -964,7 +964,7 @@ Previously, infinite recursion in a function invoked by - auto-ANALYZE could crash worker processes. + auto-ANALYZE could crash worker processes. @@ -983,25 +983,25 @@ Fix logging collector to ensure it will restart file rotation - after receiving SIGHUP (Tom Lane) + after receiving SIGHUP (Tom Lane) - Fix PL/pgSQL's GET DIAGNOSTICS command when the target + Fix PL/pgSQL's GET DIAGNOSTICS command when the target is the function's first variable (Tom Lane) - Fix several performance problems in pg_dump when + Fix several performance problems in pg_dump when the database contains many objects (Jeff Janes, Tom Lane) - pg_dump could get very slow if the database contained + pg_dump could get very slow if the database contained many schemas, or if many objects are in dependency loops, or if there are many owned sequences. 
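One of the fixes above concerns PL/pgSQL's GET DIAGNOSTICS when its target is the function's first variable; a minimal sketch of that shape, with an invented function name:

    CREATE FUNCTION touched_rows() RETURNS bigint LANGUAGE plpgsql AS $$
    DECLARE
        n bigint;                        -- the function's first (and only) variable, the case that misbehaved
    BEGIN
        PERFORM 1;                       -- any SQL statement; ROW_COUNT reflects it
        GET DIAGNOSTICS n = ROW_COUNT;
        RETURN n;
    END;
    $$;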
@@ -1009,14 +1009,14 @@ - Fix contrib/dblink's dblink_exec() to not leak + Fix contrib/dblink's dblink_exec() to not leak temporary database connections upon error (Tom Lane) - Update time zone data files to tzdata release 2012c + Update time zone data files to tzdata release 2012c for DST law changes in Antarctica, Armenia, Chile, Cuba, Falkland Islands, Gaza, Haiti, Hebron, Morocco, Syria, and Tokelau Islands; also historical corrections for Canada. @@ -1064,26 +1064,26 @@ Require execute permission on the trigger function for - CREATE TRIGGER (Robert Haas) + CREATE TRIGGER (Robert Haas) This missing check could allow another user to execute a trigger function with forged input data, by installing it on a table he owns. This is only of significance for trigger functions marked - SECURITY DEFINER, since otherwise trigger functions run + SECURITY DEFINER, since otherwise trigger functions run as the table owner anyway. (CVE-2012-0866) - Convert newlines to spaces in names written in pg_dump + Convert newlines to spaces in names written in pg_dump comments (Robert Haas) - pg_dump was incautious about sanitizing object names + pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL @@ -1099,10 +1099,10 @@ An index page split caused by an insertion could sometimes cause a - concurrently-running VACUUM to miss removing index entries + concurrently-running VACUUM to miss removing index entries that it should remove. After the corresponding table rows are removed, the dangling index entries would cause errors (such as could not - read block N in file ...) or worse, silently wrong query results + read block N in file ...) or worse, silently wrong query results after unrelated rows are re-inserted at the now-free table locations. This bug has been present since release 8.2, but occurs so infrequently that it was not diagnosed until now. If you have reason to suspect @@ -1114,16 +1114,16 @@ Allow non-existent values for some settings in ALTER - USER/DATABASE SET (Heikki Linnakangas) + USER/DATABASE SET (Heikki Linnakangas) - Allow default_text_search_config, - default_tablespace, and temp_tablespaces to be + Allow default_text_search_config, + default_tablespace, and temp_tablespaces to be set to names that are not known. This is because they might be known in another database where the setting is intended to be used, or for the tablespace cases because the tablespace might not be created yet. The - same issue was previously recognized for search_path, and + same issue was previously recognized for search_path, and these settings now act like that one. @@ -1145,7 +1145,7 @@ - Fix regular expression back-references with * attached + Fix regular expression back-references with * attached (Tom Lane) @@ -1159,18 +1159,18 @@ A similar problem still afflicts back-references that are embedded in a larger quantified expression, rather than being the immediate subject of the quantifier. This will be addressed in a future - PostgreSQL release. + PostgreSQL release. 
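For the back-reference item above, a pattern of the affected shape is sketched below; the E'' string form is used only so the backslash reaches the regular-expression engine regardless of the standard_conforming_strings setting:

    SELECT 'bbb' ~ E'^([bc])\\1*$';   -- back-reference with * attached; yields true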
Fix recently-introduced memory leak in processing of - inet/cidr values (Heikki Linnakangas) + inet/cidr values (Heikki Linnakangas) - A patch in the December 2011 releases of PostgreSQL + A patch in the December 2011 releases of PostgreSQL caused memory leakage in these operations, which could be significant in scenarios such as building a btree index on such a column. @@ -1201,32 +1201,32 @@ - Improve pg_dump's handling of inherited table columns + Improve pg_dump's handling of inherited table columns (Tom Lane) - pg_dump mishandled situations where a child column has + pg_dump mishandled situations where a child column has a different default expression than its parent column. If the default is textually identical to the parent's default, but not actually the same (for instance, because of schema search path differences) it would not be recognized as different, so that after dump and restore the child would be allowed to inherit the parent's default. Child columns - that are NOT NULL where their parent is not could also be + that are NOT NULL where their parent is not could also be restored subtly incorrectly. - Fix pg_restore's direct-to-database mode for + Fix pg_restore's direct-to-database mode for INSERT-style table data (Tom Lane) Direct-to-database restores from archive files made with - - In particular, the response to a server report of fork() + In particular, the response to a server report of fork() failure during SSL connection startup is now saner. - Improve libpq's error reporting for SSL failures (Tom + Improve libpq's error reporting for SSL failures (Tom Lane) - Make ecpglib write double values with 15 digits + Make ecpglib write double values with 15 digits precision (Akira Kurosawa) - In ecpglib, be sure LC_NUMERIC setting is + In ecpglib, be sure LC_NUMERIC setting is restored after an error (Michael Meskes) @@ -1898,7 +1898,7 @@ - contrib/pg_crypto's blowfish encryption code could give + contrib/pg_crypto's blowfish encryption code could give wrong results on platforms where char is signed (which is most), leading to encrypted passwords being weaker than they should be. @@ -1906,13 +1906,13 @@ - Fix memory leak in contrib/seg (Heikki Linnakangas) + Fix memory leak in contrib/seg (Heikki Linnakangas) - Fix pgstatindex() to give consistent results for empty + Fix pgstatindex() to give consistent results for empty indexes (Tom Lane) @@ -1944,7 +1944,7 @@ - Update time zone data files to tzdata release 2011i + Update time zone data files to tzdata release 2011i for DST law changes in Canada, Egypt, Russia, Samoa, and South Sudan. @@ -2013,15 +2013,15 @@ - Fix dangling-pointer problem in BEFORE ROW UPDATE trigger + Fix dangling-pointer problem in BEFORE ROW UPDATE trigger handling when there was a concurrent update to the target tuple (Tom Lane) This bug has been observed to result in intermittent cannot - extract system attribute from virtual tuple failures while trying to - do UPDATE RETURNING ctid. There is a very small probability + extract system attribute from virtual tuple failures while trying to + do UPDATE RETURNING ctid. There is a very small probability of more serious errors, such as generating incorrect index entries for the updated tuple. 
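The trigger problem above was observed as intermittent failures of an UPDATE ... RETURNING ctid on a table with a BEFORE ROW UPDATE trigger, when another session updated the same row concurrently; a sketch with a hypothetical table:

    UPDATE accounts SET balance = balance - 10
    WHERE id = 1
    RETURNING ctid;   -- could intermittently fail with "cannot extract system attribute from virtual tuple"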
@@ -2029,13 +2029,13 @@ - Disallow DROP TABLE when there are pending deferred trigger + Disallow DROP TABLE when there are pending deferred trigger events for the table (Tom Lane) - Formerly the DROP would go through, leading to - could not open relation with OID nnn errors when the + Formerly the DROP would go through, leading to + could not open relation with OID nnn errors when the triggers were eventually fired. @@ -2048,7 +2048,7 @@ - Fix pg_restore to cope with long lines (over 1KB) in + Fix pg_restore to cope with long lines (over 1KB) in TOC files (Tom Lane) @@ -2080,14 +2080,14 @@ - Fix version-incompatibility problem with libintl on + Fix version-incompatibility problem with libintl on Windows (Hiroshi Inoue) - Fix usage of xcopy in Windows build scripts to + Fix usage of xcopy in Windows build scripts to work correctly under Windows 7 (Andrew Dunstan) @@ -2098,14 +2098,14 @@ - Fix path separator used by pg_regress on Cygwin + Fix path separator used by pg_regress on Cygwin (Andrew Dunstan) - Update time zone data files to tzdata release 2011f + Update time zone data files to tzdata release 2011f for DST law changes in Chile, Cuba, Falkland Islands, Morocco, Samoa, and Turkey; also historical corrections for South Australia, Alaska, and Hawaii. @@ -2149,15 +2149,15 @@ - Avoid failures when EXPLAIN tries to display a simple-form - CASE expression (Tom Lane) + Avoid failures when EXPLAIN tries to display a simple-form + CASE expression (Tom Lane) - If the CASE's test expression was a constant, the planner - could simplify the CASE into a form that confused the + If the CASE's test expression was a constant, the planner + could simplify the CASE into a form that confused the expression-display code, resulting in unexpected CASE WHEN - clause errors. + clause errors. @@ -2182,44 +2182,44 @@ - The date type supports a wider range of dates than can be - represented by the timestamp types, but the planner assumed it + The date type supports a wider range of dates than can be + represented by the timestamp types, but the planner assumed it could always convert a date to timestamp with impunity. - Fix pg_restore's text output for large objects (BLOBs) - when standard_conforming_strings is on (Tom Lane) + Fix pg_restore's text output for large objects (BLOBs) + when standard_conforming_strings is on (Tom Lane) Although restoring directly to a database worked correctly, string - escaping was incorrect if pg_restore was asked for - SQL text output and standard_conforming_strings had been + escaping was incorrect if pg_restore was asked for + SQL text output and standard_conforming_strings had been enabled in the source database. - Fix erroneous parsing of tsquery values containing + Fix erroneous parsing of tsquery values containing ... & !(subexpression) | ... (Tom Lane) Queries containing this combination of operators were not executed - correctly. The same error existed in contrib/intarray's - query_int type and contrib/ltree's - ltxtquery type. + correctly. The same error existed in contrib/intarray's + query_int type and contrib/ltree's + ltxtquery type. 
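The operator combination in the tsquery item above can be written out as follows (the lexemes are illustrative only); the same shape applies to contrib/intarray's query_int and contrib/ltree's ltxtquery inputs:

    SELECT 'a & !(b | c) | d'::tsquery;   -- ... & !(subexpression) | ... now parses and executes correctly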
- Fix buffer overrun in contrib/intarray's input function - for the query_int type (Apple) + Fix buffer overrun in contrib/intarray's input function + for the query_int type (Apple) @@ -2231,16 +2231,16 @@ - Fix bug in contrib/seg's GiST picksplit algorithm + Fix bug in contrib/seg's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a seg column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a seg column. + If you have such an index, consider REINDEXing it after installing this update. (This is identical to the bug that was fixed in - contrib/cube in the previous update.) + contrib/cube in the previous update.) @@ -2282,17 +2282,17 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. @@ -2302,7 +2302,7 @@ - This could result in bad buffer id: 0 failures or + This could result in bad buffer id: 0 failures or corruption of index contents during replication. @@ -2321,7 +2321,7 @@ - The effective vacuum_cost_limit for an autovacuum worker + The effective vacuum_cost_limit for an autovacuum worker could drop to nearly zero if it processed enough tables, causing it to run extremely slowly. @@ -2329,19 +2329,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -2357,7 +2357,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -2367,7 +2367,7 @@ - Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -2379,14 +2379,14 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -2398,15 +2398,15 @@ - Behave correctly if ORDER BY, LIMIT, - FOR UPDATE, or WITH is attached to the - VALUES part of INSERT ... VALUES (Tom Lane) + Behave correctly if ORDER BY, LIMIT, + FOR UPDATE, or WITH is attached to the + VALUES part of INSERT ... 
VALUES (Tom Lane) - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -2418,7 +2418,7 @@ Fix postmaster crash when connection acceptance - (accept() or one of the calls made immediately after it) + (accept() or one of the calls made immediately after it) fails, and the postmaster was compiled with GSSAPI support (Alexander Chernikov) @@ -2426,7 +2426,7 @@ - Fix missed unlink of temporary files when log_temp_files + Fix missed unlink of temporary files when log_temp_files is active (Tom Lane) @@ -2438,11 +2438,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. @@ -2461,14 +2461,14 @@ - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix PL/Python's handling of set-returning functions + Fix PL/Python's handling of set-returning functions (Jan Urbanski) @@ -2480,22 +2480,22 @@ - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -2503,20 +2503,20 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -2567,7 +2567,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -2596,7 +2596,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -2605,7 +2605,7 @@ - Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on + Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on Windows (Magnus Hagander) @@ -2627,13 +2627,13 @@ This is a back-patch of an 8.4 fix that was missed in the 8.3 branch. 
This corrects an error introduced in 8.3.8 that could cause incorrect results for outer joins when the inner relation is an inheritance tree - or UNION ALL subquery. + or UNION ALL subquery. - Fix possible duplicate scans of UNION ALL member relations + Fix possible duplicate scans of UNION ALL member relations (Tom Lane) @@ -2655,7 +2655,7 @@ - If a plan is prepared while CREATE INDEX CONCURRENTLY is + If a plan is prepared while CREATE INDEX CONCURRENTLY is in progress for one of the referenced tables, it is supposed to be re-planned once the index is ready for use. This was not happening reliably. @@ -2709,7 +2709,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -2746,7 +2746,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -2754,35 +2754,35 @@ Fix possible data corruption in ALTER TABLE ... SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) - Fix REASSIGN OWNED to handle operator classes and families + Fix REASSIGN OWNED to handle operator classes and families (Asko Tiidumaa) - Fix possible core dump when comparing two empty tsquery values + Fix possible core dump when comparing two empty tsquery values (Tom Lane) - Fix LIKE's handling of patterns containing % - followed by _ (Tom Lane) + Fix LIKE's handling of patterns containing % + followed by _ (Tom Lane) @@ -2794,14 +2794,14 @@ In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - Make psql recognize DISCARD ALL as a command that should + Make psql recognize DISCARD ALL as a command that should not be encased in a transaction block in autocommit-off mode (Itagaki Takahiro) @@ -2809,14 +2809,14 @@ - Fix ecpg to process data from RETURNING + Fix ecpg to process data from RETURNING clauses correctly (Michael Meskes) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -2824,30 +2824,30 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) - Add hstore(text, text) - function to contrib/hstore (Robert Haas) + Add hstore(text, text) + function to contrib/hstore (Robert Haas) This function is the recommended substitute for the now-deprecated - => operator. It was back-patched so that future-proofed + => operator. It was back-patched so that future-proofed code can be used with older server versions. Note that the patch will - be effective only after contrib/hstore is installed or + be effective only after contrib/hstore is installed or reinstalled in a particular database. Users might prefer to execute - the CREATE FUNCTION command by hand, instead. + the CREATE FUNCTION command by hand, instead. 
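Assuming contrib/hstore is installed, the new constructor is used in place of the deprecated operator roughly like this:

    SELECT hstore('a', '1');   -- recommended, future-proof spelling
    SELECT 'a' => '1';         -- deprecated operator form it replaces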
@@ -2860,7 +2860,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -2875,7 +2875,7 @@ - Make Windows' N. Central Asia Standard Time timezone map to + Make Windows' N. Central Asia Standard Time timezone map to Asia/Novosibirsk, not Asia/Almaty (Magnus Hagander) @@ -2922,19 +2922,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -2943,19 +2943,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -2980,7 +2980,7 @@ This avoids failures if the function's code is invalid without the setting; an example is that SQL functions may not parse if the - search_path is not correct. + search_path is not correct. @@ -2992,10 +2992,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. 
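For the RESET ALL item above, the affected statements look like the following (the role and database names are hypothetical); issued by an unprivileged owner, they now clear only the settings that the issuing user could change individually:

    ALTER USER app_user RESET ALL;
    ALTER DATABASE app_db RESET ALL;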
@@ -3003,7 +3003,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -3016,13 +3016,13 @@ Ensure the archiver process responds to changes in - archive_command as soon as possible (Tom) + archive_command as soon as possible (Tom) - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -3035,15 +3035,15 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Fix psql's \copy to not add spaces around - a dot within \copy (select ...) (Tom) + Fix psql's \copy to not add spaces around + a dot within \copy (select ...) (Tom) @@ -3054,15 +3054,15 @@ - Fix unnecessary GIN indexes do not support whole-index scans - errors for unsatisfiable queries using contrib/intarray + Fix unnecessary GIN indexes do not support whole-index scans + errors for unsatisfiable queries using contrib/intarray operators (Tom) - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -3070,7 +3070,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -3102,14 +3102,14 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. - Also, add PKST (Pakistan Summer Time) to the default set of + Also, add PKST (Pakistan Summer Time) to the default set of timezone abbreviations. @@ -3151,7 +3151,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -3214,8 +3214,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -3241,7 +3241,7 @@ - Fix assorted crashes in xml processing caused by sloppy + Fix assorted crashes in xml processing caused by sloppy memory management (Tom) @@ -3261,7 +3261,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -3283,23 +3283,23 @@ Improve constraint exclusion processing of boolean-variable cases, in particular make it possible to exclude a partition that has a - bool_column = false constraint (Tom) + bool_column = false constraint (Tom) - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. 
If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -3307,49 +3307,49 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) - Fix possible infinite loop if SSL_read or - SSL_write fails without setting errno (Tom) + Fix possible infinite loop if SSL_read or + SSL_write fails without setting errno (Tom) This is reportedly possible with some Windows versions of - openssl. + openssl. - Disallow GSSAPI authentication on local connections, + Disallow GSSAPI authentication on local connections, since it requires a hostname to function correctly (Magnus) - Make ecpg report the proper SQLSTATE if the connection + Make ecpg report the proper SQLSTATE if the connection disappears (Michael) - Fix psql's numericlocale option to not + Fix psql's numericlocale option to not format strings it shouldn't in latex and troff output formats (Heikki) - Make psql return the correct exit status (3) when - ON_ERROR_STOP and --single-transaction are - both specified and an error occurs during the implied COMMIT + Make psql return the correct exit status (3) when + ON_ERROR_STOP and --single-transaction are + both specified and an error occurs during the implied COMMIT (Bruce) @@ -3370,7 +3370,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -3382,43 +3382,43 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Allow zero-dimensional arrays in contrib/ltree operations + Allow zero-dimensional arrays in contrib/ltree operations (Tom) This case was formerly rejected as an error, but it's more convenient to treat it the same as a zero-element array. In particular this avoids - unnecessary failures when an ltree operation is applied to the - result of ARRAY(SELECT ...) and the sub-select returns no + unnecessary failures when an ltree operation is applied to the + result of ARRAY(SELECT ...) and the sub-select returns no rows. - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Make building of contrib/xml2 more robust on Windows + Make building of contrib/xml2 more robust on Windows (Andrew) @@ -3429,14 +3429,14 @@ - One known symptom of this bug is that rows in pg_listener + One known symptom of this bug is that rows in pg_listener could be dropped under heavy load. - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. 
@@ -3514,14 +3514,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. @@ -3540,7 +3540,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -3617,7 +3617,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -3650,19 +3650,19 @@ Fix processing of ownership dependencies during CREATE OR - REPLACE FUNCTION (Tom) + REPLACE FUNCTION (Tom) - Fix incorrect handling of WHERE - x=x conditions (Tom) + Fix incorrect handling of WHERE + x=x conditions (Tom) In some cases these could get ignored as redundant, but they aren't - — they're equivalent to x IS NOT NULL. + — they're equivalent to x IS NOT NULL. @@ -3674,7 +3674,7 @@ - Fix encoding handling in xml binary input (Heikki) + Fix encoding handling in xml binary input (Heikki) @@ -3685,7 +3685,7 @@ - Fix bug with calling plperl from plperlu or vice + Fix bug with calling plperl from plperlu or vice versa (Tom) @@ -3705,7 +3705,7 @@ Ensure that Perl arrays are properly converted to - PostgreSQL arrays when returned by a set-returning + PostgreSQL arrays when returned by a set-returning PL/Perl function (Andrew Dunstan, Abhijit Menon-Sen) @@ -3722,7 +3722,7 @@ - In contrib/pg_standby, disable triggering failover with a + In contrib/pg_standby, disable triggering failover with a signal on Windows (Fujii Masao) @@ -3734,20 +3734,20 @@ - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -3760,14 +3760,14 @@ - This includes adding IDT and SGT to the default + This includes adding IDT and SGT to the default timezone abbreviation set. - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -3798,8 +3798,8 @@ A dump/restore is not required for those running 8.3.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 8.3.8. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 8.3.8. Also, if you are upgrading from a version earlier than 8.3.5, see . @@ -3818,13 +3818,13 @@ This bug led to the often-reported could not reattach - to shared memory error message. + to shared memory error message. 
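Returning to the WHERE x = x item above: the clause is not a no-op, it behaves like x IS NOT NULL, as this pair of queries against a hypothetical table illustrates:

    SELECT count(*) FROM t WHERE x = x;           -- counts only rows where x is not null
    SELECT count(*) FROM t WHERE x IS NOT NULL;   -- equivalent result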
- Force WAL segment switch during pg_start_backup() + Force WAL segment switch during pg_start_backup() (Heikki) @@ -3835,26 +3835,26 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) - Make LOAD of an already-loaded loadable module + Make LOAD of an already-loaded loadable module into a no-op (Tom) - Formerly, LOAD would attempt to unload and re-load the + Formerly, LOAD would attempt to unload and re-load the module, but this is unsafe and not all that useful. @@ -3881,8 +3881,8 @@ - Prevent synchronize_seqscans from changing the results of - scrollable and WITH HOLD cursors (Tom) + Prevent synchronize_seqscans from changing the results of + scrollable and WITH HOLD cursors (Tom) @@ -3896,32 +3896,32 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -3938,14 +3938,14 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. 
Japan (Itagaki Takahiro) - Fix LIKE for case where pattern contains %_ + Fix LIKE for case where pattern contains %_ (Tom) @@ -3953,7 +3953,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -3972,8 +3972,8 @@ - Ensure that a fast shutdown request will forcibly terminate - open sessions, even if a smart shutdown was already in progress + Ensure that a fast shutdown request will forcibly terminate + open sessions, even if a smart shutdown was already in progress (Fujii Masao) @@ -4000,35 +4000,35 @@ - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Improve pg_dump's efficiency when there are + Improve pg_dump's efficiency when there are many large objects (Tamas Vincze) - Use SIGUSR1, not SIGQUIT, as the - failover signal for pg_standby (Heikki) + Use SIGUSR1, not SIGQUIT, as the + failover signal for pg_standby (Heikki) - Make pg_standby's maxretries option + Make pg_standby's maxretries option behave as documented (Fujii Masao) - Make contrib/hstore throw an error when a key or + Make contrib/hstore throw an error when a key or value is too long to fit in its data structure, rather than silently truncating it (Andrew Gierth) @@ -4036,15 +4036,15 @@ - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -4057,7 +4057,7 @@ - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Jordan, Pakistan, Argentina/San_Luis, Cuba, Jordan (historical correction only), Mauritius, Morocco, Palestine, Syria, Tunisia. @@ -4108,7 +4108,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -4119,7 +4119,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -4132,19 +4132,19 @@ - Fix xpath() to not modify the path expression unless + Fix xpath() to not modify the path expression unless necessary, and to make a saner attempt at it when necessary (Andrew) - The SQL standard suggests that xpath should work on data - that is a document fragment, but libxml doesn't support + The SQL standard suggests that xpath should work on data + that is a document fragment, but libxml doesn't support that, and indeed it's not clear that this is sensible according to the - XPath standard. xpath attempted to work around this + XPath standard. xpath attempted to work around this mismatch by modifying both the data and the path expression, but the modification was buggy and could cause valid searches to fail. 
Now, - xpath checks whether the data is in fact a well-formed - document, and if so invokes libxml with no change to the + xpath checks whether the data is in fact a well-formed + document, and if so invokes libxml with no change to the data or path expression. Otherwise, a different modification method that is somewhat less likely to fail is used. @@ -4155,15 +4155,15 @@ seems likely that no real solution is possible. This patch should therefore be viewed as a band-aid to keep from breaking existing applications unnecessarily. It is likely that - PostgreSQL 8.4 will simply reject use of - xpath on data that is not a well-formed document. + PostgreSQL 8.4 will simply reject use of + xpath on data that is not a well-formed document. - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) @@ -4175,40 +4175,40 @@ - Crashes were possible on platforms where wchar_t is narrower - than int; Windows in particular. + Crashes were possible on platforms where wchar_t is narrower + than int; Windows in particular. Fix extreme inefficiency in text search parser's handling of an - email-like string containing multiple @ characters (Heikki) + email-like string containing multiple @ characters (Heikki) - Fix planner problem with sub-SELECT in the output list + Fix planner problem with sub-SELECT in the output list of a larger subquery (Tom) The known symptom of this bug is a failed to locate grouping - columns error that is dependent on the datatype involved; + columns error that is dependent on the datatype involved; but there could be other issues as well. - Fix decompilation of CASE WHEN with an implicit coercion + Fix decompilation of CASE WHEN with an implicit coercion (Tom) This mistake could lead to Assert failures in an Assert-enabled build, - or an unexpected CASE WHEN clause error message in other + or an unexpected CASE WHEN clause error message in other cases, when trying to examine or dump a view. @@ -4219,38 +4219,38 @@ - If CLUSTER or a rewriting variant of ALTER TABLE + If CLUSTER or a rewriting variant of ALTER TABLE were executed by someone other than the table owner, the - pg_type entry for the table's TOAST table would end up + pg_type entry for the table's TOAST table would end up marked as owned by that someone. This caused no immediate problems, since the permissions on the TOAST rowtype aren't examined by any ordinary database operation. However, it could lead to unexpected failures if one later tried to drop the role that issued the command - (in 8.1 or 8.2), or owner of data type appears to be invalid - warnings from pg_dump after having done so (in 8.3). + (in 8.1 or 8.2), or owner of data type appears to be invalid + warnings from pg_dump after having done so (in 8.3). - Change UNLISTEN to exit quickly if the current session has - never executed any LISTEN command (Tom) + Change UNLISTEN to exit quickly if the current session has + never executed any LISTEN command (Tom) Most of the time this is not a particularly useful optimization, but - since DISCARD ALL invokes UNLISTEN, the previous + since DISCARD ALL invokes UNLISTEN, the previous coding caused a substantial performance problem for applications that - made heavy use of DISCARD ALL. + made heavy use of DISCARD ALL. 
- Fix PL/pgSQL to not treat INTO after INSERT as + Fix PL/pgSQL to not treat INTO after INSERT as an INTO-variables clause anywhere in the string, not only at the start; - in particular, don't fail for INSERT INTO within - CREATE RULE (Tom) + in particular, don't fail for INSERT INTO within + CREATE RULE (Tom) @@ -4268,21 +4268,21 @@ - Retry failed calls to CallNamedPipe() on Windows + Retry failed calls to CallNamedPipe() on Windows (Steve Marshall, Magnus) It appears that this function can sometimes fail transiently; we previously treated any failure as a hard error, which could - confuse LISTEN/NOTIFY as well as other + confuse LISTEN/NOTIFY as well as other operations. - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -4324,7 +4324,7 @@ - Make DISCARD ALL release advisory locks, in addition + Make DISCARD ALL release advisory locks, in addition to everything it already did (Tom) @@ -4347,13 +4347,13 @@ - Fix crash of xmlconcat(NULL) (Peter) + Fix crash of xmlconcat(NULL) (Peter) - Fix possible crash in ispell dictionary if high-bit-set + Fix possible crash in ispell dictionary if high-bit-set characters are used as flags (Teodor) @@ -4365,7 +4365,7 @@ - Fix misordering of pg_dump output for composite types + Fix misordering of pg_dump output for composite types (Tom) @@ -4377,13 +4377,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -4400,7 +4400,7 @@ Fix possible Assert failure if a statement executed in PL/pgSQL is rewritten into another kind of statement, for example if an - INSERT is rewritten into an UPDATE (Heikki) + INSERT is rewritten into an UPDATE (Heikki) @@ -4410,7 +4410,7 @@ - This primarily affects domains that are declared with CHECK + This primarily affects domains that are declared with CHECK constraints involving user-defined stable or immutable functions. Such functions typically fail if no snapshot has been set. @@ -4425,7 +4425,7 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) @@ -4433,21 +4433,21 @@ Fix a problem that sometimes kept ALTER TABLE ENABLE/DISABLE - RULE from being recognized by active sessions (Tom) + RULE from being recognized by active sessions (Tom) - Fix a problem that made UPDATE RETURNING tableoid + Fix a problem that made UPDATE RETURNING tableoid return zero instead of the correct OID (Tom) - Allow functions declared as taking ANYARRAY to work on - the pg_statistic columns of that type (Tom) + Allow functions declared as taking ANYARRAY to work on + the pg_statistic columns of that type (Tom) @@ -4463,13 +4463,13 @@ This could result in bad plans for queries like - ... from a left join b on a.a1 = b.b1 where a.a1 = 42 ... + ... from a left join b on a.a1 = b.b1 where a.a1 = 42 ... 
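Filling out the query fragment quoted above into a complete statement (tables a and b are hypothetical), the plan shape at issue is:

    SELECT *
    FROM a LEFT JOIN b ON a.a1 = b.b1
    WHERE a.a1 = 42;   -- this form could previously get a bad plan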
- Improve optimizer's handling of long IN lists (Tom) + Improve optimizer's handling of long IN lists (Tom) @@ -4521,21 +4521,21 @@ - Fix contrib/dblink's - dblink_get_result(text,bool) function (Joe) + Fix contrib/dblink's + dblink_get_result(text,bool) function (Joe) - Fix possible garbage output from contrib/sslinfo functions + Fix possible garbage output from contrib/sslinfo functions (Tom) - Fix incorrect behavior of contrib/tsearch2 compatibility + Fix incorrect behavior of contrib/tsearch2 compatibility trigger when it's fired more than once in a command (Teodor) @@ -4554,29 +4554,29 @@ - Fix ecpg's handling of varchar structs (Michael) + Fix ecpg's handling of varchar structs (Michael) - Fix configure script to properly report failure when + Fix configure script to properly report failure when unable to obtain linkage information for PL/Perl (Andrew) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) - Update time zone data files to tzdata release 2009a (for + Update time zone data files to tzdata release 2009a (for Kathmandu and historical DST corrections in Switzerland, Cuba) @@ -4607,7 +4607,7 @@ A dump/restore is not required for those running 8.3.X. However, if you are upgrading from a version earlier than 8.3.1, see . Also, if you were running a previous - 8.3.X release, it is recommended to REINDEX all GiST + 8.3.X release, it is recommended to REINDEX all GiST indexes after the upgrade. @@ -4621,13 +4621,13 @@ Fix GiST index corruption due to marking the wrong index entry - dead after a deletion (Teodor) + dead after a deletion (Teodor) This would result in index searches failing to find rows they should have found. Corrupted indexes can be fixed with - REINDEX. + REINDEX. @@ -4639,7 +4639,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -4647,7 +4647,7 @@ - Fix possible crash in bytea-to-XML mapping (Michael McMaster) + Fix possible crash in bytea-to-XML mapping (Michael McMaster) @@ -4660,8 +4660,8 @@ - Improve optimization of expression IN - (expression-list) queries (Tom, per an idea from Robert + Improve optimization of expression IN + (expression-list) queries (Tom, per an idea from Robert Haas) @@ -4674,20 +4674,20 @@ - Fix mis-expansion of rule queries when a sub-SELECT appears - in a function call in FROM, a multi-row VALUES - list, or a RETURNING list (Tom) + Fix mis-expansion of rule queries when a sub-SELECT appears + in a function call in FROM, a multi-row VALUES + list, or a RETURNING list (Tom) - The usual symptom of this problem is an unrecognized node type + The usual symptom of this problem is an unrecognized node type error. 
- Fix Assert failure during rescan of an IS NULL + Fix Assert failure during rescan of an IS NULL search of a GiST index (Teodor) @@ -4707,7 +4707,7 @@ - Force a checkpoint before CREATE DATABASE starts to copy + Force a checkpoint before CREATE DATABASE starts to copy files (Heikki) @@ -4719,9 +4719,9 @@ - Prevent possible collision of relfilenode numbers + Prevent possible collision of relfilenode numbers when moving a table to another tablespace with ALTER SET - TABLESPACE (Heikki) + TABLESPACE (Heikki) @@ -4740,21 +4740,21 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Make ILIKE compare characters case-insensitively + Make ILIKE compare characters case-insensitively even when they're escaped (Andrew) - Ensure DISCARD is handled properly by statement logging (Tom) + Ensure DISCARD is handled properly by statement logging (Tom) @@ -4767,7 +4767,7 @@ - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -4781,15 +4781,15 @@ - Mark SessionReplicationRole as PGDLLIMPORT - so it can be used by Slony on Windows (Magnus) + Mark SessionReplicationRole as PGDLLIMPORT + so it can be used by Slony on Windows (Magnus) - Fix small memory leak when using libpq's - gsslib parameter (Magnus) + Fix small memory leak when using libpq's + gsslib parameter (Magnus) @@ -4800,38 +4800,38 @@ - Ensure libgssapi is linked into libpq + Ensure libgssapi is linked into libpq if needed (Markus Schaaf) - Fix ecpg's parsing of CREATE ROLE (Michael) + Fix ecpg's parsing of CREATE ROLE (Michael) - Fix recent breakage of pg_ctl restart (Tom) + Fix recent breakage of pg_ctl restart (Tom) - Ensure pg_control is opened in binary mode + Ensure pg_control is opened in binary mode (Itagaki Takahiro) - pg_controldata and pg_resetxlog + pg_controldata and pg_resetxlog did this incorrectly, and so could fail on Windows. - Update time zone data files to tzdata release 2008i (for + Update time zone data files to tzdata release 2008i (for DST law changes in Argentina, Brazil, Mauritius, Syria) @@ -4888,41 +4888,41 @@ This error created a risk of corruption in system - catalogs that are consulted by VACUUM: dead tuple versions + catalogs that are consulted by VACUUM: dead tuple versions might be removed too soon. The impact of this on actual database operations would be minimal, since the system doesn't follow MVCC rules while examining catalogs, but it might result in transiently - wrong output from pg_dump or other client programs. + wrong output from pg_dump or other client programs. - Fix potential miscalculation of datfrozenxid (Alvaro) + Fix potential miscalculation of datfrozenxid (Alvaro) This error may explain some recent reports of failure to remove old - pg_clog data. + pg_clog data. - Fix incorrect HOT updates after pg_class is reindexed + Fix incorrect HOT updates after pg_class is reindexed (Tom) - Corruption of pg_class could occur if REINDEX - TABLE pg_class was followed in the same session by an ALTER - TABLE RENAME or ALTER TABLE SET SCHEMA command. + Corruption of pg_class could occur if REINDEX + TABLE pg_class was followed in the same session by an ALTER + TABLE RENAME or ALTER TABLE SET SCHEMA command. 
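The problematic sequence for the pg_class item above is, within a single session (the renamed table is hypothetical):

    REINDEX TABLE pg_class;
    ALTER TABLE invoices RENAME TO invoices_old;   -- could corrupt pg_class before this fix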
- Fix missed combo cid case (Karl Schnaitter) + Fix missed combo cid case (Karl Schnaitter) @@ -4946,7 +4946,7 @@ This responds to reports that the counters could overflow in sufficiently long transactions, leading to unexpected lock is - already held errors. + already held errors. @@ -4972,7 +4972,7 @@ Fix missed permissions checks when a view contains a simple - UNION ALL construct (Heikki) + UNION ALL construct (Heikki) @@ -4984,7 +4984,7 @@ Add checks in executor startup to ensure that the tuples produced by an - INSERT or UPDATE will match the target table's + INSERT or UPDATE will match the target table's current rowtype (Tom) @@ -4996,12 +4996,12 @@ - Fix possible repeated drops during DROP OWNED (Tom) + Fix possible repeated drops during DROP OWNED (Tom) This would typically result in strange errors such as cache - lookup failed for relation NNN. + lookup failed for relation NNN. @@ -5013,7 +5013,7 @@ - Fix xmlserialize() to raise error properly for + Fix xmlserialize() to raise error properly for unacceptable target data type (Tom) @@ -5026,7 +5026,7 @@ Certain characters occurring in configuration files would always cause - invalid byte sequence for encoding failures. + invalid byte sequence for encoding failures. @@ -5039,18 +5039,18 @@ - Fix AT TIME ZONE to first try to interpret its timezone + Fix AT TIME ZONE to first try to interpret its timezone argument as a timezone abbreviation, and only try it as a full timezone name if that fails, rather than the other way around as formerly (Tom) The timestamp input functions have always resolved ambiguous zone names - in this order. Making AT TIME ZONE do so as well improves + in this order. Making AT TIME ZONE do so as well improves consistency, and fixes a compatibility bug introduced in 8.1: in ambiguous cases we now behave the same as 8.0 and before did, - since in the older versions AT TIME ZONE accepted - only abbreviations. + since in the older versions AT TIME ZONE accepted + only abbreviations. @@ -5077,26 +5077,26 @@ Allow spaces in the suffix part of an LDAP URL in - pg_hba.conf (Tom) + pg_hba.conf (Tom) Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) - Fix planner bug that could improperly push down IS NULL + Fix planner bug that could improperly push down IS NULL tests below an outer join (Tom) - This was triggered by occurrence of IS NULL tests for - the same relation in all arms of an upper OR clause. + This was triggered by occurrence of IS NULL tests for + the same relation in all arms of an upper OR clause. @@ -5114,21 +5114,21 @@ - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. 
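The kind of grouping covered by the new estimate looks like this (table and column are hypothetical); the planner now assumes the boolean expression yields two groups:

    SELECT col IS NULL AS col_is_null, count(*)
    FROM my_table
    GROUP BY col IS NULL;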
- Fix PL/pgSQL to not fail when a FOR loop's target variable + Fix PL/pgSQL to not fail when a FOR loop's target variable is a record containing composite-type fields (Tom) @@ -5142,49 +5142,49 @@ - Improve performance of PQescapeBytea() (Rudolf Leitgeb) + Improve performance of PQescapeBytea() (Rudolf Leitgeb) On Windows, work around a Microsoft bug by preventing - libpq from trying to send more than 64kB per system call + libpq from trying to send more than 64kB per system call (Magnus) - Fix ecpg to handle variables properly in SET + Fix ecpg to handle variables properly in SET commands (Michael) - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) - Fix pg_ctl to properly preserve postmaster - command-line arguments across a restart (Bruce) + Fix pg_ctl to properly preserve postmaster + command-line arguments across a restart (Bruce) Fix erroneous WAL file cutoff point calculation in - pg_standby (Simon) + pg_standby (Simon) - Update time zone data files to tzdata release 2008f (for + Update time zone data files to tzdata release 2008f (for DST law changes in Argentina, Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and Paraguay) @@ -5227,18 +5227,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -5246,13 +5246,13 @@ - Make ALTER AGGREGATE ... OWNER TO update - pg_shdepend (Tom) + Make ALTER AGGREGATE ... OWNER TO update + pg_shdepend (Tom) This oversight could lead to problems if the aggregate was later - involved in a DROP OWNED or REASSIGN OWNED + involved in a DROP OWNED or REASSIGN OWNED operation. @@ -5303,19 +5303,19 @@ Fix incorrect archive truncation point calculation for the - %r macro in restore_command parameters + %r macro in restore_command parameters (Simon) This could lead to data loss if a warm-standby script relied on - %r to decide when to throw away WAL segment files. + %r to decide when to throw away WAL segment files. - Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new + Fix ALTER TABLE ADD COLUMN ... 
PRIMARY KEY so that the new column is correctly checked to see if it's been initialized to all non-nulls (Brendan Jurd) @@ -5327,31 +5327,31 @@ - Fix REASSIGN OWNED so that it works on procedural + Fix REASSIGN OWNED so that it works on procedural languages too (Alvaro) - Fix problems with SELECT FOR UPDATE/SHARE occurring as a - subquery in a query with a non-SELECT top-level operation + Fix problems with SELECT FOR UPDATE/SHARE occurring as a + subquery in a query with a non-SELECT top-level operation (Tom) - Fix possible CREATE TABLE failure when inheriting the - same constraint from multiple parent relations that + Fix possible CREATE TABLE failure when inheriting the + same constraint from multiple parent relations that inherited that constraint from a common ancestor (Tom) - Fix pg_get_ruledef() to show the alias, if any, attached - to the target table of an UPDATE or DELETE + Fix pg_get_ruledef() to show the alias, if any, attached + to the target table of an UPDATE or DELETE (Tom) @@ -5377,13 +5377,13 @@ - Fix broken GiST comparison function for tsquery (Teodor) + Fix broken GiST comparison function for tsquery (Teodor) - Fix tsvector_update_trigger() and ts_stat() + Fix tsvector_update_trigger() and ts_stat() to accept domains over the types they expect to work with (Tom) @@ -5404,7 +5404,7 @@ Fix race conditions between delayed unlinks and DROP - DATABASE (Heikki) + DATABASE (Heikki) @@ -5431,11 +5431,11 @@ Fix possible crash due to incorrect plan generated for an - x IN (SELECT y - FROM ...) clause when x and y + x IN (SELECT y + FROM ...) clause when x and y have different data types; and make sure the behavior is semantically - correct when the conversion from y's type to - x's type is lossy (Tom) + correct when the conversion from y's type to + x's type is lossy (Tom) @@ -5456,15 +5456,15 @@ - Fix planner failure when an indexable MIN or - MAX aggregate is used with DISTINCT or - ORDER BY (Tom) + Fix planner failure when an indexable MIN or + MAX aggregate is used with DISTINCT or + ORDER BY (Tom) - Fix planner to ensure it never uses a physical tlist for a + Fix planner to ensure it never uses a physical tlist for a plan node that is feeding a Sort node (Tom) @@ -5488,7 +5488,7 @@ - Make TransactionIdIsCurrentTransactionId() use binary + Make TransactionIdIsCurrentTransactionId() use binary search instead of linear search when checking child-transaction XIDs (Heikki) @@ -5502,14 +5502,14 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) - Fix several datatype input functions, notably array_in(), + Fix several datatype input functions, notably array_in(), that were allowing unused bytes in their results to contain uninitialized, unpredictable values (Tom) @@ -5517,7 +5517,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -5525,18 +5525,18 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). 
- This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). + This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). @@ -5549,7 +5549,7 @@ - Improve ANALYZE's handling of in-doubt tuples (those + Improve ANALYZE's handling of in-doubt tuples (those inserted or deleted by a not-yet-committed transaction) so that the counts it reports to the stats collector are more likely to be correct (Pavan Deolasee) @@ -5558,14 +5558,14 @@ - Fix initdb to reject a relative path for its - --xlogdir (-X) option (Tom) + Fix initdb to reject a relative path for its + --xlogdir (-X) option (Tom) - Make psql print tab characters as an appropriate + Make psql print tab characters as an appropriate number of spaces, rather than \x09 as was done in 8.3.0 and 8.3.1 (Bruce) @@ -5573,7 +5573,7 @@ - Update time zone data files to tzdata release 2008c (for + Update time zone data files to tzdata release 2008c (for DST law changes in Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, and Argentina/San_Luis) @@ -5581,44 +5581,44 @@ - Add ECPGget_PGconn() function to - ecpglib (Michael) + Add ECPGget_PGconn() function to + ecpglib (Michael) - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix handling of continuation line markers in ecpg + Fix handling of continuation line markers in ecpg (Michael) - Fix possible crashes in contrib/cube functions (Tom) + Fix possible crashes in contrib/cube functions (Tom) - Fix core dump in contrib/xml2's - xpath_table() function when the input query returns a + Fix core dump in contrib/xml2's + xpath_table() function when the input query returns a NULL value (Tom) - Fix contrib/xml2's makefile to not override - CFLAGS, and make it auto-configure properly for - libxslt present or not (Tom) + Fix contrib/xml2's makefile to not override + CFLAGS, and make it auto-configure properly for + libxslt present or not (Tom) @@ -5646,7 +5646,7 @@ A dump/restore is not required for those running 8.3.X. - However, you might need to REINDEX indexes on textual + However, you might need to REINDEX indexes on textual columns after updating, if you are affected by the Windows locale issue described below. @@ -5670,17 +5670,17 @@ over two years ago, but Windows with UTF-8 uses a separate code path that was not updated. If you are using a locale that considers some non-identical strings as equal, you may need to - REINDEX to fix existing indexes on textual columns. + REINDEX to fix existing indexes on textual columns. - Repair corner-case bugs in VACUUM FULL (Tom) + Repair corner-case bugs in VACUUM FULL (Tom) - A potential deadlock between concurrent VACUUM FULL + A potential deadlock between concurrent VACUUM FULL operations on different system catalogs was introduced in 8.2. This has now been corrected. 8.3 made this worse because the deadlock could occur within a critical code section, making it @@ -5688,13 +5688,13 @@ - Also, a VACUUM FULL that failed partway through + Also, a VACUUM FULL that failed partway through vacuuming a system catalog could result in cache corruption in concurrent database sessions. - Another VACUUM FULL bug introduced in 8.3 could + Another VACUUM FULL bug introduced in 8.3 could result in a crash or out-of-memory report when dealing with pages containing no live tuples. 
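The regular-expression substring corner case covered earlier in this set of fixes can be demonstrated directly (the example is taken from the item's own description):

    SELECT substring('foo' from 'foo(bar)?');   -- now returns NULL, not 'foo'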
@@ -5702,13 +5702,13 @@ - Fix misbehavior of foreign key checks involving character - or bit columns (Tom) + Fix misbehavior of foreign key checks involving character + or bit columns (Tom) If the referencing column were of a different but compatible type - (for instance varchar), the constraint was enforced incorrectly. + (for instance varchar), the constraint was enforced incorrectly. @@ -5726,7 +5726,7 @@ This bug affected only protocol-level prepare operations, not - SQL PREPARE, and so tended to be seen only with + SQL PREPARE, and so tended to be seen only with JDBC, DBI, and other client-side drivers that use prepared statements heavily. @@ -5748,21 +5748,21 @@ - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -5771,14 +5771,14 @@ - Disallow LISTEN and UNLISTEN within a + Disallow LISTEN and UNLISTEN within a prepared transaction (Tom) This was formerly allowed but trying to do it had various unpleasant consequences, notably that the originating backend could not exit - as long as an UNLISTEN remained uncommitted. + as long as an UNLISTEN remained uncommitted. @@ -5803,20 +5803,20 @@ - Fix incorrect comparison of tsquery values (Teodor) + Fix incorrect comparison of tsquery values (Teodor) - Fix incorrect behavior of LIKE with non-ASCII characters + Fix incorrect behavior of LIKE with non-ASCII characters in single-byte encodings (Rolf Jentsch) - Disable xmlvalidate (Tom) + Disable xmlvalidate (Tom) @@ -5835,8 +5835,8 @@ - Make encode(bytea, 'escape') convert all - high-bit-set byte values into \nnn octal + Make encode(bytea, 'escape') convert all + high-bit-set byte values into \nnn octal escape sequences (Tom) @@ -5844,7 +5844,7 @@ This is necessary to avoid encoding problems when the database encoding is multi-byte. This change could pose compatibility issues for applications that are expecting specific results from - encode. + encode. 
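A small hedged illustration of the encode(bytea, 'escape') behavior described above, using decode() only to construct a single high-bit-set byte:

    SELECT encode(decode('80', 'hex'), 'escape');   -- yields the octal escape \200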
@@ -5860,21 +5860,21 @@ - Fix unrecognized node type error in some variants of - ALTER OWNER (Tom) + Fix unrecognized node type error in some variants of + ALTER OWNER (Tom) Avoid tablespace permissions errors in CREATE TABLE LIKE - INCLUDING INDEXES (Tom) + INCLUDING INDEXES (Tom) - Ensure pg_stat_activity.waiting flag + Ensure pg_stat_activity.waiting flag is cleared when a lock wait is aborted (Tom) @@ -5892,26 +5892,26 @@ - Update time zone data files to tzdata release 2008a + Update time zone data files to tzdata release 2008a (in particular, recent Chile changes); adjust timezone abbreviation - VET (Venezuela) to mean UTC-4:30, not UTC-4:00 (Tom) + VET (Venezuela) to mean UTC-4:30, not UTC-4:00 (Tom) - Fix ecpg problems with arrays (Michael) + Fix ecpg problems with arrays (Michael) - Fix pg_ctl to correctly extract the postmaster's port + Fix pg_ctl to correctly extract the postmaster's port number from command-line options (Itagaki Takahiro, Tom) - Previously, pg_ctl start -w could try to contact the + Previously, pg_ctl start -w could try to contact the postmaster on the wrong port, leading to bogus reports of startup failure. @@ -5919,19 +5919,19 @@ - Use - This is known to be necessary when building PostgreSQL - with gcc 4.3 or later. + This is known to be necessary when building PostgreSQL + with gcc 4.3 or later. - Enable building contrib/uuid-ossp with MSVC (Hiroshi Saito) + Enable building contrib/uuid-ossp with MSVC (Hiroshi Saito) @@ -5954,7 +5954,7 @@ With significant new functionality and performance enhancements, this release represents a major leap forward for - PostgreSQL. This was made possible by a growing + PostgreSQL. This was made possible by a growing community that has dramatically accelerated the pace of development. This release adds the following major features: @@ -5988,13 +5988,13 @@ - Universally Unique Identifier (UUID) data type + Universally Unique Identifier (UUID) data type - Add control over whether NULLs sort first or last + Add control over whether NULLs sort first or last @@ -6032,7 +6032,7 @@ - Support Security Service Provider Interface (SSPI) for + Support Security Service Provider Interface (SSPI) for authentication on Windows @@ -6046,8 +6046,8 @@ - Allow the whole PostgreSQL distribution to be compiled - with Microsoft Visual C++ + Allow the whole PostgreSQL distribution to be compiled + with Microsoft Visual C++ @@ -6076,8 +6076,8 @@ - Heap-Only Tuples (HOT) accelerate space reuse for - most UPDATEs and DELETEs + Heap-Only Tuples (HOT) accelerate space reuse for + most UPDATEs and DELETEs @@ -6091,7 +6091,7 @@ Using non-persistent transaction IDs for read-only transactions - reduces overhead and VACUUM requirements + reduces overhead and VACUUM requirements @@ -6116,7 +6116,7 @@ - ORDER BY ... LIMIT can be done without sorting + ORDER BY ... LIMIT can be done without sorting @@ -6148,14 +6148,14 @@ Non-character data types are no longer automatically cast to - TEXT (Peter, Tom) + TEXT (Peter, Tom) Previously, if a non-character value was supplied to an operator or - function that requires text input, it was automatically - cast to text, for most (though not all) built-in data types. - This no longer happens: an explicit cast to text is now + function that requires text input, it was automatically + cast to text, for most (though not all) built-in data types. + This no longer happens: an explicit cast to text is now required for all non-character-string types. 
For example, these expressions formerly worked: @@ -6164,15 +6164,15 @@ substr(current_date, 1, 4) 23 LIKE '2%' - but will now draw function does not exist and operator - does not exist errors respectively. Use an explicit cast instead: + but will now draw function does not exist and operator + does not exist errors respectively. Use an explicit cast instead: substr(current_date::text, 1, 4) 23::text LIKE '2%' - (Of course, you can use the more verbose CAST() syntax too.) + (Of course, you can use the more verbose CAST() syntax too.) The reason for the change is that these automatic casts too often caused surprising behavior. An example is that in previous releases, this expression was accepted but did not do what was expected: @@ -6183,35 +6183,35 @@ current_date < 2017-11-17 This is actually comparing a date to an integer, which should be (and now is) rejected — but in the presence of automatic - casts both sides were cast to text and a textual comparison - was done, because the text < text operator was able - to match the expression when no other < operator could. + casts both sides were cast to text and a textual comparison + was done, because the text < text operator was able + to match the expression when no other < operator could. - Types char(n) and - varchar(n) still cast to text - automatically. Also, automatic casting to text still works for - inputs to the concatenation (||) operator, so long as least + Types char(n) and + varchar(n) still cast to text + automatically. Also, automatic casting to text still works for + inputs to the concatenation (||) operator, so long as least one input is a character-string type. - Full text search features from contrib/tsearch2 have + Full text search features from contrib/tsearch2 have been moved into the core server, with some minor syntax changes - contrib/tsearch2 now contains a compatibility + contrib/tsearch2 now contains a compatibility interface. - ARRAY(SELECT ...), where the SELECT + ARRAY(SELECT ...), where the SELECT returns no rows, now returns an empty array, rather than NULL (Tom) @@ -6233,8 +6233,8 @@ current_date < 2017-11-17 - ORDER BY ... USING operator must now - use a less-than or greater-than operator that is + ORDER BY ... USING operator must now + use a less-than or greater-than operator that is defined in a btree operator class @@ -6251,7 +6251,7 @@ current_date < 2017-11-17 Previously SET LOCAL's effects were lost - after subtransaction commit (RELEASE SAVEPOINT + after subtransaction commit (RELEASE SAVEPOINT or exit from a PL/pgSQL exception block). @@ -6263,15 +6263,15 @@ current_date < 2017-11-17 - For example, "BEGIN; DROP DATABASE; COMMIT" will now be + For example, "BEGIN; DROP DATABASE; COMMIT" will now be rejected even if submitted as a single query message. - ROLLBACK outside a transaction block now - issues NOTICE instead of WARNING (Bruce) + ROLLBACK outside a transaction block now + issues NOTICE instead of WARNING (Bruce) @@ -6282,15 +6282,15 @@ current_date < 2017-11-17 - Formerly, these commands accepted schema.relation but + Formerly, these commands accepted schema.relation but ignored the schema part, which was confusing. - ALTER SEQUENCE no longer affects the sequence's - currval() state (Tom) + ALTER SEQUENCE no longer affects the sequence's + currval() state (Tom) @@ -6314,16 +6314,16 @@ current_date < 2017-11-17 For example, pg_database_size() now requires - CONNECT permission, which is granted to everyone by + CONNECT permission, which is granted to everyone by default. 
pg_tablespace_size() requires - CREATE permission in the tablespace, or is allowed if + CREATE permission in the tablespace, or is allowed if the tablespace is the default tablespace for the database. - Remove the undocumented !!= (not in) operator (Tom) + Remove the undocumented !!= (not in) operator (Tom) @@ -6339,7 +6339,7 @@ current_date < 2017-11-17 If application code was computing and storing hash values using - internal PostgreSQL hashing functions, the hash + internal PostgreSQL hashing functions, the hash values must be regenerated. @@ -6351,8 +6351,8 @@ current_date < 2017-11-17 - The new SET_VARSIZE() macro must be used - to set the length of generated varlena values. Also, it + The new SET_VARSIZE() macro must be used + to set the length of generated varlena values. Also, it might be necessary to expand (de-TOAST) input values in more cases. @@ -6361,7 +6361,7 @@ current_date < 2017-11-17 Continuous archiving no longer reports each successful archive - operation to the server logs unless DEBUG level is used + operation to the server logs unless DEBUG level is used (Simon) @@ -6381,18 +6381,18 @@ current_date < 2017-11-17 - bgwriter_lru_percent, - bgwriter_all_percent, - bgwriter_all_maxpages, - stats_start_collector, and - stats_reset_on_server_start are removed. - redirect_stderr is renamed to - logging_collector. - stats_command_string is renamed to - track_activities. - stats_block_level and stats_row_level - are merged into track_counts. - A new boolean configuration parameter, archive_mode, + bgwriter_lru_percent, + bgwriter_all_percent, + bgwriter_all_maxpages, + stats_start_collector, and + stats_reset_on_server_start are removed. + redirect_stderr is renamed to + logging_collector. + stats_command_string is renamed to + track_activities. + stats_block_level and stats_row_level + are merged into track_counts. + A new boolean configuration parameter, archive_mode, controls archiving. Autovacuum's default settings have changed. @@ -6403,7 +6403,7 @@ current_date < 2017-11-17 - We now always start the collector process, unless UDP + We now always start the collector process, unless UDP socket creation fails. @@ -6421,7 +6421,7 @@ current_date < 2017-11-17 - Commenting out a parameter in postgresql.conf now + Commenting out a parameter in postgresql.conf now causes it to revert to its default value (Joachim Wieland) @@ -6461,12 +6461,12 @@ current_date < 2017-11-17 - On most platforms, C locale is the only locale that + On most platforms, C locale is the only locale that will work with any database encoding. Other locale settings imply a specific encoding and will misbehave if the database encoding is something different. (Typical symptoms include bogus textual - sort order and wrong results from upper() or - lower().) The server now rejects attempts to create + sort order and wrong results from upper() or + lower().) The server now rejects attempts to create databases that have an incompatible encoding. 
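To illustrate the encoding check mentioned above: on a server whose locale implies UTF-8, an attempt like the following (database name hypothetical) is now rejected instead of silently creating a misbehaving database:

    CREATE DATABASE demo ENCODING 'LATIN1';   -- fails unless the locale is C or matches LATIN1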
@@ -6503,7 +6503,7 @@ current_date < 2017-11-17 convert_from(bytea, name) returns - text — converts the first argument from the named + text — converts the first argument from the named encoding to the database encoding @@ -6511,7 +6511,7 @@ current_date < 2017-11-17 convert_to(text, name) returns - bytea — converts the first argument from the + bytea — converts the first argument from the database encoding to the named encoding @@ -6519,7 +6519,7 @@ current_date < 2017-11-17 length(bytea, name) returns - integer — gives the length of the first + integer — gives the length of the first argument in characters in the named encoding @@ -6582,10 +6582,10 @@ current_date < 2017-11-17 database consistency at risk; the worst case is that after a crash the last few reportedly-committed transactions might not be committed after all. - This feature is enabled by turning off synchronous_commit + This feature is enabled by turning off synchronous_commit (which can be done per-session or per-transaction, if some transactions are critical and others are not). - wal_writer_delay can be adjusted to control the maximum + wal_writer_delay can be adjusted to control the maximum delay before transactions actually reach disk. @@ -6609,19 +6609,19 @@ current_date < 2017-11-17 - Heap-Only Tuples (HOT) accelerate space reuse for most - UPDATEs and DELETEs (Pavan Deolasee, with + Heap-Only Tuples (HOT) accelerate space reuse for most + UPDATEs and DELETEs (Pavan Deolasee, with ideas from many others) - UPDATEs and DELETEs leave dead tuples - behind, as do failed INSERTs. Previously only - VACUUM could reclaim space taken by dead tuples. With - HOT dead tuple space can be automatically reclaimed at - the time of INSERT or UPDATE if no changes + UPDATEs and DELETEs leave dead tuples + behind, as do failed INSERTs. Previously only + VACUUM could reclaim space taken by dead tuples. With + HOT dead tuple space can be automatically reclaimed at + the time of INSERT or UPDATE if no changes are made to indexed columns. This allows for more consistent - performance. Also, HOT avoids adding duplicate index + performance. Also, HOT avoids adding duplicate index entries. @@ -6655,13 +6655,13 @@ current_date < 2017-11-17 Using non-persistent transaction IDs for read-only transactions - reduces overhead and VACUUM requirements (Florian Pflug) + reduces overhead and VACUUM requirements (Florian Pflug) Non-persistent transaction IDs do not increment the global transaction counter. Therefore, they reduce the load on - pg_clog and increase the time between forced + pg_clog and increase the time between forced vacuums to prevent transaction ID wraparound. Other performance improvements were also made that should improve concurrency. @@ -6674,7 +6674,7 @@ current_date < 2017-11-17 - There was formerly a hard limit of 232 + There was formerly a hard limit of 232 (4 billion) commands per transaction. Now only commands that actually changed the database count, so while this limit still exists, it should be significantly less annoying. @@ -6683,7 +6683,7 @@ current_date < 2017-11-17 - Create a dedicated WAL writer process to off-load + Create a dedicated WAL writer process to off-load work from backends (Simon) @@ -6696,7 +6696,7 @@ current_date < 2017-11-17 Unless WAL archiving is enabled, the system now avoids WAL writes - for CLUSTER and just fsync()s the + for CLUSTER and just fsync()s the table at the end of the command. It also does the same for COPY if the table was created in the same transaction. 
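A sketch of the bulk-load pattern that benefits from the WAL avoidance described above, with hypothetical table and file names (WAL archiving must be disabled for the optimization to apply):

    BEGIN;
    CREATE TABLE bulk_data (id integer, payload text);
    COPY bulk_data FROM '/tmp/bulk_data.csv' CSV;
    COMMIT;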
@@ -6720,22 +6720,22 @@ current_date < 2017-11-17 middle of the table (where another sequential scan is already in-progress) and wrapping around to the beginning to finish. This can affect the order of returned rows in a query that does not - specify ORDER BY. The synchronize_seqscans + specify ORDER BY. The synchronize_seqscans configuration parameter can be used to disable this if necessary. - ORDER BY ... LIMIT can be done without sorting + ORDER BY ... LIMIT can be done without sorting (Greg Stark) This is done by sequentially scanning the table and tracking just - the top N candidate rows, rather than performing a + the top N candidate rows, rather than performing a full sort of the entire table. This is useful when there is no - matching index and the LIMIT is not large. + matching index and the LIMIT is not large. @@ -6805,7 +6805,7 @@ current_date < 2017-11-17 Previously PL/pgSQL functions that referenced temporary tables would fail if the temporary table was dropped and recreated - between function invocations, unless EXECUTE was + between function invocations, unless EXECUTE was used. This improvement fixes that problem and many related issues. @@ -6830,7 +6830,7 @@ current_date < 2017-11-17 Place temporary tables' TOAST tables in special schemas named - pg_toast_temp_nnn (Tom) + pg_toast_temp_nnn (Tom) @@ -6860,7 +6860,7 @@ current_date < 2017-11-17 - Fix CREATE CONSTRAINT TRIGGER + Fix CREATE CONSTRAINT TRIGGER to convert old-style foreign key trigger definitions into regular foreign key constraints (Tom) @@ -6868,17 +6868,17 @@ current_date < 2017-11-17 This will ease porting of foreign key constraints carried forward from pre-7.3 databases, if they were never converted using - contrib/adddepend. + contrib/adddepend. - Fix DEFAULT NULL to override inherited defaults (Tom) + Fix DEFAULT NULL to override inherited defaults (Tom) - DEFAULT NULL was formerly considered a noise phrase, but it + DEFAULT NULL was formerly considered a noise phrase, but it should (and now does) override non-null defaults that would otherwise be inherited from a parent table or domain. @@ -6998,9 +6998,9 @@ current_date < 2017-11-17 This avoids Windows-specific problems with localized time zone names that are in the wrong encoding. There is a new - log_timezone parameter that controls the timezone + log_timezone parameter that controls the timezone used in log messages, independently of the client-visible - timezone parameter. + timezone parameter. 
@@ -7031,7 +7031,7 @@ current_date < 2017-11-17 - Add n_live_tuples and n_dead_tuples columns + Add n_live_tuples and n_dead_tuples columns to pg_stat_all_tables and related views (Glen Parker) @@ -7039,8 +7039,8 @@ current_date < 2017-11-17 - Merge stats_block_level and stats_row_level - parameters into a single parameter track_counts, which + Merge stats_block_level and stats_row_level + parameters into a single parameter track_counts, which controls all messages sent to the statistics collector process (Tom) @@ -7070,7 +7070,7 @@ current_date < 2017-11-17 - Support Security Service Provider Interface (SSPI) for + Support Security Service Provider Interface (SSPI) for authentication on Windows (Magnus) @@ -7094,14 +7094,14 @@ current_date < 2017-11-17 - Add ssl_ciphers parameter to control accepted SSL ciphers + Add ssl_ciphers parameter to control accepted SSL ciphers (Victor Wagner) - Add a Kerberos realm parameter, krb_realm (Magnus) + Add a Kerberos realm parameter, krb_realm (Magnus) @@ -7110,7 +7110,7 @@ current_date < 2017-11-17 - Write-Ahead Log (<acronym>WAL</>) and Continuous Archiving + Write-Ahead Log (<acronym>WAL</acronym>) and Continuous Archiving @@ -7133,7 +7133,7 @@ current_date < 2017-11-17 This change allows a warm standby server to pass the name of the earliest still-needed WAL file to the recovery script, allowing automatic removal - of no-longer-needed WAL files. This is done using %r in + of no-longer-needed WAL files. This is done using %r in the restore_command parameter of recovery.conf. @@ -7141,14 +7141,14 @@ current_date < 2017-11-17 - New boolean configuration parameter, archive_mode, + New boolean configuration parameter, archive_mode, controls archiving (Simon) - Previously setting archive_command to an empty string - turned off archiving. Now archive_mode turns archiving - on and off, independently of archive_command. This is + Previously setting archive_command to an empty string + turned off archiving. Now archive_mode turns archiving + on and off, independently of archive_command. This is useful for stopping archiving temporarily. @@ -7169,40 +7169,40 @@ current_date < 2017-11-17 Text search has been improved, moved into the core code, and is now - installed by default. contrib/tsearch2 now contains + installed by default. contrib/tsearch2 now contains a compatibility interface. - Add control over whether NULLs sort first or last (Teodor, Tom) + Add control over whether NULLs sort first or last (Teodor, Tom) - The syntax is ORDER BY ... NULLS FIRST/LAST. + The syntax is ORDER BY ... NULLS FIRST/LAST. - Allow per-column ascending/descending (ASC/DESC) + Allow per-column ascending/descending (ASC/DESC) ordering options for indexes (Teodor, Tom) - Previously a query using ORDER BY with mixed - ASC/DESC specifiers could not fully use + Previously a query using ORDER BY with mixed + ASC/DESC specifiers could not fully use an index. Now an index can be fully used in such cases if the index was created with matching - ASC/DESC specifications. - NULL sort order within an index can be controlled, too. + ASC/DESC specifications. + NULL sort order within an index can be controlled, too. - Allow col IS NULL to use an index (Teodor) + Allow col IS NULL to use an index (Teodor) @@ -7213,8 +7213,8 @@ current_date < 2017-11-17 This eliminates the need to reference a primary key to - UPDATE or DELETE rows returned by a cursor. - The syntax is UPDATE/DELETE WHERE CURRENT OF. + UPDATE or DELETE rows returned by a cursor. + The syntax is UPDATE/DELETE WHERE CURRENT OF. 
@@ -7243,7 +7243,7 @@ current_date < 2017-11-17 - Allow UNION and related constructs to return a domain + Allow UNION and related constructs to return a domain type, when all inputs are of that domain type (Tom) @@ -7271,7 +7271,7 @@ current_date < 2017-11-17 Improve optimizer logic for detecting when variables are equal - in a WHERE clause (Tom) + in a WHERE clause (Tom) @@ -7318,8 +7318,8 @@ current_date < 2017-11-17 For example, functions can now set their own - search_path to prevent unexpected behavior if a - different search_path exists at run-time. Security + search_path to prevent unexpected behavior if a + different search_path exists at run-time. Security definer functions should set search_path to avoid security loopholes. @@ -7367,7 +7367,7 @@ current_date < 2017-11-17 - Make CREATE/DROP/RENAME DATABASE wait briefly for + Make CREATE/DROP/RENAME DATABASE wait briefly for conflicting backends to exit before failing (Tom) @@ -7385,7 +7385,7 @@ current_date < 2017-11-17 This allows replication systems to disable triggers and rewrite rules as a group without modifying the system catalogs directly. - The behavior is controlled by ALTER TABLE and a new + The behavior is controlled by ALTER TABLE and a new parameter session_replication_role. @@ -7397,7 +7397,7 @@ current_date < 2017-11-17 This allows a user-defined type to take a modifier, like - ssnum(7). Previously only built-in + ssnum(7). Previously only built-in data types could have modifiers. @@ -7419,7 +7419,7 @@ current_date < 2017-11-17 While this is reasonably safe, some administrators might wish to revoke the privilege. It is controlled by - pg_pltemplate.tmpldbacreate. + pg_pltemplate.tmpldbacreate. @@ -7465,7 +7465,7 @@ current_date < 2017-11-17 Add new CLUSTER syntax: CLUSTER - table USING index + table USING index (Holger Schurig) @@ -7483,7 +7483,7 @@ current_date < 2017-11-17 References to subplan outputs are now always shown correctly, - instead of using ?columnN? + instead of using ?columnN? for complicated cases. @@ -7527,19 +7527,19 @@ current_date < 2017-11-17 This feature provides convenient support for fields that have a small, fixed set of allowed values. An example of creating an - ENUM type is - CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy'). + ENUM type is + CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy'). - Universally Unique Identifier (UUID) data type (Gevik + Universally Unique Identifier (UUID) data type (Gevik Babakhani, Neil) - This closely matches RFC 4122. + This closely matches RFC 4122. @@ -7549,7 +7549,7 @@ current_date < 2017-11-17 - This greatly increases the range of supported MONEY + This greatly increases the range of supported MONEY values. @@ -7557,13 +7557,13 @@ current_date < 2017-11-17 Fix float4/float8 to handle - Infinity and NAN (Not A Number) + Infinity and NAN (Not A Number) consistently (Bruce) The code formerly was not consistent about distinguishing - Infinity from overflow conditions. + Infinity from overflow conditions. @@ -7576,7 +7576,7 @@ current_date < 2017-11-17 - Prevent COPY from using digits and lowercase letters as + Prevent COPY from using digits and lowercase letters as delimiters (Tom) @@ -7613,7 +7613,7 @@ current_date < 2017-11-17 - Implement width_bucket() for the float8 + Implement width_bucket() for the float8 data type (Neil) @@ -7636,34 +7636,34 @@ current_date < 2017-11-17 - Add isodow option to EXTRACT() and - date_part() (Bruce) + Add isodow option to EXTRACT() and + date_part() (Bruce) This returns the day of the week, with Sunday as seven. 
- (dow returns Sunday as zero.) + (dow returns Sunday as zero.) - Add ID (ISO day of week) and IDDD (ISO - day of year) format codes for to_char(), - to_date(), and to_timestamp() (Brendan + Add ID (ISO day of week) and IDDD (ISO + day of year) format codes for to_char(), + to_date(), and to_timestamp() (Brendan Jurd) - Make to_timestamp() and to_date() + Make to_timestamp() and to_date() assume TM (trim) option for potentially variable-width fields (Bruce) - This matches Oracle's behavior. + This matches Oracle's behavior. @@ -7671,7 +7671,7 @@ current_date < 2017-11-17 Fix off-by-one conversion error in to_date()/to_timestamp() - D (non-ISO day of week) fields (Bruce) + D (non-ISO day of week) fields (Bruce) @@ -7757,7 +7757,7 @@ current_date < 2017-11-17 This adds convenient syntax for PL/pgSQL set-returning functions - that want to return the result of a query. RETURN QUERY + that want to return the result of a query. RETURN QUERY is easier and more efficient than a loop around RETURN NEXT. @@ -7770,7 +7770,7 @@ current_date < 2017-11-17 - For example, myfunc.myvar. This is particularly + For example, myfunc.myvar. This is particularly useful for specifying variables in a query where the variable name might match a column name. @@ -7790,11 +7790,11 @@ current_date < 2017-11-17 Tighten requirements for FOR loop - STEP values (Tom) + STEP values (Tom) - Prevent non-positive STEP values, and handle + Prevent non-positive STEP values, and handle loop overflows. @@ -7831,7 +7831,7 @@ current_date < 2017-11-17 - Allow type-name arguments to PL/Tcl spi_prepare to + Allow type-name arguments to PL/Tcl spi_prepare to be data type aliases in addition to names found in pg_type (Andrew) @@ -7852,7 +7852,7 @@ current_date < 2017-11-17 - Fix PL/Tcl problems with thread-enabled libtcl spawning + Fix PL/Tcl problems with thread-enabled libtcl spawning multiple threads within the backend (Steve Marshall, Paul Bayer, Doug Knight) @@ -7867,7 +7867,7 @@ current_date < 2017-11-17 - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> @@ -7907,20 +7907,20 @@ current_date < 2017-11-17 Allow \pset, \t, and - \x to specify on or off, + \x to specify on or off, rather than just toggling (Chad Wagner) - Add \sleep capability (Jan) + Add \sleep capability (Jan) - Enable \timing output for \copy (Andrew) + Enable \timing output for \copy (Andrew) @@ -7933,20 +7933,20 @@ current_date < 2017-11-17 - Flush \o output after each backslash command (Tom) + Flush \o output after each backslash command (Tom) - Correctly detect and report errors while reading a -f + Correctly detect and report errors while reading a -f input file (Peter) - Remove -u option (this option has long been deprecated) + Remove -u option (this option has long been deprecated) (Tom) @@ -7956,12 +7956,12 @@ current_date < 2017-11-17 - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Add --tablespaces-only and --roles-only + Add --tablespaces-only and --roles-only options to pg_dumpall (Dave Page) @@ -7980,7 +7980,7 @@ current_date < 2017-11-17 - Allow pg_dumpall to accept an initial-connection + Allow pg_dumpall to accept an initial-connection database name rather than the default template1 (Dave Page) @@ -7988,7 +7988,7 @@ current_date < 2017-11-17 - In -n and -t switches, always match + In -n and -t switches, always match $ literally (Tom) @@ -8001,7 +8001,7 @@ current_date < 2017-11-17 - Remove -u 
option (this option has long been deprecated) + Remove -u option (this option has long been deprecated) (Tom) @@ -8016,7 +8016,7 @@ current_date < 2017-11-17 - In initdb, allow the location of the + In initdb, allow the location of the pg_xlog directory to be specified (Euler Taveira de Oliveira) @@ -8024,19 +8024,19 @@ current_date < 2017-11-17 - Enable server core dump generation in pg_regress + Enable server core dump generation in pg_regress on supported operating systems (Andrew) - Add a -t (timeout) parameter to pg_ctl + Add a -t (timeout) parameter to pg_ctl (Bruce) - This controls how long pg_ctl will wait when waiting + This controls how long pg_ctl will wait when waiting for server startup or shutdown. Formerly the timeout was hard-wired as 60 seconds. @@ -8044,28 +8044,28 @@ current_date < 2017-11-17 - Add a pg_ctl option to control generation + Add a pg_ctl option to control generation of server core dumps (Andrew) - Allow Control-C to cancel clusterdb, - reindexdb, and vacuumdb (Itagaki + Allow Control-C to cancel clusterdb, + reindexdb, and vacuumdb (Itagaki Takahiro, Magnus) - Suppress command tag output for createdb, - createuser, dropdb, and - dropuser (Peter) + Suppress command tag output for createdb, + createuser, dropdb, and + dropuser (Peter) - The --quiet option is ignored and will be removed in 8.4. + The --quiet option is ignored and will be removed in 8.4. Progress messages when acting on all databases now go to stdout instead of stderr because they are not actually errors. @@ -8076,33 +8076,33 @@ current_date < 2017-11-17 - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> - Interpret the dbName parameter of - PQsetdbLogin() as a conninfo string if + Interpret the dbName parameter of + PQsetdbLogin() as a conninfo string if it contains an equals sign (Andrew) - This allows use of conninfo strings in client - programs that still use PQsetdbLogin(). + This allows use of conninfo strings in client + programs that still use PQsetdbLogin(). - Support a global SSL configuration file (Victor + Support a global SSL configuration file (Victor Wagner) - Add environment variable PGSSLKEY to control - SSL hardware keys (Victor Wagner) + Add environment variable PGSSLKEY to control + SSL hardware keys (Victor Wagner) @@ -8147,7 +8147,7 @@ current_date < 2017-11-17 - <link linkend="ecpg"><application>ecpg</></link> + <link linkend="ecpg"><application>ecpg</application></link> @@ -8183,13 +8183,13 @@ current_date < 2017-11-17 - <application>Windows</> Port + <application>Windows</application> Port - Allow the whole PostgreSQL distribution to be compiled - with Microsoft Visual C++ (Magnus and others) + Allow the whole PostgreSQL distribution to be compiled + with Microsoft Visual C++ (Magnus and others) @@ -8226,7 +8226,7 @@ current_date < 2017-11-17 - Server Programming Interface (<acronym>SPI</>) + Server Programming Interface (<acronym>SPI</acronym>) @@ -8236,7 +8236,7 @@ current_date < 2017-11-17 Allow access to the cursor-related planning options, and add - FETCH/MOVE routines. + FETCH/MOVE routines. @@ -8247,15 +8247,15 @@ current_date < 2017-11-17 - The macro SPI_ERROR_CURSOR still exists but will + The macro SPI_ERROR_CURSOR still exists but will never be returned. 
- SPI plan pointers are now declared as SPIPlanPtr instead of - void * (Tom) + SPI plan pointers are now declared as SPIPlanPtr instead of + void * (Tom) @@ -8274,35 +8274,35 @@ current_date < 2017-11-17 - Add configure option --enable-profiling - to enable code profiling (works only with gcc) + Add configure option --enable-profiling + to enable code profiling (works only with gcc) (Korry Douglas and Nikhil Sontakke) - Add configure option --with-system-tzdata + Add configure option --with-system-tzdata to use the operating system's time zone database (Peter) - Fix PGXS so extensions can be built against PostgreSQL - installations whose pg_config program does not - appear first in the PATH (Tom) + Fix PGXS so extensions can be built against PostgreSQL + installations whose pg_config program does not + appear first in the PATH (Tom) Support gmake draft when building the - SGML documentation (Bruce) + SGML documentation (Bruce) - Unless draft is used, the documentation build will + Unless draft is used, the documentation build will now be repeated if necessary to ensure the index is up-to-date. @@ -8317,9 +8317,9 @@ current_date < 2017-11-17 - Rename macro DLLIMPORT to PGDLLIMPORT to + Rename macro DLLIMPORT to PGDLLIMPORT to avoid conflicting with third party includes (like Tcl) that - define DLLIMPORT (Magnus) + define DLLIMPORT (Magnus) @@ -8332,15 +8332,15 @@ current_date < 2017-11-17 - Update GIN extractQuery() API to allow signalling + Update GIN extractQuery() API to allow signalling that nothing can satisfy the query (Teodor) - Move NAMEDATALEN definition from - postgres_ext.h to pg_config_manual.h + Move NAMEDATALEN definition from + postgres_ext.h to pg_config_manual.h (Peter) @@ -8364,7 +8364,7 @@ current_date < 2017-11-17 - Create a function variable join_search_hook to let plugins + Create a function variable join_search_hook to let plugins override the join search order portion of the planner (Julius Stroffek) @@ -8372,7 +8372,7 @@ current_date < 2017-11-17 - Add tas() support for Renesas' M32R processor + Add tas() support for Renesas' M32R processor (Kazuhiro Inaoka) @@ -8388,14 +8388,14 @@ current_date < 2017-11-17 Change the on-disk representation of the NUMERIC - data type so that the sign_dscale word comes + data type so that the sign_dscale word comes before the weight (Tom) - Use SYSV semaphores rather than POSIX on Darwin + Use SYSV semaphores rather than POSIX on Darwin >= 6.0, i.e., macOS 10.2 and up (Chris Marcellino) @@ -8432,8 +8432,8 @@ current_date < 2017-11-17 - Move contrib README content into the - main PostgreSQL documentation (Albert Cervera i + Move contrib README content into the + main PostgreSQL documentation (Albert Cervera i Areny) @@ -8455,11 +8455,11 @@ current_date < 2017-11-17 Add contrib/uuid-ossp module for generating - UUID values using the OSSP UUID library (Peter) + UUID values using the OSSP UUID library (Peter) - Use configure + Use configure --with-ossp-uuid to activate. This takes advantage of the new UUID builtin type. 
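Assuming the contrib/uuid-ossp SQL script has been loaded into the database, generating a value of the new UUID type looks like this (a sketch; the module provides several generation functions):

    SELECT uuid_generate_v4();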
@@ -8477,14 +8477,14 @@ current_date < 2017-11-17 - Allow contrib/pgbench to set the fillfactor (Pavan + Allow contrib/pgbench to set the fillfactor (Pavan Deolasee) - Add timestamps to contrib/pgbench -l + Add timestamps to contrib/pgbench -l (Greg Smith) @@ -8498,13 +8498,13 @@ current_date < 2017-11-17 - Add GIN support for contrib/hstore (Teodor) + Add GIN support for contrib/hstore (Teodor) - Add GIN support for contrib/pg_trgm (Guillaume Smet, Teodor) + Add GIN support for contrib/pg_trgm (Guillaume Smet, Teodor) diff --git a/doc/src/sgml/release-8.4.sgml b/doc/src/sgml/release-8.4.sgml index 53e319ff33..521048ad93 100644 --- a/doc/src/sgml/release-8.4.sgml +++ b/doc/src/sgml/release-8.4.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.4.X series. Users are encouraged to update to a newer release branch soon. @@ -48,15 +48,15 @@ - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. - Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -76,7 +76,7 @@ Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) @@ -103,13 +103,13 @@ This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. - Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -124,7 +124,7 @@ Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -137,7 +137,7 @@ - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -150,19 +150,19 @@ This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -170,7 +170,7 @@ - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -182,7 +182,7 @@ This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. 
@@ -190,7 +190,7 @@ Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -199,16 +199,16 @@ the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. @@ -232,15 +232,15 @@ - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -251,17 +251,17 @@ - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -269,27 +269,27 @@ - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -297,20 +297,20 @@ - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. - Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. 
@@ -335,7 +335,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.4.X release series in July 2014. Users are encouraged to update to a newer release branch soon. @@ -387,7 +387,7 @@ - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -400,35 +400,35 @@ - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. + entry to syslog(), and perhaps other related problems. - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. @@ -454,7 +454,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.4.X release series in July 2014. Users are encouraged to update to a newer release branch soon. @@ -480,19 +480,19 @@ - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. (CVE-2014-0060) @@ -505,7 +505,7 @@ The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -525,7 +525,7 @@ If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. 
@@ -539,12 +539,12 @@ - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -571,7 +571,7 @@ - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -583,35 +583,35 @@ - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). (CVE-2014-0066) - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -626,7 +626,7 @@ The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -654,25 +654,25 @@ Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. 
- Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -696,7 +696,7 @@ - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -710,19 +710,19 @@ Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) (Tom Lane) - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -730,21 +730,21 @@ - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -769,12 +769,12 @@ - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. @@ -782,51 +782,51 @@ - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. 
- Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -837,7 +837,7 @@ - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) @@ -851,21 +851,21 @@ - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -874,20 +874,20 @@ the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -939,13 +939,13 @@ - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would @@ -957,12 +957,12 @@ The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). @@ -979,8 +979,8 @@ - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -997,7 +997,7 @@ This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. 
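A minimal sketch of the amelioration described for the relfrozenxid fix above, assuming it is repeated in every database of the cluster and that the role running it is allowed to vacuum the relevant tables:

    -- Repeat in each database of the cluster.
    SET vacuum_freeze_table_age = 0;   -- force full-table vacuum scans for this session
    VACUUM;                            -- vacuums every table this role may vacuum
    -- Per the check suggested above, the installation can be presumed safe if it
    -- has executed fewer than 2^31 update transactions in its lifetime:
    SELECT txid_current() < 2^31 AS presumed_safe;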
@@ -1015,13 +1015,13 @@ - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. @@ -1035,7 +1035,7 @@ In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. @@ -1049,7 +1049,7 @@ - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -1060,10 +1060,10 @@ - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -1073,21 +1073,21 @@ - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. @@ -1139,7 +1139,7 @@ - PostgreSQL case-folds non-ASCII characters only + PostgreSQL case-folds non-ASCII characters only when using a single-byte server encoding. @@ -1153,7 +1153,7 @@ - Fix memory overcommit bug when work_mem is using more + Fix memory overcommit bug when work_mem is using more than 24GB of memory (Stephen Frost) @@ -1171,29 +1171,29 @@ - Previously tests like col IS NOT TRUE and col IS - NOT FALSE did not properly factor in NULL values when estimating + Previously tests like col IS NOT TRUE and col IS + NOT FALSE did not properly factor in NULL values when estimating plan costs. - Prevent pushing down WHERE clauses into unsafe - UNION/INTERSECT subqueries (Tom Lane) + Prevent pushing down WHERE clauses into unsafe + UNION/INTERSECT subqueries (Tom Lane) - Subqueries of a UNION or INTERSECT that + Subqueries of a UNION or INTERSECT that contain set-returning functions or volatile functions in their - SELECT lists could be improperly optimized, leading to + SELECT lists could be improperly optimized, leading to run-time errors or incorrect query results. 
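A minimal sketch of the query shape affected by the UNION/INTERSECT pushdown fix above, assuming invented table names t1 and t2; the volatile random() call in the SELECT lists is what makes pushing the outer WHERE clause into the arms unsafe:

    SELECT *
    FROM (SELECT random() AS r, id FROM t1   -- volatile function in the SELECT list
          UNION
          SELECT random() AS r, id FROM t2) AS u
    WHERE u.r > 0.5;                          -- outer qual must stay outside the UNION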
- Fix rare case of failed to locate grouping columns + Fix rare case of failed to locate grouping columns planner failure (Tom Lane) @@ -1208,13 +1208,13 @@ Fix possible deadlock during concurrent CREATE INDEX - CONCURRENTLY operations (Tom Lane) + CONCURRENTLY operations (Tom Lane) - Fix regexp_matches() handling of zero-length matches + Fix regexp_matches() handling of zero-length matches (Jeevan Chalke) @@ -1238,14 +1238,14 @@ - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) - Fix pgp_pub_decrypt() so it works for secret keys with + Fix pgp_pub_decrypt() so it works for secret keys with passwords (Marko Kreen) @@ -1260,21 +1260,21 @@ Avoid possible failure when performing transaction control commands (e.g - ROLLBACK) in prepared queries (Tom Lane) + ROLLBACK) in prepared queries (Tom Lane) Ensure that floating-point data input accepts standard spellings - of infinity on all platforms (Tom Lane) + of infinity on all platforms (Tom Lane) - The C99 standard says that allowable spellings are inf, - +inf, -inf, infinity, - +infinity, and -infinity. Make sure we - recognize these even if the platform's strtod function + The C99 standard says that allowable spellings are inf, + +inf, -inf, infinity, + +infinity, and -infinity. Make sure we + recognize these even if the platform's strtod function doesn't. @@ -1288,7 +1288,7 @@ - Update time zone data files to tzdata release 2013d + Update time zone data files to tzdata release 2013d for DST law changes in Israel, Morocco, Palestine, and Paraguay. Also, historical zone data corrections for Macquarie Island. @@ -1323,7 +1323,7 @@ However, this release corrects several errors in management of GiST indexes. After installing this update, it is advisable to - REINDEX any GiST indexes that meet one or more of the + REINDEX any GiST indexes that meet one or more of the conditions described below. @@ -1347,41 +1347,41 @@ This avoids a scenario wherein random numbers generated by - contrib/pgcrypto functions might be relatively easy for + contrib/pgcrypto functions might be relatively easy for another database user to guess. The risk is only significant when - the postmaster is configured with ssl = on + the postmaster is configured with ssl = on but most connections don't use SSL encryption. (CVE-2013-1900) - Fix GiST indexes to not use fuzzy geometric comparisons when + Fix GiST indexes to not use fuzzy geometric comparisons when it's not appropriate to do so (Alexander Korotkov) - The core geometric types perform comparisons using fuzzy - equality, but gist_box_same must do exact comparisons, + The core geometric types perform comparisons using fuzzy + equality, but gist_box_same must do exact comparisons, else GiST indexes using it might become inconsistent. After installing - this update, users should REINDEX any GiST indexes on - box, polygon, circle, or point - columns, since all of these use gist_box_same. + this update, users should REINDEX any GiST indexes on + box, polygon, circle, or point + columns, since all of these use gist_box_same. 
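A rough sketch of how the REINDEX advice above might be acted on, assuming suitable privileges and an invented index name in the final command; the catalog query lists GiST indexes whose indexed columns are of the affected geometric types:

    SELECT DISTINCT ic.relname AS index_name, tc.relname AS table_name
    FROM pg_index i
    JOIN pg_class ic    ON ic.oid = i.indexrelid
    JOIN pg_class tc    ON tc.oid = i.indrelid
    JOIN pg_am am       ON am.oid = ic.relam
    JOIN pg_attribute a ON a.attrelid = i.indexrelid
    JOIN pg_type t      ON t.oid = a.atttypid
    WHERE am.amname = 'gist'
      AND t.typname IN ('box', 'polygon', 'circle', 'point');

    REINDEX INDEX my_boxes_gist_idx;   -- hypothetical index name taken from the query above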
Fix erroneous range-union and penalty logic in GiST indexes that use - contrib/btree_gist for variable-width data types, that is - text, bytea, bit, and numeric + contrib/btree_gist for variable-width data types, that is + text, bytea, bit, and numeric columns (Tom Lane) These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in useless - index bloat. Users are advised to REINDEX such indexes + index bloat. Users are advised to REINDEX such indexes after installing this update. @@ -1396,7 +1396,7 @@ These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in indexes that are unnecessarily inefficient to search. Users are advised to - REINDEX multi-column GiST indexes after installing this + REINDEX multi-column GiST indexes after installing this update. @@ -1417,27 +1417,27 @@ - Fix to_char() to use ASCII-only case-folding rules where + Fix to_char() to use ASCII-only case-folding rules where appropriate (Tom Lane) This fixes misbehavior of some template patterns that should be - locale-independent, but mishandled I and - i in Turkish locales. + locale-independent, but mishandled I and + i in Turkish locales. - Fix unwanted rejection of timestamp 1999-12-31 24:00:00 + Fix unwanted rejection of timestamp 1999-12-31 24:00:00 (Tom Lane) - Remove useless picksplit doesn't support secondary split log + Remove useless picksplit doesn't support secondary split log messages (Josh Hansen, Tom Lane) @@ -1458,28 +1458,28 @@ - Eliminate memory leaks in PL/Perl's spi_prepare() function + Eliminate memory leaks in PL/Perl's spi_prepare() function (Alex Hunsaker, Tom Lane) - Fix pg_dumpall to handle database names containing - = correctly (Heikki Linnakangas) + Fix pg_dumpall to handle database names containing + = correctly (Heikki Linnakangas) - Avoid crash in pg_dump when an incorrect connection + Avoid crash in pg_dump when an incorrect connection string is given (Heikki Linnakangas) - Ignore invalid indexes in pg_dump (Michael Paquier) + Ignore invalid indexes in pg_dump (Michael Paquier) @@ -1488,24 +1488,24 @@ a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which - pg_dump wouldn't be expected to dump anyway. + pg_dump wouldn't be expected to dump anyway. - Fix contrib/pg_trgm's similarity() function + Fix contrib/pg_trgm's similarity() function to return zero for trigram-less strings (Tom Lane) - Previously it returned NaN due to internal division by zero. + Previously it returned NaN due to internal division by zero. - Update time zone data files to tzdata release 2013b + Update time zone data files to tzdata release 2013b for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas. Also, historical zone data corrections for numerous places. @@ -1513,12 +1513,12 @@ Also, update the time zone abbreviation files for recent changes in - Russia and elsewhere: CHOT, GET, - IRKT, KGT, KRAT, MAGT, - MAWT, MSK, NOVT, OMST, - TKT, VLAT, WST, YAKT, - YEKT now follow their current meanings, and - VOLT (Europe/Volgograd) and MIST + Russia and elsewhere: CHOT, GET, + IRKT, KGT, KRAT, MAGT, + MAWT, MSK, NOVT, OMST, + TKT, VLAT, WST, YAKT, + YEKT now follow their current meanings, and + VOLT (Europe/Volgograd) and MIST (Antarctica/Macquarie) are added to the default abbreviations list. 
@@ -1563,7 +1563,7 @@ - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -1596,19 +1596,19 @@ Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. - Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -1620,13 +1620,13 @@ Fix error in vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age implementation (Andres Freund) In installations that have existed for more than vacuum_freeze_min_age + linkend="guc-vacuum-freeze-min-age">vacuum_freeze_min_age transactions, this mistake prevented autovacuum from using partial-table scans, so that a full-table scan would always happen instead. @@ -1634,13 +1634,13 @@ - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -1653,7 +1653,7 @@ - Reject out-of-range dates in to_date() (Hitoshi Harada) + Reject out-of-range dates in to_date() (Hitoshi Harada) @@ -1664,41 +1664,41 @@ - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? command when not connected to a database (Meng Qingzhong) - Fix one-byte buffer overrun in libpq's - PQprintTuples (Xi Wang) + Fix one-byte buffer overrun in libpq's + PQprintTuples (Xi Wang) This ancient function is not used anywhere by - PostgreSQL itself, but it might still be used by some + PostgreSQL itself, but it might still be used by some client code. - Make ecpglib use translated messages properly + Make ecpglib use translated messages properly (Chen Huajun) - Properly install ecpg_compat and - pgtypes libraries on MSVC (Jiang Guiqing) + Properly install ecpg_compat and + pgtypes libraries on MSVC (Jiang Guiqing) @@ -1717,15 +1717,15 @@ - Make pgxs build executables with the right - .exe suffix when cross-compiling for Windows + Make pgxs build executables with the right + .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi) - Add new timezone abbreviation FET (Tom Lane) + Add new timezone abbreviation FET (Tom Lane) @@ -1774,13 +1774,13 @@ Fix multiple bugs associated with CREATE INDEX - CONCURRENTLY (Andres Freund, Tom Lane) + CONCURRENTLY (Andres Freund, Tom Lane) - Fix CREATE INDEX CONCURRENTLY to use + Fix CREATE INDEX CONCURRENTLY to use in-place updates when changing the state of an index's - pg_index row. This prevents race conditions that could + pg_index row. This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus resulting in corrupt concurrently-created indexes. @@ -1788,8 +1788,8 @@ Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX - CONCURRENTLY command. The most important of these is - VACUUM, because an auto-vacuum could easily be launched + CONCURRENTLY command. 
The most important of these is + VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index. @@ -1811,8 +1811,8 @@ The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. @@ -1830,10 +1830,10 @@ - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -1841,7 +1841,7 @@ Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) @@ -1854,7 +1854,7 @@ - Fix ALTER COLUMN TYPE to handle inherited check + Fix ALTER COLUMN TYPE to handle inherited check constraints properly (Pavan Deolasee) @@ -1866,14 +1866,14 @@ - Fix REASSIGN OWNED to handle grants on tablespaces + Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera) - Ignore incorrect pg_attribute entries for system + Ignore incorrect pg_attribute entries for system columns for views (Tom Lane) @@ -1887,7 +1887,7 @@ - Fix rule printing to dump INSERT INTO table + Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane) @@ -1895,7 +1895,7 @@ Guard against stack overflow when there are too many - UNION/INTERSECT/EXCEPT clauses + UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane) @@ -1923,7 +1923,7 @@ Formerly, this would result in something quite unhelpful, such as - Non-recoverable failure in name resolution. + Non-recoverable failure in name resolution. @@ -1936,8 +1936,8 @@ - Make pg_ctl more robust about reading the - postmaster.pid file (Heikki Linnakangas) + Make pg_ctl more robust about reading the + postmaster.pid file (Heikki Linnakangas) @@ -1947,33 +1947,33 @@ - Fix possible crash in psql if incorrectly-encoded data - is presented and the client_encoding setting is a + Fix possible crash in psql if incorrectly-encoded data + is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing) - Fix bugs in the restore.sql script emitted by - pg_dump in tar output format (Tom Lane) + Fix bugs in the restore.sql script emitted by + pg_dump in tar output format (Tom Lane) The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring - data in mode as well as the regular COPY mode. - Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. 
This patch updates previous branches so that they will accept both the @@ -1984,41 +1984,41 @@ - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -2029,7 +2029,7 @@ - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -2081,7 +2081,7 @@ These errors could result in wrong answers from queries that scan the - same WITH subquery multiple times. + same WITH subquery multiple times. @@ -2104,22 +2104,22 @@ - If we revoke a grant option from some role X, but - X still holds that option via a grant from someone + If we revoke a grant option from some role X, but + X still holds that option via a grant from someone else, we should not recursively revoke the corresponding privilege - from role(s) Y that X had granted it + from role(s) Y that X had granted it to. - Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) + Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) - Perl resets the process's SIGFPE handler to - SIG_IGN, which could result in crashes later on. Restore + Perl resets the process's SIGFPE handler to + SIG_IGN, which could result in crashes later on. Restore the normal Postgres signal handler after initializing PL/Perl. @@ -2138,7 +2138,7 @@ Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. @@ -2146,7 +2146,7 @@ - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -2196,7 +2196,7 @@ - xml_parse() would attempt to fetch external files or + xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. While the external data @@ -2209,22 +2209,22 @@ - Prevent access to external files/URLs via contrib/xml2's - xslt_process() (Peter Eisentraut) + Prevent access to external files/URLs via contrib/xml2's + xslt_process() (Peter Eisentraut) - libxslt offers the ability to read and write both + libxslt offers the ability to read and write both files and URLs through stylesheet commands, thus allowing unprivileged database users to both read and write data with the privileges of the database server. Disable that through proper use - of libxslt's security options. 
(CVE-2012-3488) + of libxslt's security options. (CVE-2012-3488) - Also, remove xslt_process()'s ability to fetch documents + Also, remove xslt_process()'s ability to fetch documents and stylesheets from external files/URLs. While this was a - documented feature, it was long regarded as a bad idea. + documented feature, it was long regarded as a bad idea. The fix for CVE-2012-3489 broke that capability, and rather than expend effort on trying to fix it, we're just going to summarily remove it. @@ -2252,22 +2252,22 @@ - If ALTER SEQUENCE was executed on a freshly created or - reset sequence, and then precisely one nextval() call + If ALTER SEQUENCE was executed on a freshly created or + reset sequence, and then precisely one nextval() call was made on it, and then the server crashed, WAL replay would restore the sequence to a state in which it appeared that no - nextval() had been done, thus allowing the first + nextval() had been done, thus allowing the first sequence value to be returned again by the next - nextval() call. In particular this could manifest for - serial columns, since creation of a serial column's sequence - includes an ALTER SEQUENCE OWNED BY step. + nextval() call. In particular this could manifest for + serial columns, since creation of a serial column's sequence + includes an ALTER SEQUENCE OWNED BY step. - Ensure the backup_label file is fsync'd after - pg_start_backup() (Dave Kerr) + Ensure the backup_label file is fsync'd after + pg_start_backup() (Dave Kerr) @@ -2292,7 +2292,7 @@ The original coding could allow inconsistent behavior in some cases; in particular, an autovacuum could get canceled after less than - deadlock_timeout grace period. + deadlock_timeout grace period. @@ -2304,15 +2304,15 @@ - Fix log collector so that log_truncate_on_rotation works + Fix log collector so that log_truncate_on_rotation works during the very first log rotation after server start (Tom Lane) - Fix WITH attached to a nested set operation - (UNION/INTERSECT/EXCEPT) + Fix WITH attached to a nested set operation + (UNION/INTERSECT/EXCEPT) (Tom Lane) @@ -2320,24 +2320,24 @@ Ensure that a whole-row reference to a subquery doesn't include any - extra GROUP BY or ORDER BY columns (Tom Lane) + extra GROUP BY or ORDER BY columns (Tom Lane) - Disallow copying whole-row references in CHECK - constraints and index definitions during CREATE TABLE + Disallow copying whole-row references in CHECK + constraints and index definitions during CREATE TABLE (Tom Lane) - This situation can arise in CREATE TABLE with - LIKE or INHERITS. The copied whole-row + This situation can arise in CREATE TABLE with + LIKE or INHERITS. The copied whole-row variable was incorrectly labeled with the row type of the original table not the new one. Rejecting the case seems reasonable for - LIKE, since the row types might well diverge later. For - INHERITS we should ideally allow it, with an implicit + LIKE, since the row types might well diverge later. For + INHERITS we should ideally allow it, with an implicit coercion to the parent table's row type; but that will require more work than seems safe to back-patch. @@ -2345,7 +2345,7 @@ - Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki + Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki Linnakangas, Tom Lane) @@ -2357,7 +2357,7 @@ The code could get confused by quantified parenthesized - subexpressions, such as ^(foo)?bar. This would lead to + subexpressions, such as ^(foo)?bar. This would lead to incorrect index optimization of searches for such patterns. 
@@ -2365,22 +2365,22 @@ Fix bugs with parsing signed - hh:mm and - hh:mm:ss - fields in interval constants (Amit Kapila, Tom Lane) + hh:mm and + hh:mm:ss + fields in interval constants (Amit Kapila, Tom Lane) - Report errors properly in contrib/xml2's - xslt_process() (Tom Lane) + Report errors properly in contrib/xml2's + xslt_process() (Tom Lane) - Update time zone data files to tzdata release 2012e + Update time zone data files to tzdata release 2012e for DST law changes in Morocco and Tokelau @@ -2426,12 +2426,12 @@ Fix incorrect password transformation in - contrib/pgcrypto's DES crypt() function + contrib/pgcrypto's DES crypt() function (Solar Designer) - If a password string contained the byte value 0x80, the + If a password string contained the byte value 0x80, the remainder of the password was ignored, causing the password to be much weaker than it appeared. With this fix, the rest of the string is properly included in the DES hash. Any stored password values that are @@ -2442,7 +2442,7 @@ - Ignore SECURITY DEFINER and SET attributes for + Ignore SECURITY DEFINER and SET attributes for a procedural language's call handler (Tom Lane) @@ -2454,7 +2454,7 @@ - Allow numeric timezone offsets in timestamp input to be up to + Allow numeric timezone offsets in timestamp input to be up to 16 hours away from UTC (Tom Lane) @@ -2480,7 +2480,7 @@ - Fix text to name and char to name + Fix text to name and char to name casts to perform string truncation correctly in multibyte encodings (Karl Schnaitter) @@ -2488,7 +2488,7 @@ - Fix memory copying bug in to_tsquery() (Heikki Linnakangas) + Fix memory copying bug in to_tsquery() (Heikki Linnakangas) @@ -2502,7 +2502,7 @@ This bug concerns sub-SELECTs that reference variables coming from the nullable side of an outer join of the surrounding query. In 9.1, queries affected by this bug would fail with ERROR: - Upper-level PlaceHolderVar found where not expected. But in 9.0 and + Upper-level PlaceHolderVar found where not expected. But in 9.0 and 8.4, you'd silently get possibly-wrong answers, since the value transmitted into the subquery wouldn't go to null when it should. @@ -2510,13 +2510,13 @@ - Fix slow session startup when pg_attribute is very large + Fix slow session startup when pg_attribute is very large (Tom Lane) - If pg_attribute exceeds one-fourth of - shared_buffers, cache rebuilding code that is sometimes + If pg_attribute exceeds one-fourth of + shared_buffers, cache rebuilding code that is sometimes needed during session start would trigger the synchronized-scan logic, causing it to take many times longer than normal. The problem was particularly acute if many new sessions were starting at once. @@ -2537,8 +2537,8 @@ - Ensure the Windows implementation of PGSemaphoreLock() - clears ImmediateInterruptOK before returning (Tom Lane) + Ensure the Windows implementation of PGSemaphoreLock() + clears ImmediateInterruptOK before returning (Tom Lane) @@ -2565,12 +2565,12 @@ - Fix COPY FROM to properly handle null marker strings that + Fix COPY FROM to properly handle null marker strings that correspond to invalid encoding (Tom Lane) - A null marker string such as E'\\0' should work, and did + A null marker string such as E'\\0' should work, and did work in the past, but the case got broken in 8.4. @@ -2583,7 +2583,7 @@ Previously, infinite recursion in a function invoked by - auto-ANALYZE could crash worker processes. + auto-ANALYZE could crash worker processes. 
@@ -2602,7 +2602,7 @@ Fix logging collector to ensure it will restart file rotation - after receiving SIGHUP (Tom Lane) + after receiving SIGHUP (Tom Lane) @@ -2615,33 +2615,33 @@ - Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe + Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe Conway) - Fix PL/pgSQL's GET DIAGNOSTICS command when the target + Fix PL/pgSQL's GET DIAGNOSTICS command when the target is the function's first variable (Tom Lane) - Fix potential access off the end of memory in psql's - expanded display (\x) mode (Peter Eisentraut) + Fix potential access off the end of memory in psql's + expanded display (\x) mode (Peter Eisentraut) - Fix several performance problems in pg_dump when + Fix several performance problems in pg_dump when the database contains many objects (Jeff Janes, Tom Lane) - pg_dump could get very slow if the database contained + pg_dump could get very slow if the database contained many schemas, or if many objects are in dependency loops, or if there are many owned sequences. @@ -2649,21 +2649,21 @@ - Fix contrib/dblink's dblink_exec() to not leak + Fix contrib/dblink's dblink_exec() to not leak temporary database connections upon error (Tom Lane) - Fix contrib/dblink to report the correct connection name in + Fix contrib/dblink to report the correct connection name in error messages (Kyotaro Horiguchi) - Update time zone data files to tzdata release 2012c + Update time zone data files to tzdata release 2012c for DST law changes in Antarctica, Armenia, Chile, Cuba, Falkland Islands, Gaza, Haiti, Hebron, Morocco, Syria, and Tokelau Islands; also historical corrections for Canada. @@ -2711,14 +2711,14 @@ Require execute permission on the trigger function for - CREATE TRIGGER (Robert Haas) + CREATE TRIGGER (Robert Haas) This missing check could allow another user to execute a trigger function with forged input data, by installing it on a table he owns. This is only of significance for trigger functions marked - SECURITY DEFINER, since otherwise trigger functions run + SECURITY DEFINER, since otherwise trigger functions run as the table owner anyway. (CVE-2012-0866) @@ -2730,7 +2730,7 @@ - Both libpq and the server truncated the common name + Both libpq and the server truncated the common name extracted from an SSL certificate at 32 bytes. Normally this would cause nothing worse than an unexpected verification failure, but there are some rather-implausible scenarios in which it might allow one @@ -2745,12 +2745,12 @@ - Convert newlines to spaces in names written in pg_dump + Convert newlines to spaces in names written in pg_dump comments (Robert Haas) - pg_dump was incautious about sanitizing object names + pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL @@ -2766,10 +2766,10 @@ An index page split caused by an insertion could sometimes cause a - concurrently-running VACUUM to miss removing index entries + concurrently-running VACUUM to miss removing index entries that it should remove. After the corresponding table rows are removed, the dangling index entries would cause errors (such as could not - read block N in file ...) or worse, silently wrong query results + read block N in file ...) or worse, silently wrong query results after unrelated rows are re-inserted at the now-free table locations. 
This bug has been present since release 8.2, but occurs so infrequently that it was not diagnosed until now. If you have reason to suspect @@ -2795,16 +2795,16 @@ Allow non-existent values for some settings in ALTER - USER/DATABASE SET (Heikki Linnakangas) + USER/DATABASE SET (Heikki Linnakangas) - Allow default_text_search_config, - default_tablespace, and temp_tablespaces to be + Allow default_text_search_config, + default_tablespace, and temp_tablespaces to be set to names that are not known. This is because they might be known in another database where the setting is intended to be used, or for the tablespace cases because the tablespace might not be created yet. The - same issue was previously recognized for search_path, and + same issue was previously recognized for search_path, and these settings now act like that one. @@ -2842,7 +2842,7 @@ - Fix regular expression back-references with * attached + Fix regular expression back-references with * attached (Tom Lane) @@ -2856,18 +2856,18 @@ A similar problem still afflicts back-references that are embedded in a larger quantified expression, rather than being the immediate subject of the quantifier. This will be addressed in a future - PostgreSQL release. + PostgreSQL release. Fix recently-introduced memory leak in processing of - inet/cidr values (Heikki Linnakangas) + inet/cidr values (Heikki Linnakangas) - A patch in the December 2011 releases of PostgreSQL + A patch in the December 2011 releases of PostgreSQL caused memory leakage in these operations, which could be significant in scenarios such as building a btree index on such a column. @@ -2875,8 +2875,8 @@ - Fix dangling pointer after CREATE TABLE AS/SELECT - INTO in a SQL-language function (Tom Lane) + Fix dangling pointer after CREATE TABLE AS/SELECT + INTO in a SQL-language function (Tom Lane) @@ -2910,32 +2910,32 @@ - Improve pg_dump's handling of inherited table columns + Improve pg_dump's handling of inherited table columns (Tom Lane) - pg_dump mishandled situations where a child column has + pg_dump mishandled situations where a child column has a different default expression than its parent column. If the default is textually identical to the parent's default, but not actually the same (for instance, because of schema search path differences) it would not be recognized as different, so that after dump and restore the child would be allowed to inherit the parent's default. Child columns - that are NOT NULL where their parent is not could also be + that are NOT NULL where their parent is not could also be restored subtly incorrectly. - Fix pg_restore's direct-to-database mode for + Fix pg_restore's direct-to-database mode for INSERT-style table data (Tom Lane) Direct-to-database restores from archive files made with - - Map Central America Standard Time to CST6, not - CST6CDT, because DST is generally not observed anywhere in + Map Central America Standard Time to CST6, not + CST6CDT, because DST is generally not observed anywhere in Central America. - Update time zone data files to tzdata release 2011n + Update time zone data files to tzdata release 2011n for DST law changes in Brazil, Cuba, Fiji, Palestine, Russia, and Samoa; also historical corrections for Alaska and British East Africa. 
@@ -3410,7 +3410,7 @@ - Fix possible buffer overrun in tsvector_concat() + Fix possible buffer overrun in tsvector_concat() (Tom Lane) @@ -3422,14 +3422,14 @@ - Fix crash in xml_recv when processing a - standalone parameter (Tom Lane) + Fix crash in xml_recv when processing a + standalone parameter (Tom Lane) - Make pg_options_to_table return NULL for an option with no + Make pg_options_to_table return NULL for an option with no value (Tom Lane) @@ -3440,7 +3440,7 @@ - Avoid possibly accessing off the end of memory in ANALYZE + Avoid possibly accessing off the end of memory in ANALYZE and in SJIS-2004 encoding conversion (Noah Misch) @@ -3469,7 +3469,7 @@ There was a window wherein a new backend process could read a stale init file but miss the inval messages that would tell it the data is stale. The result would be bizarre failures in catalog accesses, typically - could not read block 0 in file ... later during startup. + could not read block 0 in file ... later during startup. @@ -3490,7 +3490,7 @@ Fix incorrect memory accounting (leading to possible memory bloat) in tuplestores supporting holdable cursors and plpgsql's RETURN - NEXT command (Tom Lane) + NEXT command (Tom Lane) @@ -3526,7 +3526,7 @@ - Allow nested EXISTS queries to be optimized properly (Tom + Allow nested EXISTS queries to be optimized properly (Tom Lane) @@ -3546,12 +3546,12 @@ - Fix EXPLAIN to handle gating Result nodes within + Fix EXPLAIN to handle gating Result nodes within inner-indexscan subplans (Tom Lane) - The usual symptom of this oversight was bogus varno errors. + The usual symptom of this oversight was bogus varno errors. @@ -3567,13 +3567,13 @@ - Fix dump bug for VALUES in a view (Tom Lane) + Fix dump bug for VALUES in a view (Tom Lane) - Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) + Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) @@ -3583,8 +3583,8 @@ - Fix VACUUM so that it always updates - pg_class.reltuples/relpages (Tom + Fix VACUUM so that it always updates + pg_class.reltuples/relpages (Tom Lane) @@ -3603,7 +3603,7 @@ - Fix cases where CLUSTER might attempt to access + Fix cases where CLUSTER might attempt to access already-removed TOAST data (Tom Lane) @@ -3611,7 +3611,7 @@ Fix portability bugs in use of credentials control messages for - peer authentication (Tom Lane) + peer authentication (Tom Lane) @@ -3623,13 +3623,13 @@ The typical symptom of this problem was The function requested is - not supported errors during SSPI login. + not supported errors during SSPI login. - Throw an error if pg_hba.conf contains hostssl + Throw an error if pg_hba.conf contains hostssl but SSL is disabled (Tom Lane) @@ -3641,12 +3641,12 @@ - Fix typo in pg_srand48 seed initialization (Andres Freund) + Fix typo in pg_srand48 seed initialization (Andres Freund) This led to failure to use all bits of the provided seed. This function - is not used on most platforms (only those without srandom), + is not used on most platforms (only those without srandom), and the potential security exposure from a less-random-than-expected seed seems minimal in any case. 
@@ -3654,25 +3654,25 @@ - Avoid integer overflow when the sum of LIMIT and - OFFSET values exceeds 2^63 (Heikki Linnakangas) + Avoid integer overflow when the sum of LIMIT and + OFFSET values exceeds 2^63 (Heikki Linnakangas) - Add overflow checks to int4 and int8 versions of - generate_series() (Robert Haas) + Add overflow checks to int4 and int8 versions of + generate_series() (Robert Haas) - Fix trailing-zero removal in to_char() (Marti Raudsepp) + Fix trailing-zero removal in to_char() (Marti Raudsepp) - In a format with FM and no digit positions + In a format with FM and no digit positions after the decimal point, zeroes to the left of the decimal point could be removed incorrectly. @@ -3680,7 +3680,7 @@ - Fix pg_size_pretty() to avoid overflow for inputs close to + Fix pg_size_pretty() to avoid overflow for inputs close to 2^63 (Tom Lane) @@ -3698,59 +3698,59 @@ - Correctly handle quotes in locale names during initdb + Correctly handle quotes in locale names during initdb (Heikki Linnakangas) The case can arise with some Windows locales, such as People's - Republic of China. + Republic of China. - Fix pg_upgrade to preserve toast tables' relfrozenxids + Fix pg_upgrade to preserve toast tables' relfrozenxids during an upgrade from 8.3 (Bruce Momjian) - Failure to do this could lead to pg_clog files being + Failure to do this could lead to pg_clog files being removed too soon after the upgrade. - In pg_ctl, support silent mode for service registrations + In pg_ctl, support silent mode for service registrations on Windows (MauMau) - Fix psql's counting of script file line numbers during - COPY from a different file (Tom Lane) + Fix psql's counting of script file line numbers during + COPY from a different file (Tom Lane) - Fix pg_restore's direct-to-database mode for - standard_conforming_strings (Tom Lane) + Fix pg_restore's direct-to-database mode for + standard_conforming_strings (Tom Lane) - pg_restore could emit incorrect commands when restoring + pg_restore could emit incorrect commands when restoring directly to a database server from an archive file that had been made - with standard_conforming_strings set to on. + with standard_conforming_strings set to on. Be more user-friendly about unsupported cases for parallel - pg_restore (Tom Lane) + pg_restore (Tom Lane) @@ -3761,14 +3761,14 @@ - Fix write-past-buffer-end and memory leak in libpq's + Fix write-past-buffer-end and memory leak in libpq's LDAP service lookup code (Albe Laurenz) - In libpq, avoid failures when using nonblocking I/O + In libpq, avoid failures when using nonblocking I/O and an SSL connection (Martin Pihlak, Tom Lane) @@ -3780,36 +3780,36 @@ - In particular, the response to a server report of fork() + In particular, the response to a server report of fork() failure during SSL connection startup is now saner. 
- Improve libpq's error reporting for SSL failures (Tom + Improve libpq's error reporting for SSL failures (Tom Lane) - Fix PQsetvalue() to avoid possible crash when adding a new - tuple to a PGresult originally obtained from a server + Fix PQsetvalue() to avoid possible crash when adding a new + tuple to a PGresult originally obtained from a server query (Andrew Chernow) - Make ecpglib write double values with 15 digits + Make ecpglib write double values with 15 digits precision (Akira Kurosawa) - In ecpglib, be sure LC_NUMERIC setting is + In ecpglib, be sure LC_NUMERIC setting is restored after an error (Michael Meskes) @@ -3821,7 +3821,7 @@ - contrib/pg_crypto's blowfish encryption code could give + contrib/pg_crypto's blowfish encryption code could give wrong results on platforms where char is signed (which is most), leading to encrypted passwords being weaker than they should be. @@ -3829,13 +3829,13 @@ - Fix memory leak in contrib/seg (Heikki Linnakangas) + Fix memory leak in contrib/seg (Heikki Linnakangas) - Fix pgstatindex() to give consistent results for empty + Fix pgstatindex() to give consistent results for empty indexes (Tom Lane) @@ -3867,7 +3867,7 @@ - Update time zone data files to tzdata release 2011i + Update time zone data files to tzdata release 2011i for DST law changes in Canada, Egypt, Russia, Samoa, and South Sudan. @@ -3900,10 +3900,10 @@ However, if your installation was upgraded from a previous major - release by running pg_upgrade, you should take + release by running pg_upgrade, you should take action to prevent possible data loss due to a now-fixed bug in - pg_upgrade. The recommended solution is to run - VACUUM FREEZE on all TOAST tables. + pg_upgrade. The recommended solution is to run + VACUUM FREEZE on all TOAST tables. More information is available at http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix. @@ -3923,36 +3923,36 @@ - Fix pg_upgrade's handling of TOAST tables + Fix pg_upgrade's handling of TOAST tables (Bruce Momjian) - The pg_class.relfrozenxid value for + The pg_class.relfrozenxid value for TOAST tables was not correctly copied into the new installation - during pg_upgrade. This could later result in - pg_clog files being discarded while they were still + during pg_upgrade. This could later result in + pg_clog files being discarded while they were still needed to validate tuples in the TOAST tables, leading to - could not access status of transaction failures. + could not access status of transaction failures. This error poses a significant risk of data loss for installations - that have been upgraded with pg_upgrade. This patch - corrects the problem for future uses of pg_upgrade, + that have been upgraded with pg_upgrade. This patch + corrects the problem for future uses of pg_upgrade, but does not in itself cure the issue in installations that have been - processed with a buggy version of pg_upgrade. + processed with a buggy version of pg_upgrade. - Suppress incorrect PD_ALL_VISIBLE flag was incorrectly set + Suppress incorrect PD_ALL_VISIBLE flag was incorrectly set warning (Heikki Linnakangas) - VACUUM would sometimes issue this warning in cases that + VACUUM would sometimes issue this warning in cases that are actually valid. 
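One possible way, sketched here under the assumption of superuser access, to carry out the VACUUM FREEZE-on-TOAST-tables recommendation from the pg_upgrade item above is to generate the statements from the catalogs and then execute them, repeating the procedure in every database of the cluster:

    -- Generate one VACUUM FREEZE statement per TOAST table (relkind 't')
    -- in the current database; run the generated statements afterwards.
    SELECT 'VACUUM FREEZE pg_toast.' || quote_ident(relname) || ';'
    FROM pg_class
    WHERE relkind = 't';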
@@ -3986,15 +3986,15 @@ - Fix dangling-pointer problem in BEFORE ROW UPDATE trigger + Fix dangling-pointer problem in BEFORE ROW UPDATE trigger handling when there was a concurrent update to the target tuple (Tom Lane) This bug has been observed to result in intermittent cannot - extract system attribute from virtual tuple failures while trying to - do UPDATE RETURNING ctid. There is a very small probability + extract system attribute from virtual tuple failures while trying to + do UPDATE RETURNING ctid. There is a very small probability of more serious errors, such as generating incorrect index entries for the updated tuple. @@ -4002,13 +4002,13 @@ - Disallow DROP TABLE when there are pending deferred trigger + Disallow DROP TABLE when there are pending deferred trigger events for the table (Tom Lane) - Formerly the DROP would go through, leading to - could not open relation with OID nnn errors when the + Formerly the DROP would go through, leading to + could not open relation with OID nnn errors when the triggers were eventually fired. @@ -4053,7 +4053,7 @@ - Fix pg_restore to cope with long lines (over 1KB) in + Fix pg_restore to cope with long lines (over 1KB) in TOC files (Tom Lane) @@ -4085,14 +4085,14 @@ - Fix version-incompatibility problem with libintl on + Fix version-incompatibility problem with libintl on Windows (Hiroshi Inoue) - Fix usage of xcopy in Windows build scripts to + Fix usage of xcopy in Windows build scripts to work correctly under Windows 7 (Andrew Dunstan) @@ -4103,14 +4103,14 @@ - Fix path separator used by pg_regress on Cygwin + Fix path separator used by pg_regress on Cygwin (Andrew Dunstan) - Update time zone data files to tzdata release 2011f + Update time zone data files to tzdata release 2011f for DST law changes in Chile, Cuba, Falkland Islands, Morocco, Samoa, and Turkey; also historical corrections for South Australia, Alaska, and Hawaii. @@ -4154,15 +4154,15 @@ - Avoid failures when EXPLAIN tries to display a simple-form - CASE expression (Tom Lane) + Avoid failures when EXPLAIN tries to display a simple-form + CASE expression (Tom Lane) - If the CASE's test expression was a constant, the planner - could simplify the CASE into a form that confused the + If the CASE's test expression was a constant, the planner + could simplify the CASE into a form that confused the expression-display code, resulting in unexpected CASE WHEN - clause errors. + clause errors. @@ -4187,44 +4187,44 @@ - The date type supports a wider range of dates than can be - represented by the timestamp types, but the planner assumed it + The date type supports a wider range of dates than can be + represented by the timestamp types, but the planner assumed it could always convert a date to timestamp with impunity. - Fix pg_restore's text output for large objects (BLOBs) - when standard_conforming_strings is on (Tom Lane) + Fix pg_restore's text output for large objects (BLOBs) + when standard_conforming_strings is on (Tom Lane) Although restoring directly to a database worked correctly, string - escaping was incorrect if pg_restore was asked for - SQL text output and standard_conforming_strings had been + escaping was incorrect if pg_restore was asked for + SQL text output and standard_conforming_strings had been enabled in the source database. - Fix erroneous parsing of tsquery values containing + Fix erroneous parsing of tsquery values containing ... & !(subexpression) | ... (Tom Lane) Queries containing this combination of operators were not executed - correctly. 
The same error existed in contrib/intarray's - query_int type and contrib/ltree's - ltxtquery type. + correctly. The same error existed in contrib/intarray's + query_int type and contrib/ltree's + ltxtquery type. - Fix buffer overrun in contrib/intarray's input function - for the query_int type (Apple) + Fix buffer overrun in contrib/intarray's input function + for the query_int type (Apple) @@ -4236,16 +4236,16 @@ - Fix bug in contrib/seg's GiST picksplit algorithm + Fix bug in contrib/seg's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a seg column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a seg column. + If you have such an index, consider REINDEXing it after installing this update. (This is identical to the bug that was fixed in - contrib/cube in the previous update.) + contrib/cube in the previous update.) @@ -4287,17 +4287,17 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. @@ -4307,7 +4307,7 @@ - This could result in bad buffer id: 0 failures or + This could result in bad buffer id: 0 failures or corruption of index contents during replication. @@ -4326,7 +4326,7 @@ - The effective vacuum_cost_limit for an autovacuum worker + The effective vacuum_cost_limit for an autovacuum worker could drop to nearly zero if it processed enough tables, causing it to run extremely slowly. @@ -4334,19 +4334,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -4362,7 +4362,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -4389,16 +4389,16 @@ Certain cases where a large number of tuples needed to be read in - advance, but work_mem was large enough to allow them all + advance, but work_mem was large enough to allow them all to be held in memory, were unexpectedly slow. - percent_rank(), cume_dist() and - ntile() in particular were subject to this problem. + percent_rank(), cume_dist() and + ntile() in particular were subject to this problem. 
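A minimal example, with invented table and column names, of the kind of window-function query that required the whole partition to be read in advance and was therefore affected:

    SELECT salary,
           percent_rank() OVER (ORDER BY salary) AS pct_rank,
           cume_dist()    OVER (ORDER BY salary) AS cume,
           ntile(4)       OVER (ORDER BY salary) AS quartile
    FROM employees;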
- Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -4410,14 +4410,14 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -4429,15 +4429,15 @@ - Behave correctly if ORDER BY, LIMIT, - FOR UPDATE, or WITH is attached to the - VALUES part of INSERT ... VALUES (Tom Lane) + Behave correctly if ORDER BY, LIMIT, + FOR UPDATE, or WITH is attached to the + VALUES part of INSERT ... VALUES (Tom Lane) - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -4449,7 +4449,7 @@ Fix postmaster crash when connection acceptance - (accept() or one of the calls made immediately after it) + (accept() or one of the calls made immediately after it) fails, and the postmaster was compiled with GSSAPI support (Alexander Chernikov) @@ -4457,7 +4457,7 @@ - Fix missed unlink of temporary files when log_temp_files + Fix missed unlink of temporary files when log_temp_files is active (Tom Lane) @@ -4469,11 +4469,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. @@ -4493,20 +4493,20 @@ Fix incorrect calculation of transaction status in - ecpg (Itagaki Takahiro) + ecpg (Itagaki Takahiro) - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix PL/Python's handling of set-returning functions + Fix PL/Python's handling of set-returning functions (Jan Urbanski) @@ -4518,22 +4518,22 @@ - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -4541,20 +4541,20 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -4605,7 +4605,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). 
Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -4634,7 +4634,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -4643,7 +4643,7 @@ - Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on + Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on Windows (Magnus Hagander) @@ -4669,7 +4669,7 @@ - Fix possible duplicate scans of UNION ALL member relations + Fix possible duplicate scans of UNION ALL member relations (Tom Lane) @@ -4694,18 +4694,18 @@ - Fix mishandling of cross-type IN comparisons (Tom Lane) + Fix mishandling of cross-type IN comparisons (Tom Lane) This could result in failures if the planner tried to implement an - IN join with a sort-then-unique-then-plain-join plan. + IN join with a sort-then-unique-then-plain-join plan. - Fix computation of ANALYZE statistics for tsvector + Fix computation of ANALYZE statistics for tsvector columns (Jan Urbanski) @@ -4717,8 +4717,8 @@ - Improve planner's estimate of memory used by array_agg(), - string_agg(), and similar aggregate functions + Improve planner's estimate of memory used by array_agg(), + string_agg(), and similar aggregate functions (Hitoshi Harada) @@ -4734,7 +4734,7 @@ - If a plan is prepared while CREATE INDEX CONCURRENTLY is + If a plan is prepared while CREATE INDEX CONCURRENTLY is in progress for one of the referenced tables, it is supposed to be re-planned once the index is ready for use. This was not happening reliably. @@ -4812,7 +4812,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -4849,7 +4849,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -4861,7 +4861,7 @@ - In particular, fillfactor would be read as zero if any + In particular, fillfactor would be read as zero if any other reloption had been set for the table, leading to serious bloat. @@ -4869,49 +4869,49 @@ Fix inheritance count tracking in ALTER TABLE ... ADD - CONSTRAINT (Robert Haas) + CONSTRAINT (Robert Haas) Fix possible data corruption in ALTER TABLE ... SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... 
SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) - Improve CREATE INDEX's checking of whether proposed index + Improve CREATE INDEX's checking of whether proposed index expressions are immutable (Tom Lane) - Fix REASSIGN OWNED to handle operator classes and families + Fix REASSIGN OWNED to handle operator classes and families (Asko Tiidumaa) - Fix possible core dump when comparing two empty tsquery values + Fix possible core dump when comparing two empty tsquery values (Tom Lane) - Fix LIKE's handling of patterns containing % - followed by _ (Tom Lane) + Fix LIKE's handling of patterns containing % + followed by _ (Tom Lane) @@ -4926,7 +4926,7 @@ - Input such as 'J100000'::date worked before 8.4, + Input such as 'J100000'::date worked before 8.4, but was unintentionally broken by added error-checking. @@ -4934,7 +4934,7 @@ Fix PL/pgSQL to throw an error, not crash, if a cursor is closed within - a FOR loop that is iterating over that cursor + a FOR loop that is iterating over that cursor (Heikki Linnakangas) @@ -4942,22 +4942,22 @@ In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - In libpq, fix full SSL certificate verification for the - case where both host and hostaddr are specified + In libpq, fix full SSL certificate verification for the + case where both host and hostaddr are specified (Tom Lane) - Make psql recognize DISCARD ALL as a command that should + Make psql recognize DISCARD ALL as a command that should not be encased in a transaction block in autocommit-off mode (Itagaki Takahiro) @@ -4965,19 +4965,19 @@ - Fix some issues in pg_dump's handling of SQL/MED objects + Fix some issues in pg_dump's handling of SQL/MED objects (Tom Lane) - Notably, pg_dump would always fail if run by a + Notably, pg_dump would always fail if run by a non-superuser, which was not intended. - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's handling of non-seekable archive files (Tom Lane, Robert Haas) @@ -4989,31 +4989,31 @@ Improve parallel pg_restore's ability to cope with selective restore - (-L option) (Tom Lane) + (-L option) (Tom Lane) - The original code tended to fail if the -L file commanded + The original code tended to fail if the -L file commanded a non-default restore ordering. - Fix ecpg to process data from RETURNING + Fix ecpg to process data from RETURNING clauses correctly (Michael Meskes) - Fix some memory leaks in ecpg (Zoltan Boszormenyi) + Fix some memory leaks in ecpg (Zoltan Boszormenyi) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -5021,30 +5021,30 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) - Add hstore(text, text) - function to contrib/hstore (Robert Haas) + Add hstore(text, text) + function to contrib/hstore (Robert Haas) This function is the recommended substitute for the now-deprecated - => operator. It was back-patched so that future-proofed + => operator. It was back-patched so that future-proofed code can be used with older server versions. 
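As a brief, non-authoritative illustration of the hstore item above (assuming contrib/hstore is available in the database; the key and value shown are arbitrary):

SELECT hstore('color', 'blue');        -- preferred spelling; builds '"color"=>"blue"'
-- roughly equivalent to the now-deprecated:  SELECT 'color' => 'blue';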
Note that the patch will - be effective only after contrib/hstore is installed or + be effective only after contrib/hstore is installed or reinstalled in a particular database. Users might prefer to execute - the CREATE FUNCTION command by hand, instead. + the CREATE FUNCTION command by hand, instead. @@ -5057,7 +5057,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -5072,7 +5072,7 @@ - Make Windows' N. Central Asia Standard Time timezone map to + Make Windows' N. Central Asia Standard Time timezone map to Asia/Novosibirsk, not Asia/Almaty (Magnus Hagander) @@ -5119,19 +5119,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -5140,19 +5140,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -5160,16 +5160,16 @@ Fix data corruption during WAL replay of - ALTER ... SET TABLESPACE (Tom) + ALTER ... SET TABLESPACE (Tom) - When archive_mode is on, ALTER ... SET TABLESPACE + When archive_mode is on, ALTER ... SET TABLESPACE generates a WAL record whose replay logic was incorrect. It could write the data to the wrong place, leading to possibly-unrecoverable data corruption. Data corruption would be observed on standby slaves, and could occur on the master as well if a database crash and recovery - occurred after committing the ALTER and before the next + occurred after committing the ALTER and before the next checkpoint. @@ -5194,20 +5194,20 @@ This avoids failures if the function's code is invalid without the setting; an example is that SQL functions may not parse if the - search_path is not correct. + search_path is not correct. 
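A minimal sketch of the search_path case just described, using hypothetical schema, table, and function names; the function body only resolves correctly under its attached setting:

CREATE SCHEMA app;
CREATE TABLE app.items (id int);
CREATE FUNCTION app.item_count() RETURNS bigint
  AS 'SELECT count(*) FROM items'    -- "items" resolves via the per-function search_path
  LANGUAGE sql
  SET search_path = app;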
- Do constraint exclusion for inherited UPDATE and - DELETE target tables when - constraint_exclusion = partition (Tom) + Do constraint exclusion for inherited UPDATE and + DELETE target tables when + constraint_exclusion = partition (Tom) Due to an oversight, this setting previously only caused constraint - exclusion to be checked in SELECT commands. + exclusion to be checked in SELECT commands. @@ -5219,10 +5219,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. @@ -5230,7 +5230,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -5242,8 +5242,8 @@ - Fix erroneous handling of %r parameter in - recovery_end_command (Heikki) + Fix erroneous handling of %r parameter in + recovery_end_command (Heikki) @@ -5254,20 +5254,20 @@ Ensure the archiver process responds to changes in - archive_command as soon as possible (Tom) + archive_command as soon as possible (Tom) - Fix PL/pgSQL's CASE statement to not fail when the + Fix PL/pgSQL's CASE statement to not fail when the case expression is a query that returns no rows (Tom) - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -5286,15 +5286,15 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Fix psql's \copy to not add spaces around - a dot within \copy (select ...) (Tom) + Fix psql's \copy to not add spaces around + a dot within \copy (select ...) (Tom) @@ -5305,23 +5305,23 @@ - Avoid formatting failure in psql when running in a - locale context that doesn't match the client_encoding + Avoid formatting failure in psql when running in a + locale context that doesn't match the client_encoding (Tom) - Fix unnecessary GIN indexes do not support whole-index scans - errors for unsatisfiable queries using contrib/intarray + Fix unnecessary GIN indexes do not support whole-index scans + errors for unsatisfiable queries using contrib/intarray operators (Tom) - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -5329,7 +5329,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -5361,14 +5361,14 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. - Also, add PKST (Pakistan Summer Time) to the default set of + Also, add PKST (Pakistan Summer Time) to the default set of timezone abbreviations. 
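Returning to the constraint_exclusion = partition item above, a minimal sketch with hypothetical parent and child tables; with the fix, child pruning now also applies when the inheritance parent is the target of UPDATE or DELETE, not only SELECT:

CREATE TABLE measurement (logdate date, reading int);
CREATE TABLE measurement_y2009 (CHECK (logdate <  DATE '2010-01-01')) INHERITS (measurement);
CREATE TABLE measurement_y2010 (CHECK (logdate >= DATE '2010-01-01')) INHERITS (measurement);
SET constraint_exclusion = partition;
DELETE FROM measurement WHERE logdate >= DATE '2010-06-01';  -- only measurement_y2010 need be scanned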
@@ -5410,7 +5410,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -5446,7 +5446,7 @@ Fix possible crash due to overenthusiastic invalidation of cached - plan for ROLLBACK (Tom) + plan for ROLLBACK (Tom) @@ -5492,8 +5492,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -5533,12 +5533,12 @@ - Avoid failure when EXPLAIN has to print a FieldStore or + Avoid failure when EXPLAIN has to print a FieldStore or assignment ArrayRef expression (Tom) - These cases can arise now that EXPLAIN VERBOSE tries to + These cases can arise now that EXPLAIN VERBOSE tries to print plan node target lists. @@ -5547,7 +5547,7 @@ Avoid an unnecessary coercion failure in some cases where an undecorated literal string appears in a subquery within - UNION/INTERSECT/EXCEPT (Tom) + UNION/INTERSECT/EXCEPT (Tom) @@ -5564,7 +5564,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -5573,7 +5573,7 @@ Always pass the catalog ID to an option validator function specified in - CREATE FOREIGN DATA WRAPPER (Martin Pihlak) + CREATE FOREIGN DATA WRAPPER (Martin Pihlak) @@ -5591,7 +5591,7 @@ - Add support for doing FULL JOIN ON FALSE (Tom) + Add support for doing FULL JOIN ON FALSE (Tom) @@ -5604,13 +5604,13 @@ Improve constraint exclusion processing of boolean-variable cases, in particular make it possible to exclude a partition that has a - bool_column = false constraint (Tom) + bool_column = false constraint (Tom) - Prevent treating an INOUT cast as representing binary + Prevent treating an INOUT cast as representing binary compatibility (Heikki) @@ -5623,24 +5623,24 @@ This is more useful than before and helps to prevent confusion when - a REVOKE generates multiple messages, which formerly + a REVOKE generates multiple messages, which formerly appeared to be duplicates. - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -5648,83 +5648,83 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) - Fix possible infinite loop if SSL_read or - SSL_write fails without setting errno (Tom) + Fix possible infinite loop if SSL_read or + SSL_write fails without setting errno (Tom) This is reportedly possible with some Windows versions of - openssl. + openssl. 
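A small self-contained example of the FULL JOIN ON FALSE support mentioned above; the constant-false condition simply concatenates the two sides with NULL extension on both:

SELECT a.x, b.y
FROM (VALUES (1), (2)) AS a(x)
FULL JOIN (VALUES (10)) AS b(y) ON false;
-- yields (1, NULL), (2, NULL), (NULL, 10)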
- Disallow GSSAPI authentication on local connections, + Disallow GSSAPI authentication on local connections, since it requires a hostname to function correctly (Magnus) - Protect ecpg against applications freeing strings + Protect ecpg against applications freeing strings unexpectedly (Michael) - Make ecpg report the proper SQLSTATE if the connection + Make ecpg report the proper SQLSTATE if the connection disappears (Michael) - Fix translation of cell contents in psql \d + Fix translation of cell contents in psql \d output (Heikki) - Fix psql's numericlocale option to not + Fix psql's numericlocale option to not format strings it shouldn't in latex and troff output formats (Heikki) - Fix a small per-query memory leak in psql (Tom) + Fix a small per-query memory leak in psql (Tom) - Make psql return the correct exit status (3) when - ON_ERROR_STOP and --single-transaction are - both specified and an error occurs during the implied COMMIT + Make psql return the correct exit status (3) when + ON_ERROR_STOP and --single-transaction are + both specified and an error occurs during the implied COMMIT (Bruce) - Fix pg_dump's output of permissions for foreign servers + Fix pg_dump's output of permissions for foreign servers (Heikki) - Fix possible crash in parallel pg_restore due to + Fix possible crash in parallel pg_restore due to out-of-range dependency IDs (Tom) @@ -5745,7 +5745,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -5757,55 +5757,55 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent ExecutorEnd from being run on portals created + Prevent ExecutorEnd from being run on portals created within a failed transaction or subtransaction (Tom) This is known to cause issues when using - contrib/auto_explain. + contrib/auto_explain. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Allow zero-dimensional arrays in contrib/ltree operations + Allow zero-dimensional arrays in contrib/ltree operations (Tom) This case was formerly rejected as an error, but it's more convenient to treat it the same as a zero-element array. In particular this avoids - unnecessary failures when an ltree operation is applied to the - result of ARRAY(SELECT ...) and the sub-select returns no + unnecessary failures when an ltree operation is applied to the + result of ARRAY(SELECT ...) and the sub-select returns no rows. - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Make building of contrib/xml2 more robust on Windows + Make building of contrib/xml2 more robust on Windows (Andrew) @@ -5816,7 +5816,7 @@ - One known symptom of this bug is that rows in pg_listener + One known symptom of this bug is that rows in pg_listener could be dropped under heavy load. @@ -5835,7 +5835,7 @@ - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. @@ -5865,7 +5865,7 @@ A dump/restore is not required for those running 8.4.X. 
However, if you have any hash indexes, - you should REINDEX them after updating to 8.4.2, + you should REINDEX them after updating to 8.4.2, to repair possible damage. @@ -5911,7 +5911,7 @@ preserve the ordering. So application of either of those operations could lead to permanent corruption of an index, in the sense that searches might fail to find entries that are present. To deal with - this, it is recommended to REINDEX any hash indexes you may + this, it is recommended to REINDEX any hash indexes you may have after installing this update. @@ -5930,14 +5930,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. @@ -5956,14 +5956,14 @@ - Fix crash if a DROP is attempted on an internally-dependent + Fix crash if a DROP is attempted on an internally-dependent object (Tom) - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -5991,7 +5991,7 @@ - Fix memory leak in postmaster when re-parsing pg_hba.conf + Fix memory leak in postmaster when re-parsing pg_hba.conf (Tom) @@ -6010,8 +6010,8 @@ - Make FOR UPDATE/SHARE in the primary query not propagate - into WITH queries (Tom) + Make FOR UPDATE/SHARE in the primary query not propagate + into WITH queries (Tom) @@ -6019,18 +6019,18 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - the FOR UPDATE will now affect bar but not - foo. This is more useful and consistent than the original - 8.4 behavior, which tried to propagate FOR UPDATE into the - WITH query but always failed due to assorted implementation - restrictions. It also follows the design rule that WITH + the FOR UPDATE will now affect bar but not + foo. This is more useful and consistent than the original + 8.4 behavior, which tried to propagate FOR UPDATE into the + WITH query but always failed due to assorted implementation + restrictions. It also follows the design rule that WITH queries are executed as if independent of the main query. - Fix bug with a WITH RECURSIVE query immediately inside + Fix bug with a WITH RECURSIVE query immediately inside another one (Tom) @@ -6056,7 +6056,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Fix wrong search results for a multi-column GIN index with - fastupdate enabled (Teodor) + fastupdate enabled (Teodor) @@ -6066,7 +6066,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - These bugs were masked when full_page_writes was on, but + These bugs were masked when full_page_writes was on, but with it off a WAL replay failure was certain if a crash occurred before the next checkpoint. @@ -6104,7 +6104,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -6127,7 +6127,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE Ensure that domain constraints are enforced in constructs like - ARRAY[...]::domain, where the domain is over an array type + ARRAY[...]::domain, where the domain is over an array type (Heikki) @@ -6153,7 +6153,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix CREATE TABLE to properly merge default expressions + Fix CREATE TABLE to properly merge default expressions coming from different inheritance parent tables (Tom) @@ -6175,39 +6175,39 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Fix processing of ownership dependencies during CREATE OR - REPLACE FUNCTION (Tom) + REPLACE FUNCTION (Tom) - Fix incorrect handling of WHERE - x=x conditions (Tom) + Fix incorrect handling of WHERE + x=x conditions (Tom) In some cases these could get ignored as redundant, but they aren't - — they're equivalent to x IS NOT NULL. + — they're equivalent to x IS NOT NULL. Fix incorrect plan construction when using hash aggregation to implement - DISTINCT for textually identical volatile expressions (Tom) + DISTINCT for textually identical volatile expressions (Tom) - Fix Assert failure for a volatile SELECT DISTINCT ON + Fix Assert failure for a volatile SELECT DISTINCT ON expression (Tom) - Fix ts_stat() to not fail on an empty tsvector + Fix ts_stat() to not fail on an empty tsvector value (Tom) @@ -6220,7 +6220,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix encoding handling in xml binary input (Heikki) + Fix encoding handling in xml binary input (Heikki) @@ -6231,7 +6231,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix bug with calling plperl from plperlu or vice + Fix bug with calling plperl from plperlu or vice versa (Tom) @@ -6251,7 +6251,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Ensure that Perl arrays are properly converted to - PostgreSQL arrays when returned by a set-returning + PostgreSQL arrays when returned by a set-returning PL/Perl function (Andrew Dunstan, Abhijit Menon-Sen) @@ -6268,43 +6268,43 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix ecpg problem with comments in DECLARE - CURSOR statements (Michael) + Fix ecpg problem with comments in DECLARE + CURSOR statements (Michael) - Fix ecpg to not treat recently-added keywords as + Fix ecpg to not treat recently-added keywords as reserved words (Tom) - This affected the keywords CALLED, CATALOG, - DEFINER, ENUM, FOLLOWING, - INVOKER, OPTIONS, PARTITION, - PRECEDING, RANGE, SECURITY, - SERVER, UNBOUNDED, and WRAPPER. + This affected the keywords CALLED, CATALOG, + DEFINER, ENUM, FOLLOWING, + INVOKER, OPTIONS, PARTITION, + PRECEDING, RANGE, SECURITY, + SERVER, UNBOUNDED, and WRAPPER. - Re-allow regular expression special characters in psql's - \df function name parameter (Tom) + Re-allow regular expression special characters in psql's + \df function name parameter (Tom) - In contrib/fuzzystrmatch, correct the calculation of - levenshtein distances with non-default costs (Marcin Mank) + In contrib/fuzzystrmatch, correct the calculation of + levenshtein distances with non-default costs (Marcin Mank) - In contrib/pg_standby, disable triggering failover with a + In contrib/pg_standby, disable triggering failover with a signal on Windows (Fujii Masao) @@ -6316,35 +6316,35 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Put FREEZE and VERBOSE options in the right - order in the VACUUM command that - contrib/vacuumdb produces (Heikki) + Put FREEZE and VERBOSE options in the right + order in the VACUUM command that + contrib/vacuumdb produces (Heikki) - Fix possible leak of connections when contrib/dblink + Fix possible leak of connections when contrib/dblink encounters an error (Tatsuhito Kasahara) - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -6357,14 +6357,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - This includes adding IDT to the default + This includes adding IDT to the default timezone abbreviation set. - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -6418,7 +6418,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix cannot make new WAL entries during recovery error (Tom) + Fix cannot make new WAL entries during recovery error (Tom) @@ -6435,39 +6435,39 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) - Make LOAD of an already-loaded loadable module + Make LOAD of an already-loaded loadable module into a no-op (Tom) - Formerly, LOAD would attempt to unload and re-load the + Formerly, LOAD would attempt to unload and re-load the module, but this is unsafe and not all that useful. - Make window function PARTITION BY and ORDER BY + Make window function PARTITION BY and ORDER BY items always be interpreted as simple expressions (Tom) In 8.4.0 these lists were parsed following the rules used for - top-level GROUP BY and ORDER BY lists. + top-level GROUP BY and ORDER BY lists. But this was not correct per the SQL standard, and it led to possible circularity. @@ -6479,8 +6479,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - These led to wrong query results in some cases where IN - or EXISTS was used together with another join. + These led to wrong query results in some cases where IN + or EXISTS was used together with another join. @@ -6492,8 +6492,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE An example is - SELECT COUNT(ss.*) FROM ... LEFT JOIN (SELECT ...) ss ON .... - Here, ss.* would be treated as ROW(NULL,NULL,...) + SELECT COUNT(ss.*) FROM ... LEFT JOIN (SELECT ...) ss ON .... + Here, ss.* would be treated as ROW(NULL,NULL,...) for null-extended join rows, which is not the same as a simple NULL. 
Now it is treated as a simple NULL. @@ -6506,7 +6506,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE This bug led to the often-reported could not reattach - to shared memory error message. + to shared memory error message. @@ -6530,36 +6530,36 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Ensure that a fast shutdown request will forcibly terminate - open sessions, even if a smart shutdown was already in progress + Ensure that a fast shutdown request will forcibly terminate + open sessions, even if a smart shutdown was already in progress (Fujii Masao) - Avoid memory leak for array_agg() in GROUP BY + Avoid memory leak for array_agg() in GROUP BY queries (Tom) - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). Include the fractional part in the result of - EXTRACT(second) and - EXTRACT(milliseconds) for - time and time with time zone inputs (Tom) + EXTRACT(second) and + EXTRACT(milliseconds) for + time and time with time zone inputs (Tom) @@ -6570,8 +6570,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -6589,13 +6589,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix a typo that disabled commit_delay (Jeff Janes) + Fix a typo that disabled commit_delay (Jeff Janes) - Output early-startup messages to postmaster.log if the + Output early-startup messages to postmaster.log if the server is started in silent mode (Tom) @@ -6619,33 +6619,33 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Fix several errors in pg_dump's - --binary-upgrade mode (Bruce, Tom) + Fix several errors in pg_dump's + --binary-upgrade mode (Bruce, Tom) - pg_dump --binary-upgrade is used by pg_migrator. + pg_dump --binary-upgrade is used by pg_migrator. - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -6658,14 +6658,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Work around gcc bug that causes floating-point exception - instead of division by zero on some platforms (Tom) + Work around gcc bug that causes floating-point exception + instead of division by zero on some platforms (Tom) - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Mauritius. @@ -6687,7 +6687,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Overview - After many years of development, PostgreSQL has + After many years of development, PostgreSQL has become feature-complete in many areas. 
This release shows a targeted approach to adding features (e.g., authentication, monitoring, space reuse), and adds capabilities defined in the @@ -6742,7 +6742,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improved join performance for EXISTS and NOT EXISTS queries + Improved join performance for EXISTS and NOT EXISTS queries @@ -6825,15 +6825,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Previously this was selected by configure's - option. To retain + the old behavior, build with . - Remove ipcclean utility command (Bruce) + Remove ipcclean utility command (Bruce) @@ -6853,50 +6853,50 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Change default setting for - log_min_messages to warning (previously - it was notice) to reduce log file volume (Tom) + log_min_messages to warning (previously + it was notice) to reduce log file volume (Tom) - Change default setting for max_prepared_transactions to + Change default setting for max_prepared_transactions to zero (previously it was 5) (Tom) - Make debug_print_parse, debug_print_rewritten, - and debug_print_plan - output appear at LOG message level, not - DEBUG1 as formerly (Tom) + Make debug_print_parse, debug_print_rewritten, + and debug_print_plan + output appear at LOG message level, not + DEBUG1 as formerly (Tom) - Make debug_pretty_print default to on (Tom) + Make debug_pretty_print default to on (Tom) - Remove explain_pretty_print parameter (no longer needed) (Tom) + Remove explain_pretty_print parameter (no longer needed) (Tom) - Make log_temp_files settable by superusers only, like other + Make log_temp_files settable by superusers only, like other logging options (Simon Riggs) - Remove automatic appending of the epoch timestamp when no % - escapes are present in log_filename (Robert Haas) + Remove automatic appending of the epoch timestamp when no % + escapes are present in log_filename (Robert Haas) @@ -6907,22 +6907,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove log_restartpoints from recovery.conf; - instead use log_checkpoints (Simon) + Remove log_restartpoints from recovery.conf; + instead use log_checkpoints (Simon) - Remove krb_realm and krb_server_hostname; - these are now set in pg_hba.conf instead (Magnus) + Remove krb_realm and krb_server_hostname; + these are now set in pg_hba.conf instead (Magnus) There are also significant changes in pg_hba.conf, + linkend="release-8-4-pg-hba-conf">pg_hba.conf, as described below. @@ -6938,12 +6938,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Change TRUNCATE and LOCK to + Change TRUNCATE and LOCK to apply to child tables of the specified table(s) (Peter) - These commands now accept an ONLY option that prevents + These commands now accept an ONLY option that prevents processing child tables; this option must be used if the old behavior is needed. @@ -6951,8 +6951,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - SELECT DISTINCT and - UNION/INTERSECT/EXCEPT + SELECT DISTINCT and + UNION/INTERSECT/EXCEPT no longer always produce sorted output (Tom) @@ -6961,17 +6961,17 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE by means of Sort/Unique processing (i.e., sort then remove adjacent duplicates). Now they can be implemented by hashing, which will not produce sorted output. 
If an application relied on the output being - in sorted order, the recommended fix is to add an ORDER BY + in sorted order, the recommended fix is to add an ORDER BY clause. As a short-term workaround, the previous behavior can be - restored by disabling enable_hashagg, but that is a very - performance-expensive fix. SELECT DISTINCT ON never uses + restored by disabling enable_hashagg, but that is a very + performance-expensive fix. SELECT DISTINCT ON never uses hashing, however, so its behavior is unchanged. - Force child tables to inherit CHECK constraints from parents + Force child tables to inherit CHECK constraints from parents (Alex Hunsaker, Nikhil Sontakke, Tom) @@ -6985,14 +6985,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Disallow negative LIMIT or OFFSET + Disallow negative LIMIT or OFFSET values, rather than treating them as zero (Simon) - Disallow LOCK TABLE outside a transaction block + Disallow LOCK TABLE outside a transaction block (Tom) @@ -7004,12 +7004,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Sequences now contain an additional start_value column + Sequences now contain an additional start_value column (Zoltan Boszormenyi) - This supports ALTER SEQUENCE ... RESTART. + This supports ALTER SEQUENCE ... RESTART. @@ -7025,14 +7025,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make numeric zero raised to a fractional power return - 0, rather than throwing an error, and make - numeric zero raised to the zero power return 1, + Make numeric zero raised to a fractional power return + 0, rather than throwing an error, and make + numeric zero raised to the zero power return 1, rather than error (Bruce) - This matches the longstanding float8 behavior. + This matches the longstanding float8 behavior. @@ -7042,7 +7042,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - The changed behavior is more IEEE-standard + The changed behavior is more IEEE-standard compliant. @@ -7050,7 +7050,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Throw an error if an escape character is the last character in - a LIKE pattern (i.e., it has nothing to escape) (Tom) + a LIKE pattern (i.e., it has nothing to escape) (Tom) @@ -7061,8 +7061,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove ~=~ and ~<>~ operators - formerly used for LIKE index comparisons (Tom) + Remove ~=~ and ~<>~ operators + formerly used for LIKE index comparisons (Tom) @@ -7072,7 +7072,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - xpath() now passes its arguments to libxml + xpath() now passes its arguments to libxml without any changes (Andrew) @@ -7085,7 +7085,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make xmlelement() format attribute values just like + Make xmlelement() format attribute values just like content values (Peter) @@ -7098,13 +7098,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Rewrite memory management for libxml-using functions + Rewrite memory management for libxml-using functions (Tom) This change should avoid some compatibility problems with use of - libxml in PL/Perl and other add-on code. + libxml in PL/Perl and other add-on code. @@ -7129,8 +7129,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - DateStyle no longer controls interval output - formatting; instead there is a new variable IntervalStyle + DateStyle no longer controls interval output + formatting; instead there is a new variable IntervalStyle (Ron Mayer) @@ -7138,7 +7138,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve consistency of handling of fractional seconds in - timestamp and interval output (Ron Mayer) + timestamp and interval output (Ron Mayer) @@ -7149,15 +7149,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make to_char()'s localized month/day names depend - on LC_TIME, not LC_MESSAGES (Euler + Make to_char()'s localized month/day names depend + on LC_TIME, not LC_MESSAGES (Euler Taveira de Oliveira) - Cause to_date() and to_timestamp() + Cause to_date() and to_timestamp() to more consistently report errors for invalid input (Brendan Jurd) @@ -7171,15 +7171,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix to_timestamp() to not require upper/lower case - matching for meridian (AM/PM) and era - (BC/AD) format designations (Brendan + Fix to_timestamp() to not require upper/lower case + matching for meridian (AM/PM) and era + (BC/AD) format designations (Brendan Jurd) - For example, input value ad now matches the format - string AD. + For example, input value ad now matches the format + string AD. @@ -7217,8 +7217,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow SELECT DISTINCT and - UNION/INTERSECT/EXCEPT to + Allow SELECT DISTINCT and + UNION/INTERSECT/EXCEPT to use hashing (Tom) @@ -7235,12 +7235,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE This work formalizes our previous ad-hoc treatment of IN - (SELECT ...) clauses, and extends it to EXISTS and - NOT EXISTS clauses. It should result in significantly - better planning of EXISTS and NOT EXISTS - queries. In general, logically equivalent IN and - EXISTS clauses should now have similar performance, - whereas previously IN often won. + (SELECT ...) clauses, and extends it to EXISTS and + NOT EXISTS clauses. It should result in significantly + better planning of EXISTS and NOT EXISTS + queries. In general, logically equivalent IN and + EXISTS clauses should now have similar performance, + whereas previously IN often won. @@ -7258,7 +7258,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improve the performance of text_position() and + Improve the performance of text_position() and related functions by using Boyer-Moore-Horspool searching (David Rowley) @@ -7283,26 +7283,26 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Increase the default value of default_statistics_target - from 10 to 100 (Greg Sabino Mullane, + Increase the default value of default_statistics_target + from 10 to 100 (Greg Sabino Mullane, Tom) - The maximum value was also increased from 1000 to - 10000. + The maximum value was also increased from 1000 to + 10000. - Perform constraint_exclusion checking by default - in queries involving inheritance or UNION ALL (Tom) + Perform constraint_exclusion checking by default + in queries involving inheritance or UNION ALL (Tom) - A new constraint_exclusion setting, - partition, was added to specify this behavior. + A new constraint_exclusion setting, + partition, was added to specify this behavior. @@ -7313,15 +7313,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE The amount of read-ahead is controlled by - effective_io_concurrency. This feature is available only - if the kernel has posix_fadvise() support. + effective_io_concurrency. This feature is available only + if the kernel has posix_fadvise() support. - Inline simple set-returning SQL functions in - FROM clauses (Richard Rowell) + Inline simple set-returning SQL functions in + FROM clauses (Richard Rowell) @@ -7336,7 +7336,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Reduce volume of temporary data in multi-batch hash joins - by suppressing physical tlist optimization (Michael + by suppressing physical tlist optimization (Michael Henderson, Ramon Lawrence) @@ -7344,7 +7344,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Avoid waiting for idle-in-transaction sessions during - CREATE INDEX CONCURRENTLY (Simon) + CREATE INDEX CONCURRENTLY (Simon) @@ -7368,15 +7368,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Convert many postgresql.conf settings to enumerated - values so that pg_settings can display the valid + Convert many postgresql.conf settings to enumerated + values so that pg_settings can display the valid values (Magnus) - Add cursor_tuple_fraction parameter to control the + Add cursor_tuple_fraction parameter to control the fraction of a cursor's rows that the planner assumes will be fetched (Robert Hell) @@ -7385,7 +7385,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Allow underscores in the names of custom variable - classes in postgresql.conf (Tom) + classes in postgresql.conf (Tom) @@ -7399,12 +7399,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove support for the (insecure) crypt authentication method + Remove support for the (insecure) crypt authentication method (Magnus) - This effectively obsoletes pre-PostgreSQL 7.2 client + This effectively obsoletes pre-PostgreSQL 7.2 client libraries, as there is no longer any non-plaintext password method that they can use. @@ -7412,21 +7412,21 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Support regular expressions in pg_ident.conf + Support regular expressions in pg_ident.conf (Magnus) - Allow Kerberos/GSSAPI parameters + Allow Kerberos/GSSAPI parameters to be changed without restarting the postmaster (Magnus) - Support SSL certificate chains in server certificate + Support SSL certificate chains in server certificate file (Andrew Gierth) @@ -7440,8 +7440,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Report appropriate error message for combination of MD5 - authentication and db_user_namespace enabled (Bruce) + Report appropriate error message for combination of MD5 + authentication and db_user_namespace enabled (Bruce) @@ -7449,26 +7449,26 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <filename>pg_hba.conf</> + <filename>pg_hba.conf</filename> - Change all authentication options to use name=value + Change all authentication options to use name=value syntax (Magnus) - This makes incompatible changes to the ldap, - pam and ident authentication methods. All - pg_hba.conf entries with these methods need to be + This makes incompatible changes to the ldap, + pam and ident authentication methods. All + pg_hba.conf entries with these methods need to be rewritten using the new format. 
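As a hedged aside on the enumerated-settings item earlier in this hunk, the permitted values of such parameters can now be inspected directly from pg_settings:

SELECT name, setting, enumvals
FROM pg_settings
WHERE name IN ('constraint_exclusion', 'wal_sync_method');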
- Remove the ident sameuser option, instead making that + Remove the ident sameuser option, instead making that behavior the default if no usermap is specified (Magnus) @@ -7480,14 +7480,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Previously a usermap was only supported for ident + Previously a usermap was only supported for ident authentication. - Add clientcert option to control requesting of a + Add clientcert option to control requesting of a client certificate (Magnus) @@ -7499,13 +7499,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add cert authentication method to allow - user authentication via SSL certificates + Add cert authentication method to allow + user authentication via SSL certificates (Magnus) - Previously SSL certificates could only verify that + Previously SSL certificates could only verify that the client had access to a certificate, not authenticate a user. @@ -7513,20 +7513,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow krb5, gssapi and sspi - realm and krb5 host settings to be specified in - pg_hba.conf (Magnus) + Allow krb5, gssapi and sspi + realm and krb5 host settings to be specified in + pg_hba.conf (Magnus) - These override the settings in postgresql.conf. + These override the settings in postgresql.conf. - Add include_realm parameter for krb5, - gssapi, and sspi methods (Magnus) + Add include_realm parameter for krb5, + gssapi, and sspi methods (Magnus) @@ -7537,7 +7537,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Parse pg_hba.conf fully when it is loaded, + Parse pg_hba.conf fully when it is loaded, so that errors are reported immediately (Magnus) @@ -7552,15 +7552,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Show all parsing errors in pg_hba.conf instead of + Show all parsing errors in pg_hba.conf instead of aborting after the first one (Selena Deckelmann) - Support ident authentication over Unix-domain sockets - on Solaris (Garick Hamlin) + Support ident authentication over Unix-domain sockets + on Solaris (Garick Hamlin) @@ -7574,7 +7574,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Provide an option to pg_start_backup() to force its + Provide an option to pg_start_backup() to force its implied checkpoint to finish as quickly as possible (Tom) @@ -7586,13 +7586,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make pg_stop_backup() wait for modified WAL + Make pg_stop_backup() wait for modified WAL files to be archived (Simon) This guarantees that the backup is valid at the time - pg_stop_backup() completes. + pg_stop_backup() completes. @@ -7606,22 +7606,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Delay smart shutdown while a continuous archiving base backup + Delay smart shutdown while a continuous archiving base backup is in progress (Laurenz Albe) - Cancel a continuous archiving base backup if fast shutdown + Cancel a continuous archiving base backup if fast shutdown is requested (Laurenz Albe) - Allow recovery.conf boolean variables to take the - same range of string values as postgresql.conf + Allow recovery.conf boolean variables to take the + same range of string values as postgresql.conf boolean variables (Bruce) @@ -7637,20 +7637,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Add pg_conf_load_time() to report when - the PostgreSQL configuration files were last loaded + Add pg_conf_load_time() to report when + the PostgreSQL configuration files were last loaded (George Gensure) - Add pg_terminate_backend() to safely terminate a - backend (the SIGTERM signal works also) (Tom, Bruce) + Add pg_terminate_backend() to safely terminate a + backend (the SIGTERM signal works also) (Tom, Bruce) - While it's always been possible to SIGTERM a single + While it's always been possible to SIGTERM a single backend, this was previously considered unsupported; and testing of the case found some bugs that are now fixed. @@ -7664,30 +7664,30 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Function statistics appear in a new system view, - pg_stat_user_functions. Tracking is controlled - by the new parameter track_functions. + pg_stat_user_functions. Tracking is controlled + by the new parameter track_functions. Allow specification of the maximum query string size in - pg_stat_activity via new - track_activity_query_size parameter (Thomas Lee) + pg_stat_activity via new + track_activity_query_size parameter (Thomas Lee) - Increase the maximum line length sent to syslog, in + Increase the maximum line length sent to syslog, in hopes of improving performance (Tom) - Add read-only configuration variables segment_size, - wal_block_size, and wal_segment_size + Add read-only configuration variables segment_size, + wal_block_size, and wal_segment_size (Bernd Helmle) @@ -7701,7 +7701,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add pg_stat_get_activity(pid) function to return + Add pg_stat_get_activity(pid) function to return information about a specific process id (Magnus) @@ -7709,14 +7709,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Allow the location of the server's statistics file to be specified - via stats_temp_directory (Magnus) + via stats_temp_directory (Magnus) This allows the statistics file to be placed in a - RAM-resident directory to reduce I/O requirements. + RAM-resident directory to reduce I/O requirements. On startup/shutdown, the file is copied to its traditional location - ($PGDATA/global/) so it is preserved across restarts. + ($PGDATA/global/) so it is preserved across restarts. @@ -7732,45 +7732,45 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for WINDOW functions (Hitoshi Harada) + Add support for WINDOW functions (Hitoshi Harada) - Add support for WITH clauses (CTEs), including WITH - RECURSIVE (Yoshiyuki Asaba, Tatsuo Ishii, Tom) + Add support for WITH clauses (CTEs), including WITH + RECURSIVE (Yoshiyuki Asaba, Tatsuo Ishii, Tom) - Add TABLE command (Peter) + Add TABLE command (Peter) - TABLE tablename is a SQL standard short-hand for - SELECT * FROM tablename. + TABLE tablename is a SQL standard short-hand for + SELECT * FROM tablename. - Allow AS to be optional when specifying a - SELECT (or RETURNING) column output + Allow AS to be optional when specifying a + SELECT (or RETURNING) column output label (Hiroshi Saito) This works so long as the column label is not any - PostgreSQL keyword; otherwise AS is still + PostgreSQL keyword; otherwise AS is still needed. - Support set-returning functions in SELECT result lists + Support set-returning functions in SELECT result lists even for functions that return their result via a tuplestore (Tom) @@ -7789,22 +7789,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Allow SELECT FOR UPDATE/SHARE to work + Allow SELECT FOR UPDATE/SHARE to work on inheritance trees (Tom) - Add infrastructure for SQL/MED (Martin Pihlak, + Add infrastructure for SQL/MED (Martin Pihlak, Peter) - There are no remote or external SQL/MED capabilities + There are no remote or external SQL/MED capabilities yet, but this change provides a standardized and future-proof system for managing connection information for modules like - dblink and plproxy. + dblink and plproxy. @@ -7827,7 +7827,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE This allows constructs such as - row(1, 1.1) = any (array[row(7, 7.7), row(1, 1.0)]). + row(1, 1.1) = any (array[row(7, 7.7), row(1, 1.0)]). This is particularly useful in recursive queries. @@ -7835,14 +7835,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Add support for Unicode string literal and identifier specifications - using code points, e.g. U&'d\0061t\+000061' + using code points, e.g. U&'d\0061t\+000061' (Peter) - Reject \000 in string literals and COPY data + Reject \000 in string literals and COPY data (Tom) @@ -7866,37 +7866,37 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <command>TRUNCATE</> + <command>TRUNCATE</command> - Support statement-level ON TRUNCATE triggers (Simon) + Support statement-level ON TRUNCATE triggers (Simon) - Add RESTART/CONTINUE IDENTITY options - for TRUNCATE TABLE + Add RESTART/CONTINUE IDENTITY options + for TRUNCATE TABLE (Zoltan Boszormenyi) The start value of a sequence can be changed by ALTER - SEQUENCE START WITH. + SEQUENCE START WITH. - Allow TRUNCATE tab1, tab1 to succeed (Bruce) + Allow TRUNCATE tab1, tab1 to succeed (Bruce) - Add a separate TRUNCATE permission (Robert Haas) + Add a separate TRUNCATE permission (Robert Haas) @@ -7905,38 +7905,38 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <command>EXPLAIN</> + <command>EXPLAIN</command> - Make EXPLAIN VERBOSE show the output columns of each + Make EXPLAIN VERBOSE show the output columns of each plan node (Tom) - Previously EXPLAIN VERBOSE output an internal + Previously EXPLAIN VERBOSE output an internal representation of the query plan. (That behavior is now - available via debug_print_plan.) + available via debug_print_plan.) - Make EXPLAIN identify subplans and initplans with + Make EXPLAIN identify subplans and initplans with individual labels (Tom) - Make EXPLAIN honor debug_print_plan (Tom) + Make EXPLAIN honor debug_print_plan (Tom) - Allow EXPLAIN on CREATE TABLE AS (Peter) + Allow EXPLAIN on CREATE TABLE AS (Peter) @@ -7945,25 +7945,25 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <literal>LIMIT</>/<literal>OFFSET</> + <literal>LIMIT</literal>/<literal>OFFSET</literal> - Allow sub-selects in LIMIT and OFFSET (Tom) + Allow sub-selects in LIMIT and OFFSET (Tom) - Add SQL-standard syntax for - LIMIT/OFFSET capabilities (Peter) + Add SQL-standard syntax for + LIMIT/OFFSET capabilities (Peter) To wit, OFFSET num {ROW|ROWS} FETCH {FIRST|NEXT} [num] {ROW|ROWS} - ONLY. + ONLY. @@ -7986,20 +7986,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Refactor multi-object DROP operations to reduce the - need for CASCADE (Alex Hunsaker) + Refactor multi-object DROP operations to reduce the + need for CASCADE (Alex Hunsaker) - For example, if table B has a dependency on table - A, the command DROP TABLE A, B no longer - requires the CASCADE option. 
+ For example, if table B has a dependency on table + A, the command DROP TABLE A, B no longer + requires the CASCADE option. - Fix various problems with concurrent DROP commands + Fix various problems with concurrent DROP commands by ensuring that locks are taken before we begin to drop dependencies of an object (Tom) @@ -8007,15 +8007,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improve reporting of dependencies during DROP + Improve reporting of dependencies during DROP commands (Tom) - Add WITH [NO] DATA clause to CREATE TABLE - AS, per the SQL standard (Peter, Tom) + Add WITH [NO] DATA clause to CREATE TABLE + AS, per the SQL standard (Peter, Tom) @@ -8027,14 +8027,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow CREATE AGGREGATE to use an internal + Allow CREATE AGGREGATE to use an internal transition datatype (Tom) - Add LIKE clause to CREATE TYPE (Tom) + Add LIKE clause to CREATE TYPE (Tom) @@ -8045,7 +8045,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow specification of the type category and preferred + Allow specification of the type category and preferred status for user-defined base types (Tom) @@ -8057,7 +8057,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow CREATE OR REPLACE VIEW to add columns to the + Allow CREATE OR REPLACE VIEW to add columns to the end of a view (Robert Haas) @@ -8065,25 +8065,25 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <command>ALTER</> + <command>ALTER</command> - Add ALTER TYPE RENAME (Petr Jelinek) + Add ALTER TYPE RENAME (Petr Jelinek) - Add ALTER SEQUENCE ... RESTART (with no parameter) to + Add ALTER SEQUENCE ... RESTART (with no parameter) to reset a sequence to its initial value (Zoltan Boszormenyi) - Modify the ALTER TABLE syntax to allow all reasonable + Modify the ALTER TABLE syntax to allow all reasonable combinations for tables, indexes, sequences, and views (Tom) @@ -8093,28 +8093,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - ALTER SEQUENCE OWNER TO + ALTER SEQUENCE OWNER TO - ALTER VIEW ALTER COLUMN SET/DROP DEFAULT + ALTER VIEW ALTER COLUMN SET/DROP DEFAULT - ALTER VIEW OWNER TO + ALTER VIEW OWNER TO - ALTER VIEW SET SCHEMA + ALTER VIEW SET SCHEMA There is no actual new functionality here, but formerly - you had to say ALTER TABLE to do these things, + you had to say ALTER TABLE to do these things, which was confusing. @@ -8122,24 +8122,24 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Add support for the syntax ALTER TABLE ... ALTER COLUMN - ... SET DATA TYPE (Peter) + ... SET DATA TYPE (Peter) - This is SQL-standard syntax for functionality that + This is SQL-standard syntax for functionality that was already supported. - Make ALTER TABLE SET WITHOUT OIDS rewrite the table - to physically remove OID values (Tom) + Make ALTER TABLE SET WITHOUT OIDS rewrite the table + to physically remove OID values (Tom) - Also, add ALTER TABLE SET WITH OIDS to rewrite the - table to add OIDs. + Also, add ALTER TABLE SET WITH OIDS to rewrite the + table to add OIDs. @@ -8154,7 +8154,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve reporting of - CREATE/DROP/RENAME DATABASE + CREATE/DROP/RENAME DATABASE failure when uncommitted prepared transactions are the cause (Tom) @@ -8162,7 +8162,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Make LC_COLLATE and LC_CTYPE into + Make LC_COLLATE and LC_CTYPE into per-database settings (Radek Strnad, Heikki) @@ -8175,20 +8175,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve checks that the database encoding, collation - (LC_COLLATE), and character classes - (LC_CTYPE) match (Heikki, Tom) + (LC_COLLATE), and character classes + (LC_CTYPE) match (Heikki, Tom) Note in particular that a new database's encoding and locale - settings can be changed only when copying from template0. + settings can be changed only when copying from template0. This prevents possibly copying data that doesn't match the settings. - Add ALTER DATABASE SET TABLESPACE to move a database + Add ALTER DATABASE SET TABLESPACE to move a database to a new tablespace (Guillaume Lelarge, Bernd Helmle) @@ -8206,8 +8206,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add a VERBOSE option to the CLUSTER command and - clusterdb (Jim Cox) + Add a VERBOSE option to the CLUSTER command and + clusterdb (Jim Cox) @@ -8261,8 +8261,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - xxx_pattern_ops indexes can now be used for simple - equality comparisons, not only for LIKE (Tom) + xxx_pattern_ops indexes can now be used for simple + equality comparisons, not only for LIKE (Tom) @@ -8276,19 +8276,19 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove the requirement to use @@@ when doing - GIN weighted lookups on full text indexes (Tom, Teodor) + Remove the requirement to use @@@ when doing + GIN weighted lookups on full text indexes (Tom, Teodor) - The normal @@ text search operator can be used + The normal @@ text search operator can be used instead. - Add an optimizer selectivity function for @@ text + Add an optimizer selectivity function for @@ text search operations (Jan Urbanski) @@ -8302,7 +8302,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Support multi-column GIN indexes (Teodor Sigaev) + Support multi-column GIN indexes (Teodor Sigaev) @@ -8317,18 +8317,18 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <command>VACUUM</> + <command>VACUUM</command> - Track free space in separate per-relation fork files (Heikki) + Track free space in separate per-relation fork files (Heikki) - Free space discovered by VACUUM is now recorded in - *_fsm files, rather than in a fixed-sized shared memory - area. The max_fsm_pages and max_fsm_relations + Free space discovered by VACUUM is now recorded in + *_fsm files, rather than in a fixed-sized shared memory + area. The max_fsm_pages and max_fsm_relations settings have been removed, greatly simplifying administration of free space management. @@ -8341,16 +8341,16 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - This allows VACUUM to avoid scanning all of + This allows VACUUM to avoid scanning all of a table when only a portion of the table needs vacuuming. - The visibility map is stored in per-relation fork files. + The visibility map is stored in per-relation fork files. - Add vacuum_freeze_table_age parameter to control - when VACUUM should ignore the visibility map and + Add vacuum_freeze_table_age parameter to control + when VACUUM should ignore the visibility map and do a full table scan to freeze tuples (Heikki) @@ -8361,15 +8361,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - This improves VACUUM's ability to reclaim space + This improves VACUUM's ability to reclaim space in the presence of long-running transactions. - Add ability to specify per-relation autovacuum and TOAST - parameters in CREATE TABLE (Alvaro, Euler Taveira de + Add ability to specify per-relation autovacuum and TOAST + parameters in CREATE TABLE (Alvaro, Euler Taveira de Oliveira) @@ -8380,7 +8380,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add --freeze option to vacuumdb + Add --freeze option to vacuumdb (Bruce) @@ -8397,20 +8397,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add a CaseSensitive option for text search synonym + Add a CaseSensitive option for text search synonym dictionaries (Simon) - Improve the precision of NUMERIC division (Tom) + Improve the precision of NUMERIC division (Tom) - Add basic arithmetic operators for int2 with int8 + Add basic arithmetic operators for int2 with int8 (Tom) @@ -8421,22 +8421,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow UUID input to accept an optional hyphen after + Allow UUID input to accept an optional hyphen after every fourth digit (Robert Haas) - Allow on/off as input for the boolean data type + Allow on/off as input for the boolean data type (Itagaki Takahiro) - Allow spaces around NaN in the input string for - type numeric (Sam Mason) + Allow spaces around NaN in the input string for + type numeric (Sam Mason) @@ -8448,53 +8448,53 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Reject year 0 BC and years 000 and - 0000 (Tom) + Reject year 0 BC and years 000 and + 0000 (Tom) - Previously these were interpreted as 1 BC. - (Note: years 0 and 00 are still assumed to be + Previously these were interpreted as 1 BC. + (Note: years 0 and 00 are still assumed to be the year 2000.) - Include SGT (Singapore time) in the default list of + Include SGT (Singapore time) in the default list of known time zone abbreviations (Tom) - Support infinity and -infinity as - values of type date (Tom) + Support infinity and -infinity as + values of type date (Tom) - Make parsing of interval literals more standard-compliant + Make parsing of interval literals more standard-compliant (Tom, Ron Mayer) - For example, INTERVAL '1' YEAR now does what it's + For example, INTERVAL '1' YEAR now does what it's supposed to. - Allow interval fractional-seconds precision to be specified - after the second keyword, for SQL standard + Allow interval fractional-seconds precision to be specified + after the second keyword, for SQL standard compliance (Tom) Formerly the precision had to be specified after the keyword - interval. (For backwards compatibility, this syntax is still + interval. (For backwards compatibility, this syntax is still supported, though deprecated.) Data type definitions will now be output using the standard format. @@ -8502,26 +8502,26 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Support the IS0 8601 interval syntax (Ron + Support the IS0 8601 interval syntax (Ron Mayer, Kevin Grittner) - For example, INTERVAL 'P1Y2M3DT4H5M6.7S' is now + For example, INTERVAL 'P1Y2M3DT4H5M6.7S' is now supported. - Add IntervalStyle parameter - which controls how interval values are output (Ron Mayer) + Add IntervalStyle parameter + which controls how interval values are output (Ron Mayer) - Valid values are: postgres, postgres_verbose, - sql_standard, iso_8601. 
This setting also - controls the handling of negative interval input when only + Valid values are: postgres, postgres_verbose, + sql_standard, iso_8601. This setting also + controls the handling of negative interval input when only some fields have positive/negative designations. @@ -8529,7 +8529,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve consistency of handling of fractional seconds in - timestamp and interval output (Ron Mayer) + timestamp and interval output (Ron Mayer) @@ -8543,38 +8543,38 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improve the handling of casts applied to ARRAY[] - constructs, such as ARRAY[...]::integer[] + Improve the handling of casts applied to ARRAY[] + constructs, such as ARRAY[...]::integer[] (Brendan Jurd) - Formerly PostgreSQL attempted to determine a data type - for the ARRAY[] construct without reference to the ensuing + Formerly PostgreSQL attempted to determine a data type + for the ARRAY[] construct without reference to the ensuing cast. This could fail unnecessarily in many cases, in particular when - the ARRAY[] construct was empty or contained only - ambiguous entries such as NULL. Now the cast is consulted + the ARRAY[] construct was empty or contained only + ambiguous entries such as NULL. Now the cast is consulted to determine the type that the array elements must be. - Make SQL-syntax ARRAY dimensions optional - to match the SQL standard (Peter) + Make SQL-syntax ARRAY dimensions optional + to match the SQL standard (Peter) - Add array_ndims() to return the number + Add array_ndims() to return the number of dimensions of an array (Robert Haas) - Add array_length() to return the length + Add array_length() to return the length of an array for a specified dimension (Jim Nasby, Robert Haas, Peter Eisentraut) @@ -8582,7 +8582,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add aggregate function array_agg(), which + Add aggregate function array_agg(), which returns all aggregated values as a single array (Robert Haas, Jeff Davis, Peter) @@ -8590,25 +8590,25 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add unnest(), which converts an array to + Add unnest(), which converts an array to individual row values (Tom) - This is the opposite of array_agg(). + This is the opposite of array_agg(). - Add array_fill() to create arrays initialized with + Add array_fill() to create arrays initialized with a value (Pavel Stehule) - Add generate_subscripts() to simplify generating + Add generate_subscripts() to simplify generating the range of an array's subscripts (Pavel Stehule) @@ -8618,19 +8618,19 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Wide-Value Storage (<acronym>TOAST</>) + Wide-Value Storage (<acronym>TOAST</acronym>) - Consider TOAST compression on values as short as + Consider TOAST compression on values as short as 32 bytes (previously 256 bytes) (Greg Stark) - Require 25% minimum space savings before using TOAST + Require 25% minimum space savings before using TOAST compression (previously 20% for small values and any-savings-at-all for large values) (Greg) @@ -8638,7 +8638,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Improve TOAST heuristics for rows that have a mix of large + Improve TOAST heuristics for rows that have a mix of large and small toastable fields, so that we prefer to push large values out of line and don't compress small values unnecessarily (Greg, Tom) @@ -8656,52 +8656,52 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Document that setseed() allows values from - -1 to 1 (not just 0 to - 1), and enforce the valid range (Kris Jurka) + Document that setseed() allows values from + -1 to 1 (not just 0 to + 1), and enforce the valid range (Kris Jurka) - Add server-side function lo_import(filename, oid) + Add server-side function lo_import(filename, oid) (Tatsuo) - Add quote_nullable(), which behaves like - quote_literal() but returns the string NULL for + Add quote_nullable(), which behaves like + quote_literal() but returns the string NULL for a null argument (Brendan Jurd) - Improve full text search headline() function to + Improve full text search headline() function to allow extracting several fragments of text (Sushant Sinha) - Add suppress_redundant_updates_trigger() trigger + Add suppress_redundant_updates_trigger() trigger function to avoid overhead for non-data-changing updates (Andrew) - Add div(numeric, numeric) to perform numeric + Add div(numeric, numeric) to perform numeric division without rounding (Tom) - Add timestamp and timestamptz versions of - generate_series() (Hitoshi Harada) + Add timestamp and timestamptz versions of + generate_series() (Hitoshi Harada) @@ -8713,54 +8713,54 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Implement current_query() for use by functions + Implement current_query() for use by functions that need to know the currently running query (Tomas Doran) - Add pg_get_keywords() to return a list of the + Add pg_get_keywords() to return a list of the parser keywords (Dave Page) - Add pg_get_functiondef() to see a function's + Add pg_get_functiondef() to see a function's definition (Abhijit Menon-Sen) - Allow the second argument of pg_get_expr() to be zero + Allow the second argument of pg_get_expr() to be zero when deparsing an expression that does not contain variables (Tom) - Modify pg_relation_size() to use regclass + Modify pg_relation_size() to use regclass (Heikki) - pg_relation_size(data_type_name) no longer works. + pg_relation_size(data_type_name) no longer works. - Add boot_val and reset_val columns to - pg_settings output (Greg Smith) + Add boot_val and reset_val columns to + pg_settings output (Greg Smith) Add source file name and line number columns to - pg_settings output for variables set in a configuration + pg_settings output for variables set in a configuration file (Magnus, Alvaro) @@ -8771,26 +8771,26 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for CURRENT_CATALOG, - CURRENT_SCHEMA, SET CATALOG, SET - SCHEMA (Peter) + Add support for CURRENT_CATALOG, + CURRENT_SCHEMA, SET CATALOG, SET + SCHEMA (Peter) - These provide SQL-standard syntax for existing features. + These provide SQL-standard syntax for existing features. - Add pg_typeof() which returns the data type + Add pg_typeof() which returns the data type of any value (Brendan Jurd) - Make version() return information about whether + Make version() return information about whether the server is a 32- or 64-bit binary (Bruce) @@ -8798,7 +8798,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE Fix the behavior of information schema columns - is_insertable_into and is_updatable to + is_insertable_into and is_updatable to be consistent (Peter) @@ -8806,13 +8806,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve the behavior of information schema - datetime_precision columns (Peter) + datetime_precision columns (Peter) - These columns now show zero for date columns, and 6 - (the default precision) for time, timestamp, and - interval without a declared precision, rather than showing + These columns now show zero for date columns, and 6 + (the default precision) for time, timestamp, and + interval without a declared precision, rather than showing null as formerly. @@ -8820,28 +8820,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Convert remaining builtin set-returning functions to use - OUT parameters (Jaime Casanova) + OUT parameters (Jaime Casanova) This makes it possible to call these functions without specifying - a column list: pg_show_all_settings(), - pg_lock_status(), pg_prepared_xact(), - pg_prepared_statement(), pg_cursor() + a column list: pg_show_all_settings(), + pg_lock_status(), pg_prepared_xact(), + pg_prepared_statement(), pg_cursor() - Make pg_*_is_visible() and - has_*_privilege() functions return NULL + Make pg_*_is_visible() and + has_*_privilege() functions return NULL for invalid OIDs, rather than reporting an error (Tom) - Extend has_*_privilege() functions to allow inquiring + Extend has_*_privilege() functions to allow inquiring about the OR of multiple privileges in one call (Stephen Frost, Tom) @@ -8849,8 +8849,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add has_column_privilege() and - has_any_column_privilege() functions (Stephen + Add has_column_privilege() and + has_any_column_privilege() functions (Stephen Frost, Tom) @@ -8883,16 +8883,16 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add CREATE FUNCTION ... RETURNS TABLE clause (Pavel + Add CREATE FUNCTION ... RETURNS TABLE clause (Pavel Stehule) - Allow SQL-language functions to return the output - of an INSERT/UPDATE/DELETE - RETURNING clause (Tom) + Allow SQL-language functions to return the output + of an INSERT/UPDATE/DELETE + RETURNING clause (Tom) @@ -8906,38 +8906,38 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Support EXECUTE USING for easier insertion of data + Support EXECUTE USING for easier insertion of data values into a dynamic query string (Pavel Stehule) - Allow looping over the results of a cursor using a FOR + Allow looping over the results of a cursor using a FOR loop (Pavel Stehule) - Support RETURN QUERY EXECUTE (Pavel + Support RETURN QUERY EXECUTE (Pavel Stehule) - Improve the RAISE command (Pavel Stehule) + Improve the RAISE command (Pavel Stehule) - Support DETAIL and HINT fields + Support DETAIL and HINT fields - Support specification of the SQLSTATE error code + Support specification of the SQLSTATE error code @@ -8947,7 +8947,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow RAISE without parameters in an exception + Allow RAISE without parameters in an exception block to re-throw the current error @@ -8957,45 +8957,45 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow specification of SQLSTATE codes - in EXCEPTION lists (Pavel Stehule) + Allow specification of SQLSTATE codes + in EXCEPTION lists (Pavel Stehule) - This is useful for handling custom SQLSTATE codes. 
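A minimal PL/pgSQL sketch of the RAISE and EXCEPTION improvements described above; the function name raise_demo and the SQLSTATE U0001 are arbitrary illustrations, not part of the release notes.

    -- RAISE ... USING with DETAIL/HINT/ERRCODE, and catching a custom SQLSTATE
    CREATE OR REPLACE FUNCTION raise_demo() RETURNS void AS $$
    BEGIN
        RAISE EXCEPTION 'demo failure'
            USING DETAIL  = 'extra detail text',
                  HINT    = 'try again with different input',
                  ERRCODE = 'U0001';
    EXCEPTION
        WHEN SQLSTATE 'U0001' THEN
            RAISE NOTICE 'caught custom SQLSTATE U0001';
    END;
    $$ LANGUAGE plpgsql;

    SELECT raise_demo();  -- emits the NOTICE rather than failing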
+ This is useful for handling custom SQLSTATE codes. - Support the CASE statement (Pavel Stehule) + Support the CASE statement (Pavel Stehule) - Make RETURN QUERY set the special FOUND and - GET DIAGNOSTICS ROW_COUNT variables + Make RETURN QUERY set the special FOUND and + GET DIAGNOSTICS ROW_COUNT variables (Pavel Stehule) - Make FETCH and MOVE set the - GET DIAGNOSTICS ROW_COUNT variable + Make FETCH and MOVE set the + GET DIAGNOSTICS ROW_COUNT variable (Andrew Gierth) - Make EXIT without a label always exit the innermost + Make EXIT without a label always exit the innermost loop (Tom) - Formerly, if there were a BEGIN block more closely nested + Formerly, if there were a BEGIN block more closely nested than any loop, it would exit that block instead. The new behavior matches Oracle(TM) and is also what was previously stated by our own documentation. @@ -9009,11 +9009,11 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - In particular, the format string in RAISE now works + In particular, the format string in RAISE now works the same as any other string literal, including being subject - to standard_conforming_strings. This change also + to standard_conforming_strings. This change also fixes other cases in which valid commands would fail when - standard_conforming_strings is on. + standard_conforming_strings is on. @@ -9037,28 +9037,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix pg_ctl restart to preserve command-line arguments + Fix pg_ctl restart to preserve command-line arguments (Bruce) - Add -w/--no-password option that + Add -w/--no-password option that prevents password prompting in all utilities that have a - -W/--password option (Peter) + -W/--password option (Peter) - Remove - These options have had no effect since PostgreSQL + These options have had no effect since PostgreSQL 8.3. @@ -9066,41 +9066,41 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>psql</> + <application>psql</application> - Remove verbose startup banner; now just suggest help + Remove verbose startup banner; now just suggest help (Joshua Drake) - Make help show common backslash commands (Greg + Make help show common backslash commands (Greg Sabino Mullane) - Add \pset format wrapped mode to wrap output to the - screen width, or file/pipe output too if \pset columns + Add \pset format wrapped mode to wrap output to the + screen width, or file/pipe output too if \pset columns is set (Bryce Nesbitt) - Allow all supported spellings of boolean values in \pset, - rather than just on and off (Bruce) + Allow all supported spellings of boolean values in \pset, + rather than just on and off (Bruce) - Formerly, any string other than off was silently taken - to mean true. psql will now complain - about unrecognized spellings (but still take them as true). + Formerly, any string other than off was silently taken + to mean true. psql will now complain + about unrecognized spellings (but still take them as true). @@ -9130,8 +9130,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add optional on/off argument for - \timing (David Fetter) + Add optional on/off argument for + \timing (David Fetter) @@ -9144,20 +9144,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Make \l show database access privileges (Andrew Gilligan) + Make \l show database access privileges (Andrew Gilligan) - Make \l+ show database sizes, if permissions + Make \l+ show database sizes, if permissions allow (Andrew Gilligan) - Add the \ef command to edit function definitions + Add the \ef command to edit function definitions (Abhijit Menon-Sen) @@ -9167,28 +9167,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>psql</> \d* commands + <application>psql</application> \d* commands - Make \d* commands that do not have a pattern argument - show system objects only if the S modifier is specified + Make \d* commands that do not have a pattern argument + show system objects only if the S modifier is specified (Greg Sabino Mullane, Bruce) The former behavior was inconsistent across different variants - of \d, and in most cases it provided no easy way to see + of \d, and in most cases it provided no easy way to see just user objects. - Improve \d* commands to work with older - PostgreSQL server versions (back to 7.4), + Improve \d* commands to work with older + PostgreSQL server versions (back to 7.4), not only the current server version (Guillaume Lelarge) @@ -9196,14 +9196,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make \d show foreign-key constraints that reference + Make \d show foreign-key constraints that reference the selected table (Kenneth D'Souza) - Make \d on a sequence show its column values + Make \d on a sequence show its column values (Euler Taveira de Oliveira) @@ -9211,43 +9211,43 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Add column storage type and other relation options to the - \d+ display (Gregory Stark, Euler Taveira de + \d+ display (Gregory Stark, Euler Taveira de Oliveira) - Show relation size in \dt+ output (Dickson S. + Show relation size in \dt+ output (Dickson S. Guedes) - Show the possible values of enum types in \dT+ + Show the possible values of enum types in \dT+ (David Fetter) - Allow \dC to accept a wildcard pattern, which matches + Allow \dC to accept a wildcard pattern, which matches either datatype involved in the cast (Tom) - Add a function type column to \df's output, and add + Add a function type column to \df's output, and add options to list only selected types of functions (David Fetter) - Make \df not hide functions that take or return - type cstring (Tom) + Make \df not hide functions that take or return + type cstring (Tom) @@ -9263,13 +9263,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>pg_dump</> + <application>pg_dump</application> - Add a --no-tablespaces option to - pg_dump/pg_dumpall/pg_restore + Add a --no-tablespaces option to + pg_dump/pg_dumpall/pg_restore so that dumps can be restored to clusters that have non-matching tablespace layouts (Gavin Roy) @@ -9277,23 +9277,23 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove These options were too frequently confused with the option to - select a database name in other PostgreSQL + select a database name in other PostgreSQL client applications. The functionality is still available, but you must now spell out the long option name - or . - Remove @@ -9305,15 +9305,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Disable statement_timeout during dump and restore + Disable statement_timeout during dump and restore (Joshua Drake) - Add pg_dump/pg_dumpall option - (David Gould) @@ -9324,7 +9324,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Reorder pg_dump --data-only output + Reorder pg_dump --data-only output to dump tables referenced by foreign keys before the referencing tables (Tom) @@ -9332,27 +9332,27 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE This allows data loads when foreign keys are already present. If circular references make a safe ordering impossible, a - NOTICE is issued. + NOTICE is issued. - Allow pg_dump, pg_dumpall, and - pg_restore to use a specified role (Benedek + Allow pg_dump, pg_dumpall, and + pg_restore to use a specified role (Benedek László) - Allow pg_restore to use multiple concurrent + Allow pg_restore to use multiple concurrent connections to do the restore (Andrew) The number of concurrent connections is controlled by the option - --jobs. This is supported only for custom-format archives. + --jobs. This is supported only for custom-format archives. @@ -9366,24 +9366,24 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Programming Tools - <application>libpq</> + <application>libpq</application> - Allow the OID to be specified when importing a large - object, via new function lo_import_with_oid() (Tatsuo) + Allow the OID to be specified when importing a large + object, via new function lo_import_with_oid() (Tatsuo) - Add events support (Andrew Chernow, Merlin Moncure) + Add events support (Andrew Chernow, Merlin Moncure) This adds the ability to register callbacks to manage private - data associated with PGconn and PGresult + data associated with PGconn and PGresult objects. @@ -9397,18 +9397,18 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make PQexecParams() and related functions return - PGRES_EMPTY_QUERY for an empty query (Tom) + Make PQexecParams() and related functions return + PGRES_EMPTY_QUERY for an empty query (Tom) - They previously returned PGRES_COMMAND_OK. + They previously returned PGRES_COMMAND_OK. - Document how to avoid the overhead of WSACleanup() + Document how to avoid the overhead of WSACleanup() on Windows (Andrew Chernow) @@ -9434,22 +9434,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>libpq</> <acronym>SSL</> (Secure Sockets Layer) + <title><application>libpq</application> <acronym>SSL</acronym> (Secure Sockets Layer) support - Fix certificate validation for SSL connections + Fix certificate validation for SSL connections (Magnus) - libpq now supports verifying both the certificate - and the name of the server when making SSL + libpq now supports verifying both the certificate + and the name of the server when making SSL connections. If a root certificate is not available to use for - verification, SSL connections will fail. The - sslmode parameter is used to enable certificate + verification, SSL connections will fail. The + sslmode parameter is used to enable certificate verification and set the level of checking. The default is still not to do any verification, allowing connections to SSL-enabled servers without requiring a root certificate on the @@ -9463,7 +9463,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - If a certificate CN starts with *, it will + If a certificate CN starts with *, it will be treated as a wildcard when matching the hostname, allowing the use of the same certificate for multiple servers. @@ -9478,21 +9478,21 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add a PQinitOpenSSL function to allow greater control + Add a PQinitOpenSSL function to allow greater control over OpenSSL/libcrypto initialization (Andrew Chernow) - Make libpq unregister its OpenSSL + Make libpq unregister its OpenSSL callbacks when no database connections remain open (Bruce, Magnus, Russell Smith) This is required for applications that unload the libpq library, - otherwise invalid OpenSSL callbacks will remain. + otherwise invalid OpenSSL callbacks will remain. @@ -9501,7 +9501,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>ecpg</> + <application>ecpg</application> @@ -9527,7 +9527,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Server Programming Interface (<acronym>SPI</>) + Server Programming Interface (<acronym>SPI</acronym>) @@ -9539,8 +9539,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add new SPI_OK_REWRITTEN return code for - SPI_execute() (Heikki) + Add new SPI_OK_REWRITTEN return code for + SPI_execute() (Heikki) @@ -9551,12 +9551,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove unnecessary inclusions from executor/spi.h (Tom) + Remove unnecessary inclusions from executor/spi.h (Tom) - SPI-using modules might need to add some #include - lines if they were depending on spi.h to include + SPI-using modules might need to add some #include + lines if they were depending on spi.h to include things for them. @@ -9573,13 +9573,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Update build system to use Autoconf 2.61 (Peter) + Update build system to use Autoconf 2.61 (Peter) - Require GNU bison for source code builds (Peter) + Require GNU bison for source code builds (Peter) @@ -9590,63 +9590,63 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add pg_config --htmldir option + Add pg_config --htmldir option (Peter) - Pass float4 by value inside the server (Zoltan + Pass float4 by value inside the server (Zoltan Boszormenyi) - Add configure option - --disable-float4-byval to use the old behavior. + Add configure option + --disable-float4-byval to use the old behavior. External C functions that use old-style (version 0) call convention - and pass or return float4 values will be broken by this - change, so you may need the configure option if you + and pass or return float4 values will be broken by this + change, so you may need the configure option if you have such functions and don't want to update them. - Pass float8, int8, and related datatypes + Pass float8, int8, and related datatypes by value inside the server on 64-bit platforms (Zoltan Boszormenyi) - Add configure option - --disable-float8-byval to use the old behavior. + Add configure option + --disable-float8-byval to use the old behavior. As above, this change might break old-style external C functions. 
- Add configure options --with-segsize, - --with-blocksize, --with-wal-blocksize, - --with-wal-segsize (Zdenek Kotala, Tom) + Add configure options --with-segsize, + --with-blocksize, --with-wal-blocksize, + --with-wal-segsize (Zdenek Kotala, Tom) This simplifies build-time control over several constants that previously could only be changed by editing - pg_config_manual.h. + pg_config_manual.h. - Allow threaded builds on Solaris 2.5 (Bruce) + Allow threaded builds on Solaris 2.5 (Bruce) - Use the system's getopt_long() on Solaris + Use the system's getopt_long() on Solaris (Zdenek Kotala, Tom) @@ -9658,16 +9658,16 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for the Sun Studio compiler on - Linux (Julius Stroffek) + Add support for the Sun Studio compiler on + Linux (Julius Stroffek) - Append the major version number to the backend gettext - domain, and the soname major version number to - libraries' gettext domain (Peter) + Append the major version number to the backend gettext + domain, and the soname major version number to + libraries' gettext domain (Peter) @@ -9677,21 +9677,21 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for code coverage testing with gcov + Add support for code coverage testing with gcov (Michelle Caisse) - Allow out-of-tree builds on Mingw and - Cygwin (Richard Evans) + Allow out-of-tree builds on Mingw and + Cygwin (Richard Evans) - Fix the use of Mingw as a cross-compiling source + Fix the use of Mingw as a cross-compiling source platform (Peter) @@ -9710,20 +9710,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - This adds support for daylight saving time (DST) + This adds support for daylight saving time (DST) calculations beyond the year 2038. - Deprecate use of platform's time_t data type (Tom) + Deprecate use of platform's time_t data type (Tom) - Some platforms have migrated to 64-bit time_t, some have + Some platforms have migrated to 64-bit time_t, some have not, and Windows can't make up its mind what it's doing. Define - pg_time_t to have the same meaning as time_t, + pg_time_t to have the same meaning as time_t, but always be 64 bits (unless the platform has no 64-bit integer type), and use that type in all module APIs and on-disk data formats. @@ -9745,7 +9745,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improve gettext support to allow better translation + Improve gettext support to allow better translation of plurals (Peter) @@ -9758,44 +9758,44 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Add more DTrace probes (Robert Lor) + Add more DTrace probes (Robert Lor) - Enable DTrace support on macOS - Leopard and other non-Solaris platforms (Robert Lor) + Enable DTrace support on macOS + Leopard and other non-Solaris platforms (Robert Lor) Simplify and standardize conversions between C strings and - text datums, by providing common functions for the purpose + text datums, by providing common functions for the purpose (Brendan Jurd, Tom) - Clean up the include/catalog/ header files so that + Clean up the include/catalog/ header files so that frontend programs can include them without including - postgres.h + postgres.h (Zdenek Kotala) - Make name char-aligned, and suppress zero-padding of - name entries in indexes (Tom) + Make name char-aligned, and suppress zero-padding of + name entries in indexes (Tom) - Recover better if dynamically-loaded code executes exit() + Recover better if dynamically-loaded code executes exit() (Tom) @@ -9816,55 +9816,55 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add shmem_startup_hook() for custom shared memory + Add shmem_startup_hook() for custom shared memory requirements (Tom) - Replace the index access method amgetmulti entry point - with amgetbitmap, and extend the API for - amgettuple to support run-time determination of + Replace the index access method amgetmulti entry point + with amgetbitmap, and extend the API for + amgettuple to support run-time determination of operator lossiness (Heikki, Tom, Teodor) - The API for GIN and GiST opclass consistent functions + The API for GIN and GiST opclass consistent functions has been extended as well. - Add support for partial-match searches in GIN indexes + Add support for partial-match searches in GIN indexes (Teodor Sigaev, Oleg Bartunov) - Replace pg_class column reltriggers - with boolean relhastriggers (Simon) + Replace pg_class column reltriggers + with boolean relhastriggers (Simon) - Also remove unused pg_class columns - relukeys, relfkeys, and - relrefs. + Also remove unused pg_class columns + relukeys, relfkeys, and + relrefs. - Add a relistemp column to pg_class + Add a relistemp column to pg_class to ease identification of temporary tables (Tom) - Move platform FAQs into the main documentation + Move platform FAQs into the main documentation (Peter) @@ -9878,7 +9878,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for the KOI8U (Ukrainian) encoding + Add support for the KOI8U (Ukrainian) encoding (Peter) @@ -9895,8 +9895,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix problem when setting LC_MESSAGES on - MSVC-built systems (Hiroshi Inoue, Hiroshi + Fix problem when setting LC_MESSAGES on + MSVC-built systems (Hiroshi Inoue, Hiroshi Saito, Magnus) @@ -9912,65 +9912,65 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Add contrib/auto_explain to automatically run - EXPLAIN on queries exceeding a specified duration + Add contrib/auto_explain to automatically run + EXPLAIN on queries exceeding a specified duration (Itagaki Takahiro, Tom) - Add contrib/btree_gin to allow GIN indexes to + Add contrib/btree_gin to allow GIN indexes to handle more datatypes (Oleg, Teodor) - Add contrib/citext to provide a case-insensitive, + Add contrib/citext to provide a case-insensitive, multibyte-aware text data type (David Wheeler) - Add contrib/pg_stat_statements for server-wide + Add contrib/pg_stat_statements for server-wide tracking of statement execution statistics (Itagaki Takahiro) - Add duration and query mode options to contrib/pgbench + Add duration and query mode options to contrib/pgbench (Itagaki Takahiro) - Make contrib/pgbench use table names - pgbench_accounts, pgbench_branches, - pgbench_history, and pgbench_tellers, - rather than just accounts, branches, - history, and tellers (Tom) + Make contrib/pgbench use table names + pgbench_accounts, pgbench_branches, + pgbench_history, and pgbench_tellers, + rather than just accounts, branches, + history, and tellers (Tom) This is to reduce the risk of accidentally destroying real data - by running pgbench. + by running pgbench. - Fix contrib/pgstattuple to handle tables and + Fix contrib/pgstattuple to handle tables and indexes with over 2 billion pages (Tatsuhito Kasahara) - In contrib/fuzzystrmatch, add a version of the + In contrib/fuzzystrmatch, add a version of the Levenshtein string-distance function that allows the user to specify the costs of insertion, deletion, and substitution (Volkan Yazici) @@ -9979,28 +9979,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make contrib/ltree support multibyte encodings + Make contrib/ltree support multibyte encodings (laser) - Enable contrib/dblink to use connection information + Enable contrib/dblink to use connection information stored in the SQL/MED catalogs (Joe Conway) - Improve contrib/dblink's reporting of errors from + Improve contrib/dblink's reporting of errors from the remote server (Joe Conway) - Make contrib/dblink set client_encoding + Make contrib/dblink set client_encoding to match the local database's encoding (Joe Conway) @@ -10012,9 +10012,9 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make sure contrib/dblink uses a password supplied + Make sure contrib/dblink uses a password supplied by the user, and not accidentally taken from the server's - .pgpass file (Joe Conway) + .pgpass file (Joe Conway) @@ -10024,51 +10024,51 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add fsm_page_contents() - to contrib/pageinspect (Heikki) + Add fsm_page_contents() + to contrib/pageinspect (Heikki) - Modify get_raw_page() to support free space map - (*_fsm) files. Also update - contrib/pg_freespacemap. + Modify get_raw_page() to support free space map + (*_fsm) files. Also update + contrib/pg_freespacemap. 
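A hedged sketch, assuming contrib/pageinspect is installed, of the free-space-map inspection functions mentioned just above; the table name fsm_demo is hypothetical.

    -- populate a table and vacuum it so its *_fsm fork exists
    CREATE TABLE fsm_demo (id int);
    INSERT INTO fsm_demo SELECT generate_series(1, 1000);
    VACUUM fsm_demo;

    -- read page 0 of the free space map fork and decode it
    SELECT fsm_page_contents(get_raw_page('fsm_demo', 'fsm', 0));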
- Add support for multibyte encodings to contrib/pg_trgm + Add support for multibyte encodings to contrib/pg_trgm (Teodor) - Rewrite contrib/intagg to use new - functions array_agg() and unnest() + Rewrite contrib/intagg to use new + functions array_agg() and unnest() (Tom) - Make contrib/pg_standby recover all available WAL before + Make contrib/pg_standby recover all available WAL before failover (Fujii Masao, Simon, Heikki) To make this work safely, you now need to set the new - recovery_end_command option in recovery.conf - to clean up the trigger file after failover. pg_standby + recovery_end_command option in recovery.conf + to clean up the trigger file after failover. pg_standby will no longer remove the trigger file itself. - contrib/pg_standby's option is now a no-op, because it is unsafe to use a symlink (Simon) diff --git a/doc/src/sgml/release-9.0.sgml b/doc/src/sgml/release-9.0.sgml index f7c63fc567..e09f38e180 100644 --- a/doc/src/sgml/release-9.0.sgml +++ b/doc/src/sgml/release-9.0.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 9.0.X series. Users are encouraged to update to a newer release branch soon. @@ -42,8 +42,8 @@ - Fix contrib/pgcrypto to detect and report - too-short crypt() salts (Josh Kupershmidt) + Fix contrib/pgcrypto to detect and report + too-short crypt() salts (Josh Kupershmidt) @@ -69,13 +69,13 @@ - Fix insertion of relations into the relation cache init file + Fix insertion of relations into the relation cache init file (Tom Lane) An oversight in a patch in the most recent minor releases - caused pg_trigger_tgrelid_tgname_index to be omitted + caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing @@ -93,7 +93,7 @@ - Improve LISTEN startup time when there are many unread + Improve LISTEN startup time when there are many unread notifications (Matt Newell) @@ -108,13 +108,13 @@ too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value - of ssl_renegotiation_limit to zero (disabled). + of ssl_renegotiation_limit to zero (disabled). - Lower the minimum values of the *_freeze_max_age parameters + Lower the minimum values of the *_freeze_max_age parameters (Andres Freund) @@ -126,14 +126,14 @@ - Limit the maximum value of wal_buffers to 2GB to avoid + Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus) - Fix rare internal overflow in multiplication of numeric values + Fix rare internal overflow in multiplication of numeric values (Dean Rasheed) @@ -141,21 +141,21 @@ Guard against hard-to-reach stack overflows involving record types, - range types, json, jsonb, tsquery, - ltxtquery and query_int (Noah Misch) + range types, json, jsonb, tsquery, + ltxtquery and query_int (Noah Misch) - Fix handling of DOW and DOY in datetime input + Fix handling of DOW and DOY in datetime input (Greg Stark) These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather - than invalid input syntax. + than invalid input syntax. 
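A brief sketch of the LISTEN/NOTIFY usage touched on above (the notification-startup fix); the channel name demo_channel and the payload string are arbitrary.

    LISTEN demo_channel;
    NOTIFY demo_channel, 'rows loaded';   -- delivered to listening sessions at commit
    UNLISTEN demo_channel;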
@@ -168,7 +168,7 @@ Add recursion depth protections to regular expression, SIMILAR - TO, and LIKE matching (Tom Lane) + TO, and LIKE matching (Tom Lane) @@ -212,22 +212,22 @@ - Fix unexpected out-of-memory situation during sort errors - when using tuplestores with small work_mem settings (Tom + Fix unexpected out-of-memory situation during sort errors + when using tuplestores with small work_mem settings (Tom Lane) - Fix very-low-probability stack overrun in qsort (Tom Lane) + Fix very-low-probability stack overrun in qsort (Tom Lane) - Fix invalid memory alloc request size failure in hash joins - with large work_mem settings (Tomas Vondra, Tom Lane) + Fix invalid memory alloc request size failure in hash joins + with large work_mem settings (Tomas Vondra, Tom Lane) @@ -240,9 +240,9 @@ These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as could not devise a query plan for the - given query, could not find pathkey item to - sort, plan should not reference subplan's variable, - or failed to assign all NestLoopParams to plan nodes. + given query, could not find pathkey item to + sort, plan should not reference subplan's variable, + or failed to assign all NestLoopParams to plan nodes. Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems. @@ -263,12 +263,12 @@ During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove - the postmaster.pid file (Tom Lane) + the postmaster.pid file (Tom Lane) This avoids race-condition failures if an external script attempts to - start a new postmaster as soon as pg_ctl stop returns. + start a new postmaster as soon as pg_ctl stop returns. @@ -288,7 +288,7 @@ - Do not print a WARNING when an autovacuum worker is already + Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane) @@ -321,30 +321,30 @@ Fix off-by-one error that led to otherwise-harmless warnings - about apparent wraparound in subtrans/multixact truncation + about apparent wraparound in subtrans/multixact truncation (Thomas Munro) - Fix misreporting of CONTINUE and MOVE statement - types in PL/pgSQL's error context messages + Fix misreporting of CONTINUE and MOVE statement + types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane) - Fix some places in PL/Tcl that neglected to check for - failure of malloc() calls (Michael Paquier, Álvaro + Fix some places in PL/Tcl that neglected to check for + failure of malloc() calls (Michael Paquier, Álvaro Herrera) - Improve libpq's handling of out-of-memory conditions + Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas) @@ -352,61 +352,61 @@ Fix memory leaks and missing out-of-memory checks - in ecpg (Michael Paquier) + in ecpg (Michael Paquier) - Fix psql's code for locale-aware formatting of numeric + Fix psql's code for locale-aware formatting of numeric output (Tom Lane) - The formatting code invoked by \pset numericlocale on + The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. It could also mangle already-localized - output from the money data type. + output from the money data type. 
- Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) When dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. @@ -415,23 +415,23 @@ - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. @@ -439,14 +439,14 @@ - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -458,38 +458,38 @@ Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. 
@@ -513,7 +513,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.0.X release series in September 2015. Users are encouraged to update to a newer release branch soon. @@ -544,7 +544,7 @@ With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -555,13 +555,13 @@ Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -587,7 +587,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.0.X release series in September 2015. Users are encouraged to update to a newer release branch soon. @@ -613,12 +613,12 @@ - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. @@ -632,29 +632,29 @@ - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. - Allow libpq to use TLS protocol versions beyond v1 + Allow libpq to use TLS protocol versions beyond v1 (Noah Misch) - For a long time, libpq was coded so that the only SSL + For a long time, libpq was coded so that the only SSL protocol it would allow was TLS v1. Now that newer TLS versions are becoming popular, allow it to negotiate the highest commonly-supported - TLS version with the server. (PostgreSQL servers were + TLS version with the server. (PostgreSQL servers were already capable of such negotiation, so no change is needed on the server side.) This is a back-patch of a change already released in 9.4.0. @@ -681,7 +681,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.0.X release series in September 2015. Users are encouraged to update to a newer release branch soon. @@ -727,7 +727,7 @@ - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. 
In the worst case this might lead to information exposure, due to our @@ -737,7 +737,7 @@ - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -747,15 +747,15 @@ - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -786,7 +786,7 @@ This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -814,7 +814,7 @@ This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -822,7 +822,7 @@ Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -836,14 +836,14 @@ - Avoid cannot GetMultiXactIdMembers() during recovery error + Avoid cannot GetMultiXactIdMembers() during recovery error (Álvaro Herrera) - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -863,13 +863,13 @@ - Cope with unexpected signals in LockBufferForCleanup() + Cope with unexpected signals in LockBufferForCleanup() (Andres Freund) This oversight could result in spurious errors about multiple - backends attempting to wait for pincount 1. + backends attempting to wait for pincount 1. @@ -910,9 +910,9 @@ - ANALYZE executes index expressions many times; if there are + ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to - cancel the ANALYZE before that loop finishes. + cancel the ANALYZE before that loop finishes. @@ -925,20 +925,20 @@ - Recommend setting include_realm to 1 when using + Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost) Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but - it will become the default setting in PostgreSQL 9.5. + it will become the default setting in PostgreSQL 9.5. - Remove code for matching IPv4 pg_hba.conf entries to + Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane) @@ -951,7 +951,7 @@ crashes on some systems, so let's just remove it rather than fix it. 
(Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of - IPv4 pg_hba.conf entries, which does not seem like a good + IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.) @@ -960,14 +960,14 @@ While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the - service too soon; and ensure that pg_ctl will wait for + service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj) - Reduce risk of network deadlock when using libpq's + Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas) @@ -976,25 +976,25 @@ buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM - STDIN.) This worked properly in the normal blocking mode, but not - so much in non-blocking mode. We've modified libpq + STDIN.) This worked properly in the normal blocking mode, but not + so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, - and be sure to call PQconsumeInput() upon read-ready. + and be sure to call PQconsumeInput() upon read-ready. - Fix array handling in ecpg (Michael Meskes) + Fix array handling in ecpg (Michael Meskes) - Fix psql to sanely handle URIs and conninfo strings as - the first parameter to \connect + Fix psql to sanely handle URIs and conninfo strings as + the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera) @@ -1007,37 +1007,37 @@ - Suppress incorrect complaints from psql on some - platforms that it failed to write ~/.psql_history at exit + Suppress incorrect complaints from psql on some + platforms that it failed to write ~/.psql_history at exit (Tom Lane) This misbehavior was caused by a workaround for a bug in very old - (pre-2006) versions of libedit. We fixed it by + (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear - for anyone still using such versions of libedit. - Recommendation: upgrade that library, or use libreadline. + for anyone still using such versions of libedit. + Recommendation: upgrade that library, or use libreadline. - Fix pg_dump's rule for deciding which casts are + Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane) - Fix dumping of views that are just VALUES(...) but have + Fix dumping of views that are just VALUES(...) 
but have column aliases (Tom Lane) - In pg_upgrade, force timeline 1 in the new cluster + In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian) @@ -1049,7 +1049,7 @@ - In pg_upgrade, check for improperly non-connectable + In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian) @@ -1057,28 +1057,28 @@ - In pg_upgrade, quote directory paths - properly in the generated delete_old_cluster script + In pg_upgrade, quote directory paths + properly in the generated delete_old_cluster script (Bruce Momjian) - In pg_upgrade, preserve database-level freezing info + In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian) This oversight could cause missing-clog-file errors for tables within - the postgres and template1 databases. + the postgres and template1 databases. - Run pg_upgrade and pg_resetxlog with + Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem) @@ -1086,7 +1086,7 @@ - Fix slow sorting algorithm in contrib/intarray (Tom Lane) + Fix slow sorting algorithm in contrib/intarray (Tom Lane) @@ -1098,7 +1098,7 @@ - Update time zone data files to tzdata release 2015d + Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT). @@ -1145,15 +1145,15 @@ - Fix buffer overruns in to_char() + Fix buffer overruns in to_char() (Bruce Momjian) - When to_char() processes a numeric formatting template - calling for a large number of digits, PostgreSQL + When to_char() processes a numeric formatting template + calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted - timestamp formatting template, PostgreSQL would write + timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely. @@ -1163,27 +1163,27 @@ - Fix buffer overrun in replacement *printf() functions + Fix buffer overrun in replacement *printf() functions (Tom Lane) - PostgreSQL includes a replacement implementation - of printf and related functions. This code will overrun + PostgreSQL includes a replacement implementation + of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion - specifiers e, E, f, F, - g or G) with requested precision greater than + specifiers e, E, f, F, + g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through - the to_char() SQL function. While that is the only - affected core PostgreSQL functionality, extension + the to_char() SQL function. While that is the only + affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. - This issue primarily affects PostgreSQL on Windows. - PostgreSQL uses the system implementation of these + This issue primarily affects PostgreSQL on Windows. + PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. 
(CVE-2015-0242) @@ -1191,12 +1191,12 @@ - Fix buffer overruns in contrib/pgcrypto + Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch) - Errors in memory size tracking within the pgcrypto + Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. The buffer overrun cases can crash the server, and we have not ruled out the possibility of @@ -1237,7 +1237,7 @@ Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have - SELECT privilege on all columns of the table, this could + SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user. @@ -1263,21 +1263,21 @@ Avoid possible data corruption if ALTER DATABASE SET - TABLESPACE is used to move a database to a new tablespace and then + TABLESPACE is used to move a database to a new tablespace and then shortly later move it back to its original tablespace (Tom Lane) - Avoid corrupting tables when ANALYZE inside a transaction + Avoid corrupting tables when ANALYZE inside a transaction is rolled back (Andres Freund, Tom Lane, Michael Paquier) If the failing transaction had earlier removed the last index, rule, or trigger from the table, the table would be left in a corrupted state - with the relevant pg_class flags not set though they + with the relevant pg_class flags not set though they should be. @@ -1289,22 +1289,22 @@ - In READ COMMITTED mode, queries that lock or update + In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug. - Fix planning of SELECT FOR UPDATE when using a partial + Fix planning of SELECT FOR UPDATE when using a partial index on a child table (Kyotaro Horiguchi) - In READ COMMITTED mode, SELECT FOR UPDATE must - also recheck the partial index's WHERE condition when + In READ COMMITTED mode, SELECT FOR UPDATE must + also recheck the partial index's WHERE condition when rechecking a recently-updated row to see if it still satisfies the - query's WHERE condition. This requirement was missed if the + query's WHERE condition. This requirement was missed if the index belonged to an inheritance child table, so that it was possible to incorrectly return rows that no longer satisfy the query condition. @@ -1312,12 +1312,12 @@ - Fix corner case wherein SELECT FOR UPDATE could return a row + Fix corner case wherein SELECT FOR UPDATE could return a row twice, and possibly miss returning other rows (Tom Lane) - In READ COMMITTED mode, a SELECT FOR UPDATE + In READ COMMITTED mode, a SELECT FOR UPDATE that is scanning an inheritance tree could incorrectly return a row from a prior child table instead of the one it should return from a later child table. 
@@ -1327,7 +1327,7 @@ Reject duplicate column names in the referenced-columns list of - a FOREIGN KEY declaration (David Rowley) + a FOREIGN KEY declaration (David Rowley) @@ -1339,7 +1339,7 @@ - Fix bugs in raising a numeric value to a large integral power + Fix bugs in raising a numeric value to a large integral power (Tom Lane) @@ -1352,19 +1352,19 @@ - In numeric_recv(), truncate away any fractional digits - that would be hidden according to the value's dscale field + In numeric_recv(), truncate away any fractional digits + that would be hidden according to the value's dscale field (Tom Lane) - A numeric value's display scale (dscale) should + A numeric value's display scale (dscale) should never be less than the number of nonzero fractional digits; but apparently there's at least one broken client application that - transmits binary numeric values in which that's true. + transmits binary numeric values in which that's true. This leads to strange behavior since the extra digits are taken into account by arithmetic operations even though they aren't printed. - The least risky fix seems to be to truncate away such hidden + The least risky fix seems to be to truncate away such hidden digits on receipt, so that the value is indeed what it prints as. @@ -1384,7 +1384,7 @@ - Fix bugs in tsquery @> tsquery + Fix bugs in tsquery @> tsquery operator (Heikki Linnakangas) @@ -1415,14 +1415,14 @@ - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -1431,7 +1431,7 @@ Fix planner problems with nested append relations, such as inherited - tables within UNION ALL subqueries (Tom Lane) + tables within UNION ALL subqueries (Tom Lane) @@ -1444,8 +1444,8 @@ - Exempt tables that have per-table cost_limit - and/or cost_delay settings from autovacuum's global cost + Exempt tables that have per-table cost_limit + and/or cost_delay settings from autovacuum's global cost balancing rules (Álvaro Herrera) @@ -1471,7 +1471,7 @@ the target database, if they met the usual thresholds for autovacuuming. This is at best pretty unexpected; at worst it delays response to the wraparound threat. Fix it so that if autovacuum is - turned off, workers only do anti-wraparound vacuums and + turned off, workers only do anti-wraparound vacuums and not any other work. @@ -1491,19 +1491,19 @@ Fix several cases where recovery logic improperly ignored WAL records - for COMMIT/ABORT PREPARED (Heikki Linnakangas) + for COMMIT/ABORT PREPARED (Heikki Linnakangas) The most notable oversight was - that recovery_target_xid could not be used to stop at + that recovery_target_xid could not be used to stop at a two-phase commit. 
- Avoid creating unnecessary .ready marker files for + Avoid creating unnecessary .ready marker files for timeline history files (Fujii Masao) @@ -1511,14 +1511,14 @@ Fix possible null pointer dereference when an empty prepared statement - is used and the log_statement setting is mod - or ddl (Fujii Masao) + is used and the log_statement setting is mod + or ddl (Fujii Masao) - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -1527,7 +1527,7 @@ case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. @@ -1541,32 +1541,32 @@ - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) - Fix processing of repeated dbname parameters - in PQconnectdbParams() (Alex Shulgin) + Fix processing of repeated dbname parameters + in PQconnectdbParams() (Alex Shulgin) Unexpected behavior ensued if the first occurrence - of dbname contained a connection string or URI to be + of dbname contained a connection string or URI to be expanded. - Ensure that libpq reports a suitable error message on + Ensure that libpq reports a suitable error message on unexpected socket EOF (Marko Tiikkaja, Tom Lane) - Depending on kernel behavior, libpq might return an + Depending on kernel behavior, libpq might return an empty error string rather than something useful when the server unexpectedly closed the socket. @@ -1574,14 +1574,14 @@ - Clear any old error message during PQreset() + Clear any old error message during PQreset() (Heikki Linnakangas) - If PQreset() is called repeatedly, and the connection + If PQreset() is called repeatedly, and the connection cannot be re-established, error messages from the failed connection - attempts kept accumulating in the PGconn's error + attempts kept accumulating in the PGconn's error string. @@ -1589,32 +1589,32 @@ Properly handle out-of-memory conditions while parsing connection - options in libpq (Alex Shulgin, Heikki Linnakangas) + options in libpq (Alex Shulgin, Heikki Linnakangas) - Fix array overrun in ecpg's version - of ParseDateTime() (Michael Paquier) + Fix array overrun in ecpg's version + of ParseDateTime() (Michael Paquier) - In initdb, give a clearer error message if a password + In initdb, give a clearer error message if a password file is specified but is empty (Mats Erik Andersson) - Fix psql's \s command to work nicely with + Fix psql's \s command to work nicely with libedit, and add pager support (Stepan Rutz, Tom Lane) - When using libedit rather than readline, \s printed the + When using libedit rather than readline, \s printed the command history in a fairly unreadable encoded format, and on recent libedit versions might fail altogether. Fix that by printing the history ourselves rather than having the library do it. A pleasant @@ -1624,7 +1624,7 @@ This patch also fixes a bug that caused newline encoding to be applied inconsistently when saving the command history with libedit. 
- Multiline history entries written by older psql + Multiline history entries written by older psql versions will be read cleanly with this patch, but perhaps not vice versa, depending on the exact libedit versions involved. @@ -1632,17 +1632,17 @@ - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. @@ -1650,9 +1650,9 @@ - Fix psql's expanded-mode display to work - consistently when using border = 3 - and linestyle = ascii or unicode + Fix psql's expanded-mode display to work + consistently when using border = 3 + and linestyle = ascii or unicode (Stephen Frost) @@ -1666,7 +1666,7 @@ - Fix core dump in pg_dump --binary-upgrade on zero-column + Fix core dump in pg_dump --binary-upgrade on zero-column composite type (Rushabh Lathia) @@ -1674,7 +1674,7 @@ Fix block number checking - in contrib/pageinspect's get_raw_page() + in contrib/pageinspect's get_raw_page() (Tom Lane) @@ -1686,7 +1686,7 @@ - Fix contrib/pgcrypto's pgp_sym_decrypt() + Fix contrib/pgcrypto's pgp_sym_decrypt() to not fail on messages whose length is 6 less than a power of 2 (Marko Tiikkaja) @@ -1695,24 +1695,24 @@ Handle unexpected query results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. - Avoid a possible crash in contrib/xml2's - xslt_process() (Mark Simonetti) + Avoid a possible crash in contrib/xml2's + xslt_process() (Mark Simonetti) - libxslt seems to have an undocumented dependency on + libxslt seems to have an undocumented dependency on the order in which resources are freed; reorder our calls to avoid a crash. @@ -1739,29 +1739,29 @@ With OpenLDAP versions 2.4.24 through 2.4.31, - inclusive, PostgreSQL backends can crash at exit. - Raise a warning during configure based on the + inclusive, PostgreSQL backends can crash at exit. + Raise a warning during configure based on the compile-time OpenLDAP version number, and test the crashing scenario - in the contrib/dblink regression test. + in the contrib/dblink regression test. - In non-MSVC Windows builds, ensure libpq.dll is installed + In non-MSVC Windows builds, ensure libpq.dll is installed with execute permissions (Noah Misch) - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. 
@@ -1773,15 +1773,15 @@ - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. @@ -1793,9 +1793,9 @@ Add CST (China Standard Time) to our lists. - Remove references to ADT as Arabia Daylight Time, an + Remove references to ADT as Arabia Daylight Time, an abbreviation that's been out of use since 2007; therefore, claiming - there is a conflict with Atlantic Daylight Time doesn't seem + there is a conflict with Atlantic Daylight Time doesn't seem especially helpful. Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, and FJST (Fiji); we didn't even have them on the proper side of the date line. @@ -1804,21 +1804,21 @@ - Update time zone data files to tzdata release 2015a. + Update time zone data files to tzdata release 2015a. The IANA timezone database has adopted abbreviations of the form - AxST/AxDT + AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into - our Default timezone abbreviation set. - The Australia abbreviation set now contains only CST, EAST, + our Default timezone abbreviation set. + The Australia abbreviation set now contains only CST, EAST, EST, SAST, SAT, and WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South - Africa Standard Time in the Default abbreviation set. + Africa Standard Time in the Default abbreviation set. @@ -1877,15 +1877,15 @@ - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. - Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -1917,7 +1917,7 @@ Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) @@ -1944,13 +1944,13 @@ This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. 
- Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -1965,7 +1965,7 @@ Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -1978,7 +1978,7 @@ - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -1991,19 +1991,19 @@ This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -2011,7 +2011,7 @@ - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -2023,7 +2023,7 @@ This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. @@ -2031,7 +2031,7 @@ Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -2040,16 +2040,16 @@ the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. @@ -2085,15 +2085,15 @@ - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -2104,17 +2104,17 @@ - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). 
Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -2122,15 +2122,15 @@ - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) - Fix ecpg to do the right thing when an array - of char * is the target for a FETCH statement returning more + Fix ecpg to do the right thing when an array + of char * is the target for a FETCH statement returning more than one row, as well as some other array-handling fixes (Ashutosh Bapat) @@ -2138,20 +2138,20 @@ - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -2159,20 +2159,20 @@ - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. - Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. @@ -2232,7 +2232,7 @@ Avoid race condition in checking transaction commit status during - receipt of a NOTIFY message (Marko Tiikkaja) + receipt of a NOTIFY message (Marko Tiikkaja) @@ -2256,7 +2256,7 @@ - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -2269,17 +2269,17 @@ - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. @@ -2305,26 +2305,26 @@ - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. + entry to syslog(), and perhaps other related problems. - Prevent intermittent could not reserve shared memory region + Prevent intermittent could not reserve shared memory region failures on recent Windows versions (MauMau) - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. 
@@ -2370,19 +2370,19 @@ - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. (CVE-2014-0060) @@ -2395,7 +2395,7 @@ The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -2415,7 +2415,7 @@ If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. @@ -2429,12 +2429,12 @@ - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -2461,7 +2461,7 @@ - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -2473,35 +2473,35 @@ - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). 
(CVE-2014-0066) - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -2516,7 +2516,7 @@ The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -2536,20 +2536,20 @@ was already consistent at the start of replay, thus possibly allowing hot-standby queries before the database was really consistent. Other symptoms such as PANIC: WAL contains references to invalid - pages were also possible. + pages were also possible. Fix improper locking of btree index pages while replaying - a VACUUM operation in hot-standby mode (Andres Freund, + a VACUUM operation in hot-standby mode (Andres Freund, Heikki Linnakangas, Tom Lane) This error could result in PANIC: WAL contains references to - invalid pages failures. + invalid pages failures. @@ -2572,25 +2572,25 @@ Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. - Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -2614,7 +2614,7 @@ - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -2634,26 +2634,26 @@ A previous patch allowed such keywords to be used without quoting in places such as role identifiers; but it missed cases where a - list of role identifiers was permitted, such as DROP ROLE. + list of role identifiers was permitted, such as DROP ROLE. Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) 
(Tom Lane) - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -2661,21 +2661,21 @@ - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -2700,12 +2700,12 @@ - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. @@ -2713,51 +2713,51 @@ - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. - Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -2768,7 +2768,7 @@ - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) @@ -2782,28 +2782,28 @@ - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. 
- Avoid using the deprecated dllwrap tool in Cygwin builds + Avoid using the deprecated dllwrap tool in Cygwin builds (Marco Atzeri) - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -2812,20 +2812,20 @@ the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -2877,13 +2877,13 @@ - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would @@ -2895,18 +2895,18 @@ The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). - Fix initialization of pg_clog and pg_subtrans + Fix initialization of pg_clog and pg_subtrans during hot standby startup (Andres Freund, Heikki Linnakangas) @@ -2932,7 +2932,7 @@ - Truncate pg_multixact contents during WAL replay + Truncate pg_multixact contents during WAL replay (Andres Freund) @@ -2954,8 +2954,8 @@ - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -2972,7 +2972,7 @@ This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. @@ -2990,13 +2990,13 @@ - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. @@ -3010,7 +3010,7 @@ In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. 
This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. @@ -3024,7 +3024,7 @@ - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -3035,10 +3035,10 @@ - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -3048,21 +3048,21 @@ - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. @@ -3114,7 +3114,7 @@ - PostgreSQL case-folds non-ASCII characters only + PostgreSQL case-folds non-ASCII characters only when using a single-byte server encoding. @@ -3122,7 +3122,7 @@ Fix checkpoint memory leak in background writer when wal_level = - hot_standby (Naoya Anzai) + hot_standby (Naoya Anzai) @@ -3135,7 +3135,7 @@ - Fix memory overcommit bug when work_mem is using more + Fix memory overcommit bug when work_mem is using more than 24GB of memory (Stephen Frost) @@ -3160,29 +3160,29 @@ - Previously tests like col IS NOT TRUE and col IS - NOT FALSE did not properly factor in NULL values when estimating + Previously tests like col IS NOT TRUE and col IS + NOT FALSE did not properly factor in NULL values when estimating plan costs. - Prevent pushing down WHERE clauses into unsafe - UNION/INTERSECT subqueries (Tom Lane) + Prevent pushing down WHERE clauses into unsafe + UNION/INTERSECT subqueries (Tom Lane) - Subqueries of a UNION or INTERSECT that + Subqueries of a UNION or INTERSECT that contain set-returning functions or volatile functions in their - SELECT lists could be improperly optimized, leading to + SELECT lists could be improperly optimized, leading to run-time errors or incorrect query results. - Fix rare case of failed to locate grouping columns + Fix rare case of failed to locate grouping columns planner failure (Tom Lane) @@ -3196,37 +3196,37 @@ - Properly record index comments created using UNIQUE - and PRIMARY KEY syntax (Andres Freund) + Properly record index comments created using UNIQUE + and PRIMARY KEY syntax (Andres Freund) - This fixes a parallel pg_restore failure. + This fixes a parallel pg_restore failure. - Fix REINDEX TABLE and REINDEX DATABASE + Fix REINDEX TABLE and REINDEX DATABASE to properly revalidate constraints and mark invalidated indexes as valid (Noah Misch) - REINDEX INDEX has always worked properly. + REINDEX INDEX has always worked properly. 
Fix possible deadlock during concurrent CREATE INDEX - CONCURRENTLY operations (Tom Lane) + CONCURRENTLY operations (Tom Lane) - Fix regexp_matches() handling of zero-length matches + Fix regexp_matches() handling of zero-length matches (Jeevan Chalke) @@ -3250,14 +3250,14 @@ - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) - Allow ALTER DEFAULT PRIVILEGES to operate on schemas + Allow ALTER DEFAULT PRIVILEGES to operate on schemas without requiring CREATE permission (Tom Lane) @@ -3269,16 +3269,16 @@ Specifically, lessen keyword restrictions for role names, language - names, EXPLAIN and COPY options, and - SET values. This allows COPY ... (FORMAT - BINARY) to work as expected; previously BINARY needed + names, EXPLAIN and COPY options, and + SET values. This allows COPY ... (FORMAT + BINARY) to work as expected; previously BINARY needed to be quoted. - Fix pgp_pub_decrypt() so it works for secret keys with + Fix pgp_pub_decrypt() so it works for secret keys with passwords (Marko Kreen) @@ -3292,7 +3292,7 @@ - Ensure that VACUUM ANALYZE still runs the ANALYZE phase + Ensure that VACUUM ANALYZE still runs the ANALYZE phase if its attempt to truncate the file is cancelled due to lock conflicts (Kevin Grittner) @@ -3308,14 +3308,14 @@ Ensure that floating-point data input accepts standard spellings - of infinity on all platforms (Tom Lane) + of infinity on all platforms (Tom Lane) - The C99 standard says that allowable spellings are inf, - +inf, -inf, infinity, - +infinity, and -infinity. Make sure we - recognize these even if the platform's strtod function + The C99 standard says that allowable spellings are inf, + +inf, -inf, infinity, + +infinity, and -infinity. Make sure we + recognize these even if the platform's strtod function doesn't. @@ -3329,7 +3329,7 @@ - Update time zone data files to tzdata release 2013d + Update time zone data files to tzdata release 2013d for DST law changes in Israel, Morocco, Palestine, and Paraguay. Also, historical zone data corrections for Macquarie Island. @@ -3364,7 +3364,7 @@ However, this release corrects several errors in management of GiST indexes. After installing this update, it is advisable to - REINDEX any GiST indexes that meet one or more of the + REINDEX any GiST indexes that meet one or more of the conditions described below. @@ -3388,7 +3388,7 @@ A connection request containing a database name that begins with - - could be crafted to damage or destroy + - could be crafted to damage or destroy files within the server's data directory, even if the request is eventually rejected. (CVE-2013-1899) @@ -3402,41 +3402,41 @@ This avoids a scenario wherein random numbers generated by - contrib/pgcrypto functions might be relatively easy for + contrib/pgcrypto functions might be relatively easy for another database user to guess. The risk is only significant when - the postmaster is configured with ssl = on + the postmaster is configured with ssl = on but most connections don't use SSL encryption. 
(CVE-2013-1900) - Fix GiST indexes to not use fuzzy geometric comparisons when + Fix GiST indexes to not use fuzzy geometric comparisons when it's not appropriate to do so (Alexander Korotkov) - The core geometric types perform comparisons using fuzzy - equality, but gist_box_same must do exact comparisons, + The core geometric types perform comparisons using fuzzy + equality, but gist_box_same must do exact comparisons, else GiST indexes using it might become inconsistent. After installing - this update, users should REINDEX any GiST indexes on - box, polygon, circle, or point - columns, since all of these use gist_box_same. + this update, users should REINDEX any GiST indexes on + box, polygon, circle, or point + columns, since all of these use gist_box_same. Fix erroneous range-union and penalty logic in GiST indexes that use - contrib/btree_gist for variable-width data types, that is - text, bytea, bit, and numeric + contrib/btree_gist for variable-width data types, that is + text, bytea, bit, and numeric columns (Tom Lane) These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in useless - index bloat. Users are advised to REINDEX such indexes + index bloat. Users are advised to REINDEX such indexes after installing this update. @@ -3451,21 +3451,21 @@ These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in indexes that are unnecessarily inefficient to search. Users are advised to - REINDEX multi-column GiST indexes after installing this + REINDEX multi-column GiST indexes after installing this update. - Fix gist_point_consistent + Fix gist_point_consistent to handle fuzziness consistently (Alexander Korotkov) - Index scans on GiST indexes on point columns would sometimes + Index scans on GiST indexes on point columns would sometimes yield results different from a sequential scan, because - gist_point_consistent disagreed with the underlying + gist_point_consistent disagreed with the underlying operator code about whether to do comparisons exactly or fuzzily. @@ -3476,21 +3476,21 @@ - This bug could result in incorrect local pin count errors + This bug could result in incorrect local pin count errors during replay, making recovery impossible. - Fix race condition in DELETE RETURNING (Tom Lane) + Fix race condition in DELETE RETURNING (Tom Lane) - Under the right circumstances, DELETE RETURNING could + Under the right circumstances, DELETE RETURNING could attempt to fetch data from a shared buffer that the current process no longer has any pin on. If some other process changed the buffer - meanwhile, this would lead to garbage RETURNING output, or + meanwhile, this would lead to garbage RETURNING output, or even a crash. @@ -3511,28 +3511,28 @@ - Fix to_char() to use ASCII-only case-folding rules where + Fix to_char() to use ASCII-only case-folding rules where appropriate (Tom Lane) This fixes misbehavior of some template patterns that should be - locale-independent, but mishandled I and - i in Turkish locales. + locale-independent, but mishandled I and + i in Turkish locales. 
- Fix unwanted rejection of timestamp 1999-12-31 24:00:00 + Fix unwanted rejection of timestamp 1999-12-31 24:00:00 (Tom Lane) - Fix logic error when a single transaction does UNLISTEN - then LISTEN (Tom Lane) + Fix logic error when a single transaction does UNLISTEN + then LISTEN (Tom Lane) @@ -3543,7 +3543,7 @@ - Remove useless picksplit doesn't support secondary split log + Remove useless picksplit doesn't support secondary split log messages (Josh Hansen, Tom Lane) @@ -3564,29 +3564,29 @@ - Eliminate memory leaks in PL/Perl's spi_prepare() function + Eliminate memory leaks in PL/Perl's spi_prepare() function (Alex Hunsaker, Tom Lane) - Fix pg_dumpall to handle database names containing - = correctly (Heikki Linnakangas) + Fix pg_dumpall to handle database names containing + = correctly (Heikki Linnakangas) - Avoid crash in pg_dump when an incorrect connection + Avoid crash in pg_dump when an incorrect connection string is given (Heikki Linnakangas) - Ignore invalid indexes in pg_dump and - pg_upgrade (Michael Paquier, Bruce Momjian) + Ignore invalid indexes in pg_dump and + pg_upgrade (Michael Paquier, Bruce Momjian) @@ -3595,26 +3595,26 @@ a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which - pg_dump wouldn't be expected to dump anyway. - pg_upgrade now also skips invalid indexes rather than + pg_dump wouldn't be expected to dump anyway. + pg_upgrade now also skips invalid indexes rather than failing. - Fix contrib/pg_trgm's similarity() function + Fix contrib/pg_trgm's similarity() function to return zero for trigram-less strings (Tom Lane) - Previously it returned NaN due to internal division by zero. + Previously it returned NaN due to internal division by zero. - Update time zone data files to tzdata release 2013b + Update time zone data files to tzdata release 2013b for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas. Also, historical zone data corrections for numerous places. @@ -3622,12 +3622,12 @@ Also, update the time zone abbreviation files for recent changes in - Russia and elsewhere: CHOT, GET, - IRKT, KGT, KRAT, MAGT, - MAWT, MSK, NOVT, OMST, - TKT, VLAT, WST, YAKT, - YEKT now follow their current meanings, and - VOLT (Europe/Volgograd) and MIST + Russia and elsewhere: CHOT, GET, + IRKT, KGT, KRAT, MAGT, + MAWT, MSK, NOVT, OMST, + TKT, VLAT, WST, YAKT, + YEKT now follow their current meanings, and + VOLT (Europe/Volgograd) and MIST (Antarctica/Macquarie) are added to the default abbreviations list. @@ -3672,7 +3672,7 @@ - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -3742,19 +3742,19 @@ Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. 
- Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -3766,13 +3766,13 @@ Fix error in vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age implementation (Andres Freund) In installations that have existed for more than vacuum_freeze_min_age + linkend="guc-vacuum-freeze-min-age">vacuum_freeze_min_age transactions, this mistake prevented autovacuum from using partial-table scans, so that a full-table scan would always happen instead. @@ -3780,13 +3780,13 @@ - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -3799,7 +3799,7 @@ - Reject out-of-range dates in to_date() (Hitoshi Harada) + Reject out-of-range dates in to_date() (Hitoshi Harada) @@ -3810,55 +3810,55 @@ - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? command when not connected to a database (Meng Qingzhong) - Fix pg_upgrade to deal with invalid indexes safely + Fix pg_upgrade to deal with invalid indexes safely (Bruce Momjian) - Fix one-byte buffer overrun in libpq's - PQprintTuples (Xi Wang) + Fix one-byte buffer overrun in libpq's + PQprintTuples (Xi Wang) This ancient function is not used anywhere by - PostgreSQL itself, but it might still be used by some + PostgreSQL itself, but it might still be used by some client code. - Make ecpglib use translated messages properly + Make ecpglib use translated messages properly (Chen Huajun) - Properly install ecpg_compat and - pgtypes libraries on MSVC (Jiang Guiqing) + Properly install ecpg_compat and + pgtypes libraries on MSVC (Jiang Guiqing) - Include our version of isinf() in - libecpg if it's not provided by the system + Include our version of isinf() in + libecpg if it's not provided by the system (Jiang Guiqing) @@ -3878,15 +3878,15 @@ - Make pgxs build executables with the right - .exe suffix when cross-compiling for Windows + Make pgxs build executables with the right + .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi) - Add new timezone abbreviation FET (Tom Lane) + Add new timezone abbreviation FET (Tom Lane) @@ -3935,13 +3935,13 @@ Fix multiple bugs associated with CREATE INDEX - CONCURRENTLY (Andres Freund, Tom Lane) + CONCURRENTLY (Andres Freund, Tom Lane) - Fix CREATE INDEX CONCURRENTLY to use + Fix CREATE INDEX CONCURRENTLY to use in-place updates when changing the state of an index's - pg_index row. This prevents race conditions that could + pg_index row. This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus resulting in corrupt concurrently-created indexes. @@ -3949,8 +3949,8 @@ Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX - CONCURRENTLY command. The most important of these is - VACUUM, because an auto-vacuum could easily be launched + CONCURRENTLY command. The most important of these is + VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index. 
@@ -3987,13 +3987,13 @@ This oversight could prevent subsequent execution of certain - operations such as CREATE INDEX CONCURRENTLY. + operations such as CREATE INDEX CONCURRENTLY. - Avoid bogus out-of-sequence timeline ID errors in standby + Avoid bogus out-of-sequence timeline ID errors in standby mode (Heikki Linnakangas) @@ -4026,8 +4026,8 @@ The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. @@ -4045,10 +4045,10 @@ - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -4056,7 +4056,7 @@ Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) @@ -4069,7 +4069,7 @@ - Fix ALTER COLUMN TYPE to handle inherited check + Fix ALTER COLUMN TYPE to handle inherited check constraints properly (Pavan Deolasee) @@ -4081,14 +4081,14 @@ - Fix REASSIGN OWNED to handle grants on tablespaces + Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera) - Ignore incorrect pg_attribute entries for system + Ignore incorrect pg_attribute entries for system columns for views (Tom Lane) @@ -4102,7 +4102,7 @@ - Fix rule printing to dump INSERT INTO table + Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane) @@ -4110,7 +4110,7 @@ Guard against stack overflow when there are too many - UNION/INTERSECT/EXCEPT clauses + UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane) @@ -4132,14 +4132,14 @@ Fix failure to advance XID epoch if XID wraparound happens during a - checkpoint and wal_level is hot_standby + checkpoint and wal_level is hot_standby (Tom Lane, Andres Freund) While this mistake had no particular impact on PostgreSQL itself, it was bad for - applications that rely on txid_current() and related + applications that rely on txid_current() and related functions: the TXID value would appear to go backwards. @@ -4153,7 +4153,7 @@ Formerly, this would result in something quite unhelpful, such as - Non-recoverable failure in name resolution. + Non-recoverable failure in name resolution. @@ -4166,8 +4166,8 @@ - Make pg_ctl more robust about reading the - postmaster.pid file (Heikki Linnakangas) + Make pg_ctl more robust about reading the + postmaster.pid file (Heikki Linnakangas) @@ -4177,33 +4177,33 @@ - Fix possible crash in psql if incorrectly-encoded data - is presented and the client_encoding setting is a + Fix possible crash in psql if incorrectly-encoded data + is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing) - Fix bugs in the restore.sql script emitted by - pg_dump in tar output format (Tom Lane) + Fix bugs in the restore.sql script emitted by + pg_dump in tar output format (Tom Lane) The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring - data in mode as well as the regular COPY mode. 
- Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. This patch updates previous branches so that they will accept both the @@ -4214,48 +4214,48 @@ - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Fix ecpg's ecpg_get_data function to + Fix ecpg's ecpg_get_data function to handle arrays properly (Michael Meskes) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -4266,7 +4266,7 @@ - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -4318,7 +4318,7 @@ These errors could result in wrong answers from queries that scan the - same WITH subquery multiple times. + same WITH subquery multiple times. @@ -4341,10 +4341,10 @@ - If we revoke a grant option from some role X, but - X still holds that option via a grant from someone + If we revoke a grant option from some role X, but + X still holds that option via a grant from someone else, we should not recursively revoke the corresponding privilege - from role(s) Y that X had granted it + from role(s) Y that X had granted it to. @@ -4358,12 +4358,12 @@ - Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) + Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) - Perl resets the process's SIGFPE handler to - SIG_IGN, which could result in crashes later on. Restore + Perl resets the process's SIGFPE handler to + SIG_IGN, which could result in crashes later on. Restore the normal Postgres signal handler after initializing PL/Perl. @@ -4382,7 +4382,7 @@ Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. @@ -4390,26 +4390,26 @@ - Fix pg_upgrade's handling of line endings on Windows + Fix pg_upgrade's handling of line endings on Windows (Andrew Dunstan) - Previously, pg_upgrade might add or remove carriage + Previously, pg_upgrade might add or remove carriage returns in places such as function bodies. 
- On Windows, make pg_upgrade use backslash path + On Windows, make pg_upgrade use backslash path separators in the scripts it emits (Andrew Dunstan) - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -4459,7 +4459,7 @@ - xml_parse() would attempt to fetch external files or + xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. While the external data @@ -4472,22 +4472,22 @@ - Prevent access to external files/URLs via contrib/xml2's - xslt_process() (Peter Eisentraut) + Prevent access to external files/URLs via contrib/xml2's + xslt_process() (Peter Eisentraut) - libxslt offers the ability to read and write both + libxslt offers the ability to read and write both files and URLs through stylesheet commands, thus allowing unprivileged database users to both read and write data with the privileges of the database server. Disable that through proper use - of libxslt's security options. (CVE-2012-3488) + of libxslt's security options. (CVE-2012-3488) - Also, remove xslt_process()'s ability to fetch documents + Also, remove xslt_process()'s ability to fetch documents and stylesheets from external files/URLs. While this was a - documented feature, it was long regarded as a bad idea. + documented feature, it was long regarded as a bad idea. The fix for CVE-2012-3489 broke that capability, and rather than expend effort on trying to fix it, we're just going to summarily remove it. @@ -4515,21 +4515,21 @@ - If ALTER SEQUENCE was executed on a freshly created or - reset sequence, and then precisely one nextval() call + If ALTER SEQUENCE was executed on a freshly created or + reset sequence, and then precisely one nextval() call was made on it, and then the server crashed, WAL replay would restore the sequence to a state in which it appeared that no - nextval() had been done, thus allowing the first + nextval() had been done, thus allowing the first sequence value to be returned again by the next - nextval() call. In particular this could manifest for - serial columns, since creation of a serial column's sequence - includes an ALTER SEQUENCE OWNED BY step. + nextval() call. In particular this could manifest for + serial columns, since creation of a serial column's sequence + includes an ALTER SEQUENCE OWNED BY step. - Fix txid_current() to report the correct epoch when not + Fix txid_current() to report the correct epoch when not in hot standby (Heikki Linnakangas) @@ -4546,14 +4546,14 @@ This mistake led to failures reported as out-of-order XID - insertion in KnownAssignedXids. + insertion in KnownAssignedXids. - Ensure the backup_label file is fsync'd after - pg_start_backup() (Dave Kerr) + Ensure the backup_label file is fsync'd after + pg_start_backup() (Dave Kerr) @@ -4564,7 +4564,7 @@ WAL sender background processes neglected to establish a - SIGALRM handler, meaning they would wait forever in + SIGALRM handler, meaning they would wait forever in some corner cases where a timeout ought to happen. @@ -4583,15 +4583,15 @@ - Fix LISTEN/NOTIFY to cope better with I/O + Fix LISTEN/NOTIFY to cope better with I/O problems, such as out of disk space (Tom Lane) After a write failure, all subsequent attempts to send more - NOTIFY messages would fail with messages like - Could not read from file "pg_notify/nnnn" at - offset nnnnn: Success. 
+ NOTIFY messages would fail with messages like + Could not read from file "pg_notify/nnnn" at + offset nnnnn: Success. @@ -4604,7 +4604,7 @@ The original coding could allow inconsistent behavior in some cases; in particular, an autovacuum could get canceled after less than - deadlock_timeout grace period. + deadlock_timeout grace period. @@ -4616,15 +4616,15 @@ - Fix log collector so that log_truncate_on_rotation works + Fix log collector so that log_truncate_on_rotation works during the very first log rotation after server start (Tom Lane) - Fix WITH attached to a nested set operation - (UNION/INTERSECT/EXCEPT) + Fix WITH attached to a nested set operation + (UNION/INTERSECT/EXCEPT) (Tom Lane) @@ -4632,24 +4632,24 @@ Ensure that a whole-row reference to a subquery doesn't include any - extra GROUP BY or ORDER BY columns (Tom Lane) + extra GROUP BY or ORDER BY columns (Tom Lane) - Disallow copying whole-row references in CHECK - constraints and index definitions during CREATE TABLE + Disallow copying whole-row references in CHECK + constraints and index definitions during CREATE TABLE (Tom Lane) - This situation can arise in CREATE TABLE with - LIKE or INHERITS. The copied whole-row + This situation can arise in CREATE TABLE with + LIKE or INHERITS. The copied whole-row variable was incorrectly labeled with the row type of the original table not the new one. Rejecting the case seems reasonable for - LIKE, since the row types might well diverge later. For - INHERITS we should ideally allow it, with an implicit + LIKE, since the row types might well diverge later. For + INHERITS we should ideally allow it, with an implicit coercion to the parent table's row type; but that will require more work than seems safe to back-patch. @@ -4657,7 +4657,7 @@ - Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki + Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki Linnakangas, Tom Lane) @@ -4669,7 +4669,7 @@ The code could get confused by quantified parenthesized - subexpressions, such as ^(foo)?bar. This would lead to + subexpressions, such as ^(foo)?bar. This would lead to incorrect index optimization of searches for such patterns. @@ -4677,9 +4677,9 @@ Fix bugs with parsing signed - hh:mm and - hh:mm:ss - fields in interval constants (Amit Kapila, Tom Lane) + hh:mm and + hh:mm:ss + fields in interval constants (Amit Kapila, Tom Lane) @@ -4708,14 +4708,14 @@ - Report errors properly in contrib/xml2's - xslt_process() (Tom Lane) + Report errors properly in contrib/xml2's + xslt_process() (Tom Lane) - Update time zone data files to tzdata release 2012e + Update time zone data files to tzdata release 2012e for DST law changes in Morocco and Tokelau @@ -4761,12 +4761,12 @@ Fix incorrect password transformation in - contrib/pgcrypto's DES crypt() function + contrib/pgcrypto's DES crypt() function (Solar Designer) - If a password string contained the byte value 0x80, the + If a password string contained the byte value 0x80, the remainder of the password was ignored, causing the password to be much weaker than it appeared. With this fix, the rest of the string is properly included in the DES hash. 
Any stored password values that are @@ -4777,7 +4777,7 @@ - Ignore SECURITY DEFINER and SET attributes for + Ignore SECURITY DEFINER and SET attributes for a procedural language's call handler (Tom Lane) @@ -4789,7 +4789,7 @@ - Allow numeric timezone offsets in timestamp input to be up to + Allow numeric timezone offsets in timestamp input to be up to 16 hours away from UTC (Tom Lane) @@ -4815,7 +4815,7 @@ - Fix text to name and char to name + Fix text to name and char to name casts to perform string truncation correctly in multibyte encodings (Karl Schnaitter) @@ -4823,13 +4823,13 @@ - Fix memory copying bug in to_tsquery() (Heikki Linnakangas) + Fix memory copying bug in to_tsquery() (Heikki Linnakangas) - Ensure txid_current() reports the correct epoch when + Ensure txid_current() reports the correct epoch when executed in hot standby (Simon Riggs) @@ -4844,7 +4844,7 @@ This bug concerns sub-SELECTs that reference variables coming from the nullable side of an outer join of the surrounding query. In 9.1, queries affected by this bug would fail with ERROR: - Upper-level PlaceHolderVar found where not expected. But in 9.0 and + Upper-level PlaceHolderVar found where not expected. But in 9.0 and 8.4, you'd silently get possibly-wrong answers, since the value transmitted into the subquery wouldn't go to null when it should. @@ -4852,13 +4852,13 @@ - Fix slow session startup when pg_attribute is very large + Fix slow session startup when pg_attribute is very large (Tom Lane) - If pg_attribute exceeds one-fourth of - shared_buffers, cache rebuilding code that is sometimes + If pg_attribute exceeds one-fourth of + shared_buffers, cache rebuilding code that is sometimes needed during session start would trigger the synchronized-scan logic, causing it to take many times longer than normal. The problem was particularly acute if many new sessions were starting at once. @@ -4879,8 +4879,8 @@ - Ensure the Windows implementation of PGSemaphoreLock() - clears ImmediateInterruptOK before returning (Tom Lane) + Ensure the Windows implementation of PGSemaphoreLock() + clears ImmediateInterruptOK before returning (Tom Lane) @@ -4907,12 +4907,12 @@ - Fix COPY FROM to properly handle null marker strings that + Fix COPY FROM to properly handle null marker strings that correspond to invalid encoding (Tom Lane) - A null marker string such as E'\\0' should work, and did + A null marker string such as E'\\0' should work, and did work in the past, but the case got broken in 8.4. @@ -4925,7 +4925,7 @@ Previously, infinite recursion in a function invoked by - auto-ANALYZE could crash worker processes. + auto-ANALYZE could crash worker processes. 
@@ -4944,7 +4944,7 @@ Fix logging collector to ensure it will restart file rotation - after receiving SIGHUP (Tom Lane) + after receiving SIGHUP (Tom Lane) @@ -4957,33 +4957,33 @@ - Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe + Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe Conway) - Fix PL/pgSQL's GET DIAGNOSTICS command when the target + Fix PL/pgSQL's GET DIAGNOSTICS command when the target is the function's first variable (Tom Lane) - Fix potential access off the end of memory in psql's - expanded display (\x) mode (Peter Eisentraut) + Fix potential access off the end of memory in psql's + expanded display (\x) mode (Peter Eisentraut) - Fix several performance problems in pg_dump when + Fix several performance problems in pg_dump when the database contains many objects (Jeff Janes, Tom Lane) - pg_dump could get very slow if the database contained + pg_dump could get very slow if the database contained many schemas, or if many objects are in dependency loops, or if there are many owned sequences. @@ -4991,7 +4991,7 @@ - Fix pg_upgrade for the case that a database stored in a + Fix pg_upgrade for the case that a database stored in a non-default tablespace contains a table in the cluster's default tablespace (Bruce Momjian) @@ -4999,41 +4999,41 @@ - In ecpg, fix rare memory leaks and possible overwrite - of one byte after the sqlca_t structure (Peter Eisentraut) + In ecpg, fix rare memory leaks and possible overwrite + of one byte after the sqlca_t structure (Peter Eisentraut) - Fix contrib/dblink's dblink_exec() to not leak + Fix contrib/dblink's dblink_exec() to not leak temporary database connections upon error (Tom Lane) - Fix contrib/dblink to report the correct connection name in + Fix contrib/dblink to report the correct connection name in error messages (Kyotaro Horiguchi) - Fix contrib/vacuumlo to use multiple transactions when + Fix contrib/vacuumlo to use multiple transactions when dropping many large objects (Tim Lewis, Robert Haas, Tom Lane) - This change avoids exceeding max_locks_per_transaction when + This change avoids exceeding max_locks_per_transaction when many objects need to be dropped. The behavior can be adjusted with the - new -l (limit) option. + new -l (limit) option. - Update time zone data files to tzdata release 2012c + Update time zone data files to tzdata release 2012c for DST law changes in Antarctica, Armenia, Chile, Cuba, Falkland Islands, Gaza, Haiti, Hebron, Morocco, Syria, and Tokelau Islands; also historical corrections for Canada. @@ -5081,14 +5081,14 @@ Require execute permission on the trigger function for - CREATE TRIGGER (Robert Haas) + CREATE TRIGGER (Robert Haas) This missing check could allow another user to execute a trigger function with forged input data, by installing it on a table he owns. This is only of significance for trigger functions marked - SECURITY DEFINER, since otherwise trigger functions run + SECURITY DEFINER, since otherwise trigger functions run as the table owner anyway. (CVE-2012-0866) @@ -5100,7 +5100,7 @@ - Both libpq and the server truncated the common name + Both libpq and the server truncated the common name extracted from an SSL certificate at 32 bytes. 
Normally this would cause nothing worse than an unexpected verification failure, but there are some rather-implausible scenarios in which it might allow one @@ -5115,12 +5115,12 @@ - Convert newlines to spaces in names written in pg_dump + Convert newlines to spaces in names written in pg_dump comments (Robert Haas) - pg_dump was incautious about sanitizing object names + pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL @@ -5136,10 +5136,10 @@ An index page split caused by an insertion could sometimes cause a - concurrently-running VACUUM to miss removing index entries + concurrently-running VACUUM to miss removing index entries that it should remove. After the corresponding table rows are removed, the dangling index entries would cause errors (such as could not - read block N in file ...) or worse, silently wrong query results + read block N in file ...) or worse, silently wrong query results after unrelated rows are re-inserted at the now-free table locations. This bug has been present since release 8.2, but occurs so infrequently that it was not diagnosed until now. If you have reason to suspect @@ -5158,7 +5158,7 @@ that the contents were transiently invalid. In hot standby mode this can result in a query that's executing in parallel seeing garbage data. Various symptoms could result from that, but the most common one seems - to be invalid memory alloc request size. + to be invalid memory alloc request size. @@ -5176,13 +5176,13 @@ - Fix CLUSTER/VACUUM FULL handling of toast + Fix CLUSTER/VACUUM FULL handling of toast values owned by recently-updated rows (Tom Lane) This oversight could lead to duplicate key value violates unique - constraint errors being reported against the toast table's index + constraint errors being reported against the toast table's index during one of these commands. @@ -5204,11 +5204,11 @@ Support foreign data wrappers and foreign servers in - REASSIGN OWNED (Alvaro Herrera) + REASSIGN OWNED (Alvaro Herrera) - This command failed with unexpected classid errors if + This command failed with unexpected classid errors if it needed to change the ownership of any such objects. @@ -5216,16 +5216,16 @@ Allow non-existent values for some settings in ALTER - USER/DATABASE SET (Heikki Linnakangas) + USER/DATABASE SET (Heikki Linnakangas) - Allow default_text_search_config, - default_tablespace, and temp_tablespaces to be + Allow default_text_search_config, + default_tablespace, and temp_tablespaces to be set to names that are not known. This is because they might be known in another database where the setting is intended to be used, or for the tablespace cases because the tablespace might not be created yet. The - same issue was previously recognized for search_path, and + same issue was previously recognized for search_path, and these settings now act like that one. @@ -5249,7 +5249,7 @@ Recover from errors occurring during WAL replay of DROP - TABLESPACE (Tom Lane) + TABLESPACE (Tom Lane) @@ -5271,7 +5271,7 @@ Sometimes a lock would be logged as being held by transaction - zero. This is at least known to produce assertion failures on + zero. This is at least known to produce assertion failures on slave servers, and might be the cause of more serious problems. 
@@ -5293,7 +5293,7 @@ - Prevent emitting misleading consistent recovery state reached + Prevent emitting misleading consistent recovery state reached log message at the beginning of crash recovery (Heikki Linnakangas) @@ -5301,7 +5301,7 @@ Fix initial value of - pg_stat_replication.replay_location + pg_stat_replication.replay_location (Fujii Masao) @@ -5313,7 +5313,7 @@ - Fix regular expression back-references with * attached + Fix regular expression back-references with * attached (Tom Lane) @@ -5327,18 +5327,18 @@ A similar problem still afflicts back-references that are embedded in a larger quantified expression, rather than being the immediate subject of the quantifier. This will be addressed in a future - PostgreSQL release. + PostgreSQL release. Fix recently-introduced memory leak in processing of - inet/cidr values (Heikki Linnakangas) + inet/cidr values (Heikki Linnakangas) - A patch in the December 2011 releases of PostgreSQL + A patch in the December 2011 releases of PostgreSQL caused memory leakage in these operations, which could be significant in scenarios such as building a btree index on such a column. @@ -5346,8 +5346,8 @@ - Fix dangling pointer after CREATE TABLE AS/SELECT - INTO in a SQL-language function (Tom Lane) + Fix dangling pointer after CREATE TABLE AS/SELECT + INTO in a SQL-language function (Tom Lane) @@ -5381,32 +5381,32 @@ - Improve pg_dump's handling of inherited table columns + Improve pg_dump's handling of inherited table columns (Tom Lane) - pg_dump mishandled situations where a child column has + pg_dump mishandled situations where a child column has a different default expression than its parent column. If the default is textually identical to the parent's default, but not actually the same (for instance, because of schema search path differences) it would not be recognized as different, so that after dump and restore the child would be allowed to inherit the parent's default. Child columns - that are NOT NULL where their parent is not could also be + that are NOT NULL where their parent is not could also be restored subtly incorrectly. - Fix pg_restore's direct-to-database mode for + Fix pg_restore's direct-to-database mode for INSERT-style table data (Tom Lane) Direct-to-database restores from archive files made with - - Fix trigger WHEN conditions when both BEFORE and - AFTER triggers exist (Tom Lane) + Fix trigger WHEN conditions when both BEFORE and + AFTER triggers exist (Tom Lane) - Evaluation of WHEN conditions for AFTER ROW - UPDATE triggers could crash if there had been a BEFORE - ROW trigger fired for the same update. + Evaluation of WHEN conditions for AFTER ROW + UPDATE triggers could crash if there had been a BEFORE + ROW trigger fired for the same update. @@ -6202,7 +6202,7 @@ - Allow nested EXISTS queries to be optimized properly (Tom + Allow nested EXISTS queries to be optimized properly (Tom Lane) @@ -6222,19 +6222,19 @@ - Fix EXPLAIN to handle gating Result nodes within + Fix EXPLAIN to handle gating Result nodes within inner-indexscan subplans (Tom Lane) - The usual symptom of this oversight was bogus varno errors. + The usual symptom of this oversight was bogus varno errors. 
- Fix btree preprocessing of indexedcol IS - NULL conditions (Dean Rasheed) + Fix btree preprocessing of indexedcol IS + NULL conditions (Dean Rasheed) @@ -6257,13 +6257,13 @@ - Fix dump bug for VALUES in a view (Tom Lane) + Fix dump bug for VALUES in a view (Tom Lane) - Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) + Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) @@ -6273,8 +6273,8 @@ - Fix VACUUM so that it always updates - pg_class.reltuples/relpages (Tom + Fix VACUUM so that it always updates + pg_class.reltuples/relpages (Tom Lane) @@ -6293,7 +6293,7 @@ - Fix cases where CLUSTER might attempt to access + Fix cases where CLUSTER might attempt to access already-removed TOAST data (Tom Lane) @@ -6308,7 +6308,7 @@ Fix portability bugs in use of credentials control messages for - peer authentication (Tom Lane) + peer authentication (Tom Lane) @@ -6320,20 +6320,20 @@ The typical symptom of this problem was The function requested is - not supported errors during SSPI login. + not supported errors during SSPI login. Fix failure when adding a new variable of a custom variable class to - postgresql.conf (Tom Lane) + postgresql.conf (Tom Lane) - Throw an error if pg_hba.conf contains hostssl + Throw an error if pg_hba.conf contains hostssl but SSL is disabled (Tom Lane) @@ -6345,19 +6345,19 @@ - Fix failure when DROP OWNED BY attempts to remove default + Fix failure when DROP OWNED BY attempts to remove default privileges on sequences (Shigeru Hanada) - Fix typo in pg_srand48 seed initialization (Andres Freund) + Fix typo in pg_srand48 seed initialization (Andres Freund) This led to failure to use all bits of the provided seed. This function - is not used on most platforms (only those without srandom), + is not used on most platforms (only those without srandom), and the potential security exposure from a less-random-than-expected seed seems minimal in any case. @@ -6365,25 +6365,25 @@ - Avoid integer overflow when the sum of LIMIT and - OFFSET values exceeds 2^63 (Heikki Linnakangas) + Avoid integer overflow when the sum of LIMIT and + OFFSET values exceeds 2^63 (Heikki Linnakangas) - Add overflow checks to int4 and int8 versions of - generate_series() (Robert Haas) + Add overflow checks to int4 and int8 versions of + generate_series() (Robert Haas) - Fix trailing-zero removal in to_char() (Marti Raudsepp) + Fix trailing-zero removal in to_char() (Marti Raudsepp) - In a format with FM and no digit positions + In a format with FM and no digit positions after the decimal point, zeroes to the left of the decimal point could be removed incorrectly. @@ -6391,7 +6391,7 @@ - Fix pg_size_pretty() to avoid overflow for inputs close to + Fix pg_size_pretty() to avoid overflow for inputs close to 2^63 (Tom Lane) @@ -6409,19 +6409,19 @@ - Correctly handle quotes in locale names during initdb + Correctly handle quotes in locale names during initdb (Heikki Linnakangas) The case can arise with some Windows locales, such as People's - Republic of China. + Republic of China. - In pg_upgrade, avoid dumping orphaned temporary tables + In pg_upgrade, avoid dumping orphaned temporary tables (Bruce Momjian) @@ -6433,54 +6433,54 @@ - Fix pg_upgrade to preserve toast tables' relfrozenxids + Fix pg_upgrade to preserve toast tables' relfrozenxids during an upgrade from 8.3 (Bruce Momjian) - Failure to do this could lead to pg_clog files being + Failure to do this could lead to pg_clog files being removed too soon after the upgrade. 
- In pg_upgrade, fix the -l (log) option to + In pg_upgrade, fix the -l (log) option to work on Windows (Bruce Momjian) - In pg_ctl, support silent mode for service registrations + In pg_ctl, support silent mode for service registrations on Windows (MauMau) - Fix psql's counting of script file line numbers during - COPY from a different file (Tom Lane) + Fix psql's counting of script file line numbers during + COPY from a different file (Tom Lane) - Fix pg_restore's direct-to-database mode for - standard_conforming_strings (Tom Lane) + Fix pg_restore's direct-to-database mode for + standard_conforming_strings (Tom Lane) - pg_restore could emit incorrect commands when restoring + pg_restore could emit incorrect commands when restoring directly to a database server from an archive file that had been made - with standard_conforming_strings set to on. + with standard_conforming_strings set to on. Be more user-friendly about unsupported cases for parallel - pg_restore (Tom Lane) + pg_restore (Tom Lane) @@ -6491,14 +6491,14 @@ - Fix write-past-buffer-end and memory leak in libpq's + Fix write-past-buffer-end and memory leak in libpq's LDAP service lookup code (Albe Laurenz) - In libpq, avoid failures when using nonblocking I/O + In libpq, avoid failures when using nonblocking I/O and an SSL connection (Martin Pihlak, Tom Lane) @@ -6510,36 +6510,36 @@ - In particular, the response to a server report of fork() + In particular, the response to a server report of fork() failure during SSL connection startup is now saner. - Improve libpq's error reporting for SSL failures (Tom + Improve libpq's error reporting for SSL failures (Tom Lane) - Fix PQsetvalue() to avoid possible crash when adding a new - tuple to a PGresult originally obtained from a server + Fix PQsetvalue() to avoid possible crash when adding a new + tuple to a PGresult originally obtained from a server query (Andrew Chernow) - Make ecpglib write double values with 15 digits + Make ecpglib write double values with 15 digits precision (Akira Kurosawa) - In ecpglib, be sure LC_NUMERIC setting is + In ecpglib, be sure LC_NUMERIC setting is restored after an error (Michael Meskes) @@ -6551,7 +6551,7 @@ - contrib/pg_crypto's blowfish encryption code could give + contrib/pg_crypto's blowfish encryption code could give wrong results on platforms where char is signed (which is most), leading to encrypted passwords being weaker than they should be. @@ -6559,13 +6559,13 @@ - Fix memory leak in contrib/seg (Heikki Linnakangas) + Fix memory leak in contrib/seg (Heikki Linnakangas) - Fix pgstatindex() to give consistent results for empty + Fix pgstatindex() to give consistent results for empty indexes (Tom Lane) @@ -6585,7 +6585,7 @@ - Update time zone data files to tzdata release 2011i + Update time zone data files to tzdata release 2011i for DST law changes in Canada, Egypt, Russia, Samoa, and South Sudan. @@ -6618,10 +6618,10 @@ However, if your installation was upgraded from a previous major - release by running pg_upgrade, you should take + release by running pg_upgrade, you should take action to prevent possible data loss due to a now-fixed bug in - pg_upgrade. The recommended solution is to run - VACUUM FREEZE on all TOAST tables. + pg_upgrade. The recommended solution is to run + VACUUM FREEZE on all TOAST tables. More information is available at http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix. 
@@ -6636,36 +6636,36 @@ - Fix pg_upgrade's handling of TOAST tables + Fix pg_upgrade's handling of TOAST tables (Bruce Momjian) - The pg_class.relfrozenxid value for + The pg_class.relfrozenxid value for TOAST tables was not correctly copied into the new installation - during pg_upgrade. This could later result in - pg_clog files being discarded while they were still + during pg_upgrade. This could later result in + pg_clog files being discarded while they were still needed to validate tuples in the TOAST tables, leading to - could not access status of transaction failures. + could not access status of transaction failures. This error poses a significant risk of data loss for installations - that have been upgraded with pg_upgrade. This patch - corrects the problem for future uses of pg_upgrade, + that have been upgraded with pg_upgrade. This patch + corrects the problem for future uses of pg_upgrade, but does not in itself cure the issue in installations that have been - processed with a buggy version of pg_upgrade. + processed with a buggy version of pg_upgrade. - Suppress incorrect PD_ALL_VISIBLE flag was incorrectly set + Suppress incorrect PD_ALL_VISIBLE flag was incorrectly set warning (Heikki Linnakangas) - VACUUM would sometimes issue this warning in cases that + VACUUM would sometimes issue this warning in cases that are actually valid. @@ -6680,8 +6680,8 @@ All retryable conflict errors now have an error code that indicates that a retry is possible. Also, session closure due to the database being dropped on the master is now reported as - ERRCODE_DATABASE_DROPPED, rather than - ERRCODE_ADMIN_SHUTDOWN, so that connection poolers can + ERRCODE_DATABASE_DROPPED, rather than + ERRCODE_ADMIN_SHUTDOWN, so that connection poolers can handle the situation correctly. @@ -6726,15 +6726,15 @@ - Fix dangling-pointer problem in BEFORE ROW UPDATE trigger + Fix dangling-pointer problem in BEFORE ROW UPDATE trigger handling when there was a concurrent update to the target tuple (Tom Lane) This bug has been observed to result in intermittent cannot - extract system attribute from virtual tuple failures while trying to - do UPDATE RETURNING ctid. There is a very small probability + extract system attribute from virtual tuple failures while trying to + do UPDATE RETURNING ctid. There is a very small probability of more serious errors, such as generating incorrect index entries for the updated tuple. @@ -6742,25 +6742,25 @@ - Disallow DROP TABLE when there are pending deferred trigger + Disallow DROP TABLE when there are pending deferred trigger events for the table (Tom Lane) - Formerly the DROP would go through, leading to - could not open relation with OID nnn errors when the + Formerly the DROP would go through, leading to + could not open relation with OID nnn errors when the triggers were eventually fired. - Allow replication as a user name in - pg_hba.conf (Andrew Dunstan) + Allow replication as a user name in + pg_hba.conf (Andrew Dunstan) - replication is special in the database name column, but it + replication is special in the database name column, but it was mistakenly also treated as special in the user name column. @@ -6781,13 +6781,13 @@ - Fix handling of SELECT FOR UPDATE in a sub-SELECT + Fix handling of SELECT FOR UPDATE in a sub-SELECT (Tom Lane) This bug typically led to cannot extract system attribute from - virtual tuple errors. + virtual tuple errors. 
@@ -6813,7 +6813,7 @@ - Allow libpq's SSL initialization to succeed when + Allow libpq's SSL initialization to succeed when user's home directory is unavailable (Tom Lane) @@ -6826,34 +6826,34 @@ - Fix libpq to return a useful error message for errors - detected in conninfo_array_parse (Joseph Adams) + Fix libpq to return a useful error message for errors + detected in conninfo_array_parse (Joseph Adams) A typo caused the library to return NULL, rather than the - PGconn structure containing the error message, to the + PGconn structure containing the error message, to the application. - Fix ecpg preprocessor's handling of float constants + Fix ecpg preprocessor's handling of float constants (Heikki Linnakangas) - Fix parallel pg_restore to handle comments on + Fix parallel pg_restore to handle comments on POST_DATA items correctly (Arnd Hannemann) - Fix pg_restore to cope with long lines (over 1KB) in + Fix pg_restore to cope with long lines (over 1KB) in TOC files (Tom Lane) @@ -6899,14 +6899,14 @@ - Fix version-incompatibility problem with libintl on + Fix version-incompatibility problem with libintl on Windows (Hiroshi Inoue) - Fix usage of xcopy in Windows build scripts to + Fix usage of xcopy in Windows build scripts to work correctly under Windows 7 (Andrew Dunstan) @@ -6917,14 +6917,14 @@ - Fix path separator used by pg_regress on Cygwin + Fix path separator used by pg_regress on Cygwin (Andrew Dunstan) - Update time zone data files to tzdata release 2011f + Update time zone data files to tzdata release 2011f for DST law changes in Chile, Cuba, Falkland Islands, Morocco, Samoa, and Turkey; also historical corrections for South Australia, Alaska, and Hawaii. @@ -6966,7 +6966,7 @@ - Before exiting walreceiver, ensure all the received WAL + Before exiting walreceiver, ensure all the received WAL is fsync'd to disk (Heikki Linnakangas) @@ -6978,27 +6978,27 @@ - Avoid excess fsync activity in walreceiver + Avoid excess fsync activity in walreceiver (Heikki Linnakangas) - Make ALTER TABLE revalidate uniqueness and exclusion + Make ALTER TABLE revalidate uniqueness and exclusion constraints when needed (Noah Misch) This was broken in 9.0 by a change that was intended to suppress - revalidation during VACUUM FULL and CLUSTER, - but unintentionally affected ALTER TABLE as well. + revalidation during VACUUM FULL and CLUSTER, + but unintentionally affected ALTER TABLE as well. - Fix EvalPlanQual for UPDATE of an inheritance tree in which + Fix EvalPlanQual for UPDATE of an inheritance tree in which the tables are not all alike (Tom Lane) @@ -7013,15 +7013,15 @@ - Avoid failures when EXPLAIN tries to display a simple-form - CASE expression (Tom Lane) + Avoid failures when EXPLAIN tries to display a simple-form + CASE expression (Tom Lane) - If the CASE's test expression was a constant, the planner - could simplify the CASE into a form that confused the + If the CASE's test expression was a constant, the planner + could simplify the CASE into a form that confused the expression-display code, resulting in unexpected CASE WHEN - clause errors. + clause errors. @@ -7046,8 +7046,8 @@ - The date type supports a wider range of dates than can be - represented by the timestamp types, but the planner assumed it + The date type supports a wider range of dates than can be + represented by the timestamp types, but the planner assumed it could always convert a date to timestamp with impunity. 
@@ -7060,29 +7060,29 @@ - Remove ecpg's fixed length limit for constants defining + Remove ecpg's fixed length limit for constants defining an array dimension (Michael Meskes) - Fix erroneous parsing of tsquery values containing + Fix erroneous parsing of tsquery values containing ... & !(subexpression) | ... (Tom Lane) Queries containing this combination of operators were not executed - correctly. The same error existed in contrib/intarray's - query_int type and contrib/ltree's - ltxtquery type. + correctly. The same error existed in contrib/intarray's + query_int type and contrib/ltree's + ltxtquery type. - Fix buffer overrun in contrib/intarray's input function - for the query_int type (Apple) + Fix buffer overrun in contrib/intarray's input function + for the query_int type (Apple) @@ -7094,16 +7094,16 @@ - Fix bug in contrib/seg's GiST picksplit algorithm + Fix bug in contrib/seg's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a seg column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a seg column. + If you have such an index, consider REINDEXing it after installing this update. (This is identical to the bug that was fixed in - contrib/cube in the previous update.) + contrib/cube in the previous update.) @@ -7143,23 +7143,23 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. - Fix too many KnownAssignedXids error during Hot Standby + Fix too many KnownAssignedXids error during Hot Standby replay (Heikki Linnakangas) @@ -7188,7 +7188,7 @@ - This could result in bad buffer id: 0 failures or + This could result in bad buffer id: 0 failures or corruption of index contents during replication. @@ -7214,7 +7214,7 @@ - The effective vacuum_cost_limit for an autovacuum worker + The effective vacuum_cost_limit for an autovacuum worker could drop to nearly zero if it processed enough tables, causing it to run extremely slowly. @@ -7240,19 +7240,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -7268,7 +7268,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. 
GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -7295,16 +7295,16 @@ Certain cases where a large number of tuples needed to be read in - advance, but work_mem was large enough to allow them all + advance, but work_mem was large enough to allow them all to be held in memory, were unexpectedly slow. - percent_rank(), cume_dist() and - ntile() in particular were subject to this problem. + percent_rank(), cume_dist() and + ntile() in particular were subject to this problem. - Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -7316,21 +7316,21 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Add missing support in DROP OWNED BY for removing foreign + Add missing support in DROP OWNED BY for removing foreign data wrapper/server privileges belonging to a user (Heikki Linnakangas) - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -7349,28 +7349,28 @@ - Behave correctly if ORDER BY, LIMIT, - FOR UPDATE, or WITH is attached to the - VALUES part of INSERT ... VALUES (Tom Lane) + Behave correctly if ORDER BY, LIMIT, + FOR UPDATE, or WITH is attached to the + VALUES part of INSERT ... VALUES (Tom Lane) - Make the OFF keyword unreserved (Heikki Linnakangas) + Make the OFF keyword unreserved (Heikki Linnakangas) - This prevents problems with using off as a variable name in - PL/pgSQL. That worked before 9.0, but was now broken - because PL/pgSQL now treats all core reserved words + This prevents problems with using off as a variable name in + PL/pgSQL. That worked before 9.0, but was now broken + because PL/pgSQL now treats all core reserved words as reserved. - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -7381,7 +7381,7 @@ - Fix could not find pathkey item to sort planner failure + Fix could not find pathkey item to sort planner failure with comparison of whole-row Vars (Tom Lane) @@ -7389,7 +7389,7 @@ Fix postmaster crash when connection acceptance - (accept() or one of the calls made immediately after it) + (accept() or one of the calls made immediately after it) fails, and the postmaster was compiled with GSSAPI support (Alexander Chernikov) @@ -7408,7 +7408,7 @@ - Fix missed unlink of temporary files when log_temp_files + Fix missed unlink of temporary files when log_temp_files is active (Tom Lane) @@ -7420,11 +7420,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. 
@@ -7444,46 +7444,46 @@ Fix incorrect calculation of transaction status in - ecpg (Itagaki Takahiro) + ecpg (Itagaki Takahiro) - Fix errors in psql's Unicode-escape support (Tom Lane) + Fix errors in psql's Unicode-escape support (Tom Lane) - Speed up parallel pg_restore when the archive + Speed up parallel pg_restore when the archive contains many large objects (blobs) (Tom Lane) - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix PL/pgSQL's error reporting for no-such-column + Fix PL/pgSQL's error reporting for no-such-column cases (Tom Lane) As of 9.0, it would sometimes report missing FROM-clause entry - for table foo when record foo has no field bar would be + for table foo when record foo has no field bar would be more appropriate. - Fix PL/Python to honor typmod (i.e., length or + Fix PL/Python to honor typmod (i.e., length or precision restrictions) when assigning to tuple fields (Tom Lane) @@ -7494,7 +7494,7 @@ - Fix PL/Python's handling of set-returning functions + Fix PL/Python's handling of set-returning functions (Jan Urbanski) @@ -7506,22 +7506,22 @@ - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -7529,26 +7529,26 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix buffer overrun in contrib/pg_upgrade (Hernan Gonzalez) + Fix buffer overrun in contrib/pg_upgrade (Hernan Gonzalez) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -7597,7 +7597,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -7626,7 +7626,7 @@ - Improve pg_get_expr() security fix so that the function + Improve pg_get_expr() security fix so that the function can still be used on the output of a sub-select (Tom Lane) @@ -7651,7 +7651,7 @@ - Fix possible duplicate scans of UNION ALL member relations + Fix possible duplicate scans of UNION ALL member relations (Tom Lane) @@ -7676,14 +7676,14 @@ - Input such as 'J100000'::date worked before 8.4, + Input such as 'J100000'::date worked before 8.4, but was unintentionally broken by added error-checking. 
- Make psql recognize DISCARD ALL as a command that should + Make psql recognize DISCARD ALL as a command that should not be encased in a transaction block in autocommit-off mode (Itagaki Takahiro) @@ -7714,12 +7714,12 @@ This release of - PostgreSQL adds features that have been requested + PostgreSQL adds features that have been requested for years, such as easy-to-use replication, a mass permission-changing facility, and anonymous code blocks. While past major releases have been conservative in their scope, this release shows a bold new desire to provide facilities that new and existing - users of PostgreSQL will embrace. This has all + users of PostgreSQL will embrace. This has all been done with few incompatibilities. Major enhancements include: @@ -7732,7 +7732,7 @@ Built-in replication based on log shipping. This advance consists of two features: Streaming Replication, allowing continuous archive - (WAL) files to be streamed over a network connection to a + (WAL) files to be streamed over a network connection to a standby server, and Hot Standby, allowing continuous archive standby servers to execute read-only queries. The net effect is to support a single master with multiple read-only slave servers. @@ -7742,10 +7742,10 @@ Easier database object permissions management. GRANT/REVOKE IN - SCHEMA supports mass permissions changes on existing objects, + linkend="SQL-GRANT">GRANT/REVOKE IN + SCHEMA supports mass permissions changes on existing objects, while ALTER DEFAULT - PRIVILEGES allows control of privileges for objects created in + PRIVILEGES allows control of privileges for objects created in the future. Large objects (BLOBs) now support permissions management as well. @@ -7754,8 +7754,8 @@ Broadly enhanced stored procedure support. - The DO statement supports - ad-hoc or anonymous code blocks. + The DO statement supports + ad-hoc or anonymous code blocks. Functions can now be called using named parameters. PL/pgSQL is now installed by default, and PL/Perl and Full support for 64-bit - Windows. + Windows. More advanced reporting queries, including additional windowing options - (PRECEDING and FOLLOWING) and the ability to + (PRECEDING and FOLLOWING) and the ability to control the order in which values are fed to aggregate functions. @@ -7808,7 +7808,7 @@ New and enhanced security features, including RADIUS authentication, LDAP authentication improvements, and a new contrib module - passwordcheck + passwordcheck for testing password strength. @@ -7816,10 +7816,10 @@ New high-performance implementation of the - LISTEN/NOTIFY feature. + LISTEN/NOTIFY feature. Pending events are now stored in a memory-based queue rather than - a table. Also, a payload string can be sent with each + a table. Also, a payload string can be sent with each event, rather than transmitting just an event name as before. @@ -7827,7 +7827,7 @@ New implementation of - VACUUM FULL. + VACUUM FULL. This command now rewrites the entire table and indexes, rather than moving individual rows to compact space. It is substantially faster in most cases, and no longer results in index bloat. @@ -7837,7 +7837,7 @@ New contrib module - pg_upgrade + pg_upgrade to support in-place upgrades from 8.3 or 8.4 to 9.0. @@ -7853,7 +7853,7 @@ - EXPLAIN enhancements. + EXPLAIN enhancements. The output is now available in JSON, XML, or YAML format, and includes buffer utilization and other data not previously available. 
@@ -7861,7 +7861,7 @@ - hstore improvements, + hstore improvements, including new functions and greater data capacity. @@ -7901,34 +7901,34 @@ - Remove server parameter add_missing_from, which was + Remove server parameter add_missing_from, which was defaulted to off for many years (Tom Lane) - Remove server parameter regex_flavor, which + Remove server parameter regex_flavor, which was defaulted to advanced + linkend="posix-syntax-details">advanced for many years (Tom Lane) - archive_mode + archive_mode now only affects archive_command; + linkend="guc-archive-command">archive_command; a new setting, wal_level, affects + linkend="guc-wal-level">wal_level, affects the contents of the write-ahead log (Heikki Linnakangas) - log_temp_files + log_temp_files now uses default file size units of kilobytes (Robert Haas) @@ -7967,13 +7967,13 @@ - bytea output now + bytea output now appears in hex format by default (Peter Eisentraut) The server parameter bytea_output can be + linkend="guc-bytea-output">bytea_output can be used to select the traditional output format if needed for compatibility. @@ -7995,18 +7995,18 @@ Improve standards compliance of SIMILAR TO - patterns and SQL-style substring() patterns (Tom Lane) + linkend="functions-similarto-regexp">SIMILAR TO + patterns and SQL-style substring() patterns (Tom Lane) - This includes treating ? and {...} as + This includes treating ? and {...} as pattern metacharacters, while they were simple literal characters before; that corresponds to new features added in SQL:2008. - Also, ^ and $ are now treated as simple + Also, ^ and $ are now treated as simple literal characters; formerly they were treated as metacharacters, as if the pattern were following POSIX rather than SQL rules. - Also, in SQL-standard substring(), use of parentheses + Also, in SQL-standard substring(), use of parentheses for nesting no longer interferes with capturing of a substring. Also, processing of bracket expressions (character classes) is now more standards-compliant. @@ -8016,14 +8016,14 @@ Reject negative length values in 3-parameter substring() + linkend="functions-string-sql">substring() for bit strings, per the SQL standard (Tom Lane) - Make date_trunc truncate rather than round when reducing + Make date_trunc truncate rather than round when reducing precision of fractional seconds (Tom Lane) @@ -8044,7 +8044,7 @@ - Tighten enforcement of column name consistency during RENAME + Tighten enforcement of column name consistency during RENAME when a child table inherits the same column from multiple unrelated parents (KaiGai Kohei) @@ -8100,8 +8100,8 @@ situations. Although it's recommended that functions encountering this type of error be modified to remove the conflict, the old behavior can be restored if necessary via the configuration parameter plpgsql.variable_conflict, - or via the per-function option #variable_conflict. + linkend="plpgsql-var-subst">plpgsql.variable_conflict, + or via the per-function option #variable_conflict. @@ -8126,8 +8126,8 @@ For example, if a column of the result type is declared as - NUMERIC(30,2), it is no longer acceptable to return a - NUMERIC of some other precision in that column. Previous + NUMERIC(30,2), it is no longer acceptable to return a + NUMERIC of some other precision in that column. Previous versions neglected to check the type modifier and would thus allow result rows that didn't actually conform to the declared restrictions. @@ -8141,33 +8141,33 @@ Formerly, a statement like - SELECT ... INTO rec.fld FROM ... + SELECT ... 
INTO rec.fld FROM ... was treated as a scalar assignment even if the record field - fld was of composite type. Now it is treated as a - record assignment, the same as when the INTO target is a + fld was of composite type. Now it is treated as a + record assignment, the same as when the INTO target is a regular variable of composite type. So the values to be assigned to the field's subfields should be written as separate columns of the - SELECT list, not as a ROW(...) construct as in + SELECT list, not as a ROW(...) construct as in previous versions. If you need to do this in a way that will work in both 9.0 and previous releases, you can write something like - rec.fld := ROW(...) FROM .... + rec.fld := ROW(...) FROM .... - Remove PL/pgSQL's RENAME declaration (Tom Lane) + Remove PL/pgSQL's RENAME declaration (Tom Lane) - Instead of RENAME, use ALIAS, + Instead of RENAME, use ALIAS, which can now create an alias for any variable, not only dollar sign - parameter names (such as $1) as before. + parameter names (such as $1) as before. @@ -8181,11 +8181,11 @@ - Deprecate use of => as an operator name (Robert Haas) + Deprecate use of => as an operator name (Robert Haas) - Future versions of PostgreSQL will probably reject + Future versions of PostgreSQL will probably reject this operator name entirely, in order to support the SQL-standard notation for named function parameters. For the moment, it is still allowed, but a warning is emitted when such an operator is @@ -8240,7 +8240,7 @@ This feature is called Hot Standby. There are new - postgresql.conf and recovery.conf + postgresql.conf and recovery.conf settings to control this feature, as well as extensive documentation. @@ -8248,18 +8248,18 @@ - Allow write-ahead log (WAL) data to be streamed to a + Allow write-ahead log (WAL) data to be streamed to a standby server (Fujii Masao, Heikki Linnakangas) This feature is called Streaming Replication. - Previously WAL data could be sent to standby servers only - in units of entire WAL files (normally 16 megabytes each). + Previously WAL data could be sent to standby servers only + in units of entire WAL files (normally 16 megabytes each). Streaming Replication eliminates this inefficiency and allows updates on the master to be propagated to standby servers with very little - delay. There are new postgresql.conf and - recovery.conf settings to control this feature, as well as + delay. There are new postgresql.conf and + recovery.conf settings to control this feature, as well as extensive documentation. @@ -8267,9 +8267,9 @@ Add pg_last_xlog_receive_location() - and pg_last_xlog_replay_location(), which - can be used to monitor standby server WAL + linkend="functions-recovery-info-table">pg_last_xlog_receive_location() + and pg_last_xlog_replay_location(), which + can be used to monitor standby server WAL activity (Simon Riggs, Fujii Masao, Heikki Linnakangas) @@ -8286,9 +8286,9 @@ Allow per-tablespace values to be set for sequential and random page - cost estimates (seq_page_cost/random_page_cost) + cost estimates (seq_page_cost/random_page_cost) via ALTER TABLESPACE - ... SET/RESET (Robert Haas) + ... SET/RESET (Robert Haas) @@ -8299,8 +8299,8 @@ - UPDATE, DELETE, and SELECT FOR - UPDATE/SHARE queries that involve joins will now behave much better + UPDATE, DELETE, and SELECT FOR + UPDATE/SHARE queries that involve joins will now behave much better when encountering freshly-updated rows. 
@@ -8308,7 +8308,7 @@ Improve performance of TRUNCATE when + linkend="SQL-TRUNCATE">TRUNCATE when the table was created or truncated earlier in the same transaction (Tom Lane) @@ -8345,12 +8345,12 @@ - Allow IS NOT NULL restrictions to use indexes (Tom Lane) + Allow IS NOT NULL restrictions to use indexes (Tom Lane) This is particularly useful for finding - MAX()/MIN() values in indexes that + MAX()/MIN() values in indexes that contain many null values. @@ -8358,7 +8358,7 @@ Improve the optimizer's choices about when to use materialize nodes, - and when to use sorting versus hashing for DISTINCT + and when to use sorting versus hashing for DISTINCT (Tom Lane) @@ -8366,7 +8366,7 @@ Improve the optimizer's equivalence detection for expressions involving - boolean <> operators (Tom Lane) + boolean <> operators (Tom Lane) @@ -8387,7 +8387,7 @@ While the Genetic Query Optimizer (GEQO) still selects random plans, it now always selects the same random plans for identical queries, thus giving more consistent performance. You can modify geqo_seed to experiment with + linkend="guc-geqo-seed">geqo_seed to experiment with alternative plans. @@ -8398,7 +8398,7 @@ - This avoids the rare error failed to make a valid plan, + This avoids the rare error failed to make a valid plan, and should also improve planning speed. @@ -8414,7 +8414,7 @@ - Improve ANALYZE + Improve ANALYZE to support inheritance-tree statistics (Tom Lane) @@ -8451,14 +8451,14 @@ Allow setting of number-of-distinct-values statistics using ALTER TABLE + linkend="SQL-ALTERTABLE">ALTER TABLE (Robert Haas) This allows users to override the estimated number or percentage of distinct values for a column. This statistic is normally computed by - ANALYZE, but the estimate can be poor, especially on tables + ANALYZE, but the estimate can be poor, especially on tables with very large numbers of rows. @@ -8475,7 +8475,7 @@ Add support for RADIUS (Remote + linkend="auth-radius">RADIUS (Remote Authentication Dial In User Service) authentication (Magnus Hagander) @@ -8483,28 +8483,28 @@ - Allow LDAP + Allow LDAP (Lightweight Directory Access Protocol) authentication - to operate in search/bind mode + to operate in search/bind mode (Robert Fleming, Magnus Hagander) This allows the user to be looked up first, then the system uses - the DN (Distinguished Name) returned for that user. + the DN (Distinguished Name) returned for that user. Add samehost - and samenet designations to - pg_hba.conf (Stef Walter) + linkend="auth-pg-hba-conf">samehost + and samenet designations to + pg_hba.conf (Stef Walter) - These match the server's IP address and subnet address + These match the server's IP address and subnet address respectively. 
@@ -8530,7 +8530,7 @@ Add the ability for clients to set an application name, which is displayed in - pg_stat_activity (Dave Page) + pg_stat_activity (Dave Page) @@ -8541,8 +8541,8 @@ - Add a SQLSTATE option (%e) to log_line_prefix + Add a SQLSTATE option (%e) to log_line_prefix (Guillaume Smet) @@ -8555,7 +8555,7 @@ - Write to the Windows event log in UTF16 encoding + Write to the Windows event log in UTF16 encoding (Itagaki Takahiro) @@ -8577,7 +8577,7 @@ Add pg_stat_reset_shared('bgwriter') + linkend="monitoring-stats-funcs-table">pg_stat_reset_shared('bgwriter') to reset the cluster-wide shared statistics for the background writer (Greg Smith) @@ -8586,8 +8586,8 @@ Add pg_stat_reset_single_table_counters() - and pg_stat_reset_single_function_counters() + linkend="monitoring-stats-funcs-table">pg_stat_reset_single_table_counters() + and pg_stat_reset_single_function_counters() to allow resetting the statistics counters for individual tables and functions (Magnus Hagander) @@ -8612,10 +8612,10 @@ Previously only per-database and per-role settings were possible, not combinations. All role and database settings are now stored - in the new pg_db_role_setting system catalog. A new - psql command \drds shows these settings. - The legacy system views pg_roles, - pg_shadow, and pg_user + in the new pg_db_role_setting system catalog. A new + psql command \drds shows these settings. + The legacy system views pg_roles, + pg_shadow, and pg_user do not show combination settings, and therefore no longer completely represent the configuration for a user or database. @@ -8624,9 +8624,9 @@ Add server parameter bonjour, which + linkend="guc-bonjour">bonjour, which controls whether a Bonjour-enabled server advertises - itself via Bonjour (Tom Lane) + itself via Bonjour (Tom Lane) @@ -8639,7 +8639,7 @@ Add server parameter enable_material, which + linkend="guc-enable-material">enable_material, which controls the use of materialize nodes in the optimizer (Robert Haas) @@ -8654,7 +8654,7 @@ Change server parameter log_temp_files to + linkend="guc-log-temp-files">log_temp_files to use default file size units of kilobytes (Robert Haas) @@ -8666,14 +8666,14 @@ - Log changes of parameter values when postgresql.conf is + Log changes of parameter values when postgresql.conf is reloaded (Peter Eisentraut) This lets administrators and security staff audit changes of database settings, and is also very convenient for checking the effects of - postgresql.conf edits. + postgresql.conf edits. @@ -8685,10 +8685,10 @@ Non-superusers can no longer issue ALTER - ROLE/DATABASE SET for parameters that are not currently + ROLE/DATABASE SET for parameters that are not currently known to the server. This allows the server to correctly check that superuser-only parameters are only set by superusers. Previously, - the SET would be allowed and then ignored at session start, + the SET would be allowed and then ignored at session start, making superuser-only custom parameters much less useful than they should be. @@ -8708,24 +8708,24 @@ Perform SELECT - FOR UPDATE/SHARE processing after - applying LIMIT, so the number of rows returned + FOR UPDATE/SHARE processing after + applying LIMIT, so the number of rows returned is always predictable (Tom Lane) Previously, changes made by concurrent transactions could cause a - SELECT FOR UPDATE to unexpectedly return fewer rows than - specified by its LIMIT. 
FOR UPDATE in combination - with ORDER BY can still produce surprising results, but that - can be corrected by placing FOR UPDATE in a subquery. + SELECT FOR UPDATE to unexpectedly return fewer rows than + specified by its LIMIT. FOR UPDATE in combination + with ORDER BY can still produce surprising results, but that + can be corrected by placing FOR UPDATE in a subquery. Allow mixing of traditional and SQL-standard LIMIT/OFFSET + linkend="SQL-LIMIT">LIMIT/OFFSET syntax (Tom Lane) @@ -8738,15 +8738,15 @@ - Frames can now start with CURRENT ROW, and the ROWS - n PRECEDING/FOLLOWING options are now + Frames can now start with CURRENT ROW, and the ROWS + n PRECEDING/FOLLOWING options are now supported. - Make SELECT INTO and CREATE TABLE AS return + Make SELECT INTO and CREATE TABLE AS return row counts to the client in their command tags (Boszormenyi Zoltan) @@ -8769,7 +8769,7 @@ Support Unicode surrogate pairs (dual 16-bit representation) in U& + linkend="sql-syntax-strings-uescape">U& strings and identifiers (Peter Eisentraut) @@ -8777,7 +8777,7 @@ Support Unicode escapes in E'...' + linkend="sql-syntax-strings-escape">E'...' strings (Marko Kreen) @@ -8796,7 +8796,7 @@ Speed up CREATE - DATABASE by deferring flushes to disk (Andres + DATABASE by deferring flushes to disk (Andres Freund, Greg Stark) @@ -8805,7 +8805,7 @@ Allow comments on columns of tables, views, and composite types only, not other - relation types such as indexes and TOAST tables (Tom Lane) + relation types such as indexes and TOAST tables (Tom Lane) @@ -8819,12 +8819,12 @@ - Let values of columns having storage type MAIN remain on + Let values of columns having storage type MAIN remain on the main heap page unless the row cannot fit on a page (Kevin Grittner) - Previously MAIN values were forced out to TOAST + Previously MAIN values were forced out to TOAST tables until the row size was less than one-quarter of the page size. @@ -8832,26 +8832,26 @@ - <command>ALTER TABLE</> + <command>ALTER TABLE</command> - Implement IF EXISTS for ALTER TABLE DROP COLUMN - and ALTER TABLE DROP CONSTRAINT (Andres Freund) + Implement IF EXISTS for ALTER TABLE DROP COLUMN + and ALTER TABLE DROP CONSTRAINT (Andres Freund) - Allow ALTER TABLE commands that rewrite tables to skip - WAL logging (Itagaki Takahiro) + Allow ALTER TABLE commands that rewrite tables to skip + WAL logging (Itagaki Takahiro) Such operations either produce a new copy of the table or are rolled - back, so WAL archiving can be skipped, unless running in + back, so WAL archiving can be skipped, unless running in continuous archiving mode. This reduces I/O overhead and improves performance. @@ -8859,8 +8859,8 @@ - Fix failure of ALTER TABLE table ADD COLUMN - col serial when done by non-owner of table + Fix failure of ALTER TABLE table ADD COLUMN + col serial when done by non-owner of table (Tom Lane) @@ -8870,14 +8870,14 @@ - <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</></link> + <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</command></link> - Add support for copying COMMENTS and STORAGE - settings in CREATE TABLE ... LIKE commands + Add support for copying COMMENTS and STORAGE + settings in CREATE TABLE ... LIKE commands (Itagaki Takahiro) @@ -8885,14 +8885,14 @@ Add a shortcut for copying all properties in CREATE - TABLE ... LIKE commands (Itagaki Takahiro) + TABLE ... LIKE commands (Itagaki Takahiro) Add the SQL-standard - CREATE TABLE ... OF type command + CREATE TABLE ... 
OF type command (Peter Eisentraut) @@ -8920,10 +8920,10 @@ This allows mass updates, such as - UPDATE tab SET col = col + 1, + UPDATE tab SET col = col + 1, to work reliably on columns that have unique indexes or are marked as primary keys. - If the constraint is specified as DEFERRABLE it will be + If the constraint is specified as DEFERRABLE it will be checked at the end of the statement, rather than after each row is updated. The constraint check can also be deferred until the end of the current transaction, allowing such updates to be spread over multiple @@ -8942,7 +8942,7 @@ Exclusion constraints generalize uniqueness constraints by allowing arbitrary comparison operators, not just equality. They are created with the CREATE - TABLE CONSTRAINT ... EXCLUDE clause. + TABLE CONSTRAINT ... EXCLUDE clause. The most common use of exclusion constraints is to specify that column entries must not overlap, rather than simply not be equal. This is useful for time periods and other ranges, as well as arrays. @@ -8959,7 +8959,7 @@ For example, a uniqueness constraint violation might now report - Key (x)=(2) already exists. + Key (x)=(2) already exists. @@ -8976,8 +8976,8 @@ Add the ability to make mass permission changes across a whole schema using the new GRANT/REVOKE - IN SCHEMA clause (Petr Jelinek) + linkend="SQL-GRANT">GRANT/REVOKE + IN SCHEMA clause (Petr Jelinek) @@ -8990,7 +8990,7 @@ Add ALTER - DEFAULT PRIVILEGES command to control privileges + DEFAULT PRIVILEGES command to control privileges of objects created later (Petr Jelinek) @@ -9005,7 +9005,7 @@ Add the ability to control large object (BLOB) permissions with - GRANT/REVOKE (KaiGai Kohei) + GRANT/REVOKE (KaiGai Kohei) @@ -9028,8 +9028,8 @@ - Make LISTEN/NOTIFY store pending events + Make LISTEN/NOTIFY store pending events in a memory queue, rather than in a system table (Joachim Wieland) @@ -9042,21 +9042,21 @@ - Allow NOTIFY - to pass an optional payload string to listeners + Allow NOTIFY + to pass an optional payload string to listeners (Joachim Wieland) This greatly improves the usefulness of - LISTEN/NOTIFY as a + LISTEN/NOTIFY as a general-purpose event queue system. - Allow CLUSTER + Allow CLUSTER on all per-database system catalogs (Tom Lane) @@ -9068,30 +9068,30 @@ - <link linkend="SQL-COPY"><command>COPY</></link> + <link linkend="SQL-COPY"><command>COPY</command></link> - Accept COPY ... CSV FORCE QUOTE * + Accept COPY ... CSV FORCE QUOTE * (Itagaki Takahiro) - Now * can be used as shorthand for all columns - in the FORCE QUOTE clause. + Now * can be used as shorthand for all columns + in the FORCE QUOTE clause. - Add new COPY syntax that allows options to be + Add new COPY syntax that allows options to be specified inside parentheses (Robert Haas, Emmanuel Cecchet) - This allows greater flexibility for future COPY options. + This allows greater flexibility for future COPY options. The old syntax is still supported, but only for pre-existing options. @@ -9101,27 +9101,27 @@ - <link linkend="SQL-EXPLAIN"><command>EXPLAIN</></link> + <link linkend="SQL-EXPLAIN"><command>EXPLAIN</command></link> - Allow EXPLAIN to output in XML, - JSON, or YAML format (Robert Haas, Greg + Allow EXPLAIN to output in XML, + JSON, or YAML format (Robert Haas, Greg Sabino Mullane) The new output formats are easily machine-readable, supporting the - development of new tools for analysis of EXPLAIN output. + development of new tools for analysis of EXPLAIN output. 
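A brief sketch of the parenthesized COPY option syntax and the machine-readable EXPLAIN formats described above (table and column names are hypothetical):

<programlisting>
-- * as shorthand for all columns in FORCE_QUOTE, using the new option list
COPY accounts TO STDOUT WITH (FORMAT csv, FORCE_QUOTE *);

-- EXPLAIN output as JSON, suitable for external analysis tools
EXPLAIN (FORMAT JSON) SELECT * FROM accounts WHERE aid = 42;
</programlisting>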
- Add new BUFFERS option to report query - buffer usage during EXPLAIN ANALYZE (Itagaki Takahiro) + Add new BUFFERS option to report query + buffer usage during EXPLAIN ANALYZE (Itagaki Takahiro) @@ -9134,19 +9134,19 @@ - Add hash usage information to EXPLAIN output (Robert + Add hash usage information to EXPLAIN output (Robert Haas) - Add new EXPLAIN syntax that allows options to be + Add new EXPLAIN syntax that allows options to be specified inside parentheses (Robert Haas) - This allows greater flexibility for future EXPLAIN options. + This allows greater flexibility for future EXPLAIN options. The old syntax is still supported, but only for pre-existing options. @@ -9156,13 +9156,13 @@ - <link linkend="SQL-VACUUM"><command>VACUUM</></link> + <link linkend="SQL-VACUUM"><command>VACUUM</command></link> - Change VACUUM FULL to rewrite the entire table and + Change VACUUM FULL to rewrite the entire table and rebuild its indexes, rather than moving individual rows around to compact space (Itagaki Takahiro, Tom Lane) @@ -9170,7 +9170,7 @@ The previous method was usually slower and caused index bloat. Note that the new method will use more disk space transiently - during VACUUM FULL; potentially as much as twice + during VACUUM FULL; potentially as much as twice the space normally occupied by the table and its indexes. @@ -9178,12 +9178,12 @@ - Add new VACUUM syntax that allows options to be + Add new VACUUM syntax that allows options to be specified inside parentheses (Itagaki Takahiro) - This allows greater flexibility for future VACUUM options. + This allows greater flexibility for future VACUUM options. The old syntax is still supported, but only for pre-existing options. @@ -9200,7 +9200,7 @@ Allow an index to be named automatically by omitting the index name in - CREATE INDEX + CREATE INDEX (Tom Lane) @@ -9228,22 +9228,22 @@ - Add point_ops operator class for GiST + Add point_ops operator class for GiST (Teodor Sigaev) - This feature permits GiST indexing of point + This feature permits GiST indexing of point columns. The index can be used for several types of queries - such as point <@ polygon + such as point <@ polygon (point is in polygon). This should make many - PostGIS queries faster. + PostGIS queries faster. - Use red-black binary trees for GIN index creation + Use red-black binary trees for GIN index creation (Teodor Sigaev) @@ -9267,16 +9267,16 @@ - Allow bytea values + Allow bytea values to be written in hex notation (Peter Eisentraut) The server parameter bytea_output controls - whether hex or traditional format is used for bytea - output. Libpq's PQescapeByteaConn() function automatically - uses the hex format when connected to PostgreSQL 9.0 + linkend="guc-bytea-output">bytea_output controls + whether hex or traditional format is used for bytea + output. Libpq's PQescapeByteaConn() function automatically + uses the hex format when connected to PostgreSQL 9.0 or newer servers. However, pre-9.0 libpq versions will not correctly process hex format from newer servers. @@ -9293,20 +9293,20 @@ Allow server parameter extra_float_digits - to be increased to 3 (Tom Lane) + to be increased to 3 (Tom Lane) - The previous maximum extra_float_digits setting was - 2. There are cases where 3 digits are needed to dump and - restore float4 values exactly. pg_dump will + The previous maximum extra_float_digits setting was + 2. There are cases where 3 digits are needed to dump and + restore float4 values exactly. 
pg_dump will now use the setting of 3 when dumping from a server that allows it. - Tighten input checking for int2vector values (Caleb + Tighten input checking for int2vector values (Caleb Welton) @@ -9320,14 +9320,14 @@ - Add prefix support in synonym dictionaries + Add prefix support in synonym dictionaries (Teodor Sigaev) - Add filtering dictionaries (Teodor Sigaev) + Add filtering dictionaries (Teodor Sigaev) @@ -9344,7 +9344,7 @@ - Use more standards-compliant rules for parsing URL tokens + Use more standards-compliant rules for parsing URL tokens (Tom Lane) @@ -9367,9 +9367,9 @@ - For example, if a function is defined to take parameters a - and b, it can be called with func(a := 7, b - := 12) or func(b := 12, a := 7). + For example, if a function is defined to take parameters a + and b, it can be called with func(a := 7, b + := 12) or func(b := 12, a := 7). @@ -9377,24 +9377,24 @@ Support locale-specific regular expression - processing with UTF-8 server encoding (Tom Lane) + processing with UTF-8 server encoding (Tom Lane) Locale-specific regular expression functionality includes case-insensitive matching and locale-specific character classes. - Previously, these features worked correctly for non-ASCII + Previously, these features worked correctly for non-ASCII characters only if the database used a single-byte server encoding (such as LATIN1). They will still misbehave in multi-byte encodings other - than UTF-8. + than UTF-8. Add support for scientific notation in to_char() - (EEEE + linkend="functions-formatting">to_char() + (EEEE specification) (Pavel Stehule, Brendan Jurd) @@ -9402,21 +9402,21 @@ - Make to_char() honor FM - (fill mode) in Y, YY, and - YYY specifications (Bruce Momjian, Tom Lane) + Make to_char() honor FM + (fill mode) in Y, YY, and + YYY specifications (Bruce Momjian, Tom Lane) - It was already honored by YYYY. + It was already honored by YYYY. - Fix to_char() to output localized numeric and monetary - strings in the correct encoding on Windows + Fix to_char() to output localized numeric and monetary + strings in the correct encoding on Windows (Hiroshi Inoue, Itagaki Takahiro, Bruce Momjian) @@ -9429,12 +9429,12 @@ - The polygon && (overlaps) operator formerly just + The polygon && (overlaps) operator formerly just checked to see if the two polygons' bounding boxes overlapped. It now - does a more correct check. The polygon @> and - <@ (contains/contained by) operators formerly checked + does a more correct check. The polygon @> and + <@ (contains/contained by) operators formerly checked to see if one polygon's vertexes were all contained in the other; - this can wrongly report true for some non-convex polygons. + this can wrongly report true for some non-convex polygons. Now they check that all line segments of one polygon are contained in the other. @@ -9450,12 +9450,12 @@ Allow aggregate functions to use ORDER BY (Andrew Gierth) + linkend="syntax-aggregates">ORDER BY (Andrew Gierth) For example, this is now supported: array_agg(a ORDER BY - b). This is useful with aggregates for which the order of input + b). This is useful with aggregates for which the order of input values is significant, and eliminates the need to use a nonstandard subquery to determine the ordering. 
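The aggregate ORDER BY syntax described in the preceding entry, as a minimal sketch (table and column names are hypothetical):

<programlisting>
-- collect each department's employee names in a deterministic order
SELECT dept, array_agg(name ORDER BY name) AS members
FROM employees
GROUP BY dept;
</programlisting>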
@@ -9463,7 +9463,7 @@ - Multi-argument aggregate functions can now use DISTINCT + Multi-argument aggregate functions can now use DISTINCT (Andrew Gierth) @@ -9471,7 +9471,7 @@ Add the string_agg() + linkend="functions-aggregate-table">string_agg() aggregate function to combine values into a single string (Pavel Stehule) @@ -9479,15 +9479,15 @@ - Aggregate functions that are called with DISTINCT are + Aggregate functions that are called with DISTINCT are now passed NULL values if the aggregate transition function is - not marked as STRICT (Andrew Gierth) + not marked as STRICT (Andrew Gierth) - For example, agg(DISTINCT x) might pass a NULL x - value to agg(). This is more consistent with the behavior - in non-DISTINCT cases. + For example, agg(DISTINCT x) might pass a NULL x + value to agg(). This is more consistent with the behavior + in non-DISTINCT cases. @@ -9503,9 +9503,9 @@ Add get_bit() - and set_bit() functions for bit - strings, mirroring those for bytea (Leonardo + linkend="functions-binarystring-other">get_bit() + and set_bit() functions for bit + strings, mirroring those for bytea (Leonardo F) @@ -9513,8 +9513,8 @@ Implement OVERLAY() - (replace) for bit strings and bytea + linkend="functions-string-sql">OVERLAY() + (replace) for bit strings and bytea (Leonardo F) @@ -9531,9 +9531,9 @@ Add pg_table_size() - and pg_indexes_size() to provide a more - user-friendly interface to the pg_relation_size() + linkend="functions-admin-dbsize">pg_table_size() + and pg_indexes_size() to provide a more + user-friendly interface to the pg_relation_size() function (Bernd Helmle) @@ -9541,7 +9541,7 @@ Add has_sequence_privilege() + linkend="functions-info-access-table">has_sequence_privilege() for sequence permission checking (Abhijit Menon-Sen) @@ -9556,15 +9556,15 @@ - Make the information_schema views correctly display maximum - octet lengths for char and varchar columns (Peter + Make the information_schema views correctly display maximum + octet lengths for char and varchar columns (Peter Eisentraut) - Speed up information_schema privilege views + Speed up information_schema privilege views (Joachim Wieland) @@ -9581,7 +9581,7 @@ Support execution of anonymous code blocks using the DO statement + linkend="SQL-DO">DO statement (Petr Jelinek, Joshua Tolley, Hannu Valtonen) @@ -9601,22 +9601,22 @@ Such triggers are fired only when the specified column(s) are affected - by the query, e.g. appear in an UPDATE's SET + by the query, e.g. appear in an UPDATE's SET list. - Add the WHEN clause to CREATE TRIGGER + Add the WHEN clause to CREATE TRIGGER to allow control over whether a trigger is fired (Itagaki Takahiro) While the same type of check can always be performed inside the - trigger, doing it in an external WHEN clause can have + trigger, doing it in an external WHEN clause can have performance benefits. @@ -9634,8 +9634,8 @@ - Add the OR REPLACE clause to CREATE LANGUAGE + Add the OR REPLACE clause to CREATE LANGUAGE (Tom Lane) @@ -9677,8 +9677,8 @@ The default behavior is now to throw an error when there is a conflict, so as to avoid surprising behaviors. This can be modified, via the configuration parameter plpgsql.variable_conflict - or the per-function option #variable_conflict, to allow + linkend="plpgsql-var-subst">plpgsql.variable_conflict + or the per-function option #variable_conflict, to allow either the variable or the query-supplied column to be used. In any case PL/pgSQL will no longer attempt to substitute variables in places where they would not be syntactically valid. 
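To illustrate the per-function #variable_conflict option mentioned in the preceding entry, a minimal sketch (function, table, and column names are hypothetical):

<programlisting>
CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$
    #variable_conflict use_variable
    DECLARE
        curtime timestamp := now();
    BEGIN
        -- with use_variable, "id" and "comment" refer to the parameters
        UPDATE users SET last_modified = curtime, comment = comment
          WHERE users.id = id;
    END;
$$ LANGUAGE plpgsql;
</programlisting>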
@@ -9731,7 +9731,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Formerly, input parameters were treated as being declared - CONST, so the function's code could not change their + CONST, so the function's code could not change their values. This restriction has been removed to simplify porting of functions from other DBMSes that do not impose the equivalent restriction. An input parameter now acts like a local @@ -9747,26 +9747,26 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add count and ALL options to MOVE - FORWARD/BACKWARD in PL/pgSQL (Pavel Stehule) + Add count and ALL options to MOVE + FORWARD/BACKWARD in PL/pgSQL (Pavel Stehule) - Allow PL/pgSQL's WHERE CURRENT OF to use a cursor + Allow PL/pgSQL's WHERE CURRENT OF to use a cursor variable (Tom Lane) - Allow PL/pgSQL's OPEN cursor FOR EXECUTE to + Allow PL/pgSQL's OPEN cursor FOR EXECUTE to use parameters (Pavel Stehule, Itagaki Takahiro) - This is accomplished with a new USING clause. + This is accomplished with a new USING clause. @@ -9782,28 +9782,28 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add new PL/Perl functions: quote_literal(), - quote_nullable(), quote_ident(), - encode_bytea(), decode_bytea(), - looks_like_number(), - encode_array_literal(), - encode_array_constructor() (Tim Bunce) + linkend="plperl-utility-functions">quote_literal(), + quote_nullable(), quote_ident(), + encode_bytea(), decode_bytea(), + looks_like_number(), + encode_array_literal(), + encode_array_constructor() (Tim Bunce) Add server parameter plperl.on_init to + linkend="guc-plperl-on-init">plperl.on_init to specify a PL/Perl initialization function (Tim Bunce) plperl.on_plperl_init + linkend="guc-plperl-on-plperl-init">plperl.on_plperl_init and plperl.on_plperlu_init + linkend="guc-plperl-on-plperl-init">plperl.on_plperlu_init are also available for initialization that is specific to the trusted or untrusted language respectively. @@ -9811,29 +9811,29 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Support END blocks in PL/Perl (Tim Bunce) + Support END blocks in PL/Perl (Tim Bunce) - END blocks do not currently allow database access. + END blocks do not currently allow database access. - Allow use strict in PL/Perl (Tim Bunce) + Allow use strict in PL/Perl (Tim Bunce) - Perl strict checks can also be globally enabled with the + Perl strict checks can also be globally enabled with the new server parameter plperl.use_strict. + linkend="guc-plperl-use-strict">plperl.use_strict. - Allow require in PL/Perl (Tim Bunce) + Allow require in PL/Perl (Tim Bunce) @@ -9845,7 +9845,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Allow use feature in PL/Perl if Perl version 5.10 or + Allow use feature in PL/Perl if Perl version 5.10 or later is used (Tim Bunce) @@ -9879,13 +9879,13 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Improve bytea support in PL/Python (Caleb Welton) + Improve bytea support in PL/Python (Caleb Welton) - Bytea values passed into PL/Python are now represented as - binary, rather than the PostgreSQL bytea text format. - Bytea values containing null bytes are now also output + Bytea values passed into PL/Python are now represented as + binary, rather than the PostgreSQL bytea text format. + Bytea values containing null bytes are now also output properly from PL/Python. Passing of boolean, integer, and float values was also improved. @@ -9906,14 +9906,14 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then - Add Python 3 support to PL/Python (Peter Eisentraut) + Add Python 3 support to PL/Python (Peter Eisentraut) The new server-side language is called plpython3u. This + linkend="plpython-python23">plpython3u. This cannot be used in the same session with the - Python 2 server-side language. + Python 2 server-side language. @@ -9936,8 +9936,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add an @@ -9945,21 +9945,21 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> - Add support for quoting/escaping the values of psql + Add support for quoting/escaping the values of psql variables as SQL strings or identifiers (Pavel Stehule, Robert Haas) - For example, :'var' will produce the value of - var quoted and properly escaped as a literal string, while - :"var" will produce its value quoted and escaped as an + For example, :'var' will produce the value of + var quoted and properly escaped as a literal string, while + :"var" will produce its value quoted and escaped as an identifier. @@ -9967,11 +9967,11 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Ignore a leading UTF-8-encoded Unicode byte-order marker in - script files read by psql (Itagaki Takahiro) + script files read by psql (Itagaki Takahiro) - This is enabled when the client encoding is UTF-8. + This is enabled when the client encoding is UTF-8. It improves compatibility with certain editors, mostly on Windows, that insist on inserting such markers. @@ -9979,57 +9979,57 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Fix psql --file - to properly honor (Bruce Momjian) - Avoid overwriting of psql's command-line history when - two psql sessions are run concurrently (Tom Lane) + Avoid overwriting of psql's command-line history when + two psql sessions are run concurrently (Tom Lane) - Improve psql's tab completion support (Itagaki + Improve psql's tab completion support (Itagaki Takahiro) - Show \timing output when it is enabled, regardless of - quiet mode (Peter Eisentraut) + Show \timing output when it is enabled, regardless of + quiet mode (Peter Eisentraut) - <application>psql</> Display + <application>psql</application> Display - Improve display of wrapped columns in psql (Roger + Improve display of wrapped columns in psql (Roger Leigh) This behavior is now the default. The previous formatting is available by using \pset linestyle - old-ascii. + old-ascii. - Allow psql to use fancy Unicode line-drawing - characters via \pset linestyle unicode (Roger Leigh) + Allow psql to use fancy Unicode line-drawing + characters via \pset linestyle unicode (Roger Leigh) @@ -10038,27 +10038,27 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <application>psql</> <link - linkend="APP-PSQL-meta-commands"><command>\d</></link> + <title><application>psql</application> <link + linkend="APP-PSQL-meta-commands"><command>\d</command></link> Commands - Make \d show child tables that inherit from the specified + Make \d show child tables that inherit from the specified parent (Damien Clochard) - \d shows only the number of child tables, while - \d+ shows the names of all child tables. + \d shows only the number of child tables, while + \d+ shows the names of all child tables. - Show definitions of index columns in \d index_name + Show definitions of index columns in \d index_name (Khee Chin) @@ -10070,7 +10070,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then Show a view's defining query only in - \d+, not in \d (Peter Eisentraut) + \d+, not in \d (Peter Eisentraut) @@ -10084,33 +10084,33 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Make pg_dump/pg_restore - also remove large objects (Itagaki Takahiro) - Fix pg_dump to properly dump large objects when - standard_conforming_strings is enabled (Tom Lane) + Fix pg_dump to properly dump large objects when + standard_conforming_strings is enabled (Tom Lane) The previous coding could fail when dumping to an archive file - and then generating script output from pg_restore. + and then generating script output from pg_restore. - pg_restore now emits large-object data in hex format + pg_restore now emits large-object data in hex format when generating script output (Tom Lane) @@ -10123,16 +10123,16 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Allow pg_dump to dump comments attached to columns + Allow pg_dump to dump comments attached to columns of composite types (Taro Minowa (Higepon)) - Make pg_dump @@ -10143,7 +10143,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - pg_restore now complains if any command-line arguments + pg_restore now complains if any command-line arguments remain after the switches and optional file name (Tom Lane) @@ -10158,28 +10158,28 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then <link - linkend="app-pg-ctl"><application>pg_ctl</></link> + linkend="app-pg-ctl">pg_ctl - Allow pg_ctl to be used safely to start the - postmaster during a system reboot (Tom Lane) + Allow pg_ctl to be used safely to start the + postmaster during a system reboot (Tom Lane) - Previously, pg_ctl's parent process could have been - mistakenly identified as a running postmaster based on - a stale postmaster lock file, resulting in a transient + Previously, pg_ctl's parent process could have been + mistakenly identified as a running postmaster based on + a stale postmaster lock file, resulting in a transient failure to start the database. - Give pg_ctl the ability to initialize the database - (by invoking initdb) (Zdenek Kotala) + Give pg_ctl the ability to initialize the database + (by invoking initdb) (Zdenek Kotala) @@ -10190,25 +10190,25 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <application>Development Tools</> + <application>Development Tools</application> - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> - Add new libpq functions + Add new libpq functions PQconnectdbParams() - and PQconnectStartParams() (Guillaume + linkend="libpq-connect">PQconnectdbParams() + and PQconnectStartParams() (Guillaume Lelarge) - These functions are similar to PQconnectdb() and - PQconnectStart() except that they accept a null-terminated + These functions are similar to PQconnectdb() and + PQconnectStart() except that they accept a null-terminated array of connection options, rather than requiring all options to be provided in a single string. @@ -10216,22 +10216,22 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add libpq functions PQescapeLiteral() - and PQescapeIdentifier() (Robert Haas) + Add libpq functions PQescapeLiteral() + and PQescapeIdentifier() (Robert Haas) These functions return appropriately quoted and escaped SQL string literals and identifiers. The caller is not required to pre-allocate - the string result, as is required by PQescapeStringConn(). 
+ the string result, as is required by PQescapeStringConn(). Add support for a per-user service file (.pg_service.conf), + linkend="libpq-pgservice">.pg_service.conf), which is checked before the site-wide service file (Peter Eisentraut) @@ -10239,7 +10239,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Properly report an error if the specified libpq service + Properly report an error if the specified libpq service cannot be found (Peter Eisentraut) @@ -10258,15 +10258,15 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Avoid extra system calls to block and unblock SIGPIPE - in libpq, on platforms that offer alternative methods + Avoid extra system calls to block and unblock SIGPIPE + in libpq, on platforms that offer alternative methods (Jeremy Kerr) - When a .pgpass-supplied + When a .pgpass-supplied password fails, mention where the password came from in the error message (Bruce Momjian) @@ -10288,22 +10288,22 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <link linkend="ecpg"><application>ecpg</></link> + <link linkend="ecpg"><application>ecpg</application></link> - Add SQLDA - (SQL Descriptor Area) support to ecpg + Add SQLDA + (SQL Descriptor Area) support to ecpg (Boszormenyi Zoltan) - Add the DESCRIBE - [ OUTPUT ] statement to ecpg + Add the DESCRIBE + [ OUTPUT ] statement to ecpg (Boszormenyi Zoltan) @@ -10317,28 +10317,28 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add the string data type in ecpg + Add the string data type in ecpg Informix-compatibility mode (Boszormenyi Zoltan) - Allow ecpg to use new and old + Allow ecpg to use new and old variable names without restriction (Michael Meskes) - Allow ecpg to use variable names in - free() (Michael Meskes) + Allow ecpg to use variable names in + free() (Michael Meskes) - Make ecpg_dynamic_type() return zero for non-SQL3 data + Make ecpg_dynamic_type() return zero for non-SQL3 data types (Michael Meskes) @@ -10350,41 +10350,41 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Support long long types on platforms that already have 64-bit - long (Michael Meskes) + Support long long types on platforms that already have 64-bit + long (Michael Meskes) - <application>ecpg</> Cursors + <application>ecpg</application> Cursors - Add out-of-scope cursor support in ecpg's native mode + Add out-of-scope cursor support in ecpg's native mode (Boszormenyi Zoltan) - This allows DECLARE to use variables that are not in - scope when OPEN is called. This facility already existed - in ecpg's Informix-compatibility mode. + This allows DECLARE to use variables that are not in + scope when OPEN is called. This facility already existed + in ecpg's Informix-compatibility mode. - Allow dynamic cursor names in ecpg (Boszormenyi Zoltan) + Allow dynamic cursor names in ecpg (Boszormenyi Zoltan) - Allow ecpg to use noise words FROM and - IN in FETCH and MOVE (Boszormenyi + Allow ecpg to use noise words FROM and + IN in FETCH and MOVE (Boszormenyi Zoltan) @@ -10409,8 +10409,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then The thread-safety option can be disabled with configure - . @@ -10421,12 +10421,12 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Now that /proc/self/oom_adj allows disabling - of the Linux out-of-memory (OOM) + Now that /proc/self/oom_adj allows disabling + of the Linux out-of-memory (OOM) killer, it's recommendable to disable OOM kills for the postmaster. It may then be desirable to re-enable OOM kills for the postmaster's child processes. 
The new compile-time option LINUX_OOM_ADJ + linkend="linux-memory-overcommit">LINUX_OOM_ADJ allows the killer to be reactivated for child processes. @@ -10440,31 +10440,31 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - New Makefile targets world, - install-world, and installcheck-world + New Makefile targets world, + install-world, and installcheck-world (Andrew Dunstan) - These are similar to the existing all, install, - and installcheck targets, but they also build the - HTML documentation, build and test contrib, - and test server-side languages and ecpg. + These are similar to the existing all, install, + and installcheck targets, but they also build the + HTML documentation, build and test contrib, + and test server-side languages and ecpg. Add data and documentation installation location control to - PGXS Makefiles (Mark Cave-Ayland) + PGXS Makefiles (Mark Cave-Ayland) - Add Makefile rules to build the PostgreSQL documentation - as a single HTML file or as a single plain-text file + Add Makefile rules to build the PostgreSQL documentation + as a single HTML file or as a single plain-text file (Peter Eisentraut, Bruce Momjian) @@ -10482,12 +10482,12 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Support compiling on 64-bit - Windows and running in 64-bit + Windows and running in 64-bit mode (Tsutomu Yamada, Magnus Hagander) - This allows for large shared memory sizes on Windows. + This allows for large shared memory sizes on Windows. @@ -10495,7 +10495,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Support server builds using Visual Studio - 2008 (Magnus Hagander) + 2008 (Magnus Hagander) @@ -10518,8 +10518,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - For example, the prebuilt HTML documentation is now in - doc/src/sgml/html/; the manual pages are packaged + For example, the prebuilt HTML documentation is now in + doc/src/sgml/html/; the manual pages are packaged similarly. @@ -10543,13 +10543,13 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then User-defined constraint triggers now have entries in - pg_constraint as well as pg_trigger + pg_constraint as well as pg_trigger (Tom Lane) Because of this change, - pg_constraint.pgconstrname is now + pg_constraint.pgconstrname is now redundant and has been removed. @@ -10557,8 +10557,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add system catalog columns - pg_constraint.conindid and - pg_trigger.tgconstrindid + pg_constraint.conindid and + pg_trigger.tgconstrindid to better document the use of indexes for constraint enforcement (Tom Lane) @@ -10578,7 +10578,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Improve source code test coverage, including contrib, PL/Python, + Improve source code test coverage, including contrib, PL/Python, and PL/Perl (Peter Eisentraut, Andrew Dunstan) @@ -10598,7 +10598,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Automatically generate the initial contents of - pg_attribute for bootstrapped catalogs + pg_attribute for bootstrapped catalogs (John Naylor) @@ -10610,8 +10610,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Split the processing of - INSERT/UPDATE/DELETE operations out - of execMain.c (Marko Tiikkaja) + INSERT/UPDATE/DELETE operations out + of execMain.c (Marko Tiikkaja) @@ -10622,7 +10622,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Simplify translation of psql's SQL help text + Simplify translation of psql's SQL help text (Peter Eisentraut) @@ -10641,8 +10641,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then Add a new ERRCODE_INVALID_PASSWORD - SQLSTATE error code (Bruce Momjian) + linkend="errcodes-table">ERRCODE_INVALID_PASSWORD + SQLSTATE error code (Bruce Momjian) @@ -10661,23 +10661,23 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add new documentation section - about running PostgreSQL in non-durable mode + about running PostgreSQL in non-durable mode to improve performance (Bruce Momjian) - Restructure the HTML documentation - Makefile rules to make their dependency checks work + Restructure the HTML documentation + Makefile rules to make their dependency checks work correctly, avoiding unnecessary rebuilds (Peter Eisentraut) - Use DocBook XSL stylesheets for man page - building, rather than Docbook2X (Peter Eisentraut) + Use DocBook XSL stylesheets for man page + building, rather than Docbook2X (Peter Eisentraut) @@ -10711,22 +10711,22 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Require Autoconf 2.63 to build - configure (Peter Eisentraut) + Require Autoconf 2.63 to build + configure (Peter Eisentraut) - Require Flex 2.5.31 or later to build - from a CVS checkout (Tom Lane) + Require Flex 2.5.31 or later to build + from a CVS checkout (Tom Lane) - Require Perl version 5.8 or later to build - from a CVS checkout (John Naylor, Andrew Dunstan) + Require Perl version 5.8 or later to build + from a CVS checkout (John Naylor, Andrew Dunstan) @@ -10741,25 +10741,25 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Use a more modern API for Bonjour (Tom Lane) + Use a more modern API for Bonjour (Tom Lane) - Bonjour support now requires macOS 10.3 or later. + Bonjour support now requires macOS 10.3 or later. The older API has been deprecated by Apple. - Add spinlock support for the SuperH + Add spinlock support for the SuperH architecture (Nobuhiro Iwamatsu) - Allow non-GCC compilers to use inline functions if + Allow non-GCC compilers to use inline functions if they support them (Kurt Harriman) @@ -10773,14 +10773,14 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Restructure use of LDFLAGS to be more consistent + Restructure use of LDFLAGS to be more consistent across platforms (Tom Lane) - LDFLAGS is now used for linking both executables and shared - libraries, and we add on LDFLAGS_EX when linking - executables, or LDFLAGS_SL when linking shared libraries. + LDFLAGS is now used for linking both executables and shared + libraries, and we add on LDFLAGS_EX when linking + executables, or LDFLAGS_SL when linking shared libraries. @@ -10795,15 +10795,15 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Make backend header files safe to include in C++ + Make backend header files safe to include in C++ (Kurt Harriman, Peter Eisentraut) These changes remove keyword conflicts that previously made - C++ usage difficult in backend code. However, there - are still other complexities when using C++ for backend - functions. extern "C" { } is still necessary in + C++ usage difficult in backend code. However, there + are still other complexities when using C++ for backend + functions. extern "C" { } is still necessary in appropriate places, and memory management and error handling are still problematic. @@ -10812,15 +10812,15 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then Add AggCheckCallContext() - for use in detecting if a C function is + linkend="xaggr">AggCheckCallContext() + for use in detecting if a C function is being called as an aggregate (Hitoshi Harada) - Change calling convention for SearchSysCache() and related + Change calling convention for SearchSysCache() and related functions to avoid hard-wiring the maximum number of cache keys (Robert Haas) @@ -10833,8 +10833,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Require calls of fastgetattr() and - heap_getattr() backend macros to provide a non-NULL fourth + Require calls of fastgetattr() and + heap_getattr() backend macros to provide a non-NULL fourth argument (Robert Haas) @@ -10842,7 +10842,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Custom typanalyze functions should no longer rely on - VacAttrStats.attr to determine the type + VacAttrStats.attr to determine the type of data they will be passed (Tom Lane) @@ -10888,7 +10888,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add contrib/pg_upgrade + Add contrib/pg_upgrade to support in-place upgrades (Bruce Momjian) @@ -10903,15 +10903,15 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add support for preserving relation relfilenode values + linkend="catalog-pg-class">relfilenode values during binary upgrades (Bruce Momjian) - Add support for preserving pg_type - and pg_enum OIDs during binary upgrades + Add support for preserving pg_type + and pg_enum OIDs during binary upgrades (Bruce Momjian) @@ -10919,7 +10919,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Move data files within tablespaces into - PostgreSQL-version-specific subdirectories + PostgreSQL-version-specific subdirectories (Bruce Momjian) @@ -10941,22 +10941,22 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add multithreading option ( - This allows multiple CPUs to be used by pgbench, + This allows multiple CPUs to be used by pgbench, reducing the risk of pgbench itself becoming the test bottleneck. - Add \shell and \setshell meta + Add \shell and \setshell meta commands to contrib/pgbench + linkend="pgbench">contrib/pgbench (Michael Paquier) @@ -10964,20 +10964,20 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then New features for contrib/dict_xsyn + linkend="dict-xsyn">contrib/dict_xsyn (Sergey Karpov) - The new options are matchorig, matchsynonyms, - and keepsynonyms. + The new options are matchorig, matchsynonyms, + and keepsynonyms. Add full text dictionary contrib/unaccent + linkend="unaccent">contrib/unaccent (Teodor Sigaev) @@ -10990,24 +10990,24 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add dblink_get_notify() - to contrib/dblink (Marcus Kempe) + linkend="CONTRIB-DBLINK-GET-NOTIFY">dblink_get_notify() + to contrib/dblink (Marcus Kempe) - This allows asynchronous notifications in dblink. + This allows asynchronous notifications in dblink. - Improve contrib/dblink's handling of dropped columns + Improve contrib/dblink's handling of dropped columns (Tom Lane) This affects dblink_build_sql_insert() + linkend="CONTRIB-DBLINK-BUILD-SQL-INSERT">dblink_build_sql_insert() and related functions. These functions now number columns according to logical not physical column numbers. @@ -11016,23 +11016,23 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then Greatly increase contrib/hstore's data + linkend="hstore">contrib/hstore's data length limit, and add B-tree and hash support so GROUP - BY and DISTINCT operations are possible on - hstore columns (Andrew Gierth) + BY and DISTINCT operations are possible on + hstore columns (Andrew Gierth) New functions and operators were also added. These improvements - make hstore a full-function key-value store embedded in - PostgreSQL. + make hstore a full-function key-value store embedded in + PostgreSQL. Add contrib/passwordcheck + linkend="passwordcheck">contrib/passwordcheck to support site-specific password strength policies (Laurenz Albe) @@ -11046,7 +11046,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add contrib/pg_archivecleanup + linkend="pgarchivecleanup">contrib/pg_archivecleanup tool (Simon Riggs) @@ -11060,7 +11060,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add query text to contrib/auto_explain + linkend="auto-explain">contrib/auto_explain output (Andrew Dunstan) @@ -11068,7 +11068,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add buffer access counters to contrib/pg_stat_statements + linkend="pgstatstatements">contrib/pg_stat_statements (Itagaki Takahiro) @@ -11076,10 +11076,10 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Update contrib/start-scripts/linux - to use /proc/self/oom_adj to disable the - Linux - out-of-memory (OOM) killer (Alex + linkend="server-start">contrib/start-scripts/linux + to use /proc/self/oom_adj to disable the + Linux + out-of-memory (OOM) killer (Alex Hunsaker, Tom Lane) diff --git a/doc/src/sgml/release-9.1.sgml b/doc/src/sgml/release-9.1.sgml index c354b7d1bc..2939631609 100644 --- a/doc/src/sgml/release-9.1.sgml +++ b/doc/src/sgml/release-9.1.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 9.1.X series. Users are encouraged to update to a newer release branch soon. @@ -68,13 +68,13 @@ - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. @@ -82,15 +82,15 @@ Remove artificial restrictions on the values accepted - by numeric_in() and numeric_recv() + by numeric_in() and numeric_recv() (Tom Lane) We allow numeric values up to the limit of the storage format (more - than 1e100000), so it seems fairly pointless - that numeric_in() rejected scientific-notation exponents - above 1000. Likewise, it was silly for numeric_recv() to + than 1e100000), so it seems fairly pointless + that numeric_in() rejected scientific-notation exponents + above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. @@ -112,7 +112,7 @@ - Disallow starting a standalone backend with standby_mode + Disallow starting a standalone backend with standby_mode turned on (Michael Paquier) @@ -126,7 +126,7 @@ Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -137,26 +137,26 @@ - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. 
+ during PQreset(), but there might be related cases. - Make ecpg's and options work consistently with our other executables (Haribabu Kommi) - Fix contrib/intarray/bench/bench.pl to print the results - of the EXPLAIN it does when given the option (Daniel Gustafsson) @@ -170,17 +170,17 @@ If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from - their time zone database, as they did in tzdata + their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused - the pg_timezone_abbrevs view to fail altogether. + the pg_timezone_abbrevs view to fail altogether. - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -193,15 +193,15 @@ or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. - In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. @@ -226,7 +226,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.1.X release series in September 2016. Users are encouraged to update to a newer release branch soon. @@ -253,17 +253,17 @@ Fix possible mis-evaluation of - nested CASE-WHEN expressions (Heikki + nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane) - A CASE expression appearing within the test value - subexpression of another CASE could become confused about + A CASE expression appearing within the test value + subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by - a CASE expression could result in passing the wrong test - value to functions called within a CASE expression in the + a CASE expression could result in passing the wrong test + value to functions called within a CASE expression in the SQL function's body. If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423) @@ -277,7 +277,7 @@ - Numerous places in vacuumdb and other client programs + Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. 
Also, ensure that when a conninfo string is used as a database name @@ -286,22 +286,22 @@ Fix handling of paired double quotes - in psql's \connect - and \password commands to match the documentation. + in psql's \connect + and \password commands to match the documentation. - Introduce a new - pg_dumpall now refuses to deal with database and role + pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet. @@ -311,40 +311,40 @@ These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser - executes pg_dumpall or other routine maintenance + executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424) - Fix corner-case misbehaviors for IS NULL/IS NOT - NULL applied to nested composite values (Andrew Gierth, Tom Lane) + Fix corner-case misbehaviors for IS NULL/IS NOT + NULL applied to nested composite values (Andrew Gierth, Tom Lane) - The SQL standard specifies that IS NULL should return + The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS - NULL yields TRUE), but this is not meant to apply recursively - (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). + NULL yields TRUE), but this is not meant to apply recursively + (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), - and contrib/postgres_fdw could produce remote queries + and contrib/postgres_fdw could produce remote queries that misbehaved similarly. - Make the inet and cidr data types properly reject + Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane) - Prevent crash in close_ps() - (the point ## lseg operator) + Prevent crash in close_ps() + (the point ## lseg operator) for NaN input coordinates (Tom Lane) @@ -355,12 +355,12 @@ - Fix several one-byte buffer over-reads in to_number() + Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut) - In several cases the to_number() function would read one + In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory. @@ -370,7 +370,7 @@ Avoid unsafe intermediate state during expensive paths - through heap_update() (Masahiko Sawada, Andres Freund) + through heap_update() (Masahiko Sawada, Andres Freund) @@ -383,12 +383,12 @@ - Avoid consuming a transaction ID during VACUUM + Avoid consuming a transaction ID during VACUUM (Alexander Korotkov) - Some cases in VACUUM unnecessarily caused an XID to be + Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing. 
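The non-recursive IS NULL rule described in the IS NULL/IS NOT NULL entry above can be observed directly:

<programlisting>
SELECT ROW(NULL, NULL) IS NULL;              -- true: every field is null
SELECT ROW(NULL, ROW(NULL, NULL)) IS NULL;   -- false: the rule is not recursive
</programlisting>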
@@ -397,12 +397,12 @@ - Avoid canceling hot-standby queries during VACUUM FREEZE + Avoid canceling hot-standby queries during VACUUM FREEZE (Simon Riggs, Álvaro Herrera) - VACUUM FREEZE on an otherwise-idle master server could + VACUUM FREEZE on an otherwise-idle master server could result in unnecessary cancellations of queries on its standby servers. @@ -410,8 +410,8 @@ - When a manual ANALYZE specifies a column list, don't - reset the table's changes_since_analyze counter + When a manual ANALYZE specifies a column list, don't + reset the table's changes_since_analyze counter (Tom Lane) @@ -423,7 +423,7 @@ - Fix ANALYZE's overestimation of n_distinct + Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane) @@ -451,8 +451,8 @@ - Fix contrib/btree_gin to handle the smallest - possible bigint value correctly (Peter Eisentraut) + Fix contrib/btree_gin to handle the smallest + possible bigint value correctly (Peter Eisentraut) @@ -465,21 +465,21 @@ It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure - that PQserverVersion() returns the correct value for + that PQserverVersion() returns the correct value for such cases. - Fix ecpg's code for unsigned long long + Fix ecpg's code for unsigned long long array elements (Michael Meskes) - Make pg_basebackup accept -Z 0 as + Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao) @@ -491,13 +491,13 @@ Branch: REL9_1_STABLE [d56c02f1a] 2016-06-19 13:45:03 -0400 Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 --> - Revert to the old heuristic timeout for pg_ctl start -w + Revert to the old heuristic timeout for pg_ctl start -w (Tom Lane) The new method adopted as of release 9.1.20 does not work - when silent_mode is enabled, so go back to the old way. + when silent_mode is enabled, so go back to the old way. @@ -530,7 +530,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Update our copy of the timezone code to match - IANA's tzcode release 2016c (Tom Lane) + IANA's tzcode release 2016c (Tom Lane) @@ -542,7 +542,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Update time zone data files to tzdata release 2016f + Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco. @@ -568,7 +568,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.1.X release series in September 2016. Users are encouraged to update to a newer release branch soon. @@ -604,7 +604,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. Failures have been reported specifically when a client application - uses SSL connections in libpq concurrently with + uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection. 
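For the ANALYZE column-list entry above, a minimal sketch (table and column names are hypothetical):

<programlisting>
-- gathers statistics for one column only and, after this fix, no longer
-- resets the counter autovacuum uses to schedule table-wide analyzes
ANALYZE orders (customer_id);
ANALYZE orders;   -- a full-table ANALYZE still resets changes_since_analyze
</programlisting>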
@@ -613,7 +613,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix failed to build any N-way joins + Fix failed to build any N-way joins planner error with a full join enclosed in the right-hand side of a left join (Tom Lane) @@ -621,8 +621,8 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix possible misbehavior of TH, th, - and Y,YYY format codes in to_timestamp() + Fix possible misbehavior of TH, th, + and Y,YYY format codes in to_timestamp() (Tom Lane) @@ -634,28 +634,28 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix dumping of rules and views in which the array - argument of a value operator - ANY (array) construct is a sub-SELECT + Fix dumping of rules and views in which the array + argument of a value operator + ANY (array) construct is a sub-SELECT (Tom Lane) - Make pg_regress use a startup timeout from the - PGCTLTIMEOUT environment variable, if that's set (Tom Lane) + Make pg_regress use a startup timeout from the + PGCTLTIMEOUT environment variable, if that's set (Tom Lane) This is for consistency with a behavior recently added - to pg_ctl; it eases automated testing on slow machines. + to pg_ctl; it eases automated testing on slow machines. - Fix pg_upgrade to correctly restore extension + Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane) @@ -663,23 +663,23 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no - immediate ill effects, but would cause later pg_dump + immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore. - Rename internal function strtoi() - to strtoint() to avoid conflict with a NetBSD library + Rename internal function strtoi() + to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro) - Fix reporting of errors from bind() - and listen() system calls on Windows (Tom Lane) + Fix reporting of errors from bind() + and listen() system calls on Windows (Tom Lane) @@ -692,12 +692,12 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Avoid possibly-unsafe use of Windows' FormatMessage() + Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich) - Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where + Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful. @@ -705,9 +705,9 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Update time zone data files to tzdata release 2016d + Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone - names Europe/Kirov and Asia/Tomsk to reflect + names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions. @@ -754,56 +754,56 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix incorrect handling of NULL index entries in - indexed ROW() comparisons (Tom Lane) + indexed ROW() comparisons (Tom Lane) An index search using a row comparison such as ROW(a, b) > - ROW('x', 'y') would stop upon reaching a NULL entry in - the b column, ignoring the fact that there might be - non-NULL b values associated with later values - of a. 
+ ROW('x', 'y') would stop upon reaching a NULL entry in + the b column, ignoring the fact that there might be + non-NULL b values associated with later values + of a. Avoid unlikely data-loss scenarios due to renaming files without - adequate fsync() calls before and after (Michael Paquier, + adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund) - Correctly handle cases where pg_subtrans is close to XID + Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes) - Fix corner-case crash due to trying to free localeconv() + Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane) - Fix parsing of affix files for ispell dictionaries + Fix parsing of affix files for ispell dictionaries (Tom Lane) The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for - example I in Turkish UTF8 locales. + example I in Turkish UTF8 locales. - Avoid use of sscanf() to parse ispell + Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov) @@ -829,27 +829,27 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix psql's tab completion logic to handle multibyte + Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas) - Fix psql's tab completion for - SECURITY LABEL (Tom Lane) + Fix psql's tab completion for + SECURITY LABEL (Tom Lane) - Pressing TAB after SECURITY LABEL might cause a crash + Pressing TAB after SECURITY LABEL might cause a crash or offering of inappropriate keywords. - Make pg_ctl accept a wait timeout from the - PGCTLTIMEOUT environment variable, if none is specified on + Make pg_ctl accept a wait timeout from the + PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch) @@ -863,20 +863,20 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix incorrect test for Windows service status - in pg_ctl (Manuel Mathar) + in pg_ctl (Manuel Mathar) The previous set of minor releases attempted to - fix pg_ctl to properly determine whether to send log + fix pg_ctl to properly determine whether to send log messages to Window's Event Log, but got the test backwards. 
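A sketch of the indexed row comparison described above, assuming a hypothetical table t with text columns a and b:

    CREATE INDEX t_a_b_idx ON t (a, b);
    -- Per the fix above, the index scan no longer stops early upon reaching a NULL entry in column b.
    SELECT * FROM t WHERE ROW(a, b) > ROW('x', 'y') ORDER BY a, b;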
- Fix pgbench to correctly handle the combination - of -C and -M prepared options (Tom Lane) + Fix pgbench to correctly handle the combination + of -C and -M prepared options (Tom Lane) @@ -897,21 +897,21 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix multiple mistakes in the statistics returned - by contrib/pgstattuple's pgstatindex() + by contrib/pgstattuple's pgstatindex() function (Tom Lane) - Remove dependency on psed in MSVC builds, since it's no + Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan) - Update time zone data files to tzdata release 2016c + Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia @@ -972,25 +972,25 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Perform an immediate shutdown if the postmaster.pid file + Perform an immediate shutdown if the postmaster.pid file is removed (Tom Lane) The postmaster now checks every minute or so - that postmaster.pid is still there and still contains its + that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had - received SIGQUIT. The main motivation for this change + received SIGQUIT. The main motivation for this change is to ensure that failed buildfarm runs will get cleaned up without manual intervention; but it also serves to limit the bad effects if a - DBA forcibly removes postmaster.pid and then starts a new + DBA forcibly removes postmaster.pid and then starts a new postmaster. - In SERIALIZABLE transaction isolation mode, serialization + In SERIALIZABLE transaction isolation mode, serialization anomalies could be missed due to race conditions during insertions (Kevin Grittner, Thomas Munro) @@ -999,7 +999,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix failure to emit appropriate WAL records when doing ALTER - TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, + TABLE ... 
SET TABLESPACE for unlogged relations (Michael Paquier, Andres Freund) @@ -1018,21 +1018,21 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix ALTER COLUMN TYPE to reconstruct inherited check + Fix ALTER COLUMN TYPE to reconstruct inherited check constraints properly (Tom Lane) - Fix REASSIGN OWNED to change ownership of composite types + Fix REASSIGN OWNED to change ownership of composite types properly (Álvaro Herrera) - Fix REASSIGN OWNED and ALTER OWNER to correctly + Fix REASSIGN OWNED and ALTER OWNER to correctly update granted-permissions lists when changing owners of data types, foreign data wrappers, or foreign servers (Bruce Momjian, Álvaro Herrera) @@ -1041,7 +1041,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix REASSIGN OWNED to ignore foreign user mappings, + Fix REASSIGN OWNED to ignore foreign user mappings, rather than fail (Álvaro Herrera) @@ -1063,14 +1063,14 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix dumping of whole-row Vars in ROW() - and VALUES() lists (Tom Lane) + Fix dumping of whole-row Vars in ROW() + and VALUES() lists (Tom Lane) - Fix possible internal overflow in numeric division + Fix possible internal overflow in numeric division (Dean Rasheed) @@ -1122,7 +1122,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 This causes the code to emit regular expression is too - complex errors in some cases that previously used unreasonable + complex errors in some cases that previously used unreasonable amounts of time and memory. @@ -1135,14 +1135,14 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Make %h and %r escapes - in log_line_prefix work for messages emitted due - to log_connections (Tom Lane) + Make %h and %r escapes + in log_line_prefix work for messages emitted due + to log_connections (Tom Lane) - Previously, %h/%r started to work just after a - new session had emitted the connection received log message; + Previously, %h/%r started to work just after a + new session had emitted the connection received log message; now they work for that message too. @@ -1155,7 +1155,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 This oversight resulted in failure to recover from crashes - whenever logging_collector is turned on. + whenever logging_collector is turned on. @@ -1181,13 +1181,13 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - In psql, ensure that libreadline's idea + In psql, ensure that libreadline's idea of the screen size is updated when the terminal window size changes (Merlin Moncure) - Previously, libreadline did not notice if the window + Previously, libreadline did not notice if the window was resized during query output, leading to strange behavior during later input of multiline queries. 
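A minimal sketch of the REASSIGN OWNED command affected by the fixes above; the role names are hypothetical:

    -- Per the fixes above, this now also transfers ownership of composite types
    -- and correctly updates granted-permissions lists.
    REASSIGN OWNED BY old_owner TO new_owner;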
@@ -1195,15 +1195,15 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix psql's \det command to interpret its - pattern argument the same way as other \d commands with + Fix psql's \det command to interpret its + pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart) - Avoid possible crash in psql's \c command + Avoid possible crash in psql's \c command when previous connection was via Unix socket and command specifies a new hostname and same username (Tom Lane) @@ -1211,21 +1211,21 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - In pg_ctl start -w, test child process status directly + In pg_ctl start -w, test child process status directly rather than relying on heuristics (Tom Lane, Michael Paquier) - Previously, pg_ctl relied on an assumption that the new - postmaster would always create postmaster.pid within five + Previously, pg_ctl relied on an assumption that the new + postmaster would always create postmaster.pid within five seconds. But that can fail on heavily-loaded systems, - causing pg_ctl to report incorrectly that the + causing pg_ctl to report incorrectly that the postmaster failed to start. Except on Windows, this change also means that a pg_ctl start - -w done immediately after another such command will now reliably + -w done immediately after another such command will now reliably fail, whereas previously it would report success if done within two seconds of the first command. @@ -1233,23 +1233,23 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - In pg_ctl start -w, don't attempt to use a wildcard listen + In pg_ctl start -w, don't attempt to use a wildcard listen address to connect to the postmaster (Kondo Yuta) - On Windows, pg_ctl would fail to detect postmaster - startup if listen_addresses is set to 0.0.0.0 - or ::, because it would try to use that value verbatim as + On Windows, pg_ctl would fail to detect postmaster + startup if listen_addresses is set to 0.0.0.0 + or ::, because it would try to use that value verbatim as the address to connect to, which doesn't work. Instead assume - that 127.0.0.1 or ::1, respectively, is the + that 127.0.0.1 or ::1, respectively, is the right thing to use. - In pg_ctl on Windows, check service status to decide + In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier) @@ -1257,18 +1257,18 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - In pg_dump and pg_basebackup, adopt + In pg_dump and pg_basebackup, adopt the GNU convention for handling tar-archive members exceeding 8GB (Tom Lane) - The POSIX standard for tar file format does not allow + The POSIX standard for tar file format does not allow archive member files to exceed 8GB, but most modern implementations - of tar support an extension that fixes that. Adopt - this extension so that pg_dump with no longer fails on tables with more than 8GB of data, and so - that pg_basebackup can handle files larger than 8GB. + that pg_basebackup can handle files larger than 8GB. In addition, fix some portability issues that could cause failures for members between 4GB and 8GB on some platforms. 
Potentially these problems could cause unrecoverable data loss due to unreadable backup @@ -1278,44 +1278,44 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix assorted corner-case bugs in pg_dump's processing + Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane) - Make pg_dump mark a view's triggers as needing to be + Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during - parallel pg_restore (Tom Lane) + parallel pg_restore (Tom Lane) Ensure that relation option values are properly quoted - in pg_dump (Kouhei Sutou, Tom Lane) + in pg_dump (Kouhei Sutou, Tom Lane) A reloption value that isn't a simple identifier or number could lead to dump/reload failures due to syntax errors in CREATE statements - issued by pg_dump. This is not an issue with any - reloption currently supported by core PostgreSQL, but + issued by pg_dump. This is not an issue with any + reloption currently supported by core PostgreSQL, but extensions could allow reloptions that cause the problem. - Fix pg_upgrade's file-copying code to handle errors + Fix pg_upgrade's file-copying code to handle errors properly on Windows (Bruce Momjian) - Install guards in pgbench against corner-case overflow + Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier) @@ -1323,22 +1323,22 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Prevent certain PL/Java parameters from being set by + Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch) - This change mitigates a PL/Java security bug - (CVE-2016-0766), which was fixed in PL/Java by marking + This change mitigates a PL/Java security bug + (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for - sites that update PostgreSQL more frequently - than PL/Java, make the core code aware of them also. + sites that update PostgreSQL more frequently + than PL/Java, make the core code aware of them also. - Improve libpq's handling of out-of-memory situations + Improve libpq's handling of out-of-memory situations (Michael Paquier, Amit Kapila, Heikki Linnakangas) @@ -1346,42 +1346,42 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix order of arguments - in ecpg-generated typedef statements + in ecpg-generated typedef statements (Michael Meskes) - Use %g not %f format - in ecpg's PGTYPESnumeric_from_double() + Use %g not %f format + in ecpg's PGTYPESnumeric_from_double() (Tom Lane) - Fix ecpg-supplied header files to not contain comments + Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes) - Such a comment is rejected by ecpg. It's not yet clear - whether ecpg itself should be changed. + Such a comment is rejected by ecpg. It's not yet clear + whether ecpg itself should be changed. 
- Ensure that contrib/pgcrypto's crypt() + Ensure that contrib/pgcrypto's crypt() function can be interrupted by query cancel (Andreas Karlsson) - Accept flex versions later than 2.5.x + Accept flex versions later than 2.5.x (Tom Lane, Michael Paquier) @@ -1393,19 +1393,19 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Install our missing script where PGXS builds can find it + Install our missing script where PGXS builds can find it (Jim Nasby) This allows sane behavior in a PGXS build done on a machine where build - tools such as bison are missing. + tools such as bison are missing. - Ensure that dynloader.h is included in the installed + Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier) @@ -1413,11 +1413,11 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Add variant regression test expected-output file to match behavior of - current libxml2 (Tom Lane) + current libxml2 (Tom Lane) - The fix for libxml2's CVE-2015-7499 causes it not to + The fix for libxml2's CVE-2015-7499 causes it not to output error context reports in some cases where it used to do so. This seems to be a bug, but we'll probably have to live with it for some time, so work around it. @@ -1426,7 +1426,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Update time zone data files to tzdata release 2016a for + Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan. @@ -1472,8 +1472,8 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix contrib/pgcrypto to detect and report - too-short crypt() salts (Josh Kupershmidt) + Fix contrib/pgcrypto to detect and report + too-short crypt() salts (Josh Kupershmidt) @@ -1499,13 +1499,13 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix insertion of relations into the relation cache init file + Fix insertion of relations into the relation cache init file (Tom Lane) An oversight in a patch in the most recent minor releases - caused pg_trigger_tgrelid_tgname_index to be omitted + caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing @@ -1523,7 +1523,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Improve LISTEN startup time when there are many unread + Improve LISTEN startup time when there are many unread notifications (Matt Newell) @@ -1535,7 +1535,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - This substantially improves performance when pg_dump + This substantially improves performance when pg_dump tries to dump a large number of tables. @@ -1550,13 +1550,13 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value - of ssl_renegotiation_limit to zero (disabled). + of ssl_renegotiation_limit to zero (disabled). 
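A sketch of the contrib/pgcrypto crypt() usage touched by the fixes above, assuming the extension is installed:

    -- Normal usage: generate a proper salt rather than hand-building one.
    SELECT crypt('my password', gen_salt('bf'));
    -- Per the fix above, a hand-supplied salt that is too short is now detected
    -- and reported instead of producing a bogus hash.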
- Lower the minimum values of the *_freeze_max_age parameters + Lower the minimum values of the *_freeze_max_age parameters (Andres Freund) @@ -1568,14 +1568,14 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Limit the maximum value of wal_buffers to 2GB to avoid + Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus) - Fix rare internal overflow in multiplication of numeric values + Fix rare internal overflow in multiplication of numeric values (Dean Rasheed) @@ -1583,21 +1583,21 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Guard against hard-to-reach stack overflows involving record types, - range types, json, jsonb, tsquery, - ltxtquery and query_int (Noah Misch) + range types, json, jsonb, tsquery, + ltxtquery and query_int (Noah Misch) - Fix handling of DOW and DOY in datetime input + Fix handling of DOW and DOY in datetime input (Greg Stark) These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather - than invalid input syntax. + than invalid input syntax. @@ -1610,7 +1610,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Add recursion depth protections to regular expression, SIMILAR - TO, and LIKE matching (Tom Lane) + TO, and LIKE matching (Tom Lane) @@ -1654,22 +1654,22 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix unexpected out-of-memory situation during sort errors - when using tuplestores with small work_mem settings (Tom + Fix unexpected out-of-memory situation during sort errors + when using tuplestores with small work_mem settings (Tom Lane) - Fix very-low-probability stack overrun in qsort (Tom Lane) + Fix very-low-probability stack overrun in qsort (Tom Lane) - Fix invalid memory alloc request size failure in hash joins - with large work_mem settings (Tomas Vondra, Tom Lane) + Fix invalid memory alloc request size failure in hash joins + with large work_mem settings (Tomas Vondra, Tom Lane) @@ -1682,9 +1682,9 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as could not devise a query plan for the - given query, could not find pathkey item to - sort, plan should not reference subplan's variable, - or failed to assign all NestLoopParams to plan nodes. + given query, could not find pathkey item to + sort, plan should not reference subplan's variable, + or failed to assign all NestLoopParams to plan nodes. Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems. @@ -1723,12 +1723,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove - the postmaster.pid file (Tom Lane) + the postmaster.pid file (Tom Lane) This avoids race-condition failures if an external script attempts to - start a new postmaster as soon as pg_ctl stop returns. + start a new postmaster as soon as pg_ctl stop returns. 
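A sketch of the DOW/DOY datetime-input case mentioned above; this input is rejected by design:

    -- Per the fix above, this now fails with "invalid input syntax" rather than
    -- an opaque internal error.
    SELECT 'doy'::date;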
@@ -1748,7 +1748,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Do not print a WARNING when an autovacuum worker is already + Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane) @@ -1781,44 +1781,44 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix off-by-one error that led to otherwise-harmless warnings - about apparent wraparound in subtrans/multixact truncation + about apparent wraparound in subtrans/multixact truncation (Thomas Munro) - Fix misreporting of CONTINUE and MOVE statement - types in PL/pgSQL's error context messages + Fix misreporting of CONTINUE and MOVE statement + types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane) - Fix PL/Perl to handle non-ASCII error + Fix PL/Perl to handle non-ASCII error message texts correctly (Alex Hunsaker) - Fix PL/Python crash when returning the string - representation of a record result (Tom Lane) + Fix PL/Python crash when returning the string + representation of a record result (Tom Lane) - Fix some places in PL/Tcl that neglected to check for - failure of malloc() calls (Michael Paquier, Álvaro + Fix some places in PL/Tcl that neglected to check for + failure of malloc() calls (Michael Paquier, Álvaro Herrera) - In contrib/isn, fix output of ISBN-13 numbers that begin + In contrib/isn, fix output of ISBN-13 numbers that begin with 979 (Fabien Coelho) @@ -1830,7 +1830,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Improve libpq's handling of out-of-memory conditions + Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas) @@ -1838,68 +1838,68 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix memory leaks and missing out-of-memory checks - in ecpg (Michael Paquier) + in ecpg (Michael Paquier) - Fix psql's code for locale-aware formatting of numeric + Fix psql's code for locale-aware formatting of numeric output (Tom Lane) - The formatting code invoked by \pset numericlocale on + The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. It could also mangle already-localized - output from the money data type. + output from the money data type. 
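A minimal PL/pgSQL sketch of the CONTINUE statement whose error-context reporting is fixed above:

    DO $$
    BEGIN
      FOR i IN 1..3 LOOP
        CONTINUE WHEN i = 2;   -- error context messages now name CONTINUE correctly
        RAISE NOTICE 'i = %', i;
      END LOOP;
    END
    $$;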
- Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) - Fix selection of default zlib compression level - in pg_dump's directory output format (Andrew Dunstan) + Fix selection of default zlib compression level + in pg_dump's directory output format (Andrew Dunstan) - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) When dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. @@ -1908,18 +1908,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. - Fix assorted minor memory leaks in pg_dump and other + Fix assorted minor memory leaks in pg_dump and other client-side programs (Michael Paquier) @@ -1927,11 +1927,11 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. 
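A sketch of the shell-type case mentioned above; the type name is hypothetical:

    -- A shell (not-yet-fully-defined) type; per the fix above, pg_dump now dumps these.
    CREATE TYPE my_shell_type;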
@@ -1939,14 +1939,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -1958,38 +1958,38 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. @@ -2038,7 +2038,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -2049,13 +2049,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -2101,12 +2101,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. 
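As a related illustration (not the Makefile variable itself), the numeric version format mentioned above is the same one the server reports via the server_version_num setting:

    SHOW server_version_num;   -- e.g. 90405 for release 9.4.5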
@@ -2120,29 +2120,29 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. - Allow libpq to use TLS protocol versions beyond v1 + Allow libpq to use TLS protocol versions beyond v1 (Noah Misch) - For a long time, libpq was coded so that the only SSL + For a long time, libpq was coded so that the only SSL protocol it would allow was TLS v1. Now that newer TLS versions are becoming popular, allow it to negotiate the highest commonly-supported - TLS version with the server. (PostgreSQL servers were + TLS version with the server. (PostgreSQL servers were already capable of such negotiation, so no change is needed on the server side.) This is a back-patch of a change already released in 9.4.0. @@ -2176,8 +2176,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - However, if you use contrib/citext's - regexp_matches() functions, see the changelog entry below + However, if you use contrib/citext's + regexp_matches() functions, see the changelog entry below about that. @@ -2215,7 +2215,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. In the worst case this might lead to information exposure, due to our @@ -2225,7 +2225,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -2235,15 +2235,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -2252,16 +2252,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix incorrect declaration of contrib/citext's - regexp_matches() functions (Tom Lane) + Fix incorrect declaration of contrib/citext's + regexp_matches() functions (Tom Lane) - These functions should return setof text[], like the core + These functions should return setof text[], like the core functions they are wrappers for; but they were incorrectly declared as - returning just text[]. This mistake had two results: first, + returning just text[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you - should get is an empty set (zero rows). 
Second, the g flag + should get is an empty set (zero rows). Second, the g flag was effectively ignored, since you would get only one result array even if there were multiple matches. @@ -2269,16 +2269,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 While the latter behavior is clearly a bug, there might be applications depending on the former behavior; therefore the function declarations - will not be changed by default until PostgreSQL 9.5. + will not be changed by default until PostgreSQL 9.5. In pre-9.5 branches, the old behavior exists in version 1.0 of - the citext extension, while we have provided corrected - declarations in version 1.1 (which is not installed by + the citext extension, while we have provided corrected + declarations in version 1.1 (which is not installed by default). To adopt the fix in pre-9.5 branches, execute - ALTER EXTENSION citext UPDATE TO '1.1' in each database in - which citext is installed. (You can also update + ALTER EXTENSION citext UPDATE TO '1.1' in each database in + which citext is installed. (You can also update back to 1.0 if you need to undo that.) Be aware that either update direction will require dropping and recreating any views or rules that - use citext's regexp_matches() functions. + use citext's regexp_matches() functions. @@ -2306,7 +2306,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -2334,7 +2334,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -2342,7 +2342,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -2356,14 +2356,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid cannot GetMultiXactIdMembers() during recovery error + Avoid cannot GetMultiXactIdMembers() during recovery error (Álvaro Herrera) - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -2383,13 +2383,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Cope with unexpected signals in LockBufferForCleanup() + Cope with unexpected signals in LockBufferForCleanup() (Andres Freund) This oversight could result in spurious errors about multiple - backends attempting to wait for pincount 1. + backends attempting to wait for pincount 1. @@ -2430,18 +2430,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - ANALYZE executes index expressions many times; if there are + ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to - cancel the ANALYZE before that loop finishes. + cancel the ANALYZE before that loop finishes. 
- Ensure tableoid of a foreign table is reported - correctly when a READ COMMITTED recheck occurs after - locking rows in SELECT FOR UPDATE, UPDATE, - or DELETE (Etsuro Fujita) + Ensure tableoid of a foreign table is reported + correctly when a READ COMMITTED recheck occurs after + locking rows in SELECT FOR UPDATE, UPDATE, + or DELETE (Etsuro Fujita) @@ -2454,20 +2454,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Recommend setting include_realm to 1 when using + Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost) Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but - it will become the default setting in PostgreSQL 9.5. + it will become the default setting in PostgreSQL 9.5. - Remove code for matching IPv4 pg_hba.conf entries to + Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane) @@ -2480,20 +2480,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 crashes on some systems, so let's just remove it rather than fix it. (Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of - IPv4 pg_hba.conf entries, which does not seem like a good + IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.) - Report WAL flush, not insert, position in IDENTIFY_SYSTEM + Report WAL flush, not insert, position in IDENTIFY_SYSTEM replication command (Heikki Linnakangas) This avoids a possible startup failure - in pg_receivexlog. + in pg_receivexlog. @@ -2501,14 +2501,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the - service too soon; and ensure that pg_ctl will wait for + service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj) - Reduce risk of network deadlock when using libpq's + Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas) @@ -2517,25 +2517,25 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM - STDIN.) This worked properly in the normal blocking mode, but not - so much in non-blocking mode. We've modified libpq + STDIN.) This worked properly in the normal blocking mode, but not + so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, - and be sure to call PQconsumeInput() upon read-ready. + and be sure to call PQconsumeInput() upon read-ready. 
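A sketch of the foreign-table tableoid case described above; the foreign table name is hypothetical:

    -- Per the fix above, tableoid is reported correctly even when a READ COMMITTED
    -- recheck happens after the row is locked.
    SELECT tableoid::regclass, * FROM remote_orders WHERE id = 1 FOR UPDATE;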
- Fix array handling in ecpg (Michael Meskes) + Fix array handling in ecpg (Michael Meskes) - Fix psql to sanely handle URIs and conninfo strings as - the first parameter to \connect + Fix psql to sanely handle URIs and conninfo strings as + the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera) @@ -2548,38 +2548,38 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Suppress incorrect complaints from psql on some - platforms that it failed to write ~/.psql_history at exit + Suppress incorrect complaints from psql on some + platforms that it failed to write ~/.psql_history at exit (Tom Lane) This misbehavior was caused by a workaround for a bug in very old - (pre-2006) versions of libedit. We fixed it by + (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear - for anyone still using such versions of libedit. - Recommendation: upgrade that library, or use libreadline. + for anyone still using such versions of libedit. + Recommendation: upgrade that library, or use libreadline. - Fix pg_dump's rule for deciding which casts are + Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane) - In pg_dump, fix failure to honor -Z - compression level option together with -Fd + In pg_dump, fix failure to honor -Z + compression level option together with -Fd (Michael Paquier) - Make pg_dump consider foreign key relationships + Make pg_dump consider foreign key relationships between extension configuration tables while choosing dump order (Gilles Darold, Michael Paquier, Stephen Frost) @@ -2592,14 +2592,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix dumping of views that are just VALUES(...) but have + Fix dumping of views that are just VALUES(...) but have column aliases (Tom Lane) - In pg_upgrade, force timeline 1 in the new cluster + In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian) @@ -2611,7 +2611,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In pg_upgrade, check for improperly non-connectable + In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian) @@ -2619,28 +2619,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In pg_upgrade, quote directory paths - properly in the generated delete_old_cluster script + In pg_upgrade, quote directory paths + properly in the generated delete_old_cluster script (Bruce Momjian) - In pg_upgrade, preserve database-level freezing info + In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian) This oversight could cause missing-clog-file errors for tables within - the postgres and template1 databases. + the postgres and template1 databases. 
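A minimal example of the kind of view covered by the dumping fix above; the names are hypothetical:

    -- A view that is just VALUES(...) but has column aliases; pg_dump now handles it correctly.
    CREATE VIEW v(a, b) AS VALUES (1, 'one'), (2, 'two');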
- Run pg_upgrade and pg_resetxlog with + Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem) @@ -2648,15 +2648,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Improve handling of readdir() failures when scanning - directories in initdb and pg_basebackup + Improve handling of readdir() failures when scanning + directories in initdb and pg_basebackup (Marco Nenciarini) - Fix slow sorting algorithm in contrib/intarray (Tom Lane) + Fix slow sorting algorithm in contrib/intarray (Tom Lane) @@ -2668,7 +2668,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Update time zone data files to tzdata release 2015d + Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT). @@ -2715,15 +2715,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix buffer overruns in to_char() + Fix buffer overruns in to_char() (Bruce Momjian) - When to_char() processes a numeric formatting template - calling for a large number of digits, PostgreSQL + When to_char() processes a numeric formatting template + calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted - timestamp formatting template, PostgreSQL would write + timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely. @@ -2733,27 +2733,27 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix buffer overrun in replacement *printf() functions + Fix buffer overrun in replacement *printf() functions (Tom Lane) - PostgreSQL includes a replacement implementation - of printf and related functions. This code will overrun + PostgreSQL includes a replacement implementation + of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion - specifiers e, E, f, F, - g or G) with requested precision greater than + specifiers e, E, f, F, + g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through - the to_char() SQL function. While that is the only - affected core PostgreSQL functionality, extension + the to_char() SQL function. While that is the only + affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. - This issue primarily affects PostgreSQL on Windows. - PostgreSQL uses the system implementation of these + This issue primarily affects PostgreSQL on Windows. + PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. (CVE-2015-0242) @@ -2761,12 +2761,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix buffer overruns in contrib/pgcrypto + Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch) - Errors in memory size tracking within the pgcrypto + Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. 
The buffer overrun cases can crash the server, and we have not ruled out the possibility of @@ -2807,7 +2807,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have - SELECT privilege on all columns of the table, this could + SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user. @@ -2833,21 +2833,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid possible data corruption if ALTER DATABASE SET - TABLESPACE is used to move a database to a new tablespace and then + TABLESPACE is used to move a database to a new tablespace and then shortly later move it back to its original tablespace (Tom Lane) - Avoid corrupting tables when ANALYZE inside a transaction + Avoid corrupting tables when ANALYZE inside a transaction is rolled back (Andres Freund, Tom Lane, Michael Paquier) If the failing transaction had earlier removed the last index, rule, or trigger from the table, the table would be left in a corrupted state - with the relevant pg_class flags not set though they + with the relevant pg_class flags not set though they should be. @@ -2855,14 +2855,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Ensure that unlogged tables are copied correctly - during CREATE DATABASE or ALTER DATABASE SET - TABLESPACE (Pavan Deolasee, Andres Freund) + during CREATE DATABASE or ALTER DATABASE SET + TABLESPACE (Pavan Deolasee, Andres Freund) - Fix DROP's dependency searching to correctly handle the + Fix DROP's dependency searching to correctly handle the case where a table column is recursively visited before its table (Petr Jelinek, Tom Lane) @@ -2870,7 +2870,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This case is only known to arise when an extension creates both a datatype and a table using that datatype. The faulty code might - refuse a DROP EXTENSION unless CASCADE is + refuse a DROP EXTENSION unless CASCADE is specified, which should not be required. @@ -2882,22 +2882,22 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In READ COMMITTED mode, queries that lock or update + In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug. - Fix planning of SELECT FOR UPDATE when using a partial + Fix planning of SELECT FOR UPDATE when using a partial index on a child table (Kyotaro Horiguchi) - In READ COMMITTED mode, SELECT FOR UPDATE must - also recheck the partial index's WHERE condition when + In READ COMMITTED mode, SELECT FOR UPDATE must + also recheck the partial index's WHERE condition when rechecking a recently-updated row to see if it still satisfies the - query's WHERE condition. This requirement was missed if the + query's WHERE condition. This requirement was missed if the index belonged to an inheritance child table, so that it was possible to incorrectly return rows that no longer satisfy the query condition. 
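A sketch of the partial-index-on-child-table situation described above, with hypothetical table names:

    CREATE TABLE parent (id int, flag boolean);
    CREATE TABLE child () INHERITS (parent);
    CREATE INDEX child_partial_idx ON child (id) WHERE flag;
    -- Per the fix above, the partial index's WHERE condition is rechecked for
    -- recently-updated rows, so rows that no longer match are not returned.
    SELECT * FROM parent WHERE flag FOR UPDATE;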
@@ -2905,12 +2905,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix corner case wherein SELECT FOR UPDATE could return a row + Fix corner case wherein SELECT FOR UPDATE could return a row twice, and possibly miss returning other rows (Tom Lane) - In READ COMMITTED mode, a SELECT FOR UPDATE + In READ COMMITTED mode, a SELECT FOR UPDATE that is scanning an inheritance tree could incorrectly return a row from a prior child table instead of the one it should return from a later child table. @@ -2920,7 +2920,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Reject duplicate column names in the referenced-columns list of - a FOREIGN KEY declaration (David Rowley) + a FOREIGN KEY declaration (David Rowley) @@ -2932,7 +2932,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix bugs in raising a numeric value to a large integral power + Fix bugs in raising a numeric value to a large integral power (Tom Lane) @@ -2945,19 +2945,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In numeric_recv(), truncate away any fractional digits - that would be hidden according to the value's dscale field + In numeric_recv(), truncate away any fractional digits + that would be hidden according to the value's dscale field (Tom Lane) - A numeric value's display scale (dscale) should + A numeric value's display scale (dscale) should never be less than the number of nonzero fractional digits; but apparently there's at least one broken client application that - transmits binary numeric values in which that's true. + transmits binary numeric values in which that's true. This leads to strange behavior since the extra digits are taken into account by arithmetic operations even though they aren't printed. - The least risky fix seems to be to truncate away such hidden + The least risky fix seems to be to truncate away such hidden digits on receipt, so that the value is indeed what it prints as. @@ -2977,7 +2977,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix bugs in tsquery @> tsquery + Fix bugs in tsquery @> tsquery operator (Heikki Linnakangas) @@ -3008,14 +3008,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -3024,7 +3024,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix planner problems with nested append relations, such as inherited - tables within UNION ALL subqueries (Tom Lane) + tables within UNION ALL subqueries (Tom Lane) @@ -3037,8 +3037,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Exempt tables that have per-table cost_limit - and/or cost_delay settings from autovacuum's global cost + Exempt tables that have per-table cost_limit + and/or cost_delay settings from autovacuum's global cost balancing rules (Álvaro Herrera) @@ -3064,7 +3064,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 the target database, if they met the usual thresholds for autovacuuming. 
This is at best pretty unexpected; at worst it delays response to the wraparound threat. Fix it so that if autovacuum is - turned off, workers only do anti-wraparound vacuums and + turned off, workers only do anti-wraparound vacuums and not any other work. @@ -3097,19 +3097,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix several cases where recovery logic improperly ignored WAL records - for COMMIT/ABORT PREPARED (Heikki Linnakangas) + for COMMIT/ABORT PREPARED (Heikki Linnakangas) The most notable oversight was - that recovery_target_xid could not be used to stop at + that recovery_target_xid could not be used to stop at a two-phase commit. - Avoid creating unnecessary .ready marker files for + Avoid creating unnecessary .ready marker files for timeline history files (Fujii Masao) @@ -3117,14 +3117,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible null pointer dereference when an empty prepared statement - is used and the log_statement setting is mod - or ddl (Fujii Masao) + is used and the log_statement setting is mod + or ddl (Fujii Masao) - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -3133,7 +3133,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. @@ -3147,32 +3147,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) - Fix processing of repeated dbname parameters - in PQconnectdbParams() (Alex Shulgin) + Fix processing of repeated dbname parameters + in PQconnectdbParams() (Alex Shulgin) Unexpected behavior ensued if the first occurrence - of dbname contained a connection string or URI to be + of dbname contained a connection string or URI to be expanded. - Ensure that libpq reports a suitable error message on + Ensure that libpq reports a suitable error message on unexpected socket EOF (Marko Tiikkaja, Tom Lane) - Depending on kernel behavior, libpq might return an + Depending on kernel behavior, libpq might return an empty error string rather than something useful when the server unexpectedly closed the socket. @@ -3180,14 +3180,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Clear any old error message during PQreset() + Clear any old error message during PQreset() (Heikki Linnakangas) - If PQreset() is called repeatedly, and the connection + If PQreset() is called repeatedly, and the connection cannot be re-established, error messages from the failed connection - attempts kept accumulating in the PGconn's error + attempts kept accumulating in the PGconn's error string. 
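A sketch of the xpath() namespace situation described above; the namespace URI and document are made up:

    -- The returned <item> element now carries the xmlns declaration inherited from
    -- its ancestor <root>, so the result is correct when considered in isolation.
    SELECT xpath('/n:root/n:item',
                 '<root xmlns="http://example.com/ns"><item>42</item></root>'::xml,
                 ARRAY[ARRAY['n', 'http://example.com/ns']]);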
@@ -3195,32 +3195,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Properly handle out-of-memory conditions while parsing connection - options in libpq (Alex Shulgin, Heikki Linnakangas) + options in libpq (Alex Shulgin, Heikki Linnakangas) - Fix array overrun in ecpg's version - of ParseDateTime() (Michael Paquier) + Fix array overrun in ecpg's version + of ParseDateTime() (Michael Paquier) - In initdb, give a clearer error message if a password + In initdb, give a clearer error message if a password file is specified but is empty (Mats Erik Andersson) - Fix psql's \s command to work nicely with + Fix psql's \s command to work nicely with libedit, and add pager support (Stepan Rutz, Tom Lane) - When using libedit rather than readline, \s printed the + When using libedit rather than readline, \s printed the command history in a fairly unreadable encoded format, and on recent libedit versions might fail altogether. Fix that by printing the history ourselves rather than having the library do it. A pleasant @@ -3230,7 +3230,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This patch also fixes a bug that caused newline encoding to be applied inconsistently when saving the command history with libedit. - Multiline history entries written by older psql + Multiline history entries written by older psql versions will be read cleanly with this patch, but perhaps not vice versa, depending on the exact libedit versions involved. @@ -3238,17 +3238,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. 
@@ -3256,16 +3256,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix psql's expanded-mode display to work - consistently when using border = 3 - and linestyle = ascii or unicode + Fix psql's expanded-mode display to work + consistently when using border = 3 + and linestyle = ascii or unicode (Stephen Frost) - Improve performance of pg_dump when the database + Improve performance of pg_dump when the database contains many instances of multiple dependency paths between the same two objects (Tom Lane) @@ -3280,21 +3280,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix core dump in pg_dump --binary-upgrade on zero-column + Fix core dump in pg_dump --binary-upgrade on zero-column composite type (Rushabh Lathia) - Prevent WAL files created by pg_basebackup -x/-X from + Prevent WAL files created by pg_basebackup -x/-X from being archived again when the standby is promoted (Andres Freund) - Fix upgrade-from-unpackaged script for contrib/citext + Fix upgrade-from-unpackaged script for contrib/citext (Tom Lane) @@ -3302,7 +3302,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix block number checking - in contrib/pageinspect's get_raw_page() + in contrib/pageinspect's get_raw_page() (Tom Lane) @@ -3314,7 +3314,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix contrib/pgcrypto's pgp_sym_decrypt() + Fix contrib/pgcrypto's pgp_sym_decrypt() to not fail on messages whose length is 6 less than a power of 2 (Marko Tiikkaja) @@ -3322,7 +3322,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix file descriptor leak in contrib/pg_test_fsync + Fix file descriptor leak in contrib/pg_test_fsync (Jeff Janes) @@ -3334,24 +3334,24 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Handle unexpected query results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. - Avoid a possible crash in contrib/xml2's - xslt_process() (Mark Simonetti) + Avoid a possible crash in contrib/xml2's + xslt_process() (Mark Simonetti) - libxslt seems to have an undocumented dependency on + libxslt seems to have an undocumented dependency on the order in which resources are freed; reorder our calls to avoid a crash. @@ -3359,7 +3359,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Mark some contrib I/O functions with correct volatility + Mark some contrib I/O functions with correct volatility properties (Tom Lane) @@ -3393,29 +3393,29 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 With OpenLDAP versions 2.4.24 through 2.4.31, - inclusive, PostgreSQL backends can crash at exit. - Raise a warning during configure based on the + inclusive, PostgreSQL backends can crash at exit. + Raise a warning during configure based on the compile-time OpenLDAP version number, and test the crashing scenario - in the contrib/dblink regression test. + in the contrib/dblink regression test. 
- In non-MSVC Windows builds, ensure libpq.dll is installed + In non-MSVC Windows builds, ensure libpq.dll is installed with execute permissions (Noah Misch) - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. @@ -3427,15 +3427,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. @@ -3447,9 +3447,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add CST (China Standard Time) to our lists. - Remove references to ADT as Arabia Daylight Time, an + Remove references to ADT as Arabia Daylight Time, an abbreviation that's been out of use since 2007; therefore, claiming - there is a conflict with Atlantic Daylight Time doesn't seem + there is a conflict with Atlantic Daylight Time doesn't seem especially helpful. Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, and FJST (Fiji); we didn't even have them on the proper side of the date line. @@ -3458,21 +3458,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Update time zone data files to tzdata release 2015a. + Update time zone data files to tzdata release 2015a. The IANA timezone database has adopted abbreviations of the form - AxST/AxDT + AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into - our Default timezone abbreviation set. - The Australia abbreviation set now contains only CST, EAST, + our Default timezone abbreviation set. + The Australia abbreviation set now contains only CST, EAST, EST, SAST, SAT, and WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South - Africa Standard Time in the Default abbreviation set. + Africa Standard Time in the Default abbreviation set. @@ -3531,15 +3531,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. 
- Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -3578,14 +3578,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) - Fix could not find pathkey item to sort planner failures - with UNION ALL over subqueries reading from tables with + Fix could not find pathkey item to sort planner failures + with UNION ALL over subqueries reading from tables with inheritance children (Tom Lane) @@ -3613,13 +3613,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. - Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -3634,7 +3634,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -3647,7 +3647,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -3668,19 +3668,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -3688,7 +3688,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -3700,14 +3700,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. - Fix client host name lookup when processing pg_hba.conf + Fix client host name lookup when processing pg_hba.conf entries that specify host names instead of IP addresses (Tom Lane) @@ -3722,7 +3722,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -3731,16 +3731,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. 
The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. @@ -3776,15 +3776,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -3795,17 +3795,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -3813,15 +3813,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) - Fix ecpg to do the right thing when an array - of char * is the target for a FETCH statement returning more + Fix ecpg to do the right thing when an array + of char * is the target for a FETCH statement returning more than one row, as well as some other array-handling fixes (Ashutosh Bapat) @@ -3829,20 +3829,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -3850,20 +3850,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. 
- Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. @@ -3923,7 +3923,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid race condition in checking transaction commit status during - receipt of a NOTIFY message (Marko Tiikkaja) + receipt of a NOTIFY message (Marko Tiikkaja) @@ -3947,7 +3947,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -3960,17 +3960,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. @@ -3989,8 +3989,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix walsender's failure to shut down cleanly when client - is pg_receivexlog (Fujii Masao) + Fix walsender's failure to shut down cleanly when client + is pg_receivexlog (Fujii Masao) @@ -4003,13 +4003,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. + entry to syslog(), and perhaps other related problems. @@ -4022,14 +4022,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent intermittent could not reserve shared memory region + Prevent intermittent could not reserve shared memory region failures on recent Windows versions (MauMau) - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. @@ -4075,19 +4075,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. 
(CVE-2014-0060) @@ -4100,7 +4100,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -4120,7 +4120,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. @@ -4134,12 +4134,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -4166,7 +4166,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -4178,35 +4178,35 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). (CVE-2014-0066) - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. 
So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -4221,7 +4221,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -4241,20 +4241,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 was already consistent at the start of replay, thus possibly allowing hot-standby queries before the database was really consistent. Other symptoms such as PANIC: WAL contains references to invalid - pages were also possible. + pages were also possible. Fix improper locking of btree index pages while replaying - a VACUUM operation in hot-standby mode (Andres Freund, + a VACUUM operation in hot-standby mode (Andres Freund, Heikki Linnakangas, Tom Lane) This error could result in PANIC: WAL contains references to - invalid pages failures. + invalid pages failures. @@ -4272,8 +4272,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - When pause_at_recovery_target - and recovery_target_inclusive are both set, ensure the + When pause_at_recovery_target + and recovery_target_inclusive are both set, ensure the target record is applied before pausing, not after (Heikki Linnakangas) @@ -4286,7 +4286,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. @@ -4299,19 +4299,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. - Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -4335,7 +4335,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -4355,7 +4355,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 A previous patch allowed such keywords to be used without quoting in places such as role identifiers; but it missed cases where a - list of role identifiers was permitted, such as DROP ROLE. + list of role identifiers was permitted, such as DROP ROLE. @@ -4369,19 +4369,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) 
(Tom Lane) - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -4389,21 +4389,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -4428,12 +4428,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. @@ -4441,8 +4441,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix placement of permissions checks in pg_start_backup() - and pg_stop_backup() (Andres Freund, Magnus Hagander) + Fix placement of permissions checks in pg_start_backup() + and pg_stop_backup() (Andres Freund, Magnus Hagander) @@ -4453,31 +4453,31 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. 
@@ -4485,7 +4485,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible incorrect printing of filenames - in pg_basebackup's verbose mode (Magnus Hagander) + in pg_basebackup's verbose mode (Magnus Hagander) @@ -4498,20 +4498,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -4522,7 +4522,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) @@ -4536,28 +4536,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. - Avoid using the deprecated dllwrap tool in Cygwin builds + Avoid using the deprecated dllwrap tool in Cygwin builds (Marco Atzeri) - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -4566,20 +4566,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -4631,13 +4631,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would @@ -4649,18 +4649,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. 
This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). - Fix initialization of pg_clog and pg_subtrans + Fix initialization of pg_clog and pg_subtrans during hot standby startup (Andres Freund, Heikki Linnakangas) @@ -4686,7 +4686,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Truncate pg_multixact contents during WAL replay + Truncate pg_multixact contents during WAL replay (Andres Freund) @@ -4708,8 +4708,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -4726,7 +4726,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. @@ -4756,13 +4756,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. @@ -4776,7 +4776,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. @@ -4790,7 +4790,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -4801,10 +4801,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -4814,28 +4814,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make ecpg search for quoted cursor names + Make ecpg search for quoted cursor names case-sensitively (Zoltán Böszörményi) - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. 
@@ -4887,7 +4887,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - PostgreSQL case-folds non-ASCII characters only + PostgreSQL case-folds non-ASCII characters only when using a single-byte server encoding. @@ -4895,7 +4895,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix checkpoint memory leak in background writer when wal_level = - hot_standby (Naoya Anzai) + hot_standby (Naoya Anzai) @@ -4908,7 +4908,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix memory overcommit bug when work_mem is using more + Fix memory overcommit bug when work_mem is using more than 24GB of memory (Stephen Frost) @@ -4939,46 +4939,46 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Previously tests like col IS NOT TRUE and col IS - NOT FALSE did not properly factor in NULL values when estimating + Previously tests like col IS NOT TRUE and col IS + NOT FALSE did not properly factor in NULL values when estimating plan costs. - Prevent pushing down WHERE clauses into unsafe - UNION/INTERSECT subqueries (Tom Lane) + Prevent pushing down WHERE clauses into unsafe + UNION/INTERSECT subqueries (Tom Lane) - Subqueries of a UNION or INTERSECT that + Subqueries of a UNION or INTERSECT that contain set-returning functions or volatile functions in their - SELECT lists could be improperly optimized, leading to + SELECT lists could be improperly optimized, leading to run-time errors or incorrect query results. - Fix rare case of failed to locate grouping columns + Fix rare case of failed to locate grouping columns planner failure (Tom Lane) - Fix pg_dump of foreign tables with dropped columns (Andrew Dunstan) + Fix pg_dump of foreign tables with dropped columns (Andrew Dunstan) - Previously such cases could cause a pg_upgrade error. + Previously such cases could cause a pg_upgrade error. - Reorder pg_dump processing of extension-related + Reorder pg_dump processing of extension-related rules and event triggers (Joe Conway) @@ -4986,7 +4986,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Force dumping of extension tables if specified by pg_dump - -t or -n (Joe Conway) + -t or -n (Joe Conway) @@ -4999,19 +4999,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix pg_restore -l with the directory archive to display + Fix pg_restore -l with the directory archive to display the correct format name (Fujii Masao) - Properly record index comments created using UNIQUE - and PRIMARY KEY syntax (Andres Freund) + Properly record index comments created using UNIQUE + and PRIMARY KEY syntax (Andres Freund) - This fixes a parallel pg_restore failure. + This fixes a parallel pg_restore failure. @@ -5041,26 +5041,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix REINDEX TABLE and REINDEX DATABASE + Fix REINDEX TABLE and REINDEX DATABASE to properly revalidate constraints and mark invalidated indexes as valid (Noah Misch) - REINDEX INDEX has always worked properly. + REINDEX INDEX has always worked properly. 
Fix possible deadlock during concurrent CREATE INDEX - CONCURRENTLY operations (Tom Lane) + CONCURRENTLY operations (Tom Lane) - Fix regexp_matches() handling of zero-length matches + Fix regexp_matches() handling of zero-length matches (Jeevan Chalke) @@ -5084,14 +5084,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) - Allow ALTER DEFAULT PRIVILEGES to operate on schemas + Allow ALTER DEFAULT PRIVILEGES to operate on schemas without requiring CREATE permission (Tom Lane) @@ -5103,24 +5103,24 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Specifically, lessen keyword restrictions for role names, language - names, EXPLAIN and COPY options, and - SET values. This allows COPY ... (FORMAT - BINARY) to work as expected; previously BINARY needed + names, EXPLAIN and COPY options, and + SET values. This allows COPY ... (FORMAT + BINARY) to work as expected; previously BINARY needed to be quoted. - Fix pgp_pub_decrypt() so it works for secret keys with + Fix pgp_pub_decrypt() so it works for secret keys with passwords (Marko Kreen) - Make pg_upgrade use pg_dump - --quote-all-identifiers to avoid problems with keyword changes + Make pg_upgrade use pg_dump + --quote-all-identifiers to avoid problems with keyword changes between releases (Tom Lane) @@ -5134,7 +5134,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ensure that VACUUM ANALYZE still runs the ANALYZE phase + Ensure that VACUUM ANALYZE still runs the ANALYZE phase if its attempt to truncate the file is cancelled due to lock conflicts (Kevin Grittner) @@ -5143,21 +5143,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid possible failure when performing transaction control commands (e.g - ROLLBACK) in prepared queries (Tom Lane) + ROLLBACK) in prepared queries (Tom Lane) Ensure that floating-point data input accepts standard spellings - of infinity on all platforms (Tom Lane) + of infinity on all platforms (Tom Lane) - The C99 standard says that allowable spellings are inf, - +inf, -inf, infinity, - +infinity, and -infinity. Make sure we - recognize these even if the platform's strtod function + The C99 standard says that allowable spellings are inf, + +inf, -inf, infinity, + +infinity, and -infinity. Make sure we + recognize these even if the platform's strtod function doesn't. @@ -5171,7 +5171,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Update time zone data files to tzdata release 2013d + Update time zone data files to tzdata release 2013d for DST law changes in Israel, Morocco, Palestine, and Paraguay. Also, historical zone data corrections for Macquarie Island. @@ -5206,7 +5206,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 However, this release corrects several errors in management of GiST indexes. After installing this update, it is advisable to - REINDEX any GiST indexes that meet one or more of the + REINDEX any GiST indexes that meet one or more of the conditions described below. @@ -5230,7 +5230,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 A connection request containing a database name that begins with - - could be crafted to damage or destroy + - could be crafted to damage or destroy files within the server's data directory, even if the request is eventually rejected. 
(CVE-2013-1899) @@ -5244,9 +5244,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This avoids a scenario wherein random numbers generated by - contrib/pgcrypto functions might be relatively easy for + contrib/pgcrypto functions might be relatively easy for another database user to guess. The risk is only significant when - the postmaster is configured with ssl = on + the postmaster is configured with ssl = on but most connections don't use SSL encryption. (CVE-2013-1900) @@ -5259,7 +5259,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 An unprivileged database user could exploit this mistake to call - pg_start_backup() or pg_stop_backup(), + pg_start_backup() or pg_stop_backup(), thus possibly interfering with creation of routine backups. (CVE-2013-1901) @@ -5267,32 +5267,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix GiST indexes to not use fuzzy geometric comparisons when + Fix GiST indexes to not use fuzzy geometric comparisons when it's not appropriate to do so (Alexander Korotkov) - The core geometric types perform comparisons using fuzzy - equality, but gist_box_same must do exact comparisons, + The core geometric types perform comparisons using fuzzy + equality, but gist_box_same must do exact comparisons, else GiST indexes using it might become inconsistent. After installing - this update, users should REINDEX any GiST indexes on - box, polygon, circle, or point - columns, since all of these use gist_box_same. + this update, users should REINDEX any GiST indexes on + box, polygon, circle, or point + columns, since all of these use gist_box_same. Fix erroneous range-union and penalty logic in GiST indexes that use - contrib/btree_gist for variable-width data types, that is - text, bytea, bit, and numeric + contrib/btree_gist for variable-width data types, that is + text, bytea, bit, and numeric columns (Tom Lane) These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in useless - index bloat. Users are advised to REINDEX such indexes + index bloat. Users are advised to REINDEX such indexes after installing this update. @@ -5307,21 +5307,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in indexes that are unnecessarily inefficient to search. Users are advised to - REINDEX multi-column GiST indexes after installing this + REINDEX multi-column GiST indexes after installing this update. - Fix gist_point_consistent + Fix gist_point_consistent to handle fuzziness consistently (Alexander Korotkov) - Index scans on GiST indexes on point columns would sometimes + Index scans on GiST indexes on point columns would sometimes yield results different from a sequential scan, because - gist_point_consistent disagreed with the underlying + gist_point_consistent disagreed with the underlying operator code about whether to do comparisons exactly or fuzzily. @@ -5332,21 +5332,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This bug could result in incorrect local pin count errors + This bug could result in incorrect local pin count errors during replay, making recovery impossible. 
- Fix race condition in DELETE RETURNING (Tom Lane) + Fix race condition in DELETE RETURNING (Tom Lane) - Under the right circumstances, DELETE RETURNING could + Under the right circumstances, DELETE RETURNING could attempt to fetch data from a shared buffer that the current process no longer has any pin on. If some other process changed the buffer - meanwhile, this would lead to garbage RETURNING output, or + meanwhile, this would lead to garbage RETURNING output, or even a crash. @@ -5367,28 +5367,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix to_char() to use ASCII-only case-folding rules where + Fix to_char() to use ASCII-only case-folding rules where appropriate (Tom Lane) This fixes misbehavior of some template patterns that should be - locale-independent, but mishandled I and - i in Turkish locales. + locale-independent, but mishandled I and + i in Turkish locales. - Fix unwanted rejection of timestamp 1999-12-31 24:00:00 + Fix unwanted rejection of timestamp 1999-12-31 24:00:00 (Tom Lane) - Fix logic error when a single transaction does UNLISTEN - then LISTEN (Tom Lane) + Fix logic error when a single transaction does UNLISTEN + then LISTEN (Tom Lane) @@ -5406,7 +5406,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Remove useless picksplit doesn't support secondary split log + Remove useless picksplit doesn't support secondary split log messages (Josh Hansen, Tom Lane) @@ -5427,29 +5427,29 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Eliminate memory leaks in PL/Perl's spi_prepare() function + Eliminate memory leaks in PL/Perl's spi_prepare() function (Alex Hunsaker, Tom Lane) - Fix pg_dumpall to handle database names containing - = correctly (Heikki Linnakangas) + Fix pg_dumpall to handle database names containing + = correctly (Heikki Linnakangas) - Avoid crash in pg_dump when an incorrect connection + Avoid crash in pg_dump when an incorrect connection string is given (Heikki Linnakangas) - Ignore invalid indexes in pg_dump and - pg_upgrade (Michael Paquier, Bruce Momjian) + Ignore invalid indexes in pg_dump and + pg_upgrade (Michael Paquier, Bruce Momjian) @@ -5458,15 +5458,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which - pg_dump wouldn't be expected to dump anyway. - pg_upgrade now also skips invalid indexes rather than + pg_dump wouldn't be expected to dump anyway. + pg_upgrade now also skips invalid indexes rather than failing. - In pg_basebackup, include only the current server + In pg_basebackup, include only the current server version's subdirectory when backing up a tablespace (Heikki Linnakangas) @@ -5474,26 +5474,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add a server version check in pg_basebackup and - pg_receivexlog, so they fail cleanly with version + Add a server version check in pg_basebackup and + pg_receivexlog, so they fail cleanly with version combinations that won't work (Heikki Linnakangas) - Fix contrib/pg_trgm's similarity() function + Fix contrib/pg_trgm's similarity() function to return zero for trigram-less strings (Tom Lane) - Previously it returned NaN due to internal division by zero. + Previously it returned NaN due to internal division by zero. 
- Update time zone data files to tzdata release 2013b + Update time zone data files to tzdata release 2013b for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas. Also, historical zone data corrections for numerous places. @@ -5501,12 +5501,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Also, update the time zone abbreviation files for recent changes in - Russia and elsewhere: CHOT, GET, - IRKT, KGT, KRAT, MAGT, - MAWT, MSK, NOVT, OMST, - TKT, VLAT, WST, YAKT, - YEKT now follow their current meanings, and - VOLT (Europe/Volgograd) and MIST + Russia and elsewhere: CHOT, GET, + IRKT, KGT, KRAT, MAGT, + MAWT, MSK, NOVT, OMST, + TKT, VLAT, WST, YAKT, + YEKT now follow their current meanings, and + VOLT (Europe/Volgograd) and MIST (Antarctica/Macquarie) are added to the default abbreviations list. @@ -5551,7 +5551,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -5635,19 +5635,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. - Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -5659,13 +5659,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix error in vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age implementation (Andres Freund) In installations that have existed for more than vacuum_freeze_min_age + linkend="guc-vacuum-freeze-min-age">vacuum_freeze_min_age transactions, this mistake prevented autovacuum from using partial-table scans, so that a full-table scan would always happen instead. @@ -5673,13 +5673,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -5699,13 +5699,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Reject out-of-range dates in to_date() (Hitoshi Harada) + Reject out-of-range dates in to_date() (Hitoshi Harada) - Fix pg_extension_config_dump() to handle + Fix pg_extension_config_dump() to handle extension-update cases properly (Tom Lane) @@ -5729,13 +5729,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? 
command when not connected to a database (Meng Qingzhong) @@ -5743,61 +5743,61 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible error if a relation file is removed while - pg_basebackup is running (Heikki Linnakangas) + pg_basebackup is running (Heikki Linnakangas) - Make pg_dump exclude data of unlogged tables when + Make pg_dump exclude data of unlogged tables when running on a hot-standby server (Magnus Hagander) This would fail anyway because the data is not available on the standby server, so it seems most convenient to assume - automatically. - Fix pg_upgrade to deal with invalid indexes safely + Fix pg_upgrade to deal with invalid indexes safely (Bruce Momjian) - Fix one-byte buffer overrun in libpq's - PQprintTuples (Xi Wang) + Fix one-byte buffer overrun in libpq's + PQprintTuples (Xi Wang) This ancient function is not used anywhere by - PostgreSQL itself, but it might still be used by some + PostgreSQL itself, but it might still be used by some client code. - Make ecpglib use translated messages properly + Make ecpglib use translated messages properly (Chen Huajun) - Properly install ecpg_compat and - pgtypes libraries on MSVC (Jiang Guiqing) + Properly install ecpg_compat and + pgtypes libraries on MSVC (Jiang Guiqing) - Include our version of isinf() in - libecpg if it's not provided by the system + Include our version of isinf() in + libecpg if it's not provided by the system (Jiang Guiqing) @@ -5817,15 +5817,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make pgxs build executables with the right - .exe suffix when cross-compiling for Windows + Make pgxs build executables with the right + .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi) - Add new timezone abbreviation FET (Tom Lane) + Add new timezone abbreviation FET (Tom Lane) @@ -5874,13 +5874,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix multiple bugs associated with CREATE INDEX - CONCURRENTLY (Andres Freund, Tom Lane) + CONCURRENTLY (Andres Freund, Tom Lane) - Fix CREATE INDEX CONCURRENTLY to use + Fix CREATE INDEX CONCURRENTLY to use in-place updates when changing the state of an index's - pg_index row. This prevents race conditions that could + pg_index row. This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus resulting in corrupt concurrently-created indexes. @@ -5888,8 +5888,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX - CONCURRENTLY command. The most important of these is - VACUUM, because an auto-vacuum could easily be launched + CONCURRENTLY command. The most important of these is + VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index. @@ -5926,13 +5926,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This oversight could prevent subsequent execution of certain - operations such as CREATE INDEX CONCURRENTLY. + operations such as CREATE INDEX CONCURRENTLY. 
- Avoid bogus out-of-sequence timeline ID errors in standby + Avoid bogus out-of-sequence timeline ID errors in standby mode (Heikki Linnakangas) @@ -5990,20 +5990,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. - Fix SELECT DISTINCT with index-optimized - MIN/MAX on an inheritance tree (Tom Lane) + Fix SELECT DISTINCT with index-optimized + MIN/MAX on an inheritance tree (Tom Lane) The planner would fail with failed to re-find MinMaxAggInfo - record given this combination of factors. + record given this combination of factors. @@ -6021,10 +6021,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -6032,12 +6032,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) In very unusual circumstances, this oversight could result in passing - incorrect data to a trigger WHEN condition, or to the + incorrect data to a trigger WHEN condition, or to the precheck logic for a foreign-key enforcement trigger. That could result in a crash, or in an incorrect decision about whether to fire the trigger. 
@@ -6046,7 +6046,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix ALTER COLUMN TYPE to handle inherited check + Fix ALTER COLUMN TYPE to handle inherited check constraints properly (Pavan Deolasee) @@ -6058,7 +6058,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix ALTER EXTENSION SET SCHEMA's failure to move some + Fix ALTER EXTENSION SET SCHEMA's failure to move some subsidiary objects into the new schema (Álvaro Herrera, Dimitri Fontaine) @@ -6066,14 +6066,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix REASSIGN OWNED to handle grants on tablespaces + Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera) - Ignore incorrect pg_attribute entries for system + Ignore incorrect pg_attribute entries for system columns for views (Tom Lane) @@ -6087,7 +6087,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix rule printing to dump INSERT INTO table + Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane) @@ -6095,7 +6095,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Guard against stack overflow when there are too many - UNION/INTERSECT/EXCEPT clauses + UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane) @@ -6117,14 +6117,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix failure to advance XID epoch if XID wraparound happens during a - checkpoint and wal_level is hot_standby + checkpoint and wal_level is hot_standby (Tom Lane, Andres Freund) While this mistake had no particular impact on PostgreSQL itself, it was bad for - applications that rely on txid_current() and related + applications that rely on txid_current() and related functions: the TXID value would appear to go backwards. @@ -6132,7 +6132,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix display of - pg_stat_replication.sync_state at a + pg_stat_replication.sync_state at a page boundary (Kyotaro Horiguchi) @@ -6146,7 +6146,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Formerly, this would result in something quite unhelpful, such as - Non-recoverable failure in name resolution. + Non-recoverable failure in name resolution. @@ -6159,8 +6159,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make pg_ctl more robust about reading the - postmaster.pid file (Heikki Linnakangas) + Make pg_ctl more robust about reading the + postmaster.pid file (Heikki Linnakangas) @@ -6170,15 +6170,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix possible crash in psql if incorrectly-encoded data - is presented and the client_encoding setting is a + Fix possible crash in psql if incorrectly-encoded data + is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing) - Make pg_dump dump SEQUENCE SET items in + Make pg_dump dump SEQUENCE SET items in the data not pre-data section of the archive (Tom Lane) @@ -6190,25 +6190,25 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix bugs in the restore.sql script emitted by - pg_dump in tar output format (Tom Lane) + Fix bugs in the restore.sql script emitted by + pg_dump in tar output format (Tom Lane) The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring - data in mode as well as the regular COPY mode. 
- Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. This patch updates previous branches so that they will accept both the @@ -6219,67 +6219,67 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix tar files emitted by pg_basebackup to + Fix tar files emitted by pg_basebackup to be POSIX conformant (Brian Weaver, Tom Lane) - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Fix ecpg's ecpg_get_data function to + Fix ecpg's ecpg_get_data function to handle arrays properly (Michael Meskes) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Ensure that make install for an extension creates the - extension installation directory (Cédric Villemain) + Ensure that make install for an extension creates the + extension installation directory (Cédric Villemain) - Previously, this step was missed if MODULEDIR was set in + Previously, this step was missed if MODULEDIR was set in the extension's Makefile. - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -6290,7 +6290,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -6323,7 +6323,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - However, you may need to perform REINDEX operations to + However, you may need to perform REINDEX operations to recover from the effects of the data corruption bug described in the first changelog item below. @@ -6354,7 +6354,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 likely to occur on standby slave servers since those perform much more WAL replay. There is a low probability of corruption of btree and GIN indexes. There is a much higher probability of corruption of - table visibility maps. Fortunately, visibility maps are + table visibility maps. Fortunately, visibility maps are non-critical data in 9.1, so the worst consequence of such corruption in 9.1 installations is transient inefficiency of vacuuming. Table data proper cannot be corrupted by this bug. 
@@ -6363,18 +6363,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 While no index corruption due to this bug is known to have occurred in the field, as a precautionary measure it is recommended that - production installations REINDEX all btree and GIN + production installations REINDEX all btree and GIN indexes at a convenient time after upgrading to 9.1.6. Also, if you intend to do an in-place upgrade to 9.2.X, before doing - so it is recommended to perform a VACUUM of all tables + so it is recommended to perform a VACUUM of all tables while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will ensure that any lingering wrong data in the visibility maps is corrected before 9.2.X can depend on it. vacuum_cost_delay + linkend="guc-vacuum-cost-delay">vacuum_cost_delay can be adjusted to reduce the performance impact of vacuuming, while causing it to take longer to finish. @@ -6388,15 +6388,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 These errors could result in wrong answers from queries that scan the - same WITH subquery multiple times. + same WITH subquery multiple times. Fix misbehavior when default_transaction_isolation - is set to serializable (Kevin Grittner, Tom Lane, Heikki + linkend="guc-default-transaction-isolation">default_transaction_isolation + is set to serializable (Kevin Grittner, Tom Lane, Heikki Linnakangas) @@ -6409,7 +6409,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Improve selectivity estimation for text search queries involving - prefixes, i.e. word:* patterns (Tom Lane) + prefixes, i.e. word:* patterns (Tom Lane) @@ -6432,10 +6432,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - If we revoke a grant option from some role X, but - X still holds that option via a grant from someone + If we revoke a grant option from some role X, but + X still holds that option via a grant from someone else, we should not recursively revoke the corresponding privilege - from role(s) Y that X had granted it + from role(s) Y that X had granted it to. @@ -6448,7 +6448,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This situation creates circular dependencies that confuse - pg_dump and probably other things. It's confusing + pg_dump and probably other things. It's confusing for humans too, so disallow it. @@ -6462,7 +6462,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make configure probe for mbstowcs_l (Tom + Make configure probe for mbstowcs_l (Tom Lane) @@ -6473,12 +6473,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) + Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) - Perl resets the process's SIGFPE handler to - SIG_IGN, which could result in crashes later on. Restore + Perl resets the process's SIGFPE handler to + SIG_IGN, which could result in crashes later on. Restore the normal Postgres signal handler after initializing PL/Perl. @@ -6497,7 +6497,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. 
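One possible way to follow the REINDEX and VACUUM recommendations above, assuming these settings may be changed per session; the generated REINDEX commands would be run afterwards at a convenient time:

    -- Generate REINDEX commands for every btree and GIN index in the database
    SELECT 'REINDEX INDEX '
           || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
    FROM pg_class c
    JOIN pg_am am ON am.oid = c.relam
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'i' AND am.amname IN ('btree', 'gin');

    -- Force full-table vacuums before an in-place upgrade to 9.2.X,
    -- throttled with vacuum_cost_delay to limit the performance impact
    SET vacuum_freeze_table_age = 0;
    SET vacuum_cost_delay = 20;
    VACUUM;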
@@ -6505,45 +6505,45 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix bugs in contrib/pg_trgm's LIKE pattern + Fix bugs in contrib/pg_trgm's LIKE pattern analysis code (Fujii Masao) - LIKE queries using a trigram index could produce wrong - results if the pattern contained LIKE escape characters. + LIKE queries using a trigram index could produce wrong + results if the pattern contained LIKE escape characters. - Fix pg_upgrade's handling of line endings on Windows + Fix pg_upgrade's handling of line endings on Windows (Andrew Dunstan) - Previously, pg_upgrade might add or remove carriage + Previously, pg_upgrade might add or remove carriage returns in places such as function bodies. - On Windows, make pg_upgrade use backslash path + On Windows, make pg_upgrade use backslash path separators in the scripts it emits (Andrew Dunstan) - Remove unnecessary dependency on pg_config from - pg_upgrade (Peter Eisentraut) + Remove unnecessary dependency on pg_config from + pg_upgrade (Peter Eisentraut) - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -6593,7 +6593,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - xml_parse() would attempt to fetch external files or + xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. While the external data @@ -6606,22 +6606,22 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent access to external files/URLs via contrib/xml2's - xslt_process() (Peter Eisentraut) + Prevent access to external files/URLs via contrib/xml2's + xslt_process() (Peter Eisentraut) - libxslt offers the ability to read and write both + libxslt offers the ability to read and write both files and URLs through stylesheet commands, thus allowing unprivileged database users to both read and write data with the privileges of the database server. Disable that through proper use - of libxslt's security options. (CVE-2012-3488) + of libxslt's security options. (CVE-2012-3488) - Also, remove xslt_process()'s ability to fetch documents + Also, remove xslt_process()'s ability to fetch documents and stylesheets from external files/URLs. While this was a - documented feature, it was long regarded as a bad idea. + documented feature, it was long regarded as a bad idea. The fix for CVE-2012-3489 broke that capability, and rather than expend effort on trying to fix it, we're just going to summarily remove it. @@ -6649,21 +6649,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - If ALTER SEQUENCE was executed on a freshly created or - reset sequence, and then precisely one nextval() call + If ALTER SEQUENCE was executed on a freshly created or + reset sequence, and then precisely one nextval() call was made on it, and then the server crashed, WAL replay would restore the sequence to a state in which it appeared that no - nextval() had been done, thus allowing the first + nextval() had been done, thus allowing the first sequence value to be returned again by the next - nextval() call. In particular this could manifest for - serial columns, since creation of a serial column's sequence - includes an ALTER SEQUENCE OWNED BY step. + nextval() call. 
In particular this could manifest for + serial columns, since creation of a serial column's sequence + includes an ALTER SEQUENCE OWNED BY step. - Fix race condition in enum-type value comparisons (Robert + Fix race condition in enum-type value comparisons (Robert Haas, Tom Lane) @@ -6675,7 +6675,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix txid_current() to report the correct epoch when not + Fix txid_current() to report the correct epoch when not in hot standby (Heikki Linnakangas) @@ -6692,7 +6692,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The master might improperly choose pseudo-servers such as - pg_receivexlog or pg_basebackup + pg_receivexlog or pg_basebackup as the synchronous standby, and then wait indefinitely for them. @@ -6705,14 +6705,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This mistake led to failures reported as out-of-order XID - insertion in KnownAssignedXids. + insertion in KnownAssignedXids. - Ensure the backup_label file is fsync'd after - pg_start_backup() (Dave Kerr) + Ensure the backup_label file is fsync'd after + pg_start_backup() (Dave Kerr) @@ -6723,7 +6723,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 WAL sender background processes neglected to establish a - SIGALRM handler, meaning they would wait forever in + SIGALRM handler, meaning they would wait forever in some corner cases where a timeout ought to happen. @@ -6742,15 +6742,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix LISTEN/NOTIFY to cope better with I/O + Fix LISTEN/NOTIFY to cope better with I/O problems, such as out of disk space (Tom Lane) After a write failure, all subsequent attempts to send more - NOTIFY messages would fail with messages like - Could not read from file "pg_notify/nnnn" at - offset nnnnn: Success. + NOTIFY messages would fail with messages like + Could not read from file "pg_notify/nnnn" at + offset nnnnn: Success. @@ -6763,7 +6763,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The original coding could allow inconsistent behavior in some cases; in particular, an autovacuum could get canceled after less than - deadlock_timeout grace period. + deadlock_timeout grace period. @@ -6775,15 +6775,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix log collector so that log_truncate_on_rotation works + Fix log collector so that log_truncate_on_rotation works during the very first log rotation after server start (Tom Lane) - Fix WITH attached to a nested set operation - (UNION/INTERSECT/EXCEPT) + Fix WITH attached to a nested set operation + (UNION/INTERSECT/EXCEPT) (Tom Lane) @@ -6791,44 +6791,44 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Ensure that a whole-row reference to a subquery doesn't include any - extra GROUP BY or ORDER BY columns (Tom Lane) + extra GROUP BY or ORDER BY columns (Tom Lane) Fix dependencies generated during ALTER TABLE ... ADD - CONSTRAINT USING INDEX (Tom Lane) + CONSTRAINT USING INDEX (Tom Lane) - This command left behind a redundant pg_depend entry + This command left behind a redundant pg_depend entry for the index, which could confuse later operations, notably - ALTER TABLE ... ALTER COLUMN TYPE on one of the indexed + ALTER TABLE ... ALTER COLUMN TYPE on one of the indexed columns. 
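The ALTER TABLE ... ADD CONSTRAINT USING INDEX item above involves command sequences along these lines (object names are hypothetical); the later ALTER COLUMN TYPE is the operation that the redundant dependency entry could confuse:

    CREATE UNIQUE INDEX items_sku_idx ON items (sku);
    ALTER TABLE items
        ADD CONSTRAINT items_sku_key UNIQUE USING INDEX items_sku_idx;

    -- A later type change on the indexed column
    ALTER TABLE items ALTER COLUMN sku TYPE varchar(64);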
- Fix REASSIGN OWNED to work on extensions (Alvaro Herrera) + Fix REASSIGN OWNED to work on extensions (Alvaro Herrera) - Disallow copying whole-row references in CHECK - constraints and index definitions during CREATE TABLE + Disallow copying whole-row references in CHECK + constraints and index definitions during CREATE TABLE (Tom Lane) - This situation can arise in CREATE TABLE with - LIKE or INHERITS. The copied whole-row + This situation can arise in CREATE TABLE with + LIKE or INHERITS. The copied whole-row variable was incorrectly labeled with the row type of the original table not the new one. Rejecting the case seems reasonable for - LIKE, since the row types might well diverge later. For - INHERITS we should ideally allow it, with an implicit + LIKE, since the row types might well diverge later. For + INHERITS we should ideally allow it, with an implicit coercion to the parent table's row type; but that will require more work than seems safe to back-patch. @@ -6836,7 +6836,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki + Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki Linnakangas, Tom Lane) @@ -6860,7 +6860,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The code could get confused by quantified parenthesized - subexpressions, such as ^(foo)?bar. This would lead to + subexpressions, such as ^(foo)?bar. This would lead to incorrect index optimization of searches for such patterns. @@ -6868,26 +6868,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix bugs with parsing signed - hh:mm and - hh:mm:ss - fields in interval constants (Amit Kapila, Tom Lane) + hh:mm and + hh:mm:ss + fields in interval constants (Amit Kapila, Tom Lane) - Fix pg_dump to better handle views containing partial - GROUP BY lists (Tom Lane) + Fix pg_dump to better handle views containing partial + GROUP BY lists (Tom Lane) - A view that lists only a primary key column in GROUP BY, + A view that lists only a primary key column in GROUP BY, but uses other table columns as if they were grouped, gets marked as depending on the primary key. Improper handling of such primary key - dependencies in pg_dump resulted in poorly-ordered + dependencies in pg_dump resulted in poorly-ordered dumps, which at best would be inefficient to restore and at worst could result in outright failure of a parallel - pg_restore run. + pg_restore run. @@ -6923,14 +6923,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Report errors properly in contrib/xml2's - xslt_process() (Tom Lane) + Report errors properly in contrib/xml2's + xslt_process() (Tom Lane) - Update time zone data files to tzdata release 2012e + Update time zone data files to tzdata release 2012e for DST law changes in Morocco and Tokelau @@ -6962,13 +6962,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - However, if you use the citext data type, and you upgraded - from a previous major release by running pg_upgrade, - you should run CREATE EXTENSION citext FROM unpackaged - to avoid collation-related failures in citext operations. + However, if you use the citext data type, and you upgraded + from a previous major release by running pg_upgrade, + you should run CREATE EXTENSION citext FROM unpackaged + to avoid collation-related failures in citext operations. The same is necessary if you restore a dump from a pre-9.1 database - that contains an instance of the citext data type. 
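The pg_dump item above concerns views of roughly this shape, where only the primary key column appears in GROUP BY and the other columns are used via the functional dependency on that key (the definitions are illustrative):

    CREATE TABLE orders (
        id       serial PRIMARY KEY,
        customer text,
        total    numeric
    );

    CREATE VIEW order_summary AS
        SELECT id, customer, total
        FROM orders
        GROUP BY id;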
- If you've already run the CREATE EXTENSION command before + that contains an instance of the citext data type. + If you've already run the CREATE EXTENSION command before upgrading to 9.1.4, you will instead need to do manual catalog updates as explained in the third changelog item below. @@ -6988,12 +6988,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix incorrect password transformation in - contrib/pgcrypto's DES crypt() function + contrib/pgcrypto's DES crypt() function (Solar Designer) - If a password string contained the byte value 0x80, the + If a password string contained the byte value 0x80, the remainder of the password was ignored, causing the password to be much weaker than it appeared. With this fix, the rest of the string is properly included in the DES hash. Any stored password values that are @@ -7004,7 +7004,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ignore SECURITY DEFINER and SET attributes for + Ignore SECURITY DEFINER and SET attributes for a procedural language's call handler (Tom Lane) @@ -7016,16 +7016,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make contrib/citext's upgrade script fix collations of - citext arrays and domains over citext + Make contrib/citext's upgrade script fix collations of + citext arrays and domains over citext (Tom Lane) - Release 9.1.2 provided a fix for collations of citext columns + Release 9.1.2 provided a fix for collations of citext columns and indexes in databases upgraded or reloaded from pre-9.1 installations, but that fix was incomplete: it neglected to handle arrays - and domains over citext. This release extends the module's + and domains over citext. This release extends the module's upgrade script to handle these cases. As before, if you have already run the upgrade script, you'll need to run the collation update commands by hand instead. See the 9.1.2 release notes for more @@ -7035,7 +7035,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow numeric timezone offsets in timestamp input to be up to + Allow numeric timezone offsets in timestamp input to be up to 16 hours away from UTC (Tom Lane) @@ -7061,7 +7061,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix text to name and char to name + Fix text to name and char to name casts to perform string truncation correctly in multibyte encodings (Karl Schnaitter) @@ -7069,13 +7069,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix memory copying bug in to_tsquery() (Heikki Linnakangas) + Fix memory copying bug in to_tsquery() (Heikki Linnakangas) - Ensure txid_current() reports the correct epoch when + Ensure txid_current() reports the correct epoch when executed in hot standby (Simon Riggs) @@ -7090,7 +7090,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This bug concerns sub-SELECTs that reference variables coming from the nullable side of an outer join of the surrounding query. In 9.1, queries affected by this bug would fail with ERROR: - Upper-level PlaceHolderVar found where not expected. But in 9.0 and + Upper-level PlaceHolderVar found where not expected. But in 9.0 and 8.4, you'd silently get possibly-wrong answers, since the value transmitted into the subquery wouldn't go to null when it should. 
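For reference, the contrib/pgcrypto DES crypt() item above concerns usage along these lines; the table and column names are made up, and any hash stored for a password containing byte 0x80 would need to be regenerated after the fix:

    -- Hash a new password with a DES salt
    SELECT crypt('new password', gen_salt('des'));

    -- Verify a login attempt against the stored hash
    SELECT pw_hash = crypt('entered password', pw_hash) AS password_ok
    FROM accounts
    WHERE login = 'alice';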
@@ -7098,26 +7098,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix planning of UNION ALL subqueries with output columns + Fix planning of UNION ALL subqueries with output columns that are not simple variables (Tom Lane) Planning of such cases got noticeably worse in 9.1 as a result of a misguided fix for MergeAppend child's targetlist doesn't match - MergeAppend errors. Revert that fix and do it another way. + MergeAppend errors. Revert that fix and do it another way. - Fix slow session startup when pg_attribute is very large + Fix slow session startup when pg_attribute is very large (Tom Lane) - If pg_attribute exceeds one-fourth of - shared_buffers, cache rebuilding code that is sometimes + If pg_attribute exceeds one-fourth of + shared_buffers, cache rebuilding code that is sometimes needed during session start would trigger the synchronized-scan logic, causing it to take many times longer than normal. The problem was particularly acute if many new sessions were starting at once. @@ -7138,8 +7138,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ensure the Windows implementation of PGSemaphoreLock() - clears ImmediateInterruptOK before returning (Tom Lane) + Ensure the Windows implementation of PGSemaphoreLock() + clears ImmediateInterruptOK before returning (Tom Lane) @@ -7166,31 +7166,31 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix COPY FROM to properly handle null marker strings that + Fix COPY FROM to properly handle null marker strings that correspond to invalid encoding (Tom Lane) - A null marker string such as E'\\0' should work, and did + A null marker string such as E'\\0' should work, and did work in the past, but the case got broken in 8.4. - Fix EXPLAIN VERBOSE for writable CTEs containing - RETURNING clauses (Tom Lane) + Fix EXPLAIN VERBOSE for writable CTEs containing + RETURNING clauses (Tom Lane) - Fix PREPARE TRANSACTION to work correctly in the presence + Fix PREPARE TRANSACTION to work correctly in the presence of advisory locks (Tom Lane) - Historically, PREPARE TRANSACTION has simply ignored any + Historically, PREPARE TRANSACTION has simply ignored any session-level advisory locks the session holds, but this case was accidentally broken in 9.1. @@ -7205,14 +7205,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Ignore missing schemas during non-interactive assignments of - search_path (Tom Lane) + search_path (Tom Lane) This re-aligns 9.1's behavior with that of older branches. Previously 9.1 would throw an error for nonexistent schemas mentioned in - search_path settings obtained from places such as - ALTER DATABASE SET. + search_path settings obtained from places such as + ALTER DATABASE SET. @@ -7223,7 +7223,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This includes cases such as a rewriting ALTER TABLE within + This includes cases such as a rewriting ALTER TABLE within an extension update script, since that uses a transient table behind the scenes. @@ -7237,7 +7237,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Previously, infinite recursion in a function invoked by - auto-ANALYZE could crash worker processes. + auto-ANALYZE could crash worker processes. 
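The PREPARE TRANSACTION item above is about cases like the following sketch, where the session already holds a session-level advisory lock (this assumes max_prepared_transactions is set above zero; the lock key, transaction identifier, and table are arbitrary):

    SELECT pg_advisory_lock(42);        -- session-level advisory lock
    BEGIN;
    UPDATE jobs SET state = 'done' WHERE id = 7;
    PREPARE TRANSACTION 'batch-job-1';  -- historically ignores the advisory lock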
@@ -7256,13 +7256,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix logging collector to ensure it will restart file rotation - after receiving SIGHUP (Tom Lane) + after receiving SIGHUP (Tom Lane) - Fix too many LWLocks taken failure in GiST indexes (Heikki + Fix too many LWLocks taken failure in GiST indexes (Heikki Linnakangas) @@ -7296,35 +7296,35 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix error handling in pg_basebackup + Fix error handling in pg_basebackup (Thomas Ogrisegg, Fujii Masao) - Fix walsender to not go into a busy loop if connection + Fix walsender to not go into a busy loop if connection is terminated (Fujii Masao) - Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe + Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe Conway) - Fix PL/pgSQL's GET DIAGNOSTICS command when the target + Fix PL/pgSQL's GET DIAGNOSTICS command when the target is the function's first variable (Tom Lane) - Ensure that PL/Perl package-qualifies the _TD variable + Ensure that PL/Perl package-qualifies the _TD variable (Alex Hunsaker) @@ -7349,19 +7349,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix potential access off the end of memory in psql's - expanded display (\x) mode (Peter Eisentraut) + Fix potential access off the end of memory in psql's + expanded display (\x) mode (Peter Eisentraut) - Fix several performance problems in pg_dump when + Fix several performance problems in pg_dump when the database contains many objects (Jeff Janes, Tom Lane) - pg_dump could get very slow if the database contained + pg_dump could get very slow if the database contained many schemas, or if many objects are in dependency loops, or if there are many owned sequences. @@ -7369,14 +7369,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix memory and file descriptor leaks in pg_restore + Fix memory and file descriptor leaks in pg_restore when reading a directory-format archive (Peter Eisentraut) - Fix pg_upgrade for the case that a database stored in a + Fix pg_upgrade for the case that a database stored in a non-default tablespace contains a table in the cluster's default tablespace (Bruce Momjian) @@ -7384,41 +7384,41 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In ecpg, fix rare memory leaks and possible overwrite - of one byte after the sqlca_t structure (Peter Eisentraut) + In ecpg, fix rare memory leaks and possible overwrite + of one byte after the sqlca_t structure (Peter Eisentraut) - Fix contrib/dblink's dblink_exec() to not leak + Fix contrib/dblink's dblink_exec() to not leak temporary database connections upon error (Tom Lane) - Fix contrib/dblink to report the correct connection name in + Fix contrib/dblink to report the correct connection name in error messages (Kyotaro Horiguchi) - Fix contrib/vacuumlo to use multiple transactions when + Fix contrib/vacuumlo to use multiple transactions when dropping many large objects (Tim Lewis, Robert Haas, Tom Lane) - This change avoids exceeding max_locks_per_transaction when + This change avoids exceeding max_locks_per_transaction when many objects need to be dropped. The behavior can be adjusted with the - new -l (limit) option. + new -l (limit) option. - Update time zone data files to tzdata release 2012c + Update time zone data files to tzdata release 2012c for DST law changes in Antarctica, Armenia, Chile, Cuba, Falkland Islands, Gaza, Haiti, Hebron, Morocco, Syria, and Tokelau Islands; also historical corrections for Canada. 
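The GET DIAGNOSTICS item above refers to PL/pgSQL functions of roughly this shape, where the command's target is the first variable declared in the function (names are hypothetical):

    CREATE FUNCTION purge_expired_sessions() RETURNS bigint
    LANGUAGE plpgsql AS $$
    DECLARE
        n_rows bigint;        -- the function's first declared variable
    BEGIN
        DELETE FROM sessions WHERE expires_at < now();
        GET DIAGNOSTICS n_rows = ROW_COUNT;
        RETURN n_rows;
    END;
    $$;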
@@ -7466,14 +7466,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Require execute permission on the trigger function for - CREATE TRIGGER (Robert Haas) + CREATE TRIGGER (Robert Haas) This missing check could allow another user to execute a trigger function with forged input data, by installing it on a table he owns. This is only of significance for trigger functions marked - SECURITY DEFINER, since otherwise trigger functions run + SECURITY DEFINER, since otherwise trigger functions run as the table owner anyway. (CVE-2012-0866) @@ -7485,7 +7485,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Both libpq and the server truncated the common name + Both libpq and the server truncated the common name extracted from an SSL certificate at 32 bytes. Normally this would cause nothing worse than an unexpected verification failure, but there are some rather-implausible scenarios in which it might allow one @@ -7500,12 +7500,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Convert newlines to spaces in names written in pg_dump + Convert newlines to spaces in names written in pg_dump comments (Robert Haas) - pg_dump was incautious about sanitizing object names + pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL @@ -7521,10 +7521,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 An index page split caused by an insertion could sometimes cause a - concurrently-running VACUUM to miss removing index entries + concurrently-running VACUUM to miss removing index entries that it should remove. After the corresponding table rows are removed, the dangling index entries would cause errors (such as could not - read block N in file ...) or worse, silently wrong query results + read block N in file ...) or worse, silently wrong query results after unrelated rows are re-inserted at the now-free table locations. This bug has been present since release 8.2, but occurs so infrequently that it was not diagnosed until now. If you have reason to suspect @@ -7543,22 +7543,22 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 that the contents were transiently invalid. In hot standby mode this can result in a query that's executing in parallel seeing garbage data. Various symptoms could result from that, but the most common one seems - to be invalid memory alloc request size. + to be invalid memory alloc request size. - Fix handling of data-modifying WITH subplans in - READ COMMITTED rechecking (Tom Lane) + Fix handling of data-modifying WITH subplans in + READ COMMITTED rechecking (Tom Lane) - A WITH clause containing - INSERT/UPDATE/DELETE would crash - if the parent UPDATE or DELETE command needed + A WITH clause containing + INSERT/UPDATE/DELETE would crash + if the parent UPDATE or DELETE command needed to be re-evaluated at one or more rows due to concurrent updates - in READ COMMITTED mode. + in READ COMMITTED mode. 
@@ -7589,13 +7589,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix CLUSTER/VACUUM FULL handling of toast + Fix CLUSTER/VACUUM FULL handling of toast values owned by recently-updated rows (Tom Lane) This oversight could lead to duplicate key value violates unique - constraint errors being reported against the toast table's index + constraint errors being reported against the toast table's index during one of these commands. @@ -7617,11 +7617,11 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Support foreign data wrappers and foreign servers in - REASSIGN OWNED (Alvaro Herrera) + REASSIGN OWNED (Alvaro Herrera) - This command failed with unexpected classid errors if + This command failed with unexpected classid errors if it needed to change the ownership of any such objects. @@ -7629,24 +7629,24 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow non-existent values for some settings in ALTER - USER/DATABASE SET (Heikki Linnakangas) + USER/DATABASE SET (Heikki Linnakangas) - Allow default_text_search_config, - default_tablespace, and temp_tablespaces to be + Allow default_text_search_config, + default_tablespace, and temp_tablespaces to be set to names that are not known. This is because they might be known in another database where the setting is intended to be used, or for the tablespace cases because the tablespace might not be created yet. The - same issue was previously recognized for search_path, and + same issue was previously recognized for search_path, and these settings now act like that one. - Fix unsupported node type error caused by COLLATE - in an INSERT expression (Tom Lane) + Fix unsupported node type error caused by COLLATE + in an INSERT expression (Tom Lane) @@ -7669,7 +7669,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Recover from errors occurring during WAL replay of DROP - TABLESPACE (Tom Lane) + TABLESPACE (Tom Lane) @@ -7691,7 +7691,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Sometimes a lock would be logged as being held by transaction - zero. This is at least known to produce assertion failures on + zero. This is at least known to produce assertion failures on slave servers, and might be the cause of more serious problems. @@ -7713,7 +7713,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent emitting misleading consistent recovery state reached + Prevent emitting misleading consistent recovery state reached log message at the beginning of crash recovery (Heikki Linnakangas) @@ -7721,7 +7721,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix initial value of - pg_stat_replication.replay_location + pg_stat_replication.replay_location (Fujii Masao) @@ -7733,7 +7733,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix regular expression back-references with * attached + Fix regular expression back-references with * attached (Tom Lane) @@ -7747,18 +7747,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 A similar problem still afflicts back-references that are embedded in a larger quantified expression, rather than being the immediate subject of the quantifier. This will be addressed in a future - PostgreSQL release. + PostgreSQL release. 
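The COLLATE item above concerns expressions of this form inside an INSERT; the collation "C" is used here only because it exists everywhere, and the table is hypothetical:

    INSERT INTO phonebook (surname)
    VALUES ('Smith' COLLATE "C");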
Fix recently-introduced memory leak in processing of - inet/cidr values (Heikki Linnakangas) + inet/cidr values (Heikki Linnakangas) - A patch in the December 2011 releases of PostgreSQL + A patch in the December 2011 releases of PostgreSQL caused memory leakage in these operations, which could be significant in scenarios such as building a btree index on such a column. @@ -7767,7 +7767,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix planner's ability to push down index-expression restrictions - through UNION ALL (Tom Lane) + through UNION ALL (Tom Lane) @@ -7778,19 +7778,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix planning of WITH clauses referenced in - UPDATE/DELETE on an inherited table + Fix planning of WITH clauses referenced in + UPDATE/DELETE on an inherited table (Tom Lane) - This bug led to could not find plan for CTE failures. + This bug led to could not find plan for CTE failures. - Fix GIN cost estimation to handle column IN (...) + Fix GIN cost estimation to handle column IN (...) index conditions (Marti Raudsepp) @@ -7813,8 +7813,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix dangling pointer after CREATE TABLE AS/SELECT - INTO in a SQL-language function (Tom Lane) + Fix dangling pointer after CREATE TABLE AS/SELECT + INTO in a SQL-language function (Tom Lane) @@ -7853,14 +7853,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This function crashes when handed a typeglob or certain read-only - objects such as $^V. Make plperl avoid passing those to + objects such as $^V. Make plperl avoid passing those to it. - In pg_dump, don't dump contents of an extension's + In pg_dump, don't dump contents of an extension's configuration tables if the extension itself is not being dumped (Tom Lane) @@ -7868,32 +7868,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Improve pg_dump's handling of inherited table columns + Improve pg_dump's handling of inherited table columns (Tom Lane) - pg_dump mishandled situations where a child column has + pg_dump mishandled situations where a child column has a different default expression than its parent column. If the default is textually identical to the parent's default, but not actually the same (for instance, because of schema search path differences) it would not be recognized as different, so that after dump and restore the child would be allowed to inherit the parent's default. Child columns - that are NOT NULL where their parent is not could also be + that are NOT NULL where their parent is not could also be restored subtly incorrectly. - Fix pg_restore's direct-to-database mode for + Fix pg_restore's direct-to-database mode for INSERT-style table data (Tom Lane) Direct-to-database restores from archive files made with - - Cope with invalid pre-existing search_path settings during - CREATE EXTENSION (Tom Lane) + Cope with invalid pre-existing search_path settings during + CREATE EXTENSION (Tom Lane) @@ -8453,14 +8453,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ensure walsender processes respond promptly to SIGTERM + Ensure walsender processes respond promptly to SIGTERM (Magnus Hagander) - Exclude postmaster.opts from base backups + Exclude postmaster.opts from base backups (Magnus Hagander) @@ -8473,20 +8473,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Formerly, these would not be displayed correctly in the - pg_settings view. + pg_settings view. 
- Fix incorrect field alignment in ecpg's SQLDA area + Fix incorrect field alignment in ecpg's SQLDA area (Zoltan Boszormenyi) - Preserve blank lines within commands in psql's command + Preserve blank lines within commands in psql's command history (Robert Haas) @@ -8498,41 +8498,41 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid platform-specific infinite loop in pg_dump + Avoid platform-specific infinite loop in pg_dump (Steve Singer) - Fix compression of plain-text output format in pg_dump + Fix compression of plain-text output format in pg_dump (Adrian Klaver and Tom Lane) - pg_dump has historically understood -Z with - no -F switch to mean that it should emit a gzip-compressed + pg_dump has historically understood -Z with + no -F switch to mean that it should emit a gzip-compressed version of its plain text output. Restore that behavior. - Fix pg_dump to dump user-defined casts between + Fix pg_dump to dump user-defined casts between auto-generated types, such as table rowtypes (Tom Lane) - Fix missed quoting of foreign server names in pg_dump + Fix missed quoting of foreign server names in pg_dump (Tom Lane) - Assorted fixes for pg_upgrade (Bruce Momjian) + Assorted fixes for pg_upgrade (Bruce Momjian) @@ -8556,15 +8556,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Restore the pre-9.1 behavior that PL/Perl functions returning - void ignore the result value of their last Perl statement; + void ignore the result value of their last Perl statement; 9.1.0 would throw an error if that statement returned a reference. Also, make sure it works to return a string value for a composite type, so long as the string meets the type's input format. In addition, throw errors for attempts to return Perl arrays or hashes when the function's declared result type is not an array or composite type, respectively. (Pre-9.1 versions rather uselessly returned - strings like ARRAY(0x221a9a0) or - HASH(0x221aa90) in such cases.) + strings like ARRAY(0x221a9a0) or + HASH(0x221aa90) in such cases.) @@ -8577,7 +8577,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Use the preferred version of xsubpp to build PL/Perl, + Use the preferred version of xsubpp to build PL/Perl, not necessarily the operating system's main copy (David Wheeler and Alex Hunsaker) @@ -8599,14 +8599,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Change all the contrib extension script files to report - a useful error message if they are fed to psql + Change all the contrib extension script files to report + a useful error message if they are fed to psql (Andrew Dunstan and Tom Lane) This should help teach people about the new method of using - CREATE EXTENSION to load these files. In most cases, + CREATE EXTENSION to load these files. In most cases, sourcing the scripts directly would fail anyway, but with harder-to-interpret messages. @@ -8614,19 +8614,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix incorrect coding in contrib/dict_int and - contrib/dict_xsyn (Tom Lane) + Fix incorrect coding in contrib/dict_int and + contrib/dict_xsyn (Tom Lane) Some functions incorrectly assumed that memory returned by - palloc() is guaranteed zeroed. + palloc() is guaranteed zeroed. 
- Remove contrib/sepgsql tests from the regular regression + Remove contrib/sepgsql tests from the regular regression test mechanism (Tom Lane) @@ -8639,14 +8639,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix assorted errors in contrib/unaccent's configuration + Fix assorted errors in contrib/unaccent's configuration file parsing (Tom Lane) - Honor query cancel interrupts promptly in pgstatindex() + Honor query cancel interrupts promptly in pgstatindex() (Robert Haas) @@ -8660,7 +8660,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Revert unintentional enabling of WAL_DEBUG (Robert Haas) + Revert unintentional enabling of WAL_DEBUG (Robert Haas) @@ -8695,15 +8695,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Map Central America Standard Time to CST6, not - CST6CDT, because DST is generally not observed anywhere in + Map Central America Standard Time to CST6, not + CST6CDT, because DST is generally not observed anywhere in Central America. - Update time zone data files to tzdata release 2011n + Update time zone data files to tzdata release 2011n for DST law changes in Brazil, Cuba, Fiji, Palestine, Russia, and Samoa; also historical corrections for Alaska and British East Africa. @@ -8744,7 +8744,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make pg_options_to_table return NULL for an option with no + Make pg_options_to_table return NULL for an option with no value (Tom Lane) @@ -8768,8 +8768,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix explicit reference to pg_temp schema in CREATE - TEMPORARY TABLE (Robert Haas) + Fix explicit reference to pg_temp schema in CREATE + TEMPORARY TABLE (Robert Haas) @@ -8794,9 +8794,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Overview - This release shows PostgreSQL moving beyond the + This release shows PostgreSQL moving beyond the traditional relational-database feature set with new, ground-breaking - functionality that is unique to PostgreSQL. + functionality that is unique to PostgreSQL. The streaming replication feature introduced in release 9.0 is significantly enhanced by adding a synchronous-replication option, streaming backups, and monitoring improvements. @@ -8831,7 +8831,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add extensions which - simplify packaging of additions to PostgreSQL + simplify packaging of additions to PostgreSQL @@ -8844,32 +8844,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Support unlogged tables using the UNLOGGED + Support unlogged tables using the UNLOGGED option in CREATE - TABLE + TABLE Allow data-modification commands - (INSERT/UPDATE/DELETE) in - WITH clauses + (INSERT/UPDATE/DELETE) in + WITH clauses Add nearest-neighbor (order-by-operator) searching to GiST indexes + linkend="GiST">GiST indexes Add a SECURITY - LABEL command and support for - SELinux permissions control + LABEL command and support for + SELinux permissions control @@ -8912,7 +8912,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Change the default value of standard_conforming_strings + linkend="guc-standard-conforming-strings">standard_conforming_strings to on (Robert Haas) @@ -8920,8 +8920,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 By default, backslashes are now ordinary characters in string literals, not escape characters. This change removes a long-standing incompatibility with the SQL standard. 
escape_string_warning - has produced warnings about this usage for years. E'' + linkend="guc-escape-string-warning">escape_string_warning + has produced warnings about this usage for years. E'' strings are the proper way to embed backslash escapes in strings and are unaffected by this change. @@ -8955,12 +8955,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 For example, disallow - composite_value.text and - text(composite_value). + composite_value.text and + text(composite_value). Unintentional uses of this syntax have frequently resulted in bug reports; although it was not a bug, it seems better to go back to rejecting such expressions. - The CAST and :: syntaxes are still available + The CAST and :: syntaxes are still available for use when a cast of an entire composite value is actually intended. @@ -8972,10 +8972,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 When a domain is based on an array type, it is allowed to look - through the domain type to access the array elements, including + through the domain type to access the array elements, including subscripting the domain value to fetch or assign an element. Assignment to an element of such a domain value, for instance via - UPDATE ... SET domaincol[5] = ..., will now result in + UPDATE ... SET domaincol[5] = ..., will now result in rechecking the domain type's constraints, whereas before the checks were skipped. @@ -8993,7 +8993,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Change string_to_array() + linkend="array-functions-table">string_to_array() to return an empty array for a zero-length string (Pavel Stehule) @@ -9006,8 +9006,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Change string_to_array() - so a NULL separator splits the string into characters + linkend="array-functions-table">string_to_array() + so a NULL separator splits the string into characters (Pavel Stehule) @@ -9031,8 +9031,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Triggers can now be fired in three cases: BEFORE, - AFTER, or INSTEAD OF some action. + Triggers can now be fired in three cases: BEFORE, + AFTER, or INSTEAD OF some action. Trigger function authors should verify that their logic behaves sanely in all three cases. @@ -9040,7 +9040,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Require superuser or CREATEROLE permissions in order to + Require superuser or CREATEROLE permissions in order to set comments on roles (Tom Lane) @@ -9057,12 +9057,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Change pg_last_xlog_receive_location() + linkend="functions-recovery-info-table">pg_last_xlog_receive_location() so it never moves backwards (Fujii Masao) - Previously, the value of pg_last_xlog_receive_location() + Previously, the value of pg_last_xlog_receive_location() could move backward when streaming replication is restarted. 
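The two string_to_array() items above produce results like these under the new 9.1 behavior:

    SELECT string_to_array('', ',');      -- returns the empty array {}
    SELECT string_to_array('abc', NULL);  -- returns {a,b,c}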
@@ -9070,7 +9070,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Have logging of replication connections honor log_connections + linkend="guc-log-connections">log_connections (Magnus Hagander) @@ -9090,12 +9090,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Change PL/pgSQL's RAISE command without parameters + Change PL/pgSQL's RAISE command without parameters to be catchable by the attached exception block (Piyush Newe) - Previously RAISE in a code block was always scoped to + Previously RAISE in a code block was always scoped to an attached exception block, so it was uncatchable at the same scope. @@ -9154,7 +9154,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 All contrib modules are now installed with CREATE EXTENSION + linkend="SQL-CREATEEXTENSION">CREATE EXTENSION rather than by manually invoking their SQL scripts (Dimitri Fontaine, Tom Lane) @@ -9164,7 +9164,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 module, use CREATE EXTENSION ... FROM unpackaged to wrap the existing contrib module's objects into an extension. When updating from a pre-9.0 version, drop the contrib module's objects - using its old uninstall script, then use CREATE EXTENSION. + using its old uninstall script, then use CREATE EXTENSION. @@ -9180,26 +9180,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Make pg_stat_reset() + linkend="monitoring-stats-funcs-table">pg_stat_reset() reset all database-level statistics (Tomas Vondra) - Some pg_stat_database counters were not being reset. + Some pg_stat_database counters were not being reset. Fix some information_schema.triggers + linkend="infoschema-triggers">information_schema.triggers column names to match the new SQL-standard names (Dean Rasheed) - Treat ECPG cursor names as case-insensitive + Treat ECPG cursor names as case-insensitive (Zoltan Boszormenyi) @@ -9228,9 +9228,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Support unlogged tables using the UNLOGGED + Support unlogged tables using the UNLOGGED option in CREATE - TABLE (Robert Haas) + TABLE (Robert Haas) @@ -9244,8 +9244,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow FULL OUTER JOIN to be implemented as a - hash join, and allow either side of a LEFT OUTER JOIN - or RIGHT OUTER JOIN to be hashed (Tom Lane) + hash join, and allow either side of a LEFT OUTER JOIN + or RIGHT OUTER JOIN to be hashed (Tom Lane) @@ -9270,7 +9270,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Improve performance of commit_siblings + linkend="guc-commit-siblings">commit_siblings (Greg Smith) @@ -9289,7 +9289,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid leaving data files open after blind writes + Avoid leaving data files open after blind writes (Alvaro Herrera) @@ -9317,7 +9317,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This allows better optimization of queries that use ORDER - BY, LIMIT, or MIN/MAX with + BY, LIMIT, or MIN/MAX with inherited tables. @@ -9346,34 +9346,34 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Support host names and host suffixes - (e.g. .example.com) in pg_hba.conf + (e.g. .example.com) in pg_hba.conf (Peter Eisentraut) - Previously only host IP addresses and CIDR + Previously only host IP addresses and CIDR values were supported. 
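The unlogged-tables item above adds the following kind of definition; such a table is not WAL-logged, so it is faster to modify but is emptied after a crash (the table definition itself is made up):

    CREATE UNLOGGED TABLE session_cache (
        session_id text PRIMARY KEY,
        payload    bytea,
        expires_at timestamptz
    );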
- Support the key word all in the host column of pg_hba.conf + Support the key word all in the host column of pg_hba.conf (Peter Eisentraut) - Previously people used 0.0.0.0/0 or ::/0 + Previously people used 0.0.0.0/0 or ::/0 for this. - Reject local lines in pg_hba.conf + Reject local lines in pg_hba.conf on platforms that don't support Unix-socket connections (Magnus Hagander) @@ -9386,14 +9386,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow GSSAPI + Allow GSSAPI to be used to authenticate to servers via SSPI (Christian Ullrich) + linkend="sspi-auth">SSPI (Christian Ullrich) - Specifically this allows Unix-based GSSAPI clients - to do SSPI authentication with Windows servers. + Specifically this allows Unix-based GSSAPI clients + to do SSPI authentication with Windows servers. @@ -9414,14 +9414,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Rewrite peer + Rewrite peer authentication to avoid use of credential control messages (Tom Lane) This change makes the peer authentication code simpler and better-performing. However, it requires the platform to provide the - getpeereid function or an equivalent socket operation. + getpeereid function or an equivalent socket operation. So far as is known, the only platform for which peer authentication worked before and now will not is pre-5.0 NetBSD. @@ -9440,19 +9440,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add details to the logging of restartpoints and checkpoints, which is controlled by log_checkpoints + linkend="guc-log-checkpoints">log_checkpoints (Fujii Masao, Greg Smith) - New details include WAL file and sync activity. + New details include WAL file and sync activity. Add log_file_mode + linkend="guc-log-file-mode">log_file_mode which controls the permissions on log files created by the logging collector (Martin Pihlak) @@ -9460,7 +9460,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Reduce the default maximum line length for syslog + Reduce the default maximum line length for syslog logging to 900 bytes plus prefixes (Noah Misch) @@ -9482,7 +9482,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add client_hostname column to pg_stat_activity + linkend="monitoring-stats-views-table">pg_stat_activity (Peter Eisentraut) @@ -9494,7 +9494,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add pg_stat_xact_* + linkend="monitoring-stats-views-table">pg_stat_xact_* statistics functions and views (Joel Jacobson) @@ -9515,15 +9515,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add columns showing the number of vacuum and analyze operations in pg_stat_*_tables + linkend="monitoring-stats-views-table">pg_stat_*_tables views (Magnus Hagander) - Add buffers_backend_fsync column to pg_stat_bgwriter + Add buffers_backend_fsync column to pg_stat_bgwriter (Greg Smith) @@ -9545,13 +9545,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Provide auto-tuning of wal_buffers (Greg + linkend="guc-wal-buffers">wal_buffers (Greg Smith) - By default, the value of wal_buffers is now chosen - automatically based on the value of shared_buffers. + By default, the value of wal_buffers is now chosen + automatically based on the value of shared_buffers. @@ -9598,7 +9598,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 synchronous_standby_names setting. Synchronous replication can be enabled or disabled on a per-transaction basis using the - synchronous_commit + synchronous_commit setting. 
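Per-transaction control of synchronous replication, as described in the item above, looks like this (the table name is hypothetical):

    BEGIN;
    SET LOCAL synchronous_commit = off;  -- do not wait for the standby
    INSERT INTO activity_log (event) VALUES ('low-value event');
    COMMIT;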
@@ -9619,13 +9619,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add - replication_timeout + replication_timeout setting (Fujii Masao, Heikki Linnakangas) Replication connections that are idle for more than the - replication_timeout interval will be terminated + replication_timeout interval will be terminated automatically. Formerly, a failed connection was typically not detected until the TCP timeout elapsed, which is inconveniently long in many situations. @@ -9635,7 +9635,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add command-line tool pg_basebackup + linkend="app-pgbasebackup">pg_basebackup for creating a new standby server or database backup (Magnus Hagander) @@ -9667,8 +9667,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add system view pg_stat_replication - which displays activity of WAL sender processes (Itagaki + linkend="monitoring-stats-views-table">pg_stat_replication + which displays activity of WAL sender processes (Itagaki Takahiro, Simon Riggs) @@ -9680,7 +9680,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add monitoring function pg_last_xact_replay_timestamp() + linkend="functions-recovery-info-table">pg_last_xact_replay_timestamp() (Fujii Masao) @@ -9702,7 +9702,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add configuration parameter hot_standby_feedback + linkend="guc-hot-standby-feedback">hot_standby_feedback to enable standbys to postpone cleanup of old row versions on the primary (Simon Riggs) @@ -9715,7 +9715,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add the pg_stat_database_conflicts + linkend="monitoring-stats-views-table">pg_stat_database_conflicts system view to show queries that have been canceled and the reason (Magnus Hagander) @@ -9728,8 +9728,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add a conflicts count to pg_stat_database + Add a conflicts count to pg_stat_database (Magnus Hagander) @@ -9754,7 +9754,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add ERRCODE_T_R_DATABASE_DROPPED + linkend="errcodes-table">ERRCODE_T_R_DATABASE_DROPPED error code to report recovery conflicts due to dropped databases (Tatsuo Ishii) @@ -9780,18 +9780,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The new functions are pg_xlog_replay_pause(), + linkend="functions-recovery-control-table">pg_xlog_replay_pause(), pg_xlog_replay_resume(), + linkend="functions-recovery-control-table">pg_xlog_replay_resume(), and the status function pg_is_xlog_replay_paused(). + linkend="functions-recovery-control-table">pg_is_xlog_replay_paused(). - Add recovery.conf setting - pause_at_recovery_target + Add recovery.conf setting + pause_at_recovery_target to pause recovery at target (Simon Riggs) @@ -9804,14 +9804,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add the ability to create named restore points using pg_create_restore_point() + linkend="functions-admin-backup-table">pg_create_restore_point() (Jaime Casanova) These named restore points can be specified as recovery - targets using the new recovery.conf setting - recovery_target_name. + targets using the new recovery.conf setting + recovery_target_name. 
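For the recovery-control and restore-point items, a short sketch of how the new functions are called; the restore point label is a hypothetical example.

-- On a hot standby: check replay progress and pause/resume replay.
SELECT pg_last_xact_replay_timestamp();
SELECT pg_xlog_replay_pause();
SELECT pg_is_xlog_replay_paused();
SELECT pg_xlog_replay_resume();

-- On the primary: create a named restore point that can later be used
-- as recovery_target_name in recovery.conf.
SELECT pg_create_restore_point('before_schema_change');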
@@ -9830,7 +9830,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add restart_after_crash + linkend="guc-restart-after-crash">restart_after_crash setting which disables automatic server restart after a backend crash (Robert Haas) @@ -9844,8 +9844,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow recovery.conf - to use the same quoting behavior as postgresql.conf + linkend="recovery-config">recovery.conf + to use the same quoting behavior as postgresql.conf (Dimitri Fontaine) @@ -9877,7 +9877,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 single MVCC snapshot would be used for the entire transaction, which allowed certain documented anomalies. The old snapshot isolation behavior is still available by requesting the REPEATABLE READ + linkend="xact-repeatable-read">REPEATABLE READ isolation level. @@ -9885,30 +9885,30 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow data-modification commands - (INSERT/UPDATE/DELETE) in - WITH clauses + (INSERT/UPDATE/DELETE) in + WITH clauses (Marko Tiikkaja, Hitoshi Harada) - These commands can use RETURNING to pass data up to the + These commands can use RETURNING to pass data up to the containing query. - Allow WITH - clauses to be attached to INSERT, UPDATE, - DELETE statements (Marko Tiikkaja, Hitoshi Harada) + Allow WITH + clauses to be attached to INSERT, UPDATE, + DELETE statements (Marko Tiikkaja, Hitoshi Harada) Allow non-GROUP - BY columns in the query target list when the primary - key is specified in the GROUP BY clause (Peter + BY columns in the query target list when the primary + key is specified in the GROUP BY clause (Peter Eisentraut) @@ -9920,13 +9920,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow use of the key word DISTINCT in UNION/INTERSECT/EXCEPT + Allow use of the key word DISTINCT in UNION/INTERSECT/EXCEPT clauses (Tom Lane) - DISTINCT is the default behavior so use of this + DISTINCT is the default behavior so use of this key word is redundant, but the SQL standard allows it. @@ -9934,13 +9934,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix ordinary queries with rules to use the same snapshot behavior - as EXPLAIN ANALYZE (Marko Tiikkaja) + as EXPLAIN ANALYZE (Marko Tiikkaja) - Previously EXPLAIN ANALYZE used slightly different + Previously EXPLAIN ANALYZE used slightly different snapshot timing for queries involving rules. The - EXPLAIN ANALYZE behavior was judged to be more logical. + EXPLAIN ANALYZE behavior was judged to be more logical. @@ -9962,7 +9962,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Previously collation (the sort ordering of text strings) could only be chosen at database creation. Collation can now be set per column, domain, index, or - expression, via the SQL-standard COLLATE clause. + expression, via the SQL-standard COLLATE clause. @@ -9980,17 +9980,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add extensions which - simplify packaging of additions to PostgreSQL + simplify packaging of additions to PostgreSQL (Dimitri Fontaine, Tom Lane) Extensions are controlled by the new CREATE/ALTER/DROP EXTENSION + linkend="SQL-CREATEEXTENSION">CREATE/ALTER/DROP EXTENSION commands. This replaces ad-hoc methods of grouping objects that - are added to a PostgreSQL installation. + are added to a PostgreSQL installation. 
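A minimal sketch of the data-modifying WITH clause described above, using invented table names:

-- Move year-old rows into an archive table in a single statement.
WITH moved AS (
    DELETE FROM orders
    WHERE order_date < now() - interval '1 year'
    RETURNING *
)
INSERT INTO orders_archive
SELECT * FROM moved;

The rows returned by the DELETE's RETURNING clause feed the outer INSERT, so the move happens atomically in one statement.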
@@ -10003,7 +10003,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This allows data stored outside the database to be used like - native PostgreSQL-stored data. Foreign tables + native PostgreSQL-stored data. Foreign tables are currently read-only, however. @@ -10011,7 +10011,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow new values to be added to an existing enum type via - ALTER TYPE (Andrew + ALTER TYPE (Andrew Dunstan) @@ -10019,7 +10019,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add ALTER TYPE ... - ADD/DROP/ALTER/RENAME ATTRIBUTE (Peter Eisentraut) + ADD/DROP/ALTER/RENAME ATTRIBUTE (Peter Eisentraut) @@ -10030,28 +10030,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <command>ALTER</> Object + <command>ALTER</command> Object - Add RESTRICT/CASCADE to ALTER TYPE operations + Add RESTRICT/CASCADE to ALTER TYPE operations on typed tables (Peter Eisentraut) This controls - ADD/DROP/ALTER/RENAME - ATTRIBUTE cascading behavior. + ADD/DROP/ALTER/RENAME + ATTRIBUTE cascading behavior. - Support ALTER TABLE name {OF | NOT OF} - type + Support ALTER TABLE name {OF | NOT OF} + type (Noah Misch) @@ -10064,7 +10064,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add support for more object types in ALTER ... SET - SCHEMA commands (Dimitri Fontaine) + SCHEMA commands (Dimitri Fontaine) @@ -10079,7 +10079,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-CREATETABLE"><command>CREATE/ALTER TABLE</></link> + <link linkend="SQL-CREATETABLE"><command>CREATE/ALTER TABLE</command></link> @@ -10098,13 +10098,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow ALTER TABLE + Allow ALTER TABLE to add foreign keys without validation (Simon Riggs) - The new option is called NOT VALID. The constraint's - state can later be modified to VALIDATED and validation + The new option is called NOT VALID. The constraint's + state can later be modified to VALIDATED and validation checks performed. Together these allow you to add a foreign key with minimal impact on read and write operations. @@ -10118,17 +10118,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - For example, converting a varchar column to - text no longer requires a rewrite of the table. + For example, converting a varchar column to + text no longer requires a rewrite of the table. However, increasing the length constraint on a - varchar column still requires a table rewrite. + varchar column still requires a table rewrite. Add CREATE TABLE IF - NOT EXISTS syntax (Robert Haas) + NOT EXISTS syntax (Robert Haas) @@ -10163,7 +10163,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add a SECURITY - LABEL command (KaiGai Kohei) + LABEL command (KaiGai Kohei) @@ -10197,7 +10197,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Make TRUNCATE ... RESTART - IDENTITY restart sequences transactionally (Steve + IDENTITY restart sequences transactionally (Steve Singer) @@ -10211,26 +10211,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-COPY"><command>COPY</></link> + <link linkend="SQL-COPY"><command>COPY</command></link> - Add ENCODING option to COPY TO/FROM (Hitoshi + Add ENCODING option to COPY TO/FROM (Hitoshi Harada, Itagaki Takahiro) - This allows the encoding of the COPY file to be + This allows the encoding of the COPY file to be specified separately from client encoding. 
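A sketch of the NOT VALID / VALIDATE workflow for foreign keys mentioned above; table, column, and constraint names are placeholders.

-- Add the constraint without scanning existing rows; only new and
-- updated rows are checked immediately.
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;

-- Later, validate existing rows when a full scan is acceptable.
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fk;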
- Add bidirectional COPY + Add bidirectional COPY protocol support (Fujii Masao) @@ -10244,13 +10244,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-EXPLAIN"><command>EXPLAIN</></link> + <link linkend="SQL-EXPLAIN"><command>EXPLAIN</command></link> - Make EXPLAIN VERBOSE show the function call expression + Make EXPLAIN VERBOSE show the function call expression in a FunctionScan node (Tom Lane) @@ -10260,21 +10260,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-VACUUM"><command>VACUUM</></link> + <link linkend="SQL-VACUUM"><command>VACUUM</command></link> Add additional details to the output of VACUUM FULL VERBOSE - and CLUSTER VERBOSE + linkend="SQL-VACUUM">VACUUM FULL VERBOSE + and CLUSTER VERBOSE (Itagaki Takahiro) New information includes the live and dead tuple count and - whether CLUSTER is using an index to rebuild. + whether CLUSTER is using an index to rebuild. @@ -10294,13 +10294,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-CLUSTER"><command>CLUSTER</></link> + <link linkend="SQL-CLUSTER"><command>CLUSTER</command></link> - Allow CLUSTER to sort the table rather than scanning + Allow CLUSTER to sort the table rather than scanning the index when it seems likely to be cheaper (Leonardo Francalanci) @@ -10317,12 +10317,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add nearest-neighbor (order-by-operator) searching to GiST indexes (Teodor Sigaev, Tom Lane) + linkend="GiST">GiST indexes (Teodor Sigaev, Tom Lane) - This allows GiST indexes to quickly return the - N closest values in a query with LIMIT. + This allows GiST indexes to quickly return the + N closest values in a query with LIMIT. For example point '(101,456)' LIMIT 10; @@ -10334,19 +10334,19 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow GIN indexes to index null + Allow GIN indexes to index null and empty values (Tom Lane) - This allows full GIN index scans, and fixes various + This allows full GIN index scans, and fixes various corner cases in which GIN scans would fail. - Allow GIN indexes to + Allow GIN indexes to better recognize duplicate search entries (Tom Lane) @@ -10358,12 +10358,12 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Fix GiST indexes to be fully + Fix GiST indexes to be fully crash-safe (Heikki Linnakangas) - Previously there were rare cases where a REINDEX + Previously there were rare cases where a REINDEX would be required (you would be informed). @@ -10381,19 +10381,19 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow numeric to use a more compact, two-byte header + Allow numeric to use a more compact, two-byte header in common cases (Robert Haas) - Previously all numeric values had four-byte headers; + Previously all numeric values had four-byte headers; this change saves on disk storage. - Add support for dividing money by money + Add support for dividing money by money (Andy Balholm) @@ -10431,9 +10431,9 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - This avoids possible could not identify a comparison function + This avoids possible could not identify a comparison function failures at runtime, if it is possible to implement the query without - sorting. Also, ANALYZE won't try to use inappropriate + sorting. Also, ANALYZE won't try to use inappropriate statistics-gathering methods for columns of such composite types. 
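The nearest-neighbor query shown above assumes a GiST index on the point column; a sketch of the complete setup, reusing the places table from the note's own example:

CREATE INDEX places_location_gist ON places USING gist (location);

SELECT * FROM places
ORDER BY location <-> point '(101,456)'
LIMIT 10;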
@@ -10447,15 +10447,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add support for casting between money and numeric + Add support for casting between money and numeric (Andy Balholm) - Add support for casting from int4 and int8 - to money (Joey Adams) + Add support for casting from int4 and int8 + to money (Joey Adams) @@ -10476,15 +10476,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="functions-xml"><acronym>XML</></link> + <link linkend="functions-xml"><acronym>XML</acronym></link> - Add XML function XMLEXISTS and xpath_exists() + Add XML function XMLEXISTS and xpath_exists() functions (Mike Fowler) @@ -10495,17 +10495,17 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add XML functions xml_is_well_formed(), + Add XML functions xml_is_well_formed(), xml_is_well_formed_document(), + linkend="xml-is-well-formed">xml_is_well_formed_document(), xml_is_well_formed_content() + linkend="xml-is-well-formed">xml_is_well_formed_content() (Mike Fowler) - These check whether the input is properly-formed XML. + These check whether the input is properly-formed XML. They provide functionality that was previously available only in the deprecated contrib/xml2 module. @@ -10525,8 +10525,8 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add SQL function format(text, ...), which - behaves analogously to C's printf() (Pavel Stehule, + linkend="format">format(text, ...), which + behaves analogously to C's printf() (Pavel Stehule, Robert Haas) @@ -10539,13 +10539,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add string functions concat(), + linkend="functions-string-other">concat(), concat_ws(), - left(), - right(), + linkend="functions-string-other">concat_ws(), + left(), + right(), and reverse() + linkend="functions-string-other">reverse() (Pavel Stehule) @@ -10557,7 +10557,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add function pg_read_binary_file() + linkend="functions-admin-genfile">pg_read_binary_file() to read binary files (Dimitri Fontaine, Itagaki Takahiro) @@ -10565,7 +10565,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add a single-parameter version of function pg_read_file() + linkend="functions-admin-genfile">pg_read_file() to read an entire file (Dimitri Fontaine, Itagaki Takahiro) @@ -10573,9 +10573,9 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add three-parameter forms of array_to_string() + linkend="array-functions-table">array_to_string() and string_to_array() + linkend="array-functions-table">string_to_array() for null value processing control (Pavel Stehule) @@ -10590,7 +10590,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add the pg_describe_object() + linkend="functions-info-catalog-table">pg_describe_object() function (Alvaro Herrera) @@ -10619,10 +10619,10 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add variable quote_all_identifiers - to force the quoting of all identifiers in EXPLAIN + linkend="guc-quote-all-identifiers">quote_all_identifiers + to force the quoting of all identifiers in EXPLAIN and in system catalog functions like pg_get_viewdef() + linkend="functions-info-catalog-table">pg_get_viewdef() (Robert Haas) @@ -10635,7 +10635,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add columns to the information_schema.sequences + 
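A few of the new string and array functions in action; the literal values are arbitrary examples.

SELECT format('Hello, %s! You have %s unread messages.', 'Alice', 3);

SELECT concat_ws(', ', '1 Main St', NULL, 'Springfield');  -- NULL arguments are skipped

-- The new third parameter controls how NULL elements are written and read.
SELECT array_to_string(ARRAY['a', NULL, 'c'], ',', '*');   -- a,*,c
SELECT string_to_array('a,*,c', ',', '*');                 -- {a,NULL,c}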
linkend="infoschema-sequences">information_schema.sequences system view (Peter Eisentraut) @@ -10647,8 +10647,8 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow public as a pseudo-role name in has_table_privilege() + Allow public as a pseudo-role name in has_table_privilege() and related functions (Alvaro Herrera) @@ -10669,7 +10669,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Support INSTEAD - OF triggers on views (Dean Rasheed) + OF triggers on views (Dean Rasheed) @@ -10694,7 +10694,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add FOREACH IN - ARRAY to PL/pgSQL + ARRAY to PL/pgSQL (Pavel Stehule) @@ -10734,7 +10734,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - PL/Perl functions can now be declared to accept type record. + PL/Perl functions can now be declared to accept type record. The behavior is the same as for any named composite type. @@ -10776,7 +10776,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - PL/Python can now return multiple OUT parameters + PL/Python can now return multiple OUT parameters and record sets. @@ -10816,10 +10816,10 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; These functions are plpy.quote_ident, - plpy.quote_literal, + linkend="plpython-util">plpy.quote_ident, + plpy.quote_literal, and plpy.quote_nullable. + linkend="plpython-util">plpy.quote_nullable. @@ -10831,7 +10831,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Report PL/Python errors from iterators with PLy_elog (Jan + Report PL/Python errors from iterators with PLy_elog (Jan Urbanski) @@ -10843,7 +10843,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Exception classes were previously not available in - plpy under Python 3. + plpy under Python 3. @@ -10860,7 +10860,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Mark createlang and droplang + Mark createlang and droplang as deprecated now that they just invoke extension commands (Tom Lane) @@ -10869,64 +10869,64 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> - Add psql command \conninfo + Add psql command \conninfo to show current connection information (David Christensen) - Add psql command \sf to + Add psql command \sf to show a function's definition (Pavel Stehule) - Add psql command \dL to list + Add psql command \dL to list languages (Fernando Ike) - Add the - \dn without S now suppresses system + \dn without S now suppresses system schemas. - Allow psql's \e and \ef + Allow psql's \e and \ef commands to accept a line number to be used to position the cursor in the editor (Pavel Stehule) This is passed to the editor according to the - PSQL_EDITOR_LINENUMBER_ARG environment variable. + PSQL_EDITOR_LINENUMBER_ARG environment variable. - Have psql set the client encoding from the + Have psql set the client encoding from the operating system locale by default (Heikki Linnakangas) - This only happens if the PGCLIENTENCODING environment + This only happens if the PGCLIENTENCODING environment variable is not set. 
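A minimal PL/pgSQL sketch of FOREACH IN ARRAY; the function itself is an invented example.

CREATE FUNCTION sum_elements(arr int[]) RETURNS int AS $$
DECLARE
    total int := 0;
    x     int;
BEGIN
    FOREACH x IN ARRAY arr LOOP
        total := total + x;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

SELECT sum_elements(ARRAY[1, 2, 3]);  -- 6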
@@ -10940,8 +10940,8 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Make \dt+ report pg_table_size - instead of pg_relation_size when talking to 9.0 or + Make \dt+ report pg_table_size + instead of pg_relation_size when talking to 9.0 or later servers (Bernd Helmle) @@ -10963,29 +10963,29 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Add pg_dump + Add pg_dump and pg_dumpall - option to force quoting of all identifiers (Robert Haas) - Add directory format to pg_dump + Add directory format to pg_dump (Joachim Wieland, Heikki Linnakangas) - This is internally similar to the tar - pg_dump format. + This is internally similar to the tar + pg_dump format. @@ -10994,27 +10994,27 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PG-CTL"><application>pg_ctl</></link> + <link linkend="APP-PG-CTL"><application>pg_ctl</application></link> - Fix pg_ctl + Fix pg_ctl so it no longer incorrectly reports that the server is not running (Bruce Momjian) Previously this could happen if the server was running but - pg_ctl could not authenticate. + pg_ctl could not authenticate. - Improve pg_ctl start's wait - () option (Bruce Momjian, Tom Lane) @@ -11027,7 +11027,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add promote option to pg_ctl to + Add promote option to pg_ctl to switch a standby server to primary (Fujii Masao) @@ -11039,23 +11039,23 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <application>Development Tools</> + <application>Development Tools</application> - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> Add a libpq connection option client_encoding - which behaves like the PGCLIENTENCODING environment + linkend="libpq-connect-client-encoding">client_encoding + which behaves like the PGCLIENTENCODING environment variable (Heikki Linnakangas) - The value auto sets the client encoding based on + The value auto sets the client encoding based on the operating system locale. @@ -11063,13 +11063,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add PQlibVersion() + linkend="libpq-pqlibversion">PQlibVersion() function which returns the libpq library version (Magnus Hagander) - libpq already had PQserverVersion() which returns + libpq already had PQserverVersion() which returns the server version. @@ -11079,22 +11079,22 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Allow libpq-using clients to check the user name of the server process when connecting via Unix-domain sockets, with the new requirepeer + linkend="libpq-connect-requirepeer">requirepeer connection option (Peter Eisentraut) - PostgreSQL already allowed servers to check + PostgreSQL already allowed servers to check the client user name when connecting via Unix-domain sockets. 
- Add PQping() + Add PQping() and PQpingParams() + linkend="libpq-pqpingparams">PQpingParams() to libpq (Bruce Momjian, Tom Lane) @@ -11109,7 +11109,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="ecpg"><application>ECPG</></link> + <link linkend="ecpg"><application>ECPG</application></link> @@ -11123,7 +11123,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Make ecpglib write double values with a + Make ecpglib write double values with a precision of 15 digits, not 14 as formerly (Akira Kurosawa) @@ -11140,7 +11140,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Use +Olibmerrno compile flag with HP-UX C compilers + Use +Olibmerrno compile flag with HP-UX C compilers that accept it (Ibrar Ahmed) @@ -11163,15 +11163,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - This allows for faster compiles. Also, make -k + This allows for faster compiles. Also, make -k now works more consistently. - Require GNU make + Require GNU make 3.80 or newer (Peter Eisentraut) @@ -11182,7 +11182,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add make maintainer-check target + Add make maintainer-check target (Peter Eisentraut) @@ -11195,15 +11195,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Support make check in contrib + Support make check in contrib (Peter Eisentraut) - Formerly only make installcheck worked, but now + Formerly only make installcheck worked, but now there is support for testing in a temporary installation. - The top-level make check-world target now includes - testing contrib this way. + The top-level make check-world target now includes + testing contrib this way. @@ -11219,7 +11219,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; On Windows, allow pg_ctl to register + linkend="app-pg-ctl">pg_ctl to register the service as auto-start or start-on-demand (Quan Zongliang) @@ -11231,7 +11231,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - minidumps can now be generated by non-debug + minidumps can now be generated by non-debug Windows binaries and analyzed by standard debugging tools. @@ -11287,7 +11287,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add missing get_object_oid() functions, for consistency + Add missing get_object_oid() functions, for consistency (Robert Haas) @@ -11302,13 +11302,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add support for DragonFly BSD (Rumko) + Add support for DragonFly BSD (Rumko) - Expose quote_literal_cstr() for backend use + Expose quote_literal_cstr() for backend use (Robert Haas) @@ -11321,22 +11321,22 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Regression tests were previously always run with - SQL_ASCII encoding. + SQL_ASCII encoding. 
- Add src/tools/git_changelog to replace - cvs2cl and pgcvslog (Robert + Add src/tools/git_changelog to replace + cvs2cl and pgcvslog (Robert Haas, Tom Lane) - Add git-external-diff script to - src/tools (Bruce Momjian) + Add git-external-diff script to + src/tools (Bruce Momjian) @@ -11391,7 +11391,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Modify contrib modules and procedural + Modify contrib modules and procedural languages to install via the new extension mechanism (Tom Lane, Dimitri Fontaine) @@ -11400,21 +11400,21 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add contrib/file_fdw + Add contrib/file_fdw foreign-data wrapper (Shigeru Hanada) Foreign tables using this foreign data wrapper can read flat files - in a manner very similar to COPY. + in a manner very similar to COPY. Add nearest-neighbor search support to contrib/pg_trgm and contrib/btree_gist + linkend="pgtrgm">contrib/pg_trgm and contrib/btree_gist (Teodor Sigaev) @@ -11422,7 +11422,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add contrib/btree_gist + linkend="btree-gist">contrib/btree_gist support for searching on not-equals (Jeff Davis) @@ -11430,25 +11430,25 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Fix contrib/fuzzystrmatch's - levenshtein() function to handle multibyte characters + linkend="fuzzystrmatch">contrib/fuzzystrmatch's + levenshtein() function to handle multibyte characters (Alexander Korotkov) - Add ssl_cipher() and ssl_version() + Add ssl_cipher() and ssl_version() functions to contrib/sslinfo (Robert + linkend="sslinfo">contrib/sslinfo (Robert Haas) - Fix contrib/intarray - and contrib/hstore + Fix contrib/intarray + and contrib/hstore to give consistent results with indexed empty arrays (Tom Lane) @@ -11460,7 +11460,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow contrib/intarray + Allow contrib/intarray to work properly on multidimensional arrays (Tom Lane) @@ -11468,7 +11468,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; In - contrib/intarray, + contrib/intarray, avoid errors complaining about the presence of nulls in cases where no nulls are actually present (Tom Lane) @@ -11477,7 +11477,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; In - contrib/intarray, + contrib/intarray, fix behavior of containment operators with respect to empty arrays (Tom Lane) @@ -11490,10 +11490,10 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Remove contrib/xml2's + Remove contrib/xml2's arbitrary limit on the number of - parameter=value pairs that can be - handled by xslt_process() (Pavel Stehule) + parameter=value pairs that can be + handled by xslt_process() (Pavel Stehule) @@ -11503,7 +11503,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - In contrib/pageinspect, + In contrib/pageinspect, fix heap_page_item to return infomasks as 32-bit values (Alvaro Herrera) @@ -11522,13 +11522,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add contrib/sepgsql - to interface permission checks with SELinux (KaiGai Kohei) + Add contrib/sepgsql + to interface permission checks with SELinux (KaiGai Kohei) This uses the new SECURITY LABEL + linkend="SQL-SECURITY-LABEL">SECURITY LABEL facility. 
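A sketch of contrib/file_fdw usage as described above; the server name, table definition, and file path are hypothetical.

CREATE EXTENSION file_fdw;

CREATE SERVER file_server FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE staff_list (
    id   integer,
    name text
) SERVER file_server
  OPTIONS (filename '/tmp/staff.csv', format 'csv');

-- Reads the flat file at query time, much like COPY FROM; the table is read-only.
SELECT * FROM staff_list;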
@@ -11536,7 +11536,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add contrib module auth_delay (KaiGai + linkend="auth-delay">auth_delay (KaiGai Kohei) @@ -11549,7 +11549,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add dummy_seclabel + Add dummy_seclabel contrib module (KaiGai Kohei) @@ -11569,17 +11569,17 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add support for LIKE and ILIKE index + Add support for LIKE and ILIKE index searches to contrib/pg_trgm (Alexander + linkend="pgtrgm">contrib/pg_trgm (Alexander Korotkov) - Add levenshtein_less_equal() function to contrib/fuzzystrmatch, + Add levenshtein_less_equal() function to contrib/fuzzystrmatch, which is optimized for small distances (Alexander Korotkov) @@ -11587,7 +11587,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Improve performance of index lookups on contrib/seg columns (Alexander + linkend="seg">contrib/seg columns (Alexander Korotkov) @@ -11595,7 +11595,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Improve performance of pg_upgrade for + linkend="pgupgrade">pg_upgrade for databases with many relations (Bruce Momjian) @@ -11603,7 +11603,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add flag to contrib/pgbench to + linkend="pgbench">contrib/pgbench to report per-statement latencies (Florian Pflug) @@ -11619,29 +11619,29 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Move src/tools/test_fsync to contrib/pg_test_fsync + Move src/tools/test_fsync to contrib/pg_test_fsync (Bruce Momjian, Tom Lane) - Add O_DIRECT support to contrib/pg_test_fsync + Add O_DIRECT support to contrib/pg_test_fsync (Bruce Momjian) - This matches the use of O_DIRECT by wal_sync_method. + This matches the use of O_DIRECT by wal_sync_method. 
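A sketch of a trigram index used for LIKE/ILIKE searches, per the contrib/pg_trgm item above; the table and column names are placeholders.

CREATE EXTENSION pg_trgm;

CREATE INDEX words_trgm_idx ON words USING gin (word gin_trgm_ops);

-- The planner can now use the trigram index for pattern matches.
SELECT word FROM words WHERE word ILIKE '%gres%';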
Add new tests to contrib/pg_test_fsync + linkend="pgtestfsync">contrib/pg_test_fsync (Bruce Momjian) @@ -11659,7 +11659,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Extensive ECPG + Extensive ECPG documentation improvements (Satoshi Nagayasu) @@ -11674,7 +11674,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add documentation for exit_on_error + linkend="guc-exit-on-error">exit_on_error (Robert Haas) @@ -11686,7 +11686,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add documentation for pg_options_to_table() + linkend="functions-info-catalog-table">pg_options_to_table() (Josh Berkus) @@ -11699,7 +11699,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Document that it is possible to access all composite type fields using (compositeval).* + linkend="field-selection">(compositeval).* syntax (Peter Eisentraut) @@ -11707,16 +11707,16 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Document that translate() - removes characters in from that don't have a - corresponding to character (Josh Kupershmidt) + linkend="functions-string-other">translate() + removes characters in from that don't have a + corresponding to character (Josh Kupershmidt) - Merge documentation for CREATE CONSTRAINT TRIGGER and CREATE TRIGGER + Merge documentation for CREATE CONSTRAINT TRIGGER and CREATE TRIGGER (Alvaro Herrera) @@ -11741,12 +11741,12 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Handle non-ASCII characters consistently in HISTORY file + Handle non-ASCII characters consistently in HISTORY file (Peter Eisentraut) - While the HISTORY file is in English, we do have to deal + While the HISTORY file is in English, we do have to deal with non-ASCII letters in contributor names. These are now transliterated so that they are reasonably legible without assumptions about character set. diff --git a/doc/src/sgml/release-9.2.sgml b/doc/src/sgml/release-9.2.sgml index 6fa21e3759..ca8c87a4ab 100644 --- a/doc/src/sgml/release-9.2.sgml +++ b/doc/src/sgml/release-9.2.sgml @@ -16,7 +16,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.2.X release series in September 2017. Users are encouraged to update to a newer release branch soon. @@ -43,20 +43,20 @@ Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -95,21 +95,21 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. 
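Two of the documentation items above are easy to demonstrate; the composite type is an invented example.

-- Characters in the "from" list with no counterpart in "to" are removed.
SELECT translate('abcdef', 'abd', 'AB');   -- ABcef

-- Field selection over a composite value requires the parentheses.
CREATE TYPE my_pair AS (a int, b text);
SELECT (ROW(1, 'x')::my_pair).*;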
In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -126,7 +126,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. Previously, @@ -138,13 +138,13 @@ CREATE OR REPLACE VIEW table_privileges AS - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -156,12 +156,12 @@ CREATE OR REPLACE VIEW table_privileges AS This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. + assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. @@ -185,7 +185,7 @@ CREATE OR REPLACE VIEW table_privileges AS - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.2.X release series in September 2017. Users are encouraged to update to a newer release branch soon. @@ -217,7 +217,7 @@ CREATE OR REPLACE VIEW table_privileges AS Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -225,11 +225,11 @@ CREATE OR REPLACE VIEW table_privileges AS The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -244,15 +244,15 @@ CREATE OR REPLACE VIEW table_privileges AS Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) 
- In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -283,15 +283,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. - In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -305,7 +305,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -319,16 +319,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. - However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -406,28 +406,28 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -436,56 +436,56 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... 
SET does (Peter Eisentraut) Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. - Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -496,22 +496,22 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -522,14 +522,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -540,21 +540,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
- Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -562,7 +562,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -574,8 +574,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -587,7 +587,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -607,27 +607,27 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) @@ -651,7 +651,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.2.X release series in September 2017. Users are encouraged to update to a newer release branch soon. @@ -683,18 +683,18 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. @@ -718,7 +718,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. 
To fix, @@ -732,7 +732,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -745,7 +745,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -753,7 +753,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. @@ -768,19 +768,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -788,27 +788,27 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. @@ -822,33 +822,33 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. 
@@ -866,21 +866,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -895,20 +895,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -920,19 +920,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). - In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) @@ -967,7 +967,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -981,9 +981,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -996,15 +996,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. 
If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1058,15 +1058,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. @@ -1075,13 +1075,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1098,7 +1098,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - In corner cases, a spurious out-of-sequence TLI error + In corner cases, a spurious out-of-sequence TLI error could be reported during recovery. @@ -1144,7 +1144,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -1162,15 +1162,15 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... INHERIT (Amit Langote) @@ -1203,12 +1203,12 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -1223,15 +1223,15 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). 
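For the CREATE INDEX CONCURRENTLY fix above, one way to rebuild a potentially affected index without blocking writes; the index and table names are placeholders.

DROP INDEX CONCURRENTLY orders_customer_idx;
CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);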
Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -1242,33 +1242,33 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -1280,8 +1280,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -1293,28 +1293,28 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid discarding interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -1333,21 +1333,21 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. 
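To make the interval-to-interval cast entry in the hunk above concrete, a small sketch with an illustrative value; the annotated results restate what the entry describes and are not output copied from a server.

<programlisting>
-- The outer cast changes the set of retained fields, so it must not be
-- discarded as a no-op: casting INTERVAL MONTH to INTERVAL YEAR is expected
-- to clear the months field once this fix is applied.
SELECT CAST(CAST(INTERVAL '5 months' AS INTERVAL MONTH) AS INTERVAL YEAR);
-- previously the months could survive; with the fix, no whole years remain
</programlisting>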
- Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -1359,23 +1359,23 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -1386,8 +1386,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) @@ -1414,7 +1414,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -1489,71 +1489,71 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. Suppress printing of zeroes for unmeasured times - in EXPLAIN (Maksim Milyutin) + in EXPLAIN (Maksim Milyutin) Certain option combinations resulted in printing zero values for times that actually aren't ever measured in that combination. Our general - policy in EXPLAIN is not to print such fields at all, so + policy in EXPLAIN is not to print such fields at all, so do that consistently in all cases. - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. + INHERIT child constraint with an inherited constraint. 
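A sketch of the inherited-CHECK-constraint situation from the entry above, using hypothetical table names; per the fix, the identical constraint can now be added to parent and child in either order.

<programlisting>
CREATE TABLE parent (x int);
CREATE TABLE child () INHERITS (parent);

-- Identical constraints are merged rather than rejected, whichever
-- of the two ALTERs runs first.
ALTER TABLE child  ADD CONSTRAINT x_positive CHECK (x > 0);
ALTER TABLE parent ADD CONSTRAINT x_positive CHECK (x > 0);
</programlisting>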
Remove artificial restrictions on the values accepted - by numeric_in() and numeric_recv() + by numeric_in() and numeric_recv() (Tom Lane) We allow numeric values up to the limit of the storage format (more - than 1e100000), so it seems fairly pointless - that numeric_in() rejected scientific-notation exponents - above 1000. Likewise, it was silly for numeric_recv() to + than 1e100000), so it seems fairly pointless + that numeric_in() rejected scientific-notation exponents + above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. @@ -1575,7 +1575,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Disallow starting a standalone backend with standby_mode + Disallow starting a standalone backend with standby_mode turned on (Michael Paquier) @@ -1589,7 +1589,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -1600,30 +1600,30 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. + during PQreset(), but there might be related cases. - Make ecpg's and options work consistently with our other executables (Haribabu Kommi) - In pg_dump, never dump range constructor functions + In pg_dump, never dump range constructor functions (Tom Lane) - This oversight led to pg_upgrade failures with + This oversight led to pg_upgrade failures with extensions containing range types, due to duplicate creation of the constructor functions. @@ -1631,8 +1631,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix contrib/intarray/bench/bench.pl to print the results - of the EXPLAIN it does when given the option (Daniel Gustafsson) @@ -1653,17 +1653,17 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from - their time zone database, as they did in tzdata + their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused - the pg_timezone_abbrevs view to fail altogether. + the pg_timezone_abbrevs view to fail altogether. - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -1676,15 +1676,15 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. 
- In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. @@ -1730,17 +1730,17 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix possible mis-evaluation of - nested CASE-WHEN expressions (Heikki + nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane) - A CASE expression appearing within the test value - subexpression of another CASE could become confused about + A CASE expression appearing within the test value + subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by - a CASE expression could result in passing the wrong test - value to functions called within a CASE expression in the + a CASE expression could result in passing the wrong test + value to functions called within a CASE expression in the SQL function's body. If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423) @@ -1754,7 +1754,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Numerous places in vacuumdb and other client programs + Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. Also, ensure that when a conninfo string is used as a database name @@ -1763,22 +1763,22 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix handling of paired double quotes - in psql's \connect - and \password commands to match the documentation. + in psql's \connect + and \password commands to match the documentation. - Introduce a new - pg_dumpall now refuses to deal with database and role + pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet. @@ -1788,40 +1788,40 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser - executes pg_dumpall or other routine maintenance + executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424) - Fix corner-case misbehaviors for IS NULL/IS NOT - NULL applied to nested composite values (Andrew Gierth, Tom Lane) + Fix corner-case misbehaviors for IS NULL/IS NOT + NULL applied to nested composite values (Andrew Gierth, Tom Lane) - The SQL standard specifies that IS NULL should return + The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS - NULL yields TRUE), but this is not meant to apply recursively - (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). + NULL yields TRUE), but this is not meant to apply recursively + (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). 
The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), - and contrib/postgres_fdw could produce remote queries + and contrib/postgres_fdw could produce remote queries that misbehaved similarly. - Make the inet and cidr data types properly reject + Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane) - Prevent crash in close_ps() - (the point ## lseg operator) + Prevent crash in close_ps() + (the point ## lseg operator) for NaN input coordinates (Tom Lane) @@ -1832,12 +1832,12 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix several one-byte buffer over-reads in to_number() + Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut) - In several cases the to_number() function would read one + In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory. @@ -1847,7 +1847,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Avoid unsafe intermediate state during expensive paths - through heap_update() (Masahiko Sawada, Andres Freund) + through heap_update() (Masahiko Sawada, Andres Freund) @@ -1860,19 +1860,19 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid crash in postgres -C when the specified variable + Avoid crash in postgres -C when the specified variable has a null string value (Michael Paquier) - Avoid consuming a transaction ID during VACUUM + Avoid consuming a transaction ID during VACUUM (Alexander Korotkov) - Some cases in VACUUM unnecessarily caused an XID to be + Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing. @@ -1881,12 +1881,12 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid canceling hot-standby queries during VACUUM FREEZE + Avoid canceling hot-standby queries during VACUUM FREEZE (Simon Riggs, Álvaro Herrera) - VACUUM FREEZE on an otherwise-idle master server could + VACUUM FREEZE on an otherwise-idle master server could result in unnecessary cancellations of queries on its standby servers. @@ -1894,8 +1894,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - When a manual ANALYZE specifies a column list, don't - reset the table's changes_since_analyze counter + When a manual ANALYZE specifies a column list, don't + reset the table's changes_since_analyze counter (Tom Lane) @@ -1907,7 +1907,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix ANALYZE's overestimation of n_distinct + Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane) @@ -1942,8 +1942,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix contrib/btree_gin to handle the smallest - possible bigint value correctly (Peter Eisentraut) + Fix contrib/btree_gin to handle the smallest + possible bigint value correctly (Peter Eisentraut) @@ -1956,29 +1956,29 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. 
Make sure - that PQserverVersion() returns the correct value for + that PQserverVersion() returns the correct value for such cases. - Fix ecpg's code for unsigned long long + Fix ecpg's code for unsigned long long array elements (Michael Meskes) - In pg_dump with both - Make pg_basebackup accept -Z 0 as + Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao) @@ -2012,7 +2012,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Update our copy of the timezone code to match - IANA's tzcode release 2016c (Tom Lane) + IANA's tzcode release 2016c (Tom Lane) @@ -2024,7 +2024,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Update time zone data files to tzdata release 2016f + Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco. @@ -2080,7 +2080,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. Failures have been reported specifically when a client application - uses SSL connections in libpq concurrently with + uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection. @@ -2089,7 +2089,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix failed to build any N-way joins + Fix failed to build any N-way joins planner error with a full join enclosed in the right-hand side of a left join (Tom Lane) @@ -2103,10 +2103,10 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Given a three-or-more-way equivalence class of variables, such - as X.X = Y.Y = Z.Z, it was possible for the planner to omit + as X.X = Y.Y = Z.Z, it was possible for the planner to omit some of the tests needed to enforce that all the variables are actually equal, leading to join rows being output that didn't satisfy - the WHERE clauses. For various reasons, erroneous plans + the WHERE clauses. For various reasons, erroneous plans were seldom selected in practice, so that this bug has gone undetected for a long time. @@ -2114,8 +2114,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix possible misbehavior of TH, th, - and Y,YYY format codes in to_timestamp() + Fix possible misbehavior of TH, th, + and Y,YYY format codes in to_timestamp() (Tom Lane) @@ -2127,28 +2127,28 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix dumping of rules and views in which the array - argument of a value operator - ANY (array) construct is a sub-SELECT + Fix dumping of rules and views in which the array + argument of a value operator + ANY (array) construct is a sub-SELECT (Tom Lane) - Make pg_regress use a startup timeout from the - PGCTLTIMEOUT environment variable, if that's set (Tom Lane) + Make pg_regress use a startup timeout from the + PGCTLTIMEOUT environment variable, if that's set (Tom Lane) This is for consistency with a behavior recently added - to pg_ctl; it eases automated testing on slow machines. + to pg_ctl; it eases automated testing on slow machines. 
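For the equivalence-class planner entry in the hunk above, a sketch of the query shape involved, with hypothetical tables x, y, and z; the planner treats the two explicit equalities as one three-way equivalence class and must still enforce every equality it relies on.

<programlisting>
-- Three-way equivalence class x.x = y.y = z.z; the condition x.x = z.z
-- is implied and may be used by the planner when reordering the joins.
SELECT *
FROM x
JOIN y ON x.x = y.y
JOIN z ON y.y = z.z;
</programlisting>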
- Fix pg_upgrade to correctly restore extension + Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane) @@ -2156,7 +2156,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no - immediate ill effects, but would cause later pg_dump + immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore. @@ -2177,22 +2177,22 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Reduce the number of SysV semaphores used by a build configured with - (Tom Lane) - Rename internal function strtoi() - to strtoint() to avoid conflict with a NetBSD library + Rename internal function strtoi() + to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro) - Fix reporting of errors from bind() - and listen() system calls on Windows (Tom Lane) + Fix reporting of errors from bind() + and listen() system calls on Windows (Tom Lane) @@ -2205,12 +2205,12 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid possibly-unsafe use of Windows' FormatMessage() + Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich) - Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where + Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful. @@ -2218,9 +2218,9 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Update time zone data files to tzdata release 2016d + Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone - names Europe/Kirov and Asia/Tomsk to reflect + names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions. @@ -2267,56 +2267,56 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix incorrect handling of NULL index entries in - indexed ROW() comparisons (Tom Lane) + indexed ROW() comparisons (Tom Lane) An index search using a row comparison such as ROW(a, b) > - ROW('x', 'y') would stop upon reaching a NULL entry in - the b column, ignoring the fact that there might be - non-NULL b values associated with later values - of a. + ROW('x', 'y') would stop upon reaching a NULL entry in + the b column, ignoring the fact that there might be + non-NULL b values associated with later values + of a. Avoid unlikely data-loss scenarios due to renaming files without - adequate fsync() calls before and after (Michael Paquier, + adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund) - Correctly handle cases where pg_subtrans is close to XID + Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes) - Fix corner-case crash due to trying to free localeconv() + Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane) - Fix parsing of affix files for ispell dictionaries + Fix parsing of affix files for ispell dictionaries (Tom Lane) The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for - example I in Turkish UTF8 locales. + example I in Turkish UTF8 locales. 
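The row-comparison entry in the hunk above concerns index searches of the following shape; the table, index, and values are hypothetical. A NULL in the lower-order column partway through the scan must not end the search while larger values of the leading column remain.

<programlisting>
CREATE TABLE t (a text, b text);
CREATE INDEX t_a_b_idx ON t (a, b);

-- Row-wise comparison that can be evaluated with the (a, b) index;
-- a row with b IS NULL must not hide later rows with larger a.
SELECT * FROM t WHERE ROW(a, b) > ROW('x', 'y') ORDER BY a, b;
</programlisting>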
- Avoid use of sscanf() to parse ispell + Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov) @@ -2342,27 +2342,27 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix psql's tab completion logic to handle multibyte + Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas) - Fix psql's tab completion for - SECURITY LABEL (Tom Lane) + Fix psql's tab completion for + SECURITY LABEL (Tom Lane) - Pressing TAB after SECURITY LABEL might cause a crash + Pressing TAB after SECURITY LABEL might cause a crash or offering of inappropriate keywords. - Make pg_ctl accept a wait timeout from the - PGCTLTIMEOUT environment variable, if none is specified on + Make pg_ctl accept a wait timeout from the + PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch) @@ -2376,20 +2376,20 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix incorrect test for Windows service status - in pg_ctl (Manuel Mathar) + in pg_ctl (Manuel Mathar) The previous set of minor releases attempted to - fix pg_ctl to properly determine whether to send log + fix pg_ctl to properly determine whether to send log messages to Window's Event Log, but got the test backwards. - Fix pgbench to correctly handle the combination - of -C and -M prepared options (Tom Lane) + Fix pgbench to correctly handle the combination + of -C and -M prepared options (Tom Lane) @@ -2410,21 +2410,21 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix multiple mistakes in the statistics returned - by contrib/pgstattuple's pgstatindex() + by contrib/pgstattuple's pgstatindex() function (Tom Lane) - Remove dependency on psed in MSVC builds, since it's no + Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan) - Update time zone data files to tzdata release 2016c + Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia @@ -2485,25 +2485,25 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Perform an immediate shutdown if the postmaster.pid file + Perform an immediate shutdown if the postmaster.pid file is removed (Tom Lane) The postmaster now checks every minute or so - that postmaster.pid is still there and still contains its + that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had - received SIGQUIT. The main motivation for this change + received SIGQUIT. The main motivation for this change is to ensure that failed buildfarm runs will get cleaned up without manual intervention; but it also serves to limit the bad effects if a - DBA forcibly removes postmaster.pid and then starts a new + DBA forcibly removes postmaster.pid and then starts a new postmaster. - In SERIALIZABLE transaction isolation mode, serialization + In SERIALIZABLE transaction isolation mode, serialization anomalies could be missed due to race conditions during insertions (Kevin Grittner, Thomas Munro) @@ -2512,7 +2512,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix failure to emit appropriate WAL records when doing ALTER - TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, + TABLE ... 
SET TABLESPACE for unlogged relations (Michael Paquier, Andres Freund) @@ -2531,21 +2531,21 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix ALTER COLUMN TYPE to reconstruct inherited check + Fix ALTER COLUMN TYPE to reconstruct inherited check constraints properly (Tom Lane) - Fix REASSIGN OWNED to change ownership of composite types + Fix REASSIGN OWNED to change ownership of composite types properly (Álvaro Herrera) - Fix REASSIGN OWNED and ALTER OWNER to correctly + Fix REASSIGN OWNED and ALTER OWNER to correctly update granted-permissions lists when changing owners of data types, foreign data wrappers, or foreign servers (Bruce Momjian, Álvaro Herrera) @@ -2554,7 +2554,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix REASSIGN OWNED to ignore foreign user mappings, + Fix REASSIGN OWNED to ignore foreign user mappings, rather than fail (Álvaro Herrera) @@ -2576,14 +2576,14 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix dumping of whole-row Vars in ROW() - and VALUES() lists (Tom Lane) + Fix dumping of whole-row Vars in ROW() + and VALUES() lists (Tom Lane) - Fix possible internal overflow in numeric division + Fix possible internal overflow in numeric division (Dean Rasheed) @@ -2635,7 +2635,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 This causes the code to emit regular expression is too - complex errors in some cases that previously used unreasonable + complex errors in some cases that previously used unreasonable amounts of time and memory. @@ -2648,14 +2648,14 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Make %h and %r escapes - in log_line_prefix work for messages emitted due - to log_connections (Tom Lane) + Make %h and %r escapes + in log_line_prefix work for messages emitted due + to log_connections (Tom Lane) - Previously, %h/%r started to work just after a - new session had emitted the connection received log message; + Previously, %h/%r started to work just after a + new session had emitted the connection received log message; now they work for that message too. @@ -2668,7 +2668,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 This oversight resulted in failure to recover from crashes - whenever logging_collector is turned on. + whenever logging_collector is turned on. @@ -2694,13 +2694,13 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - In psql, ensure that libreadline's idea + In psql, ensure that libreadline's idea of the screen size is updated when the terminal window size changes (Merlin Moncure) - Previously, libreadline did not notice if the window + Previously, libreadline did not notice if the window was resized during query output, leading to strange behavior during later input of multiline queries. 
@@ -2708,15 +2708,15 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix psql's \det command to interpret its - pattern argument the same way as other \d commands with + Fix psql's \det command to interpret its + pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart) - Avoid possible crash in psql's \c command + Avoid possible crash in psql's \c command when previous connection was via Unix socket and command specifies a new hostname and same username (Tom Lane) @@ -2724,21 +2724,21 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - In pg_ctl start -w, test child process status directly + In pg_ctl start -w, test child process status directly rather than relying on heuristics (Tom Lane, Michael Paquier) - Previously, pg_ctl relied on an assumption that the new - postmaster would always create postmaster.pid within five + Previously, pg_ctl relied on an assumption that the new + postmaster would always create postmaster.pid within five seconds. But that can fail on heavily-loaded systems, - causing pg_ctl to report incorrectly that the + causing pg_ctl to report incorrectly that the postmaster failed to start. Except on Windows, this change also means that a pg_ctl start - -w done immediately after another such command will now reliably + -w done immediately after another such command will now reliably fail, whereas previously it would report success if done within two seconds of the first command. @@ -2746,23 +2746,23 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - In pg_ctl start -w, don't attempt to use a wildcard listen + In pg_ctl start -w, don't attempt to use a wildcard listen address to connect to the postmaster (Kondo Yuta) - On Windows, pg_ctl would fail to detect postmaster - startup if listen_addresses is set to 0.0.0.0 - or ::, because it would try to use that value verbatim as + On Windows, pg_ctl would fail to detect postmaster + startup if listen_addresses is set to 0.0.0.0 + or ::, because it would try to use that value verbatim as the address to connect to, which doesn't work. Instead assume - that 127.0.0.1 or ::1, respectively, is the + that 127.0.0.1 or ::1, respectively, is the right thing to use. - In pg_ctl on Windows, check service status to decide + In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier) @@ -2770,18 +2770,18 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - In pg_dump and pg_basebackup, adopt + In pg_dump and pg_basebackup, adopt the GNU convention for handling tar-archive members exceeding 8GB (Tom Lane) - The POSIX standard for tar file format does not allow + The POSIX standard for tar file format does not allow archive member files to exceed 8GB, but most modern implementations - of tar support an extension that fixes that. Adopt - this extension so that pg_dump with no longer fails on tables with more than 8GB of data, and so - that pg_basebackup can handle files larger than 8GB. + that pg_basebackup can handle files larger than 8GB. In addition, fix some portability issues that could cause failures for members between 4GB and 8GB on some platforms. 
Potentially these problems could cause unrecoverable data loss due to unreadable backup @@ -2791,44 +2791,44 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix assorted corner-case bugs in pg_dump's processing + Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane) - Make pg_dump mark a view's triggers as needing to be + Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during - parallel pg_restore (Tom Lane) + parallel pg_restore (Tom Lane) Ensure that relation option values are properly quoted - in pg_dump (Kouhei Sutou, Tom Lane) + in pg_dump (Kouhei Sutou, Tom Lane) A reloption value that isn't a simple identifier or number could lead to dump/reload failures due to syntax errors in CREATE statements - issued by pg_dump. This is not an issue with any - reloption currently supported by core PostgreSQL, but + issued by pg_dump. This is not an issue with any + reloption currently supported by core PostgreSQL, but extensions could allow reloptions that cause the problem. - Fix pg_upgrade's file-copying code to handle errors + Fix pg_upgrade's file-copying code to handle errors properly on Windows (Bruce Momjian) - Install guards in pgbench against corner-case overflow + Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier) @@ -2837,22 +2837,22 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix failure to localize messages emitted - by pg_receivexlog and pg_recvlogical + by pg_receivexlog and pg_recvlogical (Ioseph Kim) - Avoid dump/reload problems when using both plpython2 - and plpython3 (Tom Lane) + Avoid dump/reload problems when using both plpython2 + and plpython3 (Tom Lane) - In principle, both versions of PL/Python can be used in + In principle, both versions of PL/Python can be used in the same database, though not in the same session (because the two - versions of libpython cannot safely be used concurrently). - However, pg_restore and pg_upgrade both + versions of libpython cannot safely be used concurrently). + However, pg_restore and pg_upgrade both do things that can fall foul of the same-session restriction. Work around that by changing the timing of the check. @@ -2860,29 +2860,29 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix PL/Python regression tests to pass with Python 3.5 + Fix PL/Python regression tests to pass with Python 3.5 (Peter Eisentraut) - Prevent certain PL/Java parameters from being set by + Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch) - This change mitigates a PL/Java security bug - (CVE-2016-0766), which was fixed in PL/Java by marking + This change mitigates a PL/Java security bug + (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for - sites that update PostgreSQL more frequently - than PL/Java, make the core code aware of them also. + sites that update PostgreSQL more frequently + than PL/Java, make the core code aware of them also. 
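The pgbench overflow-guard entry in the hunk above presumably concerns the classic two's-complement corner case of dividing the most negative 64-bit value by -1; the server-side equivalent, shown here as an assumption-laden sketch, reports a clean overflow error rather than crashing.

<programlisting>
-- INT64_MIN divided (or taken modulo) by -1 cannot be represented in a
-- 64-bit integer; the backend raises "bigint out of range" for it.
SELECT (-9223372036854775807 - 1) / (-1);
</programlisting>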
- Improve libpq's handling of out-of-memory situations + Improve libpq's handling of out-of-memory situations (Michael Paquier, Amit Kapila, Heikki Linnakangas) @@ -2890,42 +2890,42 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix order of arguments - in ecpg-generated typedef statements + in ecpg-generated typedef statements (Michael Meskes) - Use %g not %f format - in ecpg's PGTYPESnumeric_from_double() + Use %g not %f format + in ecpg's PGTYPESnumeric_from_double() (Tom Lane) - Fix ecpg-supplied header files to not contain comments + Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes) - Such a comment is rejected by ecpg. It's not yet clear - whether ecpg itself should be changed. + Such a comment is rejected by ecpg. It's not yet clear + whether ecpg itself should be changed. - Ensure that contrib/pgcrypto's crypt() + Ensure that contrib/pgcrypto's crypt() function can be interrupted by query cancel (Andreas Karlsson) - Accept flex versions later than 2.5.x + Accept flex versions later than 2.5.x (Tom Lane, Michael Paquier) @@ -2937,19 +2937,19 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Install our missing script where PGXS builds can find it + Install our missing script where PGXS builds can find it (Jim Nasby) This allows sane behavior in a PGXS build done on a machine where build - tools such as bison are missing. + tools such as bison are missing. - Ensure that dynloader.h is included in the installed + Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier) @@ -2957,11 +2957,11 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Add variant regression test expected-output file to match behavior of - current libxml2 (Tom Lane) + current libxml2 (Tom Lane) - The fix for libxml2's CVE-2015-7499 causes it not to + The fix for libxml2's CVE-2015-7499 causes it not to output error context reports in some cases where it used to do so. This seems to be a bug, but we'll probably have to live with it for some time, so work around it. @@ -2970,7 +2970,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Update time zone data files to tzdata release 2016a for + Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan. @@ -3016,8 +3016,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix contrib/pgcrypto to detect and report - too-short crypt() salts (Josh Kupershmidt) + Fix contrib/pgcrypto to detect and report + too-short crypt() salts (Josh Kupershmidt) @@ -3043,13 +3043,13 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix insertion of relations into the relation cache init file + Fix insertion of relations into the relation cache init file (Tom Lane) An oversight in a patch in the most recent minor releases - caused pg_trigger_tgrelid_tgname_index to be omitted + caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. 
In addition to fixing @@ -3067,7 +3067,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Improve LISTEN startup time when there are many unread + Improve LISTEN startup time when there are many unread notifications (Matt Newell) @@ -3085,7 +3085,7 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 - This substantially improves performance when pg_dump + This substantially improves performance when pg_dump tries to dump a large number of tables. @@ -3100,13 +3100,13 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value - of ssl_renegotiation_limit to zero (disabled). + of ssl_renegotiation_limit to zero (disabled). - Lower the minimum values of the *_freeze_max_age parameters + Lower the minimum values of the *_freeze_max_age parameters (Andres Freund) @@ -3118,14 +3118,14 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 - Limit the maximum value of wal_buffers to 2GB to avoid + Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus) - Fix rare internal overflow in multiplication of numeric values + Fix rare internal overflow in multiplication of numeric values (Dean Rasheed) @@ -3133,21 +3133,21 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 Guard against hard-to-reach stack overflows involving record types, - range types, json, jsonb, tsquery, - ltxtquery and query_int (Noah Misch) + range types, json, jsonb, tsquery, + ltxtquery and query_int (Noah Misch) - Fix handling of DOW and DOY in datetime input + Fix handling of DOW and DOY in datetime input (Greg Stark) These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather - than invalid input syntax. + than invalid input syntax. @@ -3160,7 +3160,7 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 Add recursion depth protections to regular expression, SIMILAR - TO, and LIKE matching (Tom Lane) + TO, and LIKE matching (Tom Lane) @@ -3212,22 +3212,22 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 - Fix unexpected out-of-memory situation during sort errors - when using tuplestores with small work_mem settings (Tom + Fix unexpected out-of-memory situation during sort errors + when using tuplestores with small work_mem settings (Tom Lane) - Fix very-low-probability stack overrun in qsort (Tom Lane) + Fix very-low-probability stack overrun in qsort (Tom Lane) - Fix invalid memory alloc request size failure in hash joins - with large work_mem settings (Tomas Vondra, Tom Lane) + Fix invalid memory alloc request size failure in hash joins + with large work_mem settings (Tomas Vondra, Tom Lane) @@ -3240,9 +3240,9 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as could not devise a query plan for the - given query, could not find pathkey item to - sort, plan should not reference subplan's variable, - or failed to assign all NestLoopParams to plan nodes. + given query, could not find pathkey item to + sort, plan should not reference subplan's variable, + or failed to assign all NestLoopParams to plan nodes. 
Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems. @@ -3250,7 +3250,7 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 - Improve planner's performance for UPDATE/DELETE + Improve planner's performance for UPDATE/DELETE on large inheritance sets (Tom Lane, Dean Rasheed) @@ -3271,12 +3271,12 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove - the postmaster.pid file (Tom Lane) + the postmaster.pid file (Tom Lane) This avoids race-condition failures if an external script attempts to - start a new postmaster as soon as pg_ctl stop returns. + start a new postmaster as soon as pg_ctl stop returns. @@ -3296,7 +3296,7 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 - Do not print a WARNING when an autovacuum worker is already + Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane) @@ -3333,7 +3333,7 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 - VACUUM attempted to recycle such pages, but did so in a + VACUUM attempted to recycle such pages, but did so in a way that wasn't crash-safe. @@ -3341,44 +3341,44 @@ Branch: REL9_1_STABLE [9b1b9446f] 2015-08-27 12:22:10 -0400 Fix off-by-one error that led to otherwise-harmless warnings - about apparent wraparound in subtrans/multixact truncation + about apparent wraparound in subtrans/multixact truncation (Thomas Munro) - Fix misreporting of CONTINUE and MOVE statement - types in PL/pgSQL's error context messages + Fix misreporting of CONTINUE and MOVE statement + types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane) - Fix PL/Perl to handle non-ASCII error + Fix PL/Perl to handle non-ASCII error message texts correctly (Alex Hunsaker) - Fix PL/Python crash when returning the string - representation of a record result (Tom Lane) + Fix PL/Python crash when returning the string + representation of a record result (Tom Lane) - Fix some places in PL/Tcl that neglected to check for - failure of malloc() calls (Michael Paquier, Álvaro + Fix some places in PL/Tcl that neglected to check for + failure of malloc() calls (Michael Paquier, Álvaro Herrera) - In contrib/isn, fix output of ISBN-13 numbers that begin + In contrib/isn, fix output of ISBN-13 numbers that begin with 979 (Fabien Coelho) @@ -3395,14 +3395,14 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - Fix contrib/sepgsql's handling of SELECT INTO + Fix contrib/sepgsql's handling of SELECT INTO statements (Kohei KaiGai) - Improve libpq's handling of out-of-memory conditions + Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas) @@ -3410,64 +3410,64 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 Fix memory leaks and missing out-of-memory checks - in ecpg (Michael Paquier) + in ecpg (Michael Paquier) - Fix psql's code for locale-aware formatting of numeric + Fix psql's code for locale-aware formatting of numeric output (Tom Lane) - The formatting code invoked by \pset numericlocale on + The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. It could also mangle already-localized - output from the money data type. + output from the money data type. 
- Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) - Make pg_dump handle inherited NOT VALID + Make pg_dump handle inherited NOT VALID check constraints correctly (Tom Lane) - Fix selection of default zlib compression level - in pg_dump's directory output format (Andrew Dunstan) + Fix selection of default zlib compression level + in pg_dump's directory output format (Andrew Dunstan) - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) @@ -3475,11 +3475,11 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 When dumping data types from pre-9.2 servers, and when dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. @@ -3488,18 +3488,18 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. - Fix assorted minor memory leaks in pg_dump and other + Fix assorted minor memory leaks in pg_dump and other client-side programs (Michael Paquier) @@ -3507,11 +3507,11 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. 
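For the shell-type entry in the hunk above, a one-line reminder of what a shell type is; the type name is hypothetical.

<programlisting>
-- Creating a type with no definition makes a "shell" (not-yet-fully-defined)
-- type, a placeholder that its future I/O functions can reference.
CREATE TYPE mytype;
</programlisting>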
@@ -3519,14 +3519,14 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -3538,38 +3538,38 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. @@ -3618,7 +3618,7 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -3629,13 +3629,13 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -3681,12 +3681,12 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. 
@@ -3700,36 +3700,36 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - Fix pg_get_functiondef() to show - functions' LEAKPROOF property, if set (Jeevan Chalke) + Fix pg_get_functiondef() to show + functions' LEAKPROOF property, if set (Jeevan Chalke) - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. - Allow libpq to use TLS protocol versions beyond v1 + Allow libpq to use TLS protocol versions beyond v1 (Noah Misch) - For a long time, libpq was coded so that the only SSL + For a long time, libpq was coded so that the only SSL protocol it would allow was TLS v1. Now that newer TLS versions are becoming popular, allow it to negotiate the highest commonly-supported - TLS version with the server. (PostgreSQL servers were + TLS version with the server. (PostgreSQL servers were already capable of such negotiation, so no change is needed on the server side.) This is a back-patch of a change already released in 9.4.0. @@ -3763,8 +3763,8 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - However, if you use contrib/citext's - regexp_matches() functions, see the changelog entry below + However, if you use contrib/citext's + regexp_matches() functions, see the changelog entry below about that. @@ -3802,7 +3802,7 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. In the worst case this might lead to information exposure, due to our @@ -3812,7 +3812,7 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -3822,15 +3822,15 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -3839,16 +3839,16 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 - Fix incorrect declaration of contrib/citext's - regexp_matches() functions (Tom Lane) + Fix incorrect declaration of contrib/citext's + regexp_matches() functions (Tom Lane) - These functions should return setof text[], like the core + These functions should return setof text[], like the core functions they are wrappers for; but they were incorrectly declared as - returning just text[]. 
This mistake had two results: first, + returning just text[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you - should get is an empty set (zero rows). Second, the g flag + should get is an empty set (zero rows). Second, the g flag was effectively ignored, since you would get only one result array even if there were multiple matches. @@ -3856,16 +3856,16 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 While the latter behavior is clearly a bug, there might be applications depending on the former behavior; therefore the function declarations - will not be changed by default until PostgreSQL 9.5. + will not be changed by default until PostgreSQL 9.5. In pre-9.5 branches, the old behavior exists in version 1.0 of - the citext extension, while we have provided corrected - declarations in version 1.1 (which is not installed by + the citext extension, while we have provided corrected + declarations in version 1.1 (which is not installed by default). To adopt the fix in pre-9.5 branches, execute - ALTER EXTENSION citext UPDATE TO '1.1' in each database in - which citext is installed. (You can also update + ALTER EXTENSION citext UPDATE TO '1.1' in each database in + which citext is installed. (You can also update back to 1.0 if you need to undo that.) Be aware that either update direction will require dropping and recreating any views or rules that - use citext's regexp_matches() functions. + use citext's regexp_matches() functions. @@ -3907,7 +3907,7 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -3935,7 +3935,7 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -3943,7 +3943,7 @@ Branch: REL9_2_STABLE [e90a629e1] 2015-09-22 14:58:38 -0700 Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -3964,14 +3964,14 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - Avoid cannot GetMultiXactIdMembers() during recovery error + Avoid cannot GetMultiXactIdMembers() during recovery error (Álvaro Herrera) - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -3991,19 +3991,19 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - Cope with unexpected signals in LockBufferForCleanup() + Cope with unexpected signals in LockBufferForCleanup() (Andres Freund) This oversight could result in spurious errors about multiple - backends attempting to wait for pincount 1. + backends attempting to wait for pincount 1. 
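The citext entry above spells out the upgrade commands; repeated here as a runnable sketch, to be executed in each database in which citext is installed.

<programlisting>
-- Adopt the corrected regexp_matches() declarations:
ALTER EXTENSION citext UPDATE TO '1.1';

-- Revert to the old declarations if an application depends on them:
-- ALTER EXTENSION citext UPDATE TO '1.0';
</programlisting>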
- Fix crash when doing COPY IN to a table with check + Fix crash when doing COPY IN to a table with check constraints that contain whole-row references (Tom Lane) @@ -4050,18 +4050,18 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - ANALYZE executes index expressions many times; if there are + ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to - cancel the ANALYZE before that loop finishes. + cancel the ANALYZE before that loop finishes. - Ensure tableoid of a foreign table is reported - correctly when a READ COMMITTED recheck occurs after - locking rows in SELECT FOR UPDATE, UPDATE, - or DELETE (Etsuro Fujita) + Ensure tableoid of a foreign table is reported + correctly when a READ COMMITTED recheck occurs after + locking rows in SELECT FOR UPDATE, UPDATE, + or DELETE (Etsuro Fujita) @@ -4074,20 +4074,20 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - Recommend setting include_realm to 1 when using + Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost) Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but - it will become the default setting in PostgreSQL 9.5. + it will become the default setting in PostgreSQL 9.5. - Remove code for matching IPv4 pg_hba.conf entries to + Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane) @@ -4100,20 +4100,20 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 crashes on some systems, so let's just remove it rather than fix it. (Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of - IPv4 pg_hba.conf entries, which does not seem like a good + IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.) - Report WAL flush, not insert, position in IDENTIFY_SYSTEM + Report WAL flush, not insert, position in IDENTIFY_SYSTEM replication command (Heikki Linnakangas) This avoids a possible startup failure - in pg_receivexlog. + in pg_receivexlog. @@ -4121,14 +4121,14 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the - service too soon; and ensure that pg_ctl will wait for + service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj) - Reduce risk of network deadlock when using libpq's + Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas) @@ -4137,32 +4137,32 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM - STDIN.) This worked properly in the normal blocking mode, but not - so much in non-blocking mode. We've modified libpq + STDIN.) This worked properly in the normal blocking mode, but not + so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, - and be sure to call PQconsumeInput() upon read-ready. + and be sure to call PQconsumeInput() upon read-ready. 
- In libpq, fix misparsing of empty values in URI + In libpq, fix misparsing of empty values in URI connection strings (Thomas Fanghaenel) - Fix array handling in ecpg (Michael Meskes) + Fix array handling in ecpg (Michael Meskes) - Fix psql to sanely handle URIs and conninfo strings as - the first parameter to \connect + Fix psql to sanely handle URIs and conninfo strings as + the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera) @@ -4175,38 +4175,38 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - Suppress incorrect complaints from psql on some - platforms that it failed to write ~/.psql_history at exit + Suppress incorrect complaints from psql on some + platforms that it failed to write ~/.psql_history at exit (Tom Lane) This misbehavior was caused by a workaround for a bug in very old - (pre-2006) versions of libedit. We fixed it by + (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear - for anyone still using such versions of libedit. - Recommendation: upgrade that library, or use libreadline. + for anyone still using such versions of libedit. + Recommendation: upgrade that library, or use libreadline. - Fix pg_dump's rule for deciding which casts are + Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane) - In pg_dump, fix failure to honor -Z - compression level option together with -Fd + In pg_dump, fix failure to honor -Z + compression level option together with -Fd (Michael Paquier) - Make pg_dump consider foreign key relationships + Make pg_dump consider foreign key relationships between extension configuration tables while choosing dump order (Gilles Darold, Michael Paquier, Stephen Frost) @@ -4219,14 +4219,14 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - Fix dumping of views that are just VALUES(...) but have + Fix dumping of views that are just VALUES(...) but have column aliases (Tom Lane) - In pg_upgrade, force timeline 1 in the new cluster + In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian) @@ -4238,7 +4238,7 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - In pg_upgrade, check for improperly non-connectable + In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian) @@ -4246,28 +4246,28 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - In pg_upgrade, quote directory paths - properly in the generated delete_old_cluster script + In pg_upgrade, quote directory paths + properly in the generated delete_old_cluster script (Bruce Momjian) - In pg_upgrade, preserve database-level freezing info + In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian) This oversight could cause missing-clog-file errors for tables within - the postgres and template1 databases. + the postgres and template1 databases. 
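As a small illustration of the pg_dump fix above for views that are just VALUES(...) with column aliases, a view of that shape looks like this (the view and column names are invented):

    CREATE VIEW color_codes(code, name) AS
        VALUES (1, 'red'), (2, 'green'), (3, 'blue');

Views of exactly this form are the case that entry is about; with the fix they dump and restore correctly.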
- Run pg_upgrade and pg_resetxlog with + Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem) @@ -4275,8 +4275,8 @@ Branch: REL9_0_STABLE [850e1a566] 2015-05-18 17:44:21 -0300 - Improve handling of readdir() failures when scanning - directories in initdb and pg_basebackup + Improve handling of readdir() failures when scanning + directories in initdb and pg_basebackup (Marco Nenciarini) @@ -4288,18 +4288,18 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix failure in pg_receivexlog (Andres Freund) + Fix failure in pg_receivexlog (Andres Freund) A patch merge mistake in 9.2.10 led to could not create archive - status file errors. + status file errors. - Fix slow sorting algorithm in contrib/intarray (Tom Lane) + Fix slow sorting algorithm in contrib/intarray (Tom Lane) @@ -4311,7 +4311,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Update time zone data files to tzdata release 2015d + Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT). @@ -4346,11 +4346,11 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 However, if you are a Windows user and are using the Norwegian - (Bokmål) locale, manual action is needed after the upgrade to - replace any Norwegian (Bokmål)_Norway locale names stored - in PostgreSQL system catalogs with the plain-ASCII - alias Norwegian_Norway. For details see - + (Bokmål) locale, manual action is needed after the upgrade to + replace any Norwegian (Bokmål)_Norway locale names stored + in PostgreSQL system catalogs with the plain-ASCII + alias Norwegian_Norway. For details see + @@ -4367,15 +4367,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix buffer overruns in to_char() + Fix buffer overruns in to_char() (Bruce Momjian) - When to_char() processes a numeric formatting template - calling for a large number of digits, PostgreSQL + When to_char() processes a numeric formatting template + calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted - timestamp formatting template, PostgreSQL would write + timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely. @@ -4385,27 +4385,27 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix buffer overrun in replacement *printf() functions + Fix buffer overrun in replacement *printf() functions (Tom Lane) - PostgreSQL includes a replacement implementation - of printf and related functions. This code will overrun + PostgreSQL includes a replacement implementation + of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion - specifiers e, E, f, F, - g or G) with requested precision greater than + specifiers e, E, f, F, + g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through - the to_char() SQL function. While that is the only - affected core PostgreSQL functionality, extension + the to_char() SQL function. 
While that is the only + affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. - This issue primarily affects PostgreSQL on Windows. - PostgreSQL uses the system implementation of these + This issue primarily affects PostgreSQL on Windows. + PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. (CVE-2015-0242) @@ -4413,12 +4413,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix buffer overruns in contrib/pgcrypto + Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch) - Errors in memory size tracking within the pgcrypto + Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. The buffer overrun cases can crash the server, and we have not ruled out the possibility of @@ -4459,7 +4459,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have - SELECT privilege on all columns of the table, this could + SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user. @@ -4484,35 +4484,35 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Cope with the Windows locale named Norwegian (Bokmål) + Cope with the Windows locale named Norwegian (Bokmål) (Heikki Linnakangas) Non-ASCII locale names are problematic since it's not clear what encoding they should be represented in. Map the troublesome locale - name to a plain-ASCII alias, Norwegian_Norway. + name to a plain-ASCII alias, Norwegian_Norway. Avoid possible data corruption if ALTER DATABASE SET - TABLESPACE is used to move a database to a new tablespace and then + TABLESPACE is used to move a database to a new tablespace and then shortly later move it back to its original tablespace (Tom Lane) - Avoid corrupting tables when ANALYZE inside a transaction + Avoid corrupting tables when ANALYZE inside a transaction is rolled back (Andres Freund, Tom Lane, Michael Paquier) If the failing transaction had earlier removed the last index, rule, or trigger from the table, the table would be left in a corrupted state - with the relevant pg_class flags not set though they + with the relevant pg_class flags not set though they should be. @@ -4520,14 +4520,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Ensure that unlogged tables are copied correctly - during CREATE DATABASE or ALTER DATABASE SET - TABLESPACE (Pavan Deolasee, Andres Freund) + during CREATE DATABASE or ALTER DATABASE SET + TABLESPACE (Pavan Deolasee, Andres Freund) - Fix DROP's dependency searching to correctly handle the + Fix DROP's dependency searching to correctly handle the case where a table column is recursively visited before its table (Petr Jelinek, Tom Lane) @@ -4535,7 +4535,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This case is only known to arise when an extension creates both a datatype and a table using that datatype. The faulty code might - refuse a DROP EXTENSION unless CASCADE is + refuse a DROP EXTENSION unless CASCADE is specified, which should not be required. 
@@ -4547,22 +4547,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In READ COMMITTED mode, queries that lock or update + In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug. - Fix planning of SELECT FOR UPDATE when using a partial + Fix planning of SELECT FOR UPDATE when using a partial index on a child table (Kyotaro Horiguchi) - In READ COMMITTED mode, SELECT FOR UPDATE must - also recheck the partial index's WHERE condition when + In READ COMMITTED mode, SELECT FOR UPDATE must + also recheck the partial index's WHERE condition when rechecking a recently-updated row to see if it still satisfies the - query's WHERE condition. This requirement was missed if the + query's WHERE condition. This requirement was missed if the index belonged to an inheritance child table, so that it was possible to incorrectly return rows that no longer satisfy the query condition. @@ -4570,12 +4570,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix corner case wherein SELECT FOR UPDATE could return a row + Fix corner case wherein SELECT FOR UPDATE could return a row twice, and possibly miss returning other rows (Tom Lane) - In READ COMMITTED mode, a SELECT FOR UPDATE + In READ COMMITTED mode, a SELECT FOR UPDATE that is scanning an inheritance tree could incorrectly return a row from a prior child table instead of the one it should return from a later child table. @@ -4585,7 +4585,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Reject duplicate column names in the referenced-columns list of - a FOREIGN KEY declaration (David Rowley) + a FOREIGN KEY declaration (David Rowley) @@ -4611,7 +4611,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix bugs in raising a numeric value to a large integral power + Fix bugs in raising a numeric value to a large integral power (Tom Lane) @@ -4624,19 +4624,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In numeric_recv(), truncate away any fractional digits - that would be hidden according to the value's dscale field + In numeric_recv(), truncate away any fractional digits + that would be hidden according to the value's dscale field (Tom Lane) - A numeric value's display scale (dscale) should + A numeric value's display scale (dscale) should never be less than the number of nonzero fractional digits; but apparently there's at least one broken client application that - transmits binary numeric values in which that's true. + transmits binary numeric values in which that's true. This leads to strange behavior since the extra digits are taken into account by arithmetic operations even though they aren't printed. - The least risky fix seems to be to truncate away such hidden + The least risky fix seems to be to truncate away such hidden digits on receipt, so that the value is indeed what it prints as. @@ -4649,7 +4649,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Matching would often fail when the number of allowed iterations is - limited by a ? quantifier or a bound expression. + limited by a ? quantifier or a bound expression. 
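To make the regular-expression entry just above concrete: the patterns it refers to are those whose repetition is limited by a ? quantifier or an explicit {m,n} bound. The patterns below only illustrate that syntax, not the specific case that misbehaved:

    SELECT regexp_matches('aaaa', 'a{2,3}');   -- {aaa}: an explicit bound
    SELECT regexp_matches('aaaa', 'a+?');      -- {a}:   a non-greedy quantifier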
@@ -4668,7 +4668,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix bugs in tsquery @> tsquery + Fix bugs in tsquery @> tsquery operator (Heikki Linnakangas) @@ -4699,14 +4699,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -4720,7 +4720,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In some contexts, constructs like row_to_json(tab.*) may + In some contexts, constructs like row_to_json(tab.*) may not produce the expected column names. This is fixed properly as of 9.4; in older branches, just ensure that we produce some nonempty name. (In some cases this will be the underlying table's column name @@ -4732,19 +4732,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix mishandling of system columns, - particularly tableoid, in FDW queries (Etsuro Fujita) + particularly tableoid, in FDW queries (Etsuro Fujita) - Avoid doing indexed_column = ANY - (array) as an index qualifier if that leads + Avoid doing indexed_column = ANY + (array) as an index qualifier if that leads to an inferior plan (Andrew Gierth) - In some cases, = ANY conditions applied to non-first index + In some cases, = ANY conditions applied to non-first index columns would be done as index conditions even though it would be better to use them as simple filter conditions. @@ -4753,7 +4753,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix planner problems with nested append relations, such as inherited - tables within UNION ALL subqueries (Tom Lane) + tables within UNION ALL subqueries (Tom Lane) @@ -4766,8 +4766,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Exempt tables that have per-table cost_limit - and/or cost_delay settings from autovacuum's global cost + Exempt tables that have per-table cost_limit + and/or cost_delay settings from autovacuum's global cost balancing rules (Álvaro Herrera) @@ -4793,7 +4793,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 the target database, if they met the usual thresholds for autovacuuming. This is at best pretty unexpected; at worst it delays response to the wraparound threat. Fix it so that if autovacuum is - turned off, workers only do anti-wraparound vacuums and + turned off, workers only do anti-wraparound vacuums and not any other work. @@ -4826,12 +4826,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix several cases where recovery logic improperly ignored WAL records - for COMMIT/ABORT PREPARED (Heikki Linnakangas) + for COMMIT/ABORT PREPARED (Heikki Linnakangas) The most notable oversight was - that recovery_target_xid could not be used to stop at + that recovery_target_xid could not be used to stop at a two-phase commit. 
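Regarding the autovacuum cost-balancing entry above: the per-table cost_limit and cost_delay settings it refers to are given as storage parameters. The table name below is hypothetical, and the parameter names (autovacuum_vacuum_cost_limit, autovacuum_vacuum_cost_delay) are the standard reloptions rather than something spelled out in the entry:

    ALTER TABLE big_history
        SET (autovacuum_vacuum_cost_limit = 1000,
             autovacuum_vacuum_cost_delay = 2);

With the fix, a table configured this way is costed on its own settings instead of being folded into the global balancing calculation.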
@@ -4845,7 +4845,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid creating unnecessary .ready marker files for + Avoid creating unnecessary .ready marker files for timeline history files (Fujii Masao) @@ -4853,14 +4853,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible null pointer dereference when an empty prepared statement - is used and the log_statement setting is mod - or ddl (Fujii Masao) + is used and the log_statement setting is mod + or ddl (Fujii Masao) - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -4869,7 +4869,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. @@ -4883,32 +4883,32 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) - Fix processing of repeated dbname parameters - in PQconnectdbParams() (Alex Shulgin) + Fix processing of repeated dbname parameters + in PQconnectdbParams() (Alex Shulgin) Unexpected behavior ensued if the first occurrence - of dbname contained a connection string or URI to be + of dbname contained a connection string or URI to be expanded. - Ensure that libpq reports a suitable error message on + Ensure that libpq reports a suitable error message on unexpected socket EOF (Marko Tiikkaja, Tom Lane) - Depending on kernel behavior, libpq might return an + Depending on kernel behavior, libpq might return an empty error string rather than something useful when the server unexpectedly closed the socket. @@ -4916,14 +4916,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Clear any old error message during PQreset() + Clear any old error message during PQreset() (Heikki Linnakangas) - If PQreset() is called repeatedly, and the connection + If PQreset() is called repeatedly, and the connection cannot be re-established, error messages from the failed connection - attempts kept accumulating in the PGconn's error + attempts kept accumulating in the PGconn's error string. @@ -4931,32 +4931,32 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Properly handle out-of-memory conditions while parsing connection - options in libpq (Alex Shulgin, Heikki Linnakangas) + options in libpq (Alex Shulgin, Heikki Linnakangas) - Fix array overrun in ecpg's version - of ParseDateTime() (Michael Paquier) + Fix array overrun in ecpg's version + of ParseDateTime() (Michael Paquier) - In initdb, give a clearer error message if a password + In initdb, give a clearer error message if a password file is specified but is empty (Mats Erik Andersson) - Fix psql's \s command to work nicely with + Fix psql's \s command to work nicely with libedit, and add pager support (Stepan Rutz, Tom Lane) - When using libedit rather than readline, \s printed the + When using libedit rather than readline, \s printed the command history in a fairly unreadable encoded format, and on recent libedit versions might fail altogether. Fix that by printing the history ourselves rather than having the library do it. 
A pleasant @@ -4966,7 +4966,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This patch also fixes a bug that caused newline encoding to be applied inconsistently when saving the command history with libedit. - Multiline history entries written by older psql + Multiline history entries written by older psql versions will be read cleanly with this patch, but perhaps not vice versa, depending on the exact libedit versions involved. @@ -4974,17 +4974,17 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. @@ -4992,16 +4992,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix psql's expanded-mode display to work - consistently when using border = 3 - and linestyle = ascii or unicode + Fix psql's expanded-mode display to work + consistently when using border = 3 + and linestyle = ascii or unicode (Stephen Frost) - Improve performance of pg_dump when the database + Improve performance of pg_dump when the database contains many instances of multiple dependency paths between the same two objects (Tom Lane) @@ -5009,7 +5009,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix pg_dumpall to restore its ability to dump from + Fix pg_dumpall to restore its ability to dump from pre-8.1 servers (Gilles Darold) @@ -5023,28 +5023,28 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix core dump in pg_dump --binary-upgrade on zero-column + Fix core dump in pg_dump --binary-upgrade on zero-column composite type (Rushabh Lathia) - Prevent WAL files created by pg_basebackup -x/-X from + Prevent WAL files created by pg_basebackup -x/-X from being archived again when the standby is promoted (Andres Freund) - Fix failure of contrib/auto_explain to print per-node - timing information when doing EXPLAIN ANALYZE (Tom Lane) + Fix failure of contrib/auto_explain to print per-node + timing information when doing EXPLAIN ANALYZE (Tom Lane) - Fix upgrade-from-unpackaged script for contrib/citext + Fix upgrade-from-unpackaged script for contrib/citext (Tom Lane) @@ -5052,7 +5052,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix block number checking - in contrib/pageinspect's get_raw_page() + in contrib/pageinspect's get_raw_page() (Tom Lane) @@ -5064,7 +5064,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix contrib/pgcrypto's pgp_sym_decrypt() + Fix contrib/pgcrypto's pgp_sym_decrypt() to not fail on messages whose length is 6 less than a power of 2 (Marko Tiikkaja) @@ -5072,7 +5072,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix file descriptor leak in contrib/pg_test_fsync + Fix file descriptor leak in contrib/pg_test_fsync (Jeff Janes) @@ -5084,24 +5084,24 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Handle unexpected query 
results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. - Avoid a possible crash in contrib/xml2's - xslt_process() (Mark Simonetti) + Avoid a possible crash in contrib/xml2's + xslt_process() (Mark Simonetti) - libxslt seems to have an undocumented dependency on + libxslt seems to have an undocumented dependency on the order in which resources are freed; reorder our calls to avoid a crash. @@ -5109,7 +5109,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Mark some contrib I/O functions with correct volatility + Mark some contrib I/O functions with correct volatility properties (Tom Lane) @@ -5143,29 +5143,29 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 With OpenLDAP versions 2.4.24 through 2.4.31, - inclusive, PostgreSQL backends can crash at exit. - Raise a warning during configure based on the + inclusive, PostgreSQL backends can crash at exit. + Raise a warning during configure based on the compile-time OpenLDAP version number, and test the crashing scenario - in the contrib/dblink regression test. + in the contrib/dblink regression test. - In non-MSVC Windows builds, ensure libpq.dll is installed + In non-MSVC Windows builds, ensure libpq.dll is installed with execute permissions (Noah Misch) - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. @@ -5177,15 +5177,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. @@ -5197,9 +5197,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add CST (China Standard Time) to our lists. - Remove references to ADT as Arabia Daylight Time, an + Remove references to ADT as Arabia Daylight Time, an abbreviation that's been out of use since 2007; therefore, claiming - there is a conflict with Atlantic Daylight Time doesn't seem + there is a conflict with Atlantic Daylight Time doesn't seem especially helpful. Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, and FJST (Fiji); we didn't even have them on the proper side of the date line. @@ -5208,21 +5208,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Update time zone data files to tzdata release 2015a. + Update time zone data files to tzdata release 2015a. 
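A minimal sketch of the connectby() scenario from the tablefunc entry above, assuming the tablefunc extension is available; the table and key values are invented. The row whose key is NULL is the case the fix addresses: it is now returned but not recursed into.

    CREATE EXTENSION tablefunc;
    CREATE TABLE tree(keyid text, parent_keyid text);
    INSERT INTO tree VALUES ('top', NULL), ('mid', 'top'), (NULL, 'mid');

    SELECT * FROM connectby('tree', 'keyid', 'parent_keyid', 'top', 0)
             AS t(keyid text, parent_keyid text, level int);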
The IANA timezone database has adopted abbreviations of the form - AxST/AxDT + AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into - our Default timezone abbreviation set. - The Australia abbreviation set now contains only CST, EAST, + our Default timezone abbreviation set. + The Australia abbreviation set now contains only CST, EAST, EST, SAST, SAT, and WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South - Africa Standard Time in the Default abbreviation set. + Africa Standard Time in the Default abbreviation set. @@ -5281,15 +5281,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. - Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -5335,7 +5335,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) @@ -5347,14 +5347,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This oversight could result in variable not found in subplan - target lists errors, or in silently wrong query results. + target lists errors, or in silently wrong query results. - Fix could not find pathkey item to sort planner failures - with UNION ALL over subqueries reading from tables with + Fix could not find pathkey item to sort planner failures + with UNION ALL over subqueries reading from tables with inheritance children (Tom Lane) @@ -5375,7 +5375,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve planner to drop constant-NULL inputs - of AND/OR when possible (Tom Lane) + of AND/OR when possible (Tom Lane) @@ -5387,13 +5387,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix identification of input type category in to_json() + Fix identification of input type category in to_json() and friends (Tom Lane) - This is known to have led to inadequate quoting of money - fields in the JSON result, and there may have been wrong + This is known to have led to inadequate quoting of money + fields in the JSON result, and there may have been wrong results for other data types as well. @@ -5408,13 +5408,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. 
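For the contrib/btree_gist entry above: the recommended rebuild is just the ordinary REINDEX command, run once per affected index (the index name here is hypothetical):

    REINDEX INDEX my_bits_gist_idx;   -- a GiST index on a bit or bit varying column

REINDEX TABLE of the owning table works as well if that is more convenient.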
- Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -5429,7 +5429,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -5442,7 +5442,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -5463,19 +5463,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -5483,7 +5483,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -5495,14 +5495,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. - Fix client host name lookup when processing pg_hba.conf + Fix client host name lookup when processing pg_hba.conf entries that specify host names instead of IP addresses (Tom Lane) @@ -5516,21 +5516,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow the root user to use postgres -C variable and - postgres --describe-config (MauMau) + Allow the root user to use postgres -C variable and + postgres --describe-config (MauMau) The prohibition on starting the server as root does not need to extend to these operations, and relaxing it prevents failure - of pg_ctl in some scenarios. + of pg_ctl in some scenarios. Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -5539,16 +5539,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. 
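To observe the pg_stat_activity.xact_start behavior described a few entries above, two sessions are needed and max_prepared_transactions must be nonzero; the transaction identifier is arbitrary, and the column names are as in 9.2 and later:

    -- session 1
    BEGIN;
    SELECT 1;
    PREPARE TRANSACTION 'demo_gxact';

    -- session 2: the originating backend is idle and no longer in a transaction,
    -- so its xact_start should now be null rather than a stale timestamp
    SELECT pid, xact_start FROM pg_stat_activity;

    -- later, from any session
    COMMIT PREPARED 'demo_gxact';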
@@ -5584,15 +5584,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -5603,17 +5603,17 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -5621,15 +5621,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) - Fix ecpg to do the right thing when an array - of char * is the target for a FETCH statement returning more + Fix ecpg to do the right thing when an array + of char * is the target for a FETCH statement returning more than one row, as well as some other array-handling fixes (Ashutosh Bapat) @@ -5637,52 +5637,52 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. - Fix pg_upgrade for cases where the new server creates + Fix pg_upgrade for cases where the new server creates a TOAST table but the old version did not (Bruce Momjian) - This rare situation would manifest as relation OID mismatch + This rare situation would manifest as relation OID mismatch errors. - Prevent contrib/auto_explain from changing the output of - a user's EXPLAIN (Tom Lane) + Prevent contrib/auto_explain from changing the output of + a user's EXPLAIN (Tom Lane) - If auto_explain is active, it could cause - an EXPLAIN (ANALYZE, TIMING OFF) command to nonetheless + If auto_explain is active, it could cause + an EXPLAIN (ANALYZE, TIMING OFF) command to nonetheless print timing information. 
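The auto_explain interaction fixed in the entry just above can be exercised in a single superuser session; the query is arbitrary:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;    -- superuser-only; makes auto_explain fire for every query
    EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM pg_class;

With the fix, the EXPLAIN output honors TIMING OFF even while auto_explain is active.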
- Fix query-lifespan memory leak in contrib/dblink + Fix query-lifespan memory leak in contrib/dblink (MauMau, Joe Conway) - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -5691,27 +5691,27 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Prevent use of already-freed memory in - contrib/pgstattuple's pgstat_heap() + contrib/pgstattuple's pgstat_heap() (Noah Misch) - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. - Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. @@ -5771,7 +5771,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Avoid race condition in checking transaction commit status during - receipt of a NOTIFY message (Marko Tiikkaja) + receipt of a NOTIFY message (Marko Tiikkaja) @@ -5795,7 +5795,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -5808,17 +5808,17 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. @@ -5837,8 +5837,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix walsender's failure to shut down cleanly when client - is pg_receivexlog (Fujii Masao) + Fix walsender's failure to shut down cleanly when client + is pg_receivexlog (Fujii Masao) @@ -5858,13 +5858,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. + entry to syslog(), and perhaps other related problems. @@ -5877,13 +5877,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix tracking of psql script line numbers - during \copy from out-of-line data + Fix tracking of psql script line numbers + during \copy from out-of-line data (Kumar Rajeev Rastogi, Amit Khandekar) - \copy ... from incremented the script file line number + \copy ... from incremented the script file line number for each data line, even if the data was not coming from the script file. This mistake resulted in wrong line numbers being reported for any errors occurring later in the same script file. 
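For the OVERLAPS entry above: the supported form takes a two-element (start, end) pair on each side, as in this self-contained example:

    SELECT (DATE '2015-01-01', DATE '2015-03-01')
           OVERLAPS (DATE '2015-02-15', DATE '2015-04-01');   -- true

Single-element row arguments, which the removed code tried to accept, are not a supported form.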
@@ -5892,14 +5892,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent intermittent could not reserve shared memory region + Prevent intermittent could not reserve shared memory region failures on recent Windows versions (MauMau) - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. @@ -5945,19 +5945,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. (CVE-2014-0060) @@ -5970,7 +5970,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -5990,7 +5990,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. @@ -6004,12 +6004,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -6036,7 +6036,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. 
Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -6048,35 +6048,35 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). (CVE-2014-0066) - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -6091,7 +6091,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -6111,20 +6111,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 was already consistent at the start of replay, thus possibly allowing hot-standby queries before the database was really consistent. Other symptoms such as PANIC: WAL contains references to invalid - pages were also possible. + pages were also possible. Fix improper locking of btree index pages while replaying - a VACUUM operation in hot-standby mode (Andres Freund, + a VACUUM operation in hot-standby mode (Andres Freund, Heikki Linnakangas, Tom Lane) This error could result in PANIC: WAL contains references to - invalid pages failures. + invalid pages failures. @@ -6142,8 +6142,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - When pause_at_recovery_target - and recovery_target_inclusive are both set, ensure the + When pause_at_recovery_target + and recovery_target_inclusive are both set, ensure the target record is applied before pausing, not after (Heikki Linnakangas) @@ -6156,7 +6156,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. 
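Returning to the GRANT ... WITH ADMIN OPTION entry further above: only a membership granted WITH ADMIN OPTION is supposed to confer the right to add or remove members, which is the restriction the fix enforces against the SET ROLE bypass. The role names here are hypothetical:

    GRANT report_readers TO alice;                    -- membership only
    GRANT report_readers TO bob WITH ADMIN OPTION;    -- bob may also grant or revoke membership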
@@ -6169,19 +6169,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. - Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -6205,7 +6205,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -6225,7 +6225,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 A previous patch allowed such keywords to be used without quoting in places such as role identifiers; but it missed cases where a - list of role identifiers was permitted, such as DROP ROLE. + list of role identifiers was permitted, such as DROP ROLE. @@ -6239,19 +6239,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) (Tom Lane) - Fix UPDATE/DELETE of an inherited target table - that has UNION ALL subqueries (Tom Lane) + Fix UPDATE/DELETE of an inherited target table + that has UNION ALL subqueries (Tom Lane) - Without this fix, UNION ALL subqueries aren't correctly + Without this fix, UNION ALL subqueries aren't correctly inserted into the update plans for inheritance child tables after the first one, typically resulting in no update happening for those child table(s). @@ -6260,12 +6260,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -6273,21 +6273,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -6319,12 +6319,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. 
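For the plainto_tsquery() entry just above, ordinary use of the function looks like this (the input string is arbitrary):

    SELECT plainto_tsquery('english', 'The Fat Rats');   -- 'fat' & 'rat'

The fix only changes the internal copy to memmove(), so results should be unchanged in practice.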
@@ -6332,8 +6332,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix placement of permissions checks in pg_start_backup() - and pg_stop_backup() (Andres Freund, Magnus Hagander) + Fix placement of permissions checks in pg_start_backup() + and pg_stop_backup() (Andres Freund, Magnus Hagander) @@ -6344,44 +6344,44 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) - Fix *-qualification of named parameters in SQL-language + Fix *-qualification of named parameters in SQL-language functions (Tom Lane) Given a composite-type parameter - named foo, $1.* worked fine, - but foo.* not so much. + named foo, $1.* worked fine, + but foo.* not so much. - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. @@ -6389,14 +6389,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix incorrect translation handling in - some psql \d commands + some psql \d commands (Peter Eisentraut, Tom Lane) - Ensure pg_basebackup's background process is killed + Ensure pg_basebackup's background process is killed when exiting its foreground process (Magnus Hagander) @@ -6404,7 +6404,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible incorrect printing of filenames - in pg_basebackup's verbose mode (Magnus Hagander) + in pg_basebackup's verbose mode (Magnus Hagander) @@ -6417,20 +6417,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -6441,15 +6441,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) - Fix contrib/pg_stat_statement's handling - of CURRENT_DATE and related constructs (Kyotaro + Fix contrib/pg_stat_statement's handling + of CURRENT_DATE and related constructs (Kyotaro Horiguchi) @@ -6463,28 +6463,28 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. 
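A minimal sketch of the *-qualification fix mentioned above, for a composite-type parameter named foo; the type and function names are invented, and the point is simply that foo.* now behaves like $1.*:

    CREATE TYPE complex AS (r double precision, i double precision);

    CREATE FUNCTION identity_complex(foo complex) RETURNS complex
    LANGUAGE sql AS $$
        SELECT foo.*      -- previously only $1.* was accepted here
    $$;

    SELECT identity_complex(ROW(3, 4)::complex);   -- (3,4)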
Avoid using the deprecated dllwrap tool in Cygwin builds (Marco Atzeri)

Don't generate plain-text HISTORY and src/test/regress/README files anymore (Tom Lane)

... the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that.

Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. In addition, the zones Asia/Riyadh87, Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice.

Fix VACUUM's tests to see whether it can update relfrozenxid (Andres Freund). In some cases VACUUM (either manual or autovacuum) could incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would need to happen before actual loss occurs, but it's not zero. In 9.2.0 and later, the probability of loss is higher, and it's also possible to get "could not access status of transaction" errors as a consequence of this bug. Users upgrading from releases 9.0.4 or 8.4.8 or earlier are not affected, but all later versions contain the bug.

The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age set to zero. This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with SELECT txid_current() < 2^31). A minimal SQL sketch of this remediation follows this group of entries.

Fix initialization of pg_clog and pg_subtrans during hot standby startup (Andres Freund, Heikki Linnakangas)

This could lead to corruption of the lock data structures in shared memory, causing "lock already held" and other odd errors.
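A minimal sketch of the remediation described in the relfrozenxid entry above, run in each database (typically by a superuser); it only restates the steps already given in the text.

    SET vacuum_freeze_table_age = 0;   -- force full-table vacuum scans in this session
    VACUUM;                            -- vacuum every table in the current database
    -- The installation can be presumed safe if it has executed fewer than 2^31 transactions:
    SELECT txid_current() < 2^31;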
Truncate pg_multixact contents during WAL replay (Andres Freund)

Ensure an anti-wraparound VACUUM counts a page as scanned when it's only verified that no tuples need freezing (Sergey Burladyan, Jeff Janes). This bug could result in failing to advance relfrozenxid, so that the table would still be thought to need another anti-wraparound vacuum. In the worst case the database might even shut down to prevent wraparound.

Fix "unexpected spgdoinsert() failure" error during SP-GiST index creation (Teodor Sigaev)

Avoid flattening a subquery whose SELECT list contains a volatile function wrapped inside a sub-SELECT (Tom Lane)

This error could lead to incorrect plans for queries involving multiple levels of subqueries within JOIN syntax.

Fix incorrect planning in cases where the same non-strict expression appears in multiple WHERE and outer JOIN equality clauses (Tom Lane)

Fix array slicing of int2vector and oidvector values (Tom Lane). Expressions of this kind are now implicitly promoted to regular int2 or oid arrays.

In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. This change also causes the timeofday function to honor the simple GMT offset zone.
Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane)

Fix pg_dumpall to work when a source database sets default_transaction_read_only via ALTER DATABASE SET (Kevin Grittner); a sketch of that configuration follows this group of entries.

Make ecpg search for quoted cursor names case-sensitively (Zoltán Böszörményi)

Fix ecpg's processing of lists of variables declared varchar (Zoltán Böszörményi)

Make contrib/lo defend against incorrect trigger definitions (Marc Cousin)

Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia.

PostgreSQL case-folds non-ASCII characters only when using a single-byte server encoding.

Fix checkpoint memory leak in background writer when wal_level = hot_standby (Naoya Anzai)

Fix memory overcommit bug when work_mem is using more than 24GB of memory (Stephen Frost)

Previously tests like col IS NOT TRUE and col IS NOT FALSE did not properly factor in NULL values when estimating plan costs.

Fix accounting for qualifier evaluation costs in UNION ALL and inheritance queries (Tom Lane). This fixes cases where suboptimal query plans could be chosen if some WHERE clauses are expensive to calculate.

Prevent pushing down WHERE clauses into unsafe UNION/INTERSECT subqueries (Tom Lane). Subqueries of a UNION or INTERSECT that contain set-returning functions or volatile functions in their SELECT lists could be improperly optimized, leading to run-time errors or incorrect query results.

Fix rare case of "failed to locate grouping columns" planner failure (Tom Lane)

Fix pg_dump of foreign tables with dropped columns (Andrew Dunstan). Previously such cases could cause a pg_upgrade error.
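For context on the pg_dumpall entry above, this is the kind of per-database setting that used to break it; the database name sourcedb is hypothetical.

    ALTER DATABASE sourcedb SET default_transaction_read_only = on;
    -- pg_dumpall previously failed when a source database was configured this way.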
Reorder pg_dump processing of extension-related rules and event triggers (Joe Conway)

Force dumping of extension tables if specified by pg_dump -t or -n (Joe Conway)

Fix pg_restore -l with the directory archive to display the correct format name (Fujii Masao)

Properly record index comments created using UNIQUE and PRIMARY KEY syntax (Andres Freund). This fixes a parallel pg_restore failure.

Cause pg_basebackup -x with an empty xlog directory to throw an error rather than crashing (Magnus Hagander, Haruka Takatsuka)

Fix REINDEX TABLE and REINDEX DATABASE to properly revalidate constraints and mark invalidated indexes as valid (Noah Misch). REINDEX INDEX has always worked properly.

Fix possible deadlock during concurrent CREATE INDEX CONCURRENTLY operations (Tom Lane)

Fix regexp_matches() handling of zero-length matches (Jeevan Chalke)

Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane)

Allow ALTER DEFAULT PRIVILEGES to operate on schemas without requiring CREATE permission (Tom Lane)

Specifically, lessen keyword restrictions for role names, language names, EXPLAIN and COPY options, and SET values. This allows COPY ... (FORMAT BINARY) to work as expected; previously BINARY needed to be quoted.
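A short illustration of the COPY option change just mentioned; the table name and output path are hypothetical.

    COPY my_table TO '/tmp/my_table.bin' (FORMAT BINARY);
    -- Before this fix, the unquoted keyword was rejected and the value had to be quoted, e.g. (FORMAT 'binary').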
Print proper line number during COPY failure (Heikki Linnakangas)

Fix pgp_pub_decrypt() so it works for secret keys with passwords (Marko Kreen)

Make pg_upgrade use pg_dump --quote-all-identifiers to avoid problems with keyword changes between releases (Tom Lane)

Ensure that VACUUM ANALYZE still runs the ANALYZE phase if its attempt to truncate the file is cancelled due to lock conflicts (Kevin Grittner)

Avoid possible failure when performing transaction control commands (e.g. ROLLBACK) in prepared queries (Tom Lane)

Ensure that floating-point data input accepts standard spellings of "infinity" on all platforms (Tom Lane). The C99 standard says that allowable spellings are inf, +inf, -inf, infinity, +infinity, and -infinity. Make sure we recognize these even if the platform's strtod function doesn't.

Avoid unnecessary reporting when track_activities is off (Tom Lane)

Prevent crash when psql's PSQLRC variable contains a tilde (Bruce Momjian)

Update time zone data files to tzdata release 2013d for DST law changes in Israel, Morocco, Palestine, and Paraguay. Also, historical zone data corrections for Macquarie Island.

However, this release corrects several errors in management of GiST indexes. After installing this update, it is advisable to REINDEX any GiST indexes that meet one or more of the conditions described below.

A connection request containing a database name that begins with - could be crafted to damage or destroy files within the server's data directory, even if the request is eventually rejected. (CVE-2013-1899)

This avoids a scenario wherein random numbers generated by contrib/pgcrypto functions might be relatively easy for another database user to guess. The risk is only significant when the postmaster is configured with ssl = on but most connections don't use SSL encryption. (CVE-2013-1900)
An unprivileged database user could exploit this mistake to call pg_start_backup() or pg_stop_backup(), thus possibly interfering with creation of routine backups. (CVE-2013-1901)

Fix GiST indexes to not use "fuzzy" geometric comparisons when it's not appropriate to do so (Alexander Korotkov). The core geometric types perform comparisons using "fuzzy" equality, but gist_box_same must do exact comparisons, else GiST indexes using it might become inconsistent. After installing this update, users should REINDEX any GiST indexes on box, polygon, circle, or point columns, since all of these use gist_box_same (see the sketch after this group of entries).

Fix erroneous range-union and penalty logic in GiST indexes that use contrib/btree_gist for variable-width data types, that is text, bytea, bit, and numeric columns (Tom Lane). These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in useless index bloat. Users are advised to REINDEX such indexes after installing this update.

These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in indexes that are unnecessarily inefficient to search. Users are advised to REINDEX multi-column GiST indexes after installing this update.

Fix gist_point_consistent to handle fuzziness consistently (Alexander Korotkov). Index scans on GiST indexes on point columns would sometimes yield results different from a sequential scan, because gist_point_consistent disagreed with the underlying operator code about whether to do comparisons exactly or fuzzily.

This bug could result in incorrect "local pin count" errors during replay, making recovery impossible.
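A minimal sketch of the recommended reindexing; the index name is hypothetical, and each affected GiST index must be named individually.

    REINDEX INDEX my_points_gist_idx;   -- repeat for every GiST index on box, polygon, circle, or point columns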
Ensure we do crash recovery before entering archive recovery, if the database was not stopped cleanly and a recovery.conf file is present (Heikki Linnakangas, Kyotaro Horiguchi, Mitsumasa Kondo)

Fix race condition in DELETE RETURNING (Tom Lane). Under the right circumstances, DELETE RETURNING could attempt to fetch data from a shared buffer that the current process no longer has any pin on. If some other process changed the buffer meanwhile, this would lead to garbage RETURNING output, or even a crash.

Fix to_char() to use ASCII-only case-folding rules where appropriate (Tom Lane). This fixes misbehavior of some template patterns that should be locale-independent, but mishandled I and i in Turkish locales.

Fix unwanted rejection of timestamp 1999-12-31 24:00:00 (Tom Lane)

Fix logic error when a single transaction does UNLISTEN then LISTEN (Tom Lane)

Fix performance issue in EXPLAIN (ANALYZE, TIMING OFF) (Pavel Stehule)

Remove useless "picksplit doesn't support secondary split" log messages (Josh Hansen, Tom Lane)

Remove vestigial secondary-split support in gist_box_picksplit() (Tom Lane)

Eliminate memory leaks in PL/Perl's spi_prepare() function (Alex Hunsaker, Tom Lane)

Fix pg_dumpall to handle database names containing = correctly (Heikki Linnakangas)

Avoid crash in pg_dump when an incorrect connection string is given (Heikki Linnakangas)

Ignore invalid indexes in pg_dump and pg_upgrade (Michael Paquier, Bruce Momjian)

... a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which pg_dump wouldn't be expected to dump anyway. pg_upgrade now also skips invalid indexes rather than failing.
In pg_basebackup, include only the current server version's subdirectory when backing up a tablespace (Heikki Linnakangas)

Add a server version check in pg_basebackup and pg_receivexlog, so they fail cleanly with version combinations that won't work (Heikki Linnakangas)

Fix contrib/dblink to handle inconsistent settings of DateStyle or IntervalStyle safely (Daniel Farina, Tom Lane). Previously, if the remote server had different settings of these parameters, ambiguous dates might be read incorrectly. This fix ensures that datetime and interval columns fetched by a dblink query will be interpreted correctly. Note however that inconsistent settings are still risky, since literal values appearing in SQL commands sent to the remote server might be interpreted differently than they would be locally.

Fix contrib/pg_trgm's similarity() function to return zero for trigram-less strings (Tom Lane). Previously it returned NaN due to internal division by zero (see the example after this group of entries).

Enable building PostgreSQL with Microsoft Visual Studio 2012 (Brar Piening, Noah Misch)

Update time zone data files to tzdata release 2013b for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas. Also, historical zone data corrections for numerous places.

Also, update the time zone abbreviation files for recent changes in Russia and elsewhere: CHOT, GET, IRKT, KGT, KRAT, MAGT, MAWT, MSK, NOVT, OMST, TKT, VLAT, WST, YAKT, YEKT now follow their current meanings, and VOLT (Europe/Volgograd) and MIST (Antarctica/Macquarie) are added to the default abbreviations list.

Prevent execution of enum_recv from SQL (Tom Lane)

This mistake could result in incorrect "WAL ends before end of online backup" errors.
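An illustration of the similarity() change above; it assumes the pg_trgm extension is installed.

    SELECT similarity('', 'word');   -- a trigram-less string now yields 0 rather than NaN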
Improve performance of SPI_execute and related functions, thereby improving PL/pgSQL's EXECUTE (Heikki Linnakangas, Tom Lane)

Fix intermittent crash in DROP INDEX CONCURRENTLY (Tom Lane)

Fix potential corruption of shared-memory lock table during CREATE/DROP INDEX CONCURRENTLY (Tom Lane)

Fix COPY's multiple-tuple-insertion code for the case of a tuple larger than page size minus fillfactor (Heikki Linnakangas)

Protect against race conditions when scanning pg_tablespace (Stephen Frost, Tom Lane). CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of pg_tablespace entries.

Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera)

Fix error in vacuum_freeze_table_age implementation (Andres Freund). In installations that have existed for more than vacuum_freeze_min_age transactions, this mistake prevented autovacuum from using partial-table scans, so that a full-table scan would always happen instead.

Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane). This mistake could be user-visible in contexts such as CREATE TABLE LIKE INCLUDING INDEXES.

There were some issues with default privileges for types, and pg_dump failed to dump such privileges at all.

Reject out-of-range dates in to_date() (Hitoshi Harada)

Fix pg_extension_config_dump() to handle extension-update cases properly (Tom Lane)

The previous coding resulted in sometimes omitting the first line in the CONTEXT traceback for the error.

This bug affected psql and some other client programs.

Fix possible crash in psql's \? command when not connected to a database (Meng Qingzhong)
Fix possible error if a relation file is removed while pg_basebackup is running (Heikki Linnakangas)

Tolerate timeline switches while pg_basebackup -X fetch is backing up a standby server (Heikki Linnakangas)

Make pg_dump exclude data of unlogged tables when running on a hot-standby server (Magnus Hagander). This would fail anyway because the data is not available on the standby server, so it seems most convenient to assume ... automatically.

Fix pg_upgrade to deal with invalid indexes safely (Bruce Momjian)

Fix pg_upgrade's -O/-o options (Marti Raudsepp)

Fix one-byte buffer overrun in libpq's PQprintTuples (Xi Wang). This ancient function is not used anywhere by PostgreSQL itself, but it might still be used by some client code.

Make ecpglib use translated messages properly (Chen Huajun)

Properly install ecpg_compat and pgtypes libraries on MSVC (Jiang Guiqing)

Include our version of isinf() in libecpg if it's not provided by the system (Jiang Guiqing)

Make pgxs build executables with the right .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi)

Add new timezone abbreviation FET (Tom Lane)

However, you may need to perform REINDEX operations to correct problems in concurrently-built indexes, as described in the first changelog item below.

Fix multiple bugs associated with CREATE/DROP INDEX CONCURRENTLY (Andres Freund, Tom Lane, Simon Riggs, Pavan Deolasee). An error introduced while adding DROP INDEX CONCURRENTLY allowed incorrect indexing decisions to be made during the initial phase of CREATE INDEX CONCURRENTLY, so that indexes built by that command could be corrupt. It is recommended that indexes built in 9.2.X with CREATE INDEX CONCURRENTLY be rebuilt after applying this update.

In addition, fix CREATE/DROP INDEX CONCURRENTLY to use in-place updates when changing the state of an index's pg_index row. This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus again resulting in corrupt concurrently-created indexes.
Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX CONCURRENTLY command. The most important of these is VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index.

Also fix DROP INDEX CONCURRENTLY to not disable insertions into the target index until all queries using it are done.

Also fix misbehavior if DROP INDEX CONCURRENTLY is canceled: the previous coding could leave an un-droppable index behind.

Correct predicate locking for DROP INDEX CONCURRENTLY (Kevin Grittner). Previously, SSI predicate locks were processed at the wrong time, possibly leading to incorrect behavior of serializable transactions executing in parallel with the DROP.

This oversight could prevent subsequent execution of certain operations such as CREATE INDEX CONCURRENTLY.

Avoid bogus "out-of-sequence timeline ID" errors in standby mode (Heikki Linnakangas)

Fix the syslogger process to not fail when log_rotation_age exceeds 2^31 milliseconds (about 25 days) (Tom Lane)

Fix WaitLatch() to return promptly when the requested timeout expires (Jeff Janes, Tom Lane). With the previous coding, a steady stream of non-wait-terminating interrupts could delay return from WaitLatch() indefinitely. This has been shown to be a problem for the autovacuum launcher process, and might cause trouble elsewhere as well.

The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example WHERE COALESCE(foo, 0) = 0 when foo is coming from the nullable side of an outer join. 9.2 showed this type of error in more cases than previous releases, but the basic bug has been there for a long time.

Fix SELECT DISTINCT with index-optimized MIN/MAX on an inheritance tree (Tom Lane). The planner would fail with "failed to re-find MinMaxAggInfo record" given this combination of factors.

A strict join clause can be sufficient to establish an x IS NOT NULL predicate, for example. This fixes a planner regression in 9.2, since previous versions could make comparable deductions.
This affects multicolumn NOT IN subplans, such as WHERE (a, b) NOT IN (SELECT x, y FROM ...) when for instance b and y are int4 and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved.

This oversight could result in wrong answers from merge joins whose inner side is an index scan using an indexed_column = ANY(array) condition.

Acquire buffer lock when re-fetching the old tuple for an AFTER ROW UPDATE/DELETE trigger (Andres Freund). In very unusual circumstances, this oversight could result in passing incorrect data to a trigger WHEN condition, or to the precheck logic for a foreign-key enforcement trigger. That could result in a crash, or in an incorrect decision about whether to fire the trigger.

Fix ALTER COLUMN TYPE to handle inherited check constraints properly (Pavan Deolasee)

Fix ALTER EXTENSION SET SCHEMA's failure to move some subsidiary objects into the new schema (Álvaro Herrera, Dimitri Fontaine)

Handle CREATE TABLE AS EXECUTE correctly in extended query protocol (Tom Lane)

Don't modify the input parse tree in DROP RULE IF NOT EXISTS and DROP TRIGGER IF NOT EXISTS (Tom Lane)

Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera)

Ignore incorrect pg_attribute entries for system columns for views (Tom Lane)

Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane)

Guard against stack overflow when there are too many UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane)

Fix failure to advance XID epoch if XID wraparound happens during a checkpoint and wal_level is hot_standby (Tom Lane, Andres Freund). While this mistake had no particular impact on PostgreSQL itself, it was bad for applications that rely on txid_current() and related functions: the TXID value would appear to go backwards.
Fix pg_terminate_backend() and pg_cancel_backend() to not throw error for a non-existent target process (Josh Kupershmidt)

Fix display of pg_stat_replication.sync_state at a page boundary (Kyotaro Horiguchi)

Formerly, this would result in something quite unhelpful, such as "Non-recoverable failure in name resolution".

Make pg_ctl more robust about reading the postmaster.pid file (Heikki Linnakangas)

Fix possible crash in psql if incorrectly-encoded data is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing)

Make pg_dump dump SEQUENCE SET items in the data not pre-data section of the archive (Tom Lane). This fixes an undesirable inconsistency between the meanings of ... and ..., and also fixes dumping of sequences that are marked as extension configuration tables.

Fix pg_dump's handling of DROP DATABASE commands in ... mode (Guillaume Lelarge). Beginning in 9.2.0, pg_dump --clean would issue a DROP DATABASE command, which was either useless or dangerous depending on the usage scenario. It no longer does that. This change also fixes the combination of ...

Fix pg_dump for views with circular dependencies and no relation options (Tom Lane). The previous fix to dump relation options when a view is involved in a circular dependency didn't work right for the case that the view has no options; it emitted ALTER VIEW foo SET () which is invalid syntax.

Fix bugs in the restore.sql script emitted by pg_dump in tar output format (Tom Lane). The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring data in ... mode as well as the regular COPY mode.

Fix pg_restore to accept POSIX-conformant tar files (Brian Weaver, Tom Lane). The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. This patch updates previous branches so that they will accept both the ...
Fix tar files emitted by pg_basebackup to be POSIX conformant (Brian Weaver, Tom Lane)

Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane). This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory.

Fix libpq's lo_import() and lo_export() functions to report file I/O errors properly (Tom Lane)

Fix ecpg's processing of nested structure pointer variables (Muhammad Usama)

Fix ecpg's ecpg_get_data function to handle arrays properly (Michael Meskes)

Prevent pg_upgrade from trying to process TOAST tables for system catalogs (Bruce Momjian). This fixes an error seen when the information_schema has been dropped and recreated. Other failures were also possible.

Improve pg_upgrade performance by setting synchronous_commit to off in the new cluster (Bruce Momjian)

Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane)

Work around unportable behavior of malloc(0) and realloc(NULL, 0) (Tom Lane). On platforms where these calls return NULL, some code mistakenly thought that meant out-of-memory. This is known to have broken pg_dump for databases containing no user-defined aggregates. There might be other cases as well.

Ensure that make install for an extension creates the extension installation directory (Cédric Villemain). Previously, this step was missed if MODULEDIR was set in the extension's Makefile.

Fix pgxs support for building loadable modules on AIX (Tom Lane)

Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil.

However, you may need to perform REINDEX and/or VACUUM operations to recover from the effects of the data corruption bug described in the first changelog item below.
... likely to occur on standby slave servers since those perform much more WAL replay. There is a low probability of corruption of btree and GIN indexes. There is a much higher probability of corruption of table visibility maps, which might lead to wrong answers from index-only scans. Table data proper cannot be corrupted by this bug.

While no index corruption due to this bug is known to have occurred in the field, as a precautionary measure it is recommended that production installations REINDEX all btree and GIN indexes at a convenient time after upgrading to 9.2.1.

Also, it is recommended to perform a VACUUM of all tables while having vacuum_freeze_table_age set to zero. This will fix any incorrect visibility map data. vacuum_cost_delay can be adjusted to reduce the performance impact of vacuuming, while causing it to take longer to finish.

Fix possible incorrect sorting of output from queries involving WHERE indexed_column IN (list_of_values) (Tom Lane)

Fix planner failure for queries involving GROUP BY expressions along with window functions and aggregates (Tom Lane)

This error could result in wrong answers from queries that scan the same WITH subquery multiple times.

Improve selectivity estimation for text search queries involving prefixes, i.e. word:* patterns (Tom Lane); an example of such a query follows this group of entries.

A command that needed no locks other than ones its transaction already had might fail to notice a concurrent GRANT or REVOKE that committed since the start of its transaction.

Fix ANALYZE to not fail when a column is a domain over an array type (Tom Lane)

Some Linux distributions contain an incorrect version of pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error.
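A small, purely illustrative example of the word:* prefix-matching pattern referred to above.

    SELECT to_tsvector('english', 'PostgreSQL query planner')
           @@ to_tsquery('english', 'post:*');   -- prefix match; such patterns now get better selectivity estimates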
Remove unnecessary dependency on pg_config from pg_upgrade (Peter Eisentraut)

Update time zone data files to tzdata release 2012f for DST law changes in Fiji

Allow queries to retrieve data only from indexes, avoiding heap access ("index-only scans")

Allow streaming replication slaves to forward data to other slaves ("cascading replication")

Allow pg_basebackup to make base backups from standby servers

Add a pg_receivexlog tool to archive WAL file changes as they are written

Add a security_barrier option for views

Allow libpq connection strings to have the format of a URI

Add a single-row processing mode to libpq for better handling of large result sets

Remove the spclocation field from pg_tablespace (Magnus Hagander)

... a tablespace. This change allows tablespace directories to be moved while the server is down, by manually adjusting the symbolic links. To replace this field, we have added pg_tablespace_location() to allow querying of the symbolic links (see the query after this group of entries).

Move tsvector most-common-element statistics to new pg_stats columns (Alexander Korotkov). Consult most_common_elems and most_common_elem_freqs for the data formerly available in most_common_vals and most_common_freqs for a tsvector column.

Remove hstore's => operator (Robert Haas). Users should now use hstore(text, text). Since PostgreSQL 9.0, a warning message has been emitted when an operator named => is created because the SQL standard reserves that token for another use.
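For the pg_tablespace change above, a straightforward way to get the information the removed column used to hold:

    SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;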
Ensure that xpath() escapes special characters in string values (Florian Pflug)

Make pg_relation_size() and friends return NULL if the object does not exist (Phil Sorber). This prevents queries that call these functions from returning errors immediately after a concurrent DROP.

Make EXTRACT(EPOCH FROM timestamp without time zone) measure the epoch from local midnight, not UTC midnight (Tom Lane)

This change reverts an ill-considered change made in release 7.3. Measuring from UTC midnight was inconsistent because it made the result dependent on the timezone setting, which computations for timestamp without time zone should not be. The previous behavior remains available by casting the input value to timestamp with time zone (see the sketch after this group of entries).

Properly parse time strings with trailing yesterday, today, and tomorrow (Dean Rasheed)

Fix to_date() and to_timestamp() to wrap incomplete dates toward 2020 (Bruce Momjian)

No longer forcibly lowercase procedural language names in CREATE FUNCTION (Robert Haas). While unquoted language identifiers are still lowercased, strings and quoted identifiers are no longer forcibly down-cased. Thus for example CREATE FUNCTION ... LANGUAGE 'C' will no longer work; it must be spelled 'c', or better omit the quotes.

Provide consistent backquote, variable expansion, and quoted substring behavior in psql meta-command arguments (Tom Lane). Previously, such references were treated oddly when not separated by whitespace from adjacent text. For example 'FOO'BAR was output as FOO BAR (unexpected insertion of a space) and FOO'BAR'BAZ was output unchanged (not removing the quotes as most would expect).
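A sketch of the EXTRACT(EPOCH ...) change above; the timestamp value and time zone are arbitrary, and the second query uses the documented cast workaround for the old behavior.

    SET timezone = 'America/New_York';
    SELECT EXTRACT(EPOCH FROM TIMESTAMP '2012-01-01 00:00:00');
        -- 1325376000: nominal seconds, independent of the timezone setting
    SELECT EXTRACT(EPOCH FROM TIMESTAMP '2012-01-01 00:00:00'::timestamp with time zone);
        -- 1325394000 with this timezone setting: the pre-9.2, TimeZone-dependent result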
No longer treat clusterdb table names as double-quoted; no longer treat reindexdb table and index names as double-quoted (Bruce Momjian)

createuser no longer prompts for option settings by default (Peter Eisentraut). Use ... to obtain the old behavior.

Disable prompting for the user name in dropuser unless ... is specified (Peter Eisentraut)

This allows changing the names and locations of the files that were previously hard-coded as server.crt, server.key, root.crt, and root.crl in the data directory. The server will no longer examine root.crt or root.crl by default; to load these files, the associated parameters must be set to non-default values.

Remove the silent_mode parameter (Heikki Linnakangas). Similar behavior can be obtained with pg_ctl start -l postmaster.log.

Remove the wal_sender_delay parameter, as it is no longer needed (Tom Lane)

Remove the custom_variable_classes parameter (Tom Lane)

Rename pg_stat_activity.procpid to pid, to match other system tables (Magnus Hagander)

Create a separate pg_stat_activity column to report process state (Scott Mead, Magnus Hagander). The previous query and query_start values now remain available for an idle session, allowing enhanced analysis.

Rename pg_stat_activity.current_query to query because it is not cleared when the query completes (Magnus Hagander). A query over the renamed columns follows this group of entries.

Change all SQL-level statistics timing values to be float8 columns measured in milliseconds (Tom Lane). This change eliminates the designed-in assumption that the values are accurate to microseconds and no more (since the float8 values can be fractional). The columns affected are pg_stat_user_functions.total_time, pg_stat_user_functions.self_time, pg_stat_xact_user_functions.total_time, and pg_stat_xact_user_functions.self_time. The statistics functions underlying these columns now also return float8 milliseconds, rather than bigint microseconds.
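The pg_stat_activity renames above in one query (9.2 column names; earlier releases used procpid and current_query):

    SELECT pid, state, query, query_start FROM pg_stat_activity;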
- contrib/pg_stat_statements' - total_time column is now also measured in + contrib/pg_stat_statements' + total_time column is now also measured in milliseconds. @@ -9546,7 +9546,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This feature is often called index-only scans. + This feature is often called index-only scans. Heap access can be skipped for heap pages containing only tuples that are visible to all sessions, as reported by the visibility map; so the benefit applies mainly to mostly-static data. The visibility map @@ -9618,7 +9618,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Move the frequently accessed members of the PGPROC + Move the frequently accessed members of the PGPROC shared memory array to a separate array (Pavan Deolasee, Heikki Linnakangas, Robert Haas) @@ -9663,7 +9663,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Make the number of CLOG buffers scale based on shared_buffers + linkend="guc-shared-buffers">shared_buffers (Robert Haas, Simon Riggs, Tom Lane) @@ -9724,7 +9724,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Previously, only wal_writer_delay + linkend="guc-wal-writer-delay">wal_writer_delay triggered WAL flushing to disk; now filling a WAL buffer also triggers WAL writes. @@ -9763,7 +9763,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 In the past, a prepared statement always had a single - generic plan that was used for all parameter values, which + generic plan that was used for all parameter values, which was frequently much inferior to the plans used for non-prepared statements containing explicit constant values. Now, the planner attempts to generate custom plans for specific parameter values. @@ -9781,7 +9781,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - The new parameterized path mechanism allows inner + The new parameterized path mechanism allows inner index scans to use values from relations that are more than one join level up from the scan. This can greatly improve performance in situations where semantic restrictions (such as outer joins) limit @@ -9796,7 +9796,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Wrappers can now provide multiple access paths for their + Wrappers can now provide multiple access paths for their tables, allowing more flexibility in join planning. @@ -9809,14 +9809,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This check is only performed when constraint_exclusion + linkend="guc-constraint-exclusion">constraint_exclusion is on. 
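The prepared-statement planning change above is easiest to picture with a parameterized statement; a sketch assuming a hypothetical table t with an indexed column c:

    PREPARE q(text) AS SELECT * FROM t WHERE c = $1;
    EXECUTE q('rare value');  -- the planner may now build a custom plan for
                              -- this specific value rather than always reusing
                              -- a generic plan built for unknown parameters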
- Allow indexed_col op ANY(ARRAY[...]) conditions to be + Allow indexed_col op ANY(ARRAY[...]) conditions to be used in plain index scans and index-only scans (Tom Lane) @@ -9827,14 +9827,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Support MIN/MAX index optimizations on + Support MIN/MAX index optimizations on boolean columns (Marti Raudsepp) - Account for set-returning functions in SELECT target + Account for set-returning functions in SELECT target lists when setting row count estimates (Tom Lane) @@ -9882,7 +9882,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve statistical estimates for subqueries using - DISTINCT (Tom Lane) + DISTINCT (Tom Lane) @@ -9897,13 +9897,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Do not treat role names and samerole specified in samerole specified in pg_hba.conf as automatically including superusers (Andrew Dunstan) - This makes it easier to use reject lines with group roles. + This makes it easier to use reject lines with group roles. @@ -9958,7 +9958,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This logging is triggered by log_autovacuum_min_duration. + linkend="guc-log-autovacuum-min-duration">log_autovacuum_min_duration. @@ -9977,7 +9977,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add pg_xlog_location_diff() + linkend="functions-admin-backup">pg_xlog_location_diff() to simplify WAL location comparisons (Euler Taveira de Oliveira) @@ -9995,15 +9995,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This allows different instances to use the event log with different identifiers, by setting the event_source + linkend="guc-event-source">event_source server parameter, which is similar to how syslog_ident works. + linkend="guc-syslog-ident">syslog_ident works. - Change unexpected EOF messages to DEBUG1 level, + Change unexpected EOF messages to DEBUG1 level, except when there is an open transaction (Magnus Hagander) @@ -10025,14 +10025,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Track temporary file sizes and file counts in the pg_stat_database + linkend="pg-stat-database-view">pg_stat_database system view (Tomas Vondra) - Add a deadlock counter to the pg_stat_database + Add a deadlock counter to the pg_stat_database system view (Magnus Hagander) @@ -10040,7 +10040,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a server parameter track_io_timing + linkend="guc-track-io-timing">track_io_timing to track I/O timings (Ants Aasma, Robert Haas) @@ -10048,7 +10048,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Report checkpoint timing information in pg_stat_bgwriter + linkend="pg-stat-bgwriter-view">pg_stat_bgwriter (Greg Smith, Peter Geoghegan) @@ -10065,7 +10065,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Silently ignore nonexistent schemas specified in search_path (Tom Lane) + linkend="guc-search-path">search_path (Tom Lane) @@ -10077,12 +10077,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow superusers to set deadlock_timeout + linkend="guc-deadlock-timeout">deadlock_timeout per-session, not just per-cluster (Noah Misch) - This allows deadlock_timeout to be reduced for + This allows deadlock_timeout to be reduced for transactions that are likely to be involved in a deadlock, thus detecting the failure more quickly. 
Alternatively, increasing the value can be used to reduce the chances of a session being chosen for @@ -10093,7 +10093,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a server parameter temp_file_limit + linkend="guc-temp-file-limit">temp_file_limit to constrain temporary file space usage per session (Mark Kirkwood) @@ -10114,13 +10114,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add postmaster option to query configuration parameters (Bruce Momjian) - This allows pg_ctl to better handle cases where - PGDATA or points to a configuration-only directory. @@ -10128,14 +10128,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Replace an empty locale name with the implied value in - CREATE DATABASE + CREATE DATABASE (Tom Lane) This prevents cases where - pg_database.datcollate or - datctype could be interpreted differently after a + pg_database.datcollate or + datctype could be interpreted differently after a server restart. @@ -10170,22 +10170,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add an include_if_exists facility for configuration + Add an include_if_exists facility for configuration files (Greg Smith) - This works the same as include, except that an error + This works the same as include, except that an error is not thrown if the file is missing. - Identify the server time zone during initdb, and set + Identify the server time zone during initdb, and set postgresql.conf entries - timezone and - log_timezone + timezone and + log_timezone accordingly (Tom Lane) @@ -10197,7 +10197,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix pg_settings to + linkend="view-pg-settings">pg_settings to report postgresql.conf line numbers on Windows (Tom Lane) @@ -10220,7 +10220,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow streaming replication slaves to forward data to other slaves (cascading - replication) (Fujii Masao) + replication) (Fujii Masao) @@ -10232,8 +10232,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add new synchronous_commit - mode remote_write (Fujii Masao, Simon Riggs) + linkend="guc-synchronous-commit">synchronous_commit + mode remote_write (Fujii Masao, Simon Riggs) @@ -10246,7 +10246,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog tool to archive WAL file changes as they are written, rather than waiting for completed WAL files (Magnus Hagander) @@ -10255,7 +10255,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow pg_basebackup + linkend="app-pgbasebackup">pg_basebackup to make base backups from standby servers (Jun Ishizuka, Fujii Masao) @@ -10267,7 +10267,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow streaming of WAL files while pg_basebackup + Allow streaming of WAL files while pg_basebackup is performing a backup (Magnus Hagander) @@ -10306,19 +10306,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This change allows better results when a row value is converted to - hstore or json type: the fields of the resulting + hstore or json type: the fields of the resulting value will now have the expected names. - Improve column labels used for sub-SELECT results + Improve column labels used for sub-SELECT results (Marti Raudsepp) - Previously, the generic label ?column? was used. + Previously, the generic label ?column? was used. 
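The column-label improvement above is visible with a scalar sub-SELECT; a minimal example:

    SELECT (SELECT count(*) FROM pg_class);
    -- the output column is now labeled "count"; older releases showed "?column?"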
@@ -10348,7 +10348,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - When a row fails a CHECK or NOT NULL + When a row fails a CHECK or NOT NULL constraint, show the row's contents as error detail (Jan Kundrát) @@ -10376,7 +10376,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This change adds locking that should eliminate cache lookup - failed errors in many scenarios. Also, it is no longer possible + failed errors in many scenarios. Also, it is no longer possible to add relations to a schema that is being concurrently dropped, a scenario that formerly led to inconsistent system catalog contents. @@ -10384,7 +10384,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add CONCURRENTLY option to CONCURRENTLY option to DROP INDEX (Simon Riggs) @@ -10415,31 +10415,31 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow CHECK - constraints to be declared NOT VALID (Álvaro + Allow CHECK + constraints to be declared NOT VALID (Álvaro Herrera) - Adding a NOT VALID constraint does not cause the table to + Adding a NOT VALID constraint does not cause the table to be scanned to verify that existing rows meet the constraint. Subsequently, newly added or updated rows are checked. Such constraints are ignored by the planner when considering - constraint_exclusion, since it is not certain that all + constraint_exclusion, since it is not certain that all rows meet the constraint. - The new ALTER TABLE VALIDATE command allows NOT - VALID constraints to be checked for existing rows, after which + The new ALTER TABLE VALIDATE command allows NOT + VALID constraints to be checked for existing rows, after which they are converted into ordinary constraints. - Allow CHECK constraints to be declared NO - INHERIT (Nikhil Sontakke, Alex Hunsaker, Álvaro Herrera) + Allow CHECK constraints to be declared NO + INHERIT (Nikhil Sontakke, Alex Hunsaker, Álvaro Herrera) @@ -10459,7 +10459,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <command>ALTER</> + <command>ALTER</command> @@ -10467,18 +10467,18 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Reduce need to rebuild tables and indexes for certain ALTER TABLE - ... ALTER COLUMN TYPE operations (Noah Misch) + ... ALTER COLUMN TYPE operations (Noah Misch) - Increasing the length limit for a varchar or varbit + Increasing the length limit for a varchar or varbit column, or removing the limit altogether, no longer requires a table rewrite. Similarly, increasing the allowable precision of a - numeric column, or changing a column from constrained - numeric to unconstrained numeric, no longer + numeric column, or changing a column from constrained + numeric to unconstrained numeric, no longer requires a table rewrite. Table rewrites are also avoided in similar - cases involving the interval, timestamp, and - timestamptz types. + cases involving the interval, timestamp, and + timestamptz types. @@ -10492,7 +10492,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add IF EXISTS options to some ALTER + Add IF EXISTS options to some ALTER commands (Pavel Stehule) @@ -10505,16 +10505,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add ALTER - FOREIGN DATA WRAPPER ... RENAME + FOREIGN DATA WRAPPER ... RENAME and ALTER - SERVER ... RENAME (Peter Eisentraut) + SERVER ... RENAME (Peter Eisentraut) Add ALTER - DOMAIN ... RENAME (Peter Eisentraut) + DOMAIN ... 
RENAME (Peter Eisentraut) @@ -10526,11 +10526,11 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Throw an error for ALTER DOMAIN ... DROP - CONSTRAINT on a nonexistent constraint (Peter Eisentraut) + CONSTRAINT on a nonexistent constraint (Peter Eisentraut) - An IF EXISTS option has been added to provide the + An IF EXISTS option has been added to provide the previous behavior. @@ -10540,7 +10540,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</></link> + <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</command></link> @@ -10565,8 +10565,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix CREATE TABLE ... AS EXECUTE - to handle WITH NO DATA and column name specifications + Fix CREATE TABLE ... AS EXECUTE + to handle WITH NO DATA and column name specifications (Tom Lane) @@ -10583,14 +10583,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a security_barrier + linkend="SQL-CREATEVIEW">security_barrier option for views (KaiGai Kohei, Robert Haas) This option prevents optimizations that might allow view-protected data to be exposed to users, for example pushing a clause involving - an insecure function into the WHERE clause of the view. + an insecure function into the WHERE clause of the view. Such views can be expected to perform more poorly than ordinary views. @@ -10599,9 +10599,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a new LEAKPROOF function + linkend="SQL-CREATEFUNCTION">LEAKPROOF function attribute to mark functions that can safely be pushed down - into security_barrier views (KaiGai Kohei) + into security_barrier views (KaiGai Kohei) @@ -10611,8 +10611,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This adds support for the SQL-conforming - USAGE privilege on types and domains. The intent is + This adds support for the SQL-conforming + USAGE privilege on types and domains. The intent is to be able to restrict which users can create dependencies on types, since such dependencies limit the owner's ability to alter the type. @@ -10628,7 +10628,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Because the object is being created by SELECT INTO or CREATE TABLE AS, the creator would ordinarily have insert permissions; but there are corner cases where this is not - true, such as when ALTER DEFAULT PRIVILEGES has removed + true, such as when ALTER DEFAULT PRIVILEGES has removed such permissions. @@ -10646,20 +10646,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow VACUUM to more + Allow VACUUM to more easily skip pages that cannot be locked (Simon Riggs, Robert Haas) - This change should greatly reduce the incidence of VACUUM - getting stuck waiting for other sessions. + This change should greatly reduce the incidence of VACUUM + getting stuck waiting for other sessions. - Make EXPLAIN - (BUFFERS) count blocks dirtied and written (Robert Haas) + Make EXPLAIN + (BUFFERS) count blocks dirtied and written (Robert Haas) @@ -10677,8 +10677,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This is accomplished by setting the new TIMING option to - FALSE. + This is accomplished by setting the new TIMING option to + FALSE. 
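A sketch of the TIMING option just mentioned, using a hypothetical table t; per-node timing calls are skipped while row counts are still gathered:

    EXPLAIN (ANALYZE, TIMING FALSE) SELECT * FROM t WHERE c = 42;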
@@ -10719,41 +10719,41 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add array_to_json() - and row_to_json() (Andrew Dunstan) + linkend="functions-json">array_to_json() + and row_to_json() (Andrew Dunstan) - Add a SMALLSERIAL + Add a SMALLSERIAL data type (Mike Pultz) - This is like SERIAL, except it stores the sequence in - a two-byte integer column (int2). + This is like SERIAL, except it stores the sequence in + a two-byte integer column (int2). Allow domains to be - declared NOT VALID (Álvaro Herrera) + declared NOT VALID (Álvaro Herrera) This option can be set at domain creation time, or via ALTER - DOMAIN ... ADD CONSTRAINT ... NOT - VALID. ALTER DOMAIN ... VALIDATE - CONSTRAINT fully validates the constraint. + DOMAIN ... ADD CONSTRAINT ... NOT + VALID. ALTER DOMAIN ... VALIDATE + CONSTRAINT fully validates the constraint. Support more locale-specific formatting options for the money data type (Tom Lane) + linkend="datatype-money">money data type (Tom Lane) @@ -10766,22 +10766,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add bitwise and, or, and not - operators for the macaddr data type (Brendan Jurd) + Add bitwise and, or, and not + operators for the macaddr data type (Brendan Jurd) Allow xpath() to + linkend="functions-xml-processing">xpath() to return a single-element XML array when supplied a scalar value (Florian Pflug) Previously, it returned an empty array. This change will also - cause xpath_exists() to return true, not false, + cause xpath_exists() to return true, not false, for such expressions. @@ -10805,9 +10805,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow non-superusers to use pg_cancel_backend() + linkend="functions-admin-signal">pg_cancel_backend() and pg_terminate_backend() + linkend="functions-admin-signal">pg_terminate_backend() on other sessions belonging to the same user (Magnus Hagander, Josh Kupershmidt, Dan Farina) @@ -10827,7 +10827,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This allows multiple transactions to share identical views of the database state. Snapshots are exported via pg_export_snapshot() + linkend="functions-snapshot-synchronization">pg_export_snapshot() and imported via SET TRANSACTION SNAPSHOT. Only snapshots from currently-running transactions can be imported. @@ -10838,7 +10838,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Support COLLATION - FOR on expressions (Peter Eisentraut) + FOR on expressions (Peter Eisentraut) @@ -10849,23 +10849,23 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add pg_opfamily_is_visible() + linkend="functions-info-schema-table">pg_opfamily_is_visible() (Josh Kupershmidt) - Add a numeric variant of pg_size_pretty() - for use with pg_xlog_location_diff() (Fujii Masao) + Add a numeric variant of pg_size_pretty() + for use with pg_xlog_location_diff() (Fujii Masao) Add a pg_trigger_depth() + linkend="functions-info-session-table">pg_trigger_depth() function (Kevin Grittner) @@ -10877,8 +10877,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow string_agg() - to process bytea values (Pavel Stehule) + linkend="functions-aggregate-table">string_agg() + to process bytea values (Pavel Stehule) @@ -10889,7 +10889,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - For example, ^(\w+)( \1)+$. Previous releases did not + For example, ^(\w+)( \1)+$. Previous releases did not check that the back-reference actually matched the first occurrence. 
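The back-reference fix above can be checked directly:

    SELECT 'abc abc' ~ '^(\w+)( \1)+$' AS should_be_true,
           'abc def' ~ '^(\w+)( \1)+$' AS should_be_false;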
@@ -10906,22 +10906,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add information schema views - role_udt_grants, udt_privileges, - and user_defined_types (Peter Eisentraut) + role_udt_grants, udt_privileges, + and user_defined_types (Peter Eisentraut) Add composite-type attributes to the - information schema element_types view + information schema element_types view (Peter Eisentraut) - Implement interval_type columns in the information + Implement interval_type columns in the information schema (Peter Eisentraut) @@ -10933,23 +10933,23 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Implement collation-related columns in the information schema - attributes, columns, - domains, and element_types + attributes, columns, + domains, and element_types views (Peter Eisentraut) - Implement the with_hierarchy column in the - information schema table_privileges view (Peter + Implement the with_hierarchy column in the + information schema table_privileges view (Peter Eisentraut) - Add display of sequence USAGE privileges to information + Add display of sequence USAGE privileges to information schema (Peter Eisentraut) @@ -10980,7 +10980,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow the PL/pgSQL OPEN cursor command to supply + Allow the PL/pgSQL OPEN cursor command to supply parameters by name (Yeb Havinga) @@ -11002,7 +11002,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve performance and memory consumption for long chains of - ELSIF clauses (Tom Lane) + ELSIF clauses (Tom Lane) @@ -11083,31 +11083,31 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add initdb - options and (Peter Eisentraut) - This allows separate control of local and - host pg_hba.conf authentication - settings. still controls both. - Add - Add the @@ -11115,15 +11115,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Give command-line tools the ability to specify the name of the - database to connect to, and fall back to template1 - if a postgres database connection fails (Robert Haas) + database to connect to, and fall back to template1 + if a postgres database connection fails (Robert Haas) - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> @@ -11134,7 +11134,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This adds the auto option to the \x + This adds the auto option to the \x command, which switches to the expanded mode when the normal output would be wider than the screen. @@ -11147,32 +11147,32 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This is done with a new command \ir. + This is done with a new command \ir. Add support for non-ASCII characters in - psql variable names (Tom Lane) + psql variable names (Tom Lane) - Add support for major-version-specific .psqlrc files + Add support for major-version-specific .psqlrc files (Bruce Momjian) - psql already supported minor-version-specific - .psqlrc files. + psql already supported minor-version-specific + .psqlrc files. 
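The named-parameter form of OPEN mentioned a few items above looks like this; a minimal sketch that only touches a system catalog, so it should run anywhere:

    DO $$
    DECLARE
        c CURSOR (p_name text) FOR
            SELECT relname FROM pg_class WHERE relname = p_name;
        r record;
    BEGIN
        OPEN c (p_name := 'pg_class');  -- parameter supplied by name
        FETCH c INTO r;
        RAISE NOTICE 'found %', r.relname;
        CLOSE c;
    END
    $$;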
- Provide environment variable overrides for psql + Provide environment variable overrides for psql history and startup file locations (Andrew Dunstan) @@ -11184,15 +11184,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add a \setenv command to modify + Add a \setenv command to modify the environment variables passed to child processes (Andrew Dunstan) - Name psql's temporary editor files with a - .sql extension (Peter Eisentraut) + Name psql's temporary editor files with a + .sql extension (Peter Eisentraut) @@ -11202,19 +11202,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow psql to use zero-byte field and record + Allow psql to use zero-byte field and record separators (Peter Eisentraut) Various shell tools use zero-byte (NUL) separators, - e.g. find. + e.g. find. - Make the \timing option report times for + Make the \timing option report times for failed queries (Magnus Hagander) @@ -11225,13 +11225,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Unify and tighten psql's treatment of \copy - and SQL COPY (Noah Misch) + Unify and tighten psql's treatment of \copy + and SQL COPY (Noah Misch) This fix makes failure behavior more predictable and honors - \set ON_ERROR_ROLLBACK. + \set ON_ERROR_ROLLBACK. @@ -11245,21 +11245,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Make \d on a sequence show the + Make \d on a sequence show the table/column name owning it (Magnus Hagander) - Show statistics target for columns in \d+ (Magnus + Show statistics target for columns in \d+ (Magnus Hagander) - Show role password expiration dates in \du + Show role password expiration dates in \du (Fabrízio de Royes Mello) @@ -11271,8 +11271,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - These are included in the output of \dC+, - \dc+, \dD+, and \dL respectively. + These are included in the output of \dC+, + \dc+, \dD+, and \dL respectively. @@ -11283,15 +11283,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - These are included in the output of \des+, - \det+, and \dew+ for foreign servers, foreign + These are included in the output of \des+, + \det+, and \dew+ for foreign servers, foreign tables, and foreign data wrappers respectively. - Change \dd to display comments only for object types + Change \dd to display comments only for object types without their own backslash command (Josh Kupershmidt) @@ -11307,9 +11307,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In psql tab completion, complete SQL + In psql tab completion, complete SQL keywords in either upper or lower case according to the new COMP_KEYWORD_CASE + linkend="APP-PSQL-variables">COMP_KEYWORD_CASE setting (Peter Eisentraut) @@ -11348,14 +11348,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Add an @@ -11366,13 +11366,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add a - Valid values are pre-data, data, - and post-data. The option can be + Valid values are pre-data, data, + and post-data. The option can be given more than once to select two or more sections. 
@@ -11380,7 +11380,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Make pg_dumpall dump all + linkend="APP-PG-DUMPALL">pg_dumpall dump all roles first, then all configuration settings on roles (Phil Sorber) @@ -11392,8 +11392,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow pg_dumpall to avoid errors if the - postgres database is missing in the new cluster + Allow pg_dumpall to avoid errors if the + postgres database is missing in the new cluster (Robert Haas) @@ -11418,13 +11418,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Tighten rules for when extension configuration tables are dumped - by pg_dump (Tom Lane) + by pg_dump (Tom Lane) - Make pg_dump emit more useful dependency + Make pg_dump emit more useful dependency information (Tom Lane) @@ -11438,7 +11438,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Improve pg_dump's performance when dumping many + Improve pg_dump's performance when dumping many database objects (Tom Lane) @@ -11450,19 +11450,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> - Allow libpq connection strings to have the format of a + Allow libpq connection strings to have the format of a URI (Alexander Shulgin) - The syntax begins with postgres://. This can allow + The syntax begins with postgres://. This can allow applications to avoid implementing their own parser for URIs representing database connections. @@ -11489,30 +11489,30 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Previously, libpq always collected the entire query + Previously, libpq always collected the entire query result in memory before passing it back to the application. 
- Add const qualifiers to the declarations of the functions - PQconnectdbParams, PQconnectStartParams, - and PQpingParams (Lionel Elie Mamane) + Add const qualifiers to the declarations of the functions + PQconnectdbParams, PQconnectStartParams, + and PQpingParams (Lionel Elie Mamane) - Allow the .pgpass file to include escaped characters + Allow the .pgpass file to include escaped characters in the password field (Robert Haas) - Make library functions use abort() instead of - exit() when it is necessary to terminate the process + Make library functions use abort() instead of + exit() when it is necessary to terminate the process (Peter Eisentraut) @@ -11557,7 +11557,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Install plpgsql.h into include/server during installation + Install plpgsql.h into include/server during installation (Heikki Linnakangas) @@ -11583,14 +11583,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve the concurrent transaction regression tests - (isolationtester) (Noah Misch) + (isolationtester) (Noah Misch) - Modify thread_test to create its test files in - the current directory, rather than /tmp (Bruce Momjian) + Modify thread_test to create its test files in + the current directory, rather than /tmp (Bruce Momjian) @@ -11639,7 +11639,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add a pg_upgrade test suite (Peter Eisentraut) + Add a pg_upgrade test suite (Peter Eisentraut) @@ -11659,14 +11659,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add options to git_changelog for use in major + Add options to git_changelog for use in major release note creation (Bruce Momjian) - Support Linux's /proc/self/oom_score_adj API (Tom Lane) + Support Linux's /proc/self/oom_score_adj API (Tom Lane) @@ -11688,13 +11688,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This improvement does not apply to - dblink_send_query()/dblink_get_result(). + dblink_send_query()/dblink_get_result(). - Support force_not_null option in force_not_null option in file_fdw (Shigeru Hanada) @@ -11702,7 +11702,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Implement dry-run mode for pg_archivecleanup + linkend="pgarchivecleanup">pg_archivecleanup (Gabriele Bartolini) @@ -11714,29 +11714,29 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add new pgbench switches - , , and + (Robert Haas) Change pg_test_fsync to test + linkend="pgtestfsync">pg_test_fsync to test for a fixed amount of time, rather than a fixed number of cycles (Bruce Momjian) - The /cycles option was removed, and + /seconds added. Add a pg_test_timing + linkend="pgtesttiming">pg_test_timing utility to measure clock monotonicity and timing overhead (Ants Aasma, Greg Smith) @@ -11753,19 +11753,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="pgupgrade"><application>pg_upgrade</></link> + <link linkend="pgupgrade"><application>pg_upgrade</application></link> - Adjust pg_upgrade environment variables (Bruce + Adjust pg_upgrade environment variables (Bruce Momjian) Rename data, bin, and port environment - variables to begin with PG, and support + variables to begin with PG, and support PGPORTOLD/PGPORTNEW, to replace PGPORT. 
@@ -11773,22 +11773,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Overhaul pg_upgrade logging and failure reporting + Overhaul pg_upgrade logging and failure reporting (Bruce Momjian) Create four append-only log files, and delete them on success. - Add - Make pg_upgrade create a script to incrementally + Make pg_upgrade create a script to incrementally generate more accurate optimizer statistics (Bruce Momjian) @@ -11800,14 +11800,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow pg_upgrade to upgrade an old cluster that - does not have a postgres database (Bruce Momjian) + Allow pg_upgrade to upgrade an old cluster that + does not have a postgres database (Bruce Momjian) - Allow pg_upgrade to handle cases where some + Allow pg_upgrade to handle cases where some old or new databases are missing, as long as they are empty (Bruce Momjian) @@ -11815,14 +11815,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow pg_upgrade to handle configuration-only + Allow pg_upgrade to handle configuration-only directory installations (Bruce Momjian) - In pg_upgrade, add / options to pass parameters to the servers (Bruce Momjian) @@ -11833,7 +11833,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Change pg_upgrade to use port 50432 by default + Change pg_upgrade to use port 50432 by default (Bruce Momjian) @@ -11844,7 +11844,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Reduce cluster locking in pg_upgrade (Bruce + Reduce cluster locking in pg_upgrade (Bruce Momjian) @@ -11859,13 +11859,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="pgstatstatements"><application>pg_stat_statements</></link> + <link linkend="pgstatstatements"><application>pg_stat_statements</application></link> - Allow pg_stat_statements to aggregate similar + Allow pg_stat_statements to aggregate similar queries via SQL text normalization (Peter Geoghegan, Tom Lane) @@ -11878,13 +11878,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add dirtied and written block counts and read/write times to - pg_stat_statements (Robert Haas, Ants Aasma) + pg_stat_statements (Robert Haas, Ants Aasma) - Prevent pg_stat_statements from double-counting + Prevent pg_stat_statements from double-counting PREPARE and EXECUTE commands (Tom Lane) @@ -11900,7 +11900,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Support SECURITY LABEL on global objects (KaiGai + Support SECURITY LABEL on global objects (KaiGai Kohei, Robert Haas) @@ -11925,7 +11925,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add sepgsql_setcon() and related functions to control + Add sepgsql_setcon() and related functions to control the sepgsql security domain (KaiGai Kohei) @@ -11954,7 +11954,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Use gmake STYLE=website draft. + Use gmake STYLE=website draft. 
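After the pg_stat_statements normalization described above, statements differing only in constant values share a single entry; a typical inspection query, assuming the extension is installed:

    -- in 9.2 the normalized query text shows ? in place of constants
    SELECT query, calls, total_time, rows
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 5;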
@@ -11967,7 +11967,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Document that user/database names are preserved with double-quoting - by command-line tools like vacuumdb (Bruce + by command-line tools like vacuumdb (Bruce Momjian) @@ -11981,12 +11981,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Deprecate use of GLOBAL and LOCAL in - CREATE TEMP TABLE (Noah Misch) + Deprecate use of GLOBAL and LOCAL in + CREATE TEMP TABLE (Noah Misch) - PostgreSQL has long treated these keyword as no-ops, + PostgreSQL has long treated these keyword as no-ops, and continues to do so; but in future they might mean what the SQL standard says they mean, so applications should avoid using them. diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml index 91fbb34399..dada255057 100644 --- a/doc/src/sgml/release-9.3.sgml +++ b/doc/src/sgml/release-9.3.sgml @@ -37,20 +37,20 @@ Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -89,21 +89,21 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -120,7 +120,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. Previously, @@ -132,7 +132,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Fix crash in pg_restore when using parallel mode and + Fix crash in pg_restore when using parallel mode and using a list file to select a subset of items to restore (Fabrízio de Royes Mello) @@ -140,13 +140,13 @@ CREATE OR REPLACE VIEW table_privileges AS - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -158,12 +158,12 @@ CREATE OR REPLACE VIEW table_privileges AS This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. 
+ assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. @@ -213,7 +213,7 @@ CREATE OR REPLACE VIEW table_privileges AS Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -221,11 +221,11 @@ CREATE OR REPLACE VIEW table_privileges AS The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -240,15 +240,15 @@ CREATE OR REPLACE VIEW table_privileges AS Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) - In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -279,15 +279,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. - In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -301,7 +301,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -315,16 +315,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. 
- However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -424,28 +424,28 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -453,7 +453,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Allow window functions to be used in sub-SELECTs that + Allow window functions to be used in sub-SELECTs that are within the arguments of an aggregate function (Tom Lane) @@ -461,56 +461,56 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... SET does (Peter Eisentraut) Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. - Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -521,20 +521,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. 
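For the ALTER USER ... SET item above, spellings such as the following (role and database names are hypothetical) are now accepted in the same forms that ALTER ROLE ... SET allows:

    ALTER USER alice IN DATABASE sales SET work_mem = '64MB';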
- In libpq, reset GSS/SASL and SSPI authentication + In libpq, reset GSS/SASL and SSPI authentication state properly after a failed connection attempt (Michael Paquier) @@ -547,9 +547,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -560,8 +560,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump and pg_restore to - emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) + Fix pg_dump and pg_restore to + emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) @@ -572,7 +572,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump with the option to drop event triggers as expected (Tom Lane) @@ -585,14 +585,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -603,14 +603,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
@@ -618,13 +618,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix dumping of outer joins with empty constraints, such as the result - of a NATURAL LEFT JOIN with no common columns (Tom Lane) + of a NATURAL LEFT JOIN with no common columns (Tom Lane) - Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -632,7 +632,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -644,8 +644,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -657,9 +657,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In postgres_fdw, re-establish connections to remote - servers after ALTER SERVER or ALTER USER - MAPPING commands (Kyotaro Horiguchi) + In postgres_fdw, re-establish connections to remote + servers after ALTER SERVER or ALTER USER + MAPPING commands (Kyotaro Horiguchi) @@ -670,7 +670,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In postgres_fdw, allow cancellation of remote + In postgres_fdw, allow cancellation of remote transaction control commands (Robert Haas, Rafia Sabih) @@ -682,7 +682,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -702,27 +702,27 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) @@ -772,18 +772,18 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. 
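The pg_user_mappings rule above can be verified from an unprivileged session; under the tightened definition, umoptions reads as NULL for rows the current user is not entitled to see:

    SELECT srvname, usename, umoptions
    FROM pg_user_mappings;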
@@ -807,7 +807,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. To fix, @@ -821,17 +821,17 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Restore libpq's recognition of - the PGREQUIRESSL environment variable (Daniel Gustafsson) + Restore libpq's recognition of + the PGREQUIRESSL environment variable (Daniel Gustafsson) Processing of this environment variable was unintentionally dropped - in PostgreSQL 9.3, but its documentation remained. + in PostgreSQL 9.3, but its documentation remained. This creates a security hazard, since users might be relying on the environment variable to force SSL-encrypted connections, but that would no longer be guaranteed. Restore handling of the variable, - but give it lower priority than PGSSLMODE, to avoid + but give it lower priority than PGSSLMODE, to avoid breaking configurations that work correctly with post-9.3 code. (CVE-2017-7485) @@ -839,7 +839,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -852,7 +852,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -860,7 +860,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. @@ -875,19 +875,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -895,27 +895,27 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... 
VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. @@ -929,33 +929,33 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. @@ -973,21 +973,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -1002,20 +1002,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -1027,26 +1027,26 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). 
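For the cursor_to_xml() fix noted earlier in this group, a minimal reproduction sketch with tableforest set to false; the cursor name is arbitrary:

    BEGIN;
    DECLARE c CURSOR FOR SELECT 1 AS x, 'two' AS y;
    SELECT cursor_to_xml('c', 10, false, false, '');
    -- arguments: cursor name, row count, nulls, tableforest, target namespace
    COMMIT;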
- In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) - Fix contrib/pg_trgm's extraction of trigrams from regular + Fix contrib/pg_trgm's extraction of trigrams from regular expressions (Tom Lane) @@ -1059,7 +1059,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In contrib/postgres_fdw, + In contrib/postgres_fdw, transmit query cancellation requests to the remote server (Michael Paquier, Etsuro Fujita) @@ -1101,7 +1101,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -1115,9 +1115,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -1130,15 +1130,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1192,15 +1192,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. 
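If an index may have been affected by the CREATE INDEX CONCURRENTLY race just described, one low-impact way to rebuild it without blocking writes (all object names hypothetical):

    CREATE INDEX CONCURRENTLY my_index_new ON my_table (my_column);
    DROP INDEX CONCURRENTLY my_index;
    ALTER INDEX my_index_new RENAME TO my_index;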
Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier). Previously, this was skipped when wal_level = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash.

Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier).

This avoids "could not find trigger NNN" or "relation NNN has no triggers" errors.

Fix processing of the OID column when a table with OIDs is associated to a parent with OIDs via ALTER TABLE ... INHERIT (Amit Langote).

Report correct object identity during ALTER TEXT SEARCH CONFIGURATION (Artur Zakirov).

Prevent multicolumn expansion of foo.* in an UPDATE source expression (Tom Lane). This led to "UPDATE target count mismatch --- internal error". Now the syntax is understood as a whole-row variable, as it would be in other contexts.

Ensure that column typmods are determined accurately for multi-row VALUES constructs (Tom Lane). This fixes problems occurring when the first value in a column has a determinable typmod (e.g., length for a varchar value) but later values don't share the same limit (see the example below).

Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode string literal (U&'...') or Unicode identifier (U&"...").
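A minimal illustration of the multi-row VALUES case described above; the first row's value carries a varchar(4) type modifier that the second row's value does not fit, so the column's typmod has to be derived from all the rows rather than only the first:

    SELECT * FROM (VALUES ('abcd'::varchar(4)), ('0123456789')) AS v(col);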
Ensure that a purely negative text search query, such as !foo, matches empty tsvectors (Tom Dunstan).

Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov).

Fix performance problems in ts_rewrite() (Tom Lane).

Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane).

Fix array_fill() to handle empty arrays properly (Tom Lane).

Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas).

Prevent multiple calls of pg_start_backup() and pg_stop_backup() from running concurrently (Michael Paquier).

Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane). In some cases, a cast that should result in zeroing out low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL MONTH to INTERVAL YEAR failed to clear the months field (see the sketch below).

Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost).

Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas).

Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane). This matches the (previously undocumented) behavior of PL/Tcl's spi_exec and spi_execp commands, namely that a magic .tupno column is inserted only if there isn't a real column named that.
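The interval-cast entry above can be illustrated with literal values; the results in the comments are what the typmod semantics call for and are stated as expectations, not verified output:

    -- casting to INTERVAL YEAR should clear the low-order months field
    SELECT (interval '1 year 5 months')::interval year;   -- expected: 1 year
    SELECT (interval '5 months')::interval year;          -- expected: 00:00:00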
Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing).

Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa).

Fix psql's tab completion for ALTER DEFAULT PRIVILEGES (Gilles Darold, Stephen Frost).

In psql, treat an empty or all-blank setting of the PAGER environment variable as meaning no pager (Tom Lane).

Improve contrib/dblink's reporting of low-level libpq errors, such as out-of-memory (Joe Conway).

Teach contrib/dblink to ignore irrelevant server options when it uses a contrib/postgres_fdw foreign server as the source of connection options (Corey Huinker). Previously, if the foreign server object had options that were not also libpq connection options, an error occurred (see the sketch below).

Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey.

[...] crash recovery, or to be written incorrectly on a standby server. Bogus entries in a free space map could lead to attempts to access pages that have been truncated away from the relation itself, typically producing errors like "could not read block XXX: read only 0 of 8192 bytes". Checksum failures in the visibility map are also possible, if checksumming is enabled.

Procedures for determining whether there is a problem and repairing it if so are discussed at [...].

Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that have been updated by a subsequently-aborted transaction (Álvaro Herrera). In 9.5 and later, the SELECT would sometimes fail to return such tuples at all. A failure has not been proven to occur in earlier releases, but might be possible with concurrent updates.

Fix EXPLAIN to emit valid XML when track_io_timing is on (Markus Winand). Previously the XML output-format option produced syntactically invalid tags such as <I/O-Read-Time>. That is now rendered as <I-O-Read-Time>.
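A sketch of the dblink-over-postgres_fdw setup the entry above describes. The host, database, and credentials are placeholders, and actually connecting requires a reachable server; the point is that use_remote_estimate is a postgres_fdw option rather than a libpq connection option, which previously made dblink_connect() fail:

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE EXTENSION IF NOT EXISTS dblink;

    CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'db.example.com', dbname 'otherdb', use_remote_estimate 'true');
    CREATE USER MAPPING FOR CURRENT_USER SERVER remote_pg
        OPTIONS (user 'remote_user', password 'secret');

    -- dblink should now skip the non-libpq option instead of erroring out
    SELECT dblink_connect('myconn', 'remote_pg');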
Suppress printing of zeroes for unmeasured times in EXPLAIN (Maksim Milyutin). Certain option combinations resulted in printing zero values for times that actually aren't ever measured in that combination. Our general policy in EXPLAIN is not to print such fields at all, so do that consistently in all cases.

Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs). The timeout was meant to be 50 milliseconds, but it was actually only 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value.

Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote). Allow identical CHECK constraints to be added to a parent and child table in either order (see the example below). Prevent merging of a valid constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO INHERIT child constraint with an inherited constraint.

Remove artificial restrictions on the values accepted by numeric_in() and numeric_recv() (Tom Lane). We allow numeric values up to the limit of the storage format (more than 1e100000), so it seems fairly pointless that numeric_in() rejected scientific-notation exponents above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value.

Disallow starting a standalone backend with standby_mode turned on (Michael Paquier).

Don't try to share SSL contexts across multiple connections in libpq (Heikki Linnakangas).

Avoid corner-case memory leak in libpq (Tom Lane). The reported problem involved leaking an error report during PQreset(), but there might be related cases.

Make ecpg's --help and --version options work consistently with our other executables (Haribabu Kommi).

In pg_dump, never dump range constructor functions (Tom Lane). This oversight led to pg_upgrade failures with extensions containing range types, due to duplicate creation of the constructor functions.
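A minimal sketch of the either-order CHECK constraint case mentioned above; the table names are illustrative, and the expectation is that the second ADD CONSTRAINT merges with the first rather than failing:

    CREATE TABLE parent (x int);
    CREATE TABLE child () INHERITS (parent);

    -- identical constraint added to the child first, then to the parent
    ALTER TABLE child  ADD CONSTRAINT x_positive CHECK (x > 0);
    ALTER TABLE parent ADD CONSTRAINT x_positive CHECK (x > 0);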
In pg_xlogdump, retry opening new WAL segments when using the --follow option (Magnus Hagander).

Fix pg_xlogdump to cope with a WAL file that begins with a continuation record spanning more than one page (Pavan Deolasee).

Fix contrib/intarray/bench/bench.pl to print the results of the EXPLAIN it does when given the option to do so (Daniel Gustafsson).

If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused the pg_timezone_abbrevs view to fail altogether.

Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, [...] or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. But they will not be shown in the pg_timezone_names view nor used for output.

In this update, AMT is no longer shown as being in use to mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4.

Fix possible mis-evaluation of nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane). A CASE expression appearing within the test value subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by a CASE expression could result in passing the wrong test value to functions called within a CASE expression in the SQL function's body.
If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423)

Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. Also, ensure that when a conninfo string is used as a database name [...]

Fix handling of paired double quotes in psql's \connect and \password commands to match the documentation.

Introduce a new -reuse-previous option in psql's \connect command.

pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet.

These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424)

Fix corner-case misbehaviors for IS NULL/IS NOT NULL applied to nested composite values (Andrew Gierth, Tom Lane). The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS NULL yields TRUE), but this is not meant to apply recursively (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), and contrib/postgres_fdw could produce remote queries that misbehaved similarly (see the example below).

Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane).

Prevent crash in close_ps() (the point ## lseg operator) for NaN input coordinates (Tom Lane).

Avoid possible crash in pg_get_expr() when inconsistent values are passed to it (Michael Paquier, Thomas Munro).

Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut). In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory.
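The nested-composite IS NULL behavior described above can be checked directly; these two queries restate the examples from the entry:

    SELECT ROW(NULL, NULL) IS NULL;               -- true: every field is null
    SELECT ROW(NULL, ROW(NULL, NULL)) IS NULL;    -- false: the test is not recursive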
Do not run the planner on the query contained in CREATE MATERIALIZED VIEW or CREATE TABLE AS when WITH NO DATA is specified (Michael Paquier, Tom Lane); a short example follows this group of entries.

Avoid unsafe intermediate state during expensive paths through heap_update() (Masahiko Sawada, Andres Freund).

Avoid unnecessary "could not serialize access" errors when acquiring FOR KEY SHARE row locks in serializable mode (Álvaro Herrera).

Avoid crash in postgres -C when the specified variable has a null string value (Michael Paquier).

Avoid consuming a transaction ID during VACUUM (Alexander Korotkov). Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing.

Avoid canceling hot-standby queries during VACUUM FREEZE (Simon Riggs, Álvaro Herrera). VACUUM FREEZE on an otherwise-idle master server could result in unnecessary cancellations of queries on its standby servers.

The usual symptom of this bug is errors like "MultiXactId NNN has not been created yet -- apparent wraparound".

When a manual ANALYZE specifies a column list, don't reset the table's changes_since_analyze counter (Tom Lane).

Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane).

Fix contrib/btree_gin to handle the smallest possible bigint value correctly (Peter Eisentraut).

It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure that PQserverVersion() returns the correct value for such cases.
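A short sketch of the WITH NO DATA form mentioned above; the sales table is hypothetical:

    -- only the definition is stored; the contained query is not planned or
    -- executed until the view is refreshed
    CREATE MATERIALIZED VIEW mv_totals AS
        SELECT region, sum(amount) AS total
        FROM sales
        GROUP BY region
    WITH NO DATA;

    REFRESH MATERIALIZED VIEW mv_totals;   -- the query runs here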
Fix ecpg's code for unsigned long long array elements (Michael Meskes).

In pg_dump with both -c and -C options, avoid emitting an unwanted CREATE SCHEMA public command (David Johnston, Tom Lane).

Improve handling of SIGTERM/control-C in parallel pg_dump and pg_restore (Tom Lane). Make sure that the worker processes will exit promptly, and also arrange to send query-cancel requests to the connected backends, in case they are doing something long-running such as a CREATE INDEX.

Fix error reporting in parallel pg_dump and pg_restore (Tom Lane). Previously, errors reported by pg_dump or pg_restore worker processes might never make it to the user's console, because the messages went through the master process, and there were various deadlock scenarios that would prevent the master process from passing on the messages. Instead, just print everything to stderr. In some cases this will result in duplicate messages (for instance, if all the workers report a server shutdown), but that seems better than no message.

Ensure that parallel pg_dump or pg_restore on Windows will shut down properly after an error (Kyotaro Horiguchi).

Make pg_dump behave better when built without zlib support (Kyotaro Horiguchi).

Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao).

Be more predictable about reporting "statement timeout" versus "lock timeout" (Tom Lane). On heavily loaded machines, the regression tests sometimes failed due to reporting "lock timeout" even though the statement timeout should have occurred first.

Update our copy of the timezone code to match IANA's tzcode release 2016c (Tom Lane).

Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco.

[...] using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue.
Failures have been reported specifically when a client application uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection.

Fix "failed to build any N-way joins" planner error with a full join enclosed in the right-hand side of a left join (Tom Lane).

Given a three-or-more-way equivalence class of variables, such as X.X = Y.Y = Z.Z, it was possible for the planner to omit some of the tests needed to enforce that all the variables are actually equal, leading to join rows being output that didn't satisfy the WHERE clauses. For various reasons, erroneous plans were seldom selected in practice, so that this bug has gone undetected for a long time.

Fix possible misbehavior of TH, th, and Y,YYY format codes in to_timestamp() (Tom Lane).

Fix dumping of rules and views in which the array argument of a "value operator ANY (array)" construct is a sub-SELECT (Tom Lane).

Make pg_regress use a startup timeout from the PGCTLTIMEOUT environment variable, if that's set (Tom Lane). This is for consistency with a behavior recently added to pg_ctl; it eases automated testing on slow machines.

Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane). In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore.

Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old (Tom Lane). pg_upgrade had special-case code to handle the situation where the new PostgreSQL version thinks that a table should have a TOAST table while the old version did not.
That code was broken, so remove it, and instead do nothing in such cases; there seems no reason to believe that we can't get along fine without [...]

Reduce the number of SysV semaphores used by a build configured with --disable-spinlocks (Tom Lane).

Rename internal function strtoi() to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro).

Fix reporting of errors from bind() and listen() system calls on Windows (Tom Lane).

Fix putenv() to work properly with Visual Studio 2013 (Michael Paquier).

Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich). Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful.

Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions.

Fix incorrect handling of NULL index entries in indexed ROW() comparisons (Tom Lane). An index search using a row comparison such as ROW(a, b) > ROW('x', 'y') would stop upon reaching a NULL entry in the b column, ignoring the fact that there might be non-NULL b values associated with later values of a (see the sketch below).

Avoid unlikely data-loss scenarios due to renaming files without adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund).

Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes).

Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane).

Fix parsing of affix files for ispell dictionaries (Tom Lane). The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for example I in Turkish UTF8 locales.
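A sketch of the kind of search the indexed ROW() comparison fix addresses. The table and index are illustrative; with only a couple of rows the planner may not actually choose an index scan, but the expected answer is the same either way:

    CREATE TABLE pairs (a text, b text);
    CREATE INDEX pairs_ab_idx ON pairs (a, b);
    INSERT INTO pairs VALUES ('x', NULL), ('y', 'yy');

    -- the scan must not give up at the NULL in b for a = 'x';
    -- the row ('y', 'yy') still satisfies the row comparison
    SELECT * FROM pairs WHERE ROW(a, b) > ROW('x', 'y');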
Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov).

Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas).

Fix psql's tab completion for SECURITY LABEL (Tom Lane). Pressing TAB after SECURITY LABEL might cause a crash or the offering of inappropriate keywords.

Make pg_ctl accept a wait timeout from the PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch).

Fix incorrect test for Windows service status in pg_ctl (Manuel Mathar). The previous set of minor releases attempted to fix pg_ctl to properly determine whether to send log messages to the Windows Event Log, but got the test backwards.

Fix pgbench to correctly handle the combination of -C and -M prepared options (Tom Lane).

In pg_upgrade, skip creating a deletion script when the new data directory is inside the old data directory (Bruce Momjian).

Fix multiple mistakes in the statistics returned by contrib/pgstattuple's pgstatindex() function (Tom Lane); a short example of calling it follows this group of entries.

Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan).

Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia [...]

Perform an immediate shutdown if the postmaster.pid file is removed (Tom Lane). The postmaster now checks every minute or so that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had received SIGQUIT. The main motivation for this change is to ensure that failed buildfarm runs will get cleaned up without manual intervention; but it also serves to limit the bad effects if a DBA forcibly removes postmaster.pid and then starts a new postmaster.
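A quick way to look at the statistics the pgstatindex() fix concerns, run here against a system catalog index so no setup is needed beyond installing the extension:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- per-index figures such as leaf density and fragmentation
    SELECT * FROM pgstatindex('pg_class_oid_index');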
In SERIALIZABLE transaction isolation mode, serialization anomalies could be missed due to race conditions during insertions (Kevin Grittner, Thomas Munro).

Fix failure to emit appropriate WAL records when doing ALTER TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, Andres Freund).

Fix ALTER COLUMN TYPE to reconstruct inherited check constraints properly (Tom Lane).

Fix REASSIGN OWNED to change ownership of composite types properly (Álvaro Herrera).

Fix REASSIGN OWNED and ALTER OWNER to correctly update granted-permissions lists when changing owners of data types, foreign data wrappers, or foreign servers (Bruce Momjian, Álvaro Herrera).

Fix REASSIGN OWNED to ignore foreign user mappings, rather than fail (Álvaro Herrera).

Fix planner's handling of LATERAL references (Tom Lane). This fixes some corner cases that led to "failed to build any N-way joins" or "could not devise a query plan" planner failures (see the sketch below).

Speed up generation of unique table aliases in EXPLAIN and rule dumping, and ensure that generated aliases do not exceed NAMEDATALEN (Tom Lane).

Fix dumping of whole-row Vars in ROW() and VALUES() lists (Tom Lane).

Fix possible internal overflow in numeric division (Dean Rasheed).

This causes the code to emit "regular expression is too complex" errors in some cases that previously used unreasonable amounts of time and memory.

Make %h and %r escapes in log_line_prefix work for messages emitted due to log_connections (Tom Lane). Previously, %h/%r started to work just after a new session had emitted the "connection received" log message; now they work for that message too.

This oversight resulted in failure to recover from crashes whenever logging_collector is turned on.
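For reference, this is the shape of query the LATERAL-reference entry above is about; the orders and order_items tables are hypothetical:

    -- the subquery refers to o.id from an earlier FROM item, which is only
    -- legal because of the LATERAL keyword
    SELECT o.id, i.total
    FROM orders AS o,
         LATERAL (SELECT sum(amount) AS total
                  FROM order_items
                  WHERE order_id = o.id) AS i;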
In psql, ensure that libreadline's idea of the screen size is updated when the terminal window size changes (Merlin Moncure). Previously, libreadline did not notice if the window was resized during query output, leading to strange behavior during later input of multiline queries.

Fix psql's \det command to interpret its pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart).

Avoid possible crash in psql's \c command when the previous connection was via Unix socket and the command specifies a new hostname and same username (Tom Lane).

In pg_ctl start -w, test child process status directly rather than relying on heuristics (Tom Lane, Michael Paquier). Previously, pg_ctl relied on an assumption that the new postmaster would always create postmaster.pid within five seconds. But that can fail on heavily-loaded systems, causing pg_ctl to report incorrectly that the postmaster failed to start. Except on Windows, this change also means that a "pg_ctl start -w" done immediately after another such command will now reliably fail, whereas previously it would report success if done within two seconds of the first command.

In pg_ctl start -w, don't attempt to use a wildcard listen address to connect to the postmaster (Kondo Yuta). On Windows, pg_ctl would fail to detect postmaster startup if listen_addresses is set to 0.0.0.0 or ::, because it would try to use that value verbatim as the address to connect to, which doesn't work. Instead assume that 127.0.0.1 or ::1, respectively, is the right thing to use.

In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier).

In pg_dump and pg_basebackup, adopt the GNU convention for handling tar-archive members exceeding 8GB (Tom Lane). The POSIX standard for tar file format does not allow archive member files to exceed 8GB, but most modern implementations of tar support an extension that fixes that. Adopt this extension so that pg_dump with -Ft no longer fails on tables with more than 8GB of data, and so
that pg_basebackup can handle files larger than 8GB. In addition, fix some portability issues that could cause failures for members between 4GB and 8GB on some platforms. Potentially these problems could cause unrecoverable data loss due to unreadable backup files.

Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane).

Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during parallel pg_restore (Tom Lane).

Ensure that relation option values are properly quoted in pg_dump (Kouhei Sutou, Tom Lane). A reloption value that isn't a simple identifier or number could lead to dump/reload failures due to syntax errors in CREATE statements issued by pg_dump. This is not an issue with any reloption currently supported by core PostgreSQL, but extensions could allow reloptions that cause the problem.

Avoid repeated password prompts during parallel pg_dump (Zeus Kronion).

Fix pg_upgrade's file-copying code to handle errors properly on Windows (Bruce Momjian).

Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier).

Fix failure to localize messages emitted by pg_receivexlog and pg_recvlogical (Ioseph Kim).

Avoid dump/reload problems when using both plpython2 and plpython3 (Tom Lane). In principle, both versions of PL/Python can be used in the same database, though not in the same session (because the two versions of libpython cannot safely be used concurrently). However, pg_restore and pg_upgrade both do things that can fall foul of the same-session restriction. Work around that by changing the timing of the check.

Fix PL/Python regression tests to pass with Python 3.5 (Peter Eisentraut).

Fix premature clearing of libpq's input buffer when socket EOF is seen (Tom Lane). This mistake caused libpq to sometimes not report the backend's final error message before reporting "server closed the connection unexpectedly".
Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch). This change mitigates a PL/Java security bug (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for sites that update PostgreSQL more frequently than PL/Java, make the core code aware of them also.

Improve libpq's handling of out-of-memory situations (Michael Paquier, Amit Kapila, Heikki Linnakangas).

Fix order of arguments in ecpg-generated typedef statements (Michael Meskes).

Use %g not %f format in ecpg's PGTYPESnumeric_from_double() (Tom Lane).

Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes). Such a comment is rejected by ecpg. It's not yet clear whether ecpg itself should be changed.

Fix hstore_to_json_loose()'s test for whether an hstore value can be converted to a JSON number (Tom Lane); a short example follows this group of entries.

Ensure that contrib/pgcrypto's crypt() function can be interrupted by query cancel (Andreas Karlsson).

Accept flex versions later than 2.5.x (Tom Lane, Michael Paquier).

Install our "missing" script where PGXS builds can find it (Jim Nasby). This allows sane behavior in a PGXS build done on a machine where build tools such as bison are missing.

Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier).

Add variant regression test expected-output file to match behavior of current libxml2 (Tom Lane). The fix for libxml2's CVE-2015-7499 causes it not to output error context reports in some cases where it used to do so. This seems to be a bug, but we'll probably have to live with it for some time, so work around it.

Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan.
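A small demonstration of the decision hstore_to_json_loose() has to make, assuming the hstore extension can be installed; 42 and 1.5 can be emitted as JSON numbers, while 3dmodel starts with a digit but must remain a string:

    CREATE EXTENSION IF NOT EXISTS hstore;

    SELECT hstore_to_json_loose('count=>42, ratio=>1.5, label=>3dmodel'::hstore);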
Guard against stack overflows in json parsing (Oskari Saarenmaa). If an application constructs PostgreSQL json or jsonb values from arbitrary user input, the application's users can reliably crash the PostgreSQL server, causing momentary denial of service. (CVE-2015-5289; see the sketch below)

Fix contrib/pgcrypto to detect and report too-short crypt() salts (Josh Kupershmidt).

Fix insertion of relations into the relation cache init file (Tom Lane). An oversight in a patch in the most recent minor releases caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing [...]

Improve LISTEN startup time when there are many unread notifications (Matt Newell).

This was seen primarily when restoring pg_dump output for databases with many thousands of tables.

[...] too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value of ssl_renegotiation_limit to zero (disabled).
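The json stack-overflow guard can be poked at with pathologically nested input; after the fix this is expected to end in a clean error (or succeed) rather than crash the server:

    SELECT (repeat('[', 100000) || repeat(']', 100000))::json;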
Lower the minimum values of the *_freeze_max_age parameters (Andres Freund).

Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus).

Avoid logging complaints when a parameter that can only be set at server start appears multiple times in postgresql.conf, and fix counting of line numbers after an include_dir directive (Tom Lane).

Fix rare internal overflow in multiplication of numeric values (Dean Rasheed).

Guard against hard-to-reach stack overflows involving record types, range types, json, jsonb, tsquery, ltxtquery and query_int (Noah Misch).

Fix handling of DOW and DOY in datetime input (Greg Stark). These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather than "invalid input syntax" (see the sketch below).

Add recursion depth protections to regular expression, SIMILAR TO, and LIKE matching (Tom Lane).

Fix unexpected "out-of-memory situation during sort" errors when using tuplestores with small work_mem settings (Tom Lane).

Fix very-low-probability stack overrun in qsort (Tom Lane).

Fix "invalid memory alloc request size" failure in hash joins with large work_mem settings (Tomas Vondra, Tom Lane).

These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as "could not devise a query plan for the given query", "could not find pathkey item to sort", "plan should not reference subplan's variable", or "failed to assign all NestLoopParams to plan nodes". Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems.
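A sketch of what the DOW/DOY entry above is about; the first statement is the legitimate use of these tokens, while the second feeds one of them in as a datetime value and is expected to be rejected with a normal invalid-input-syntax error rather than an internal one:

    SELECT extract(dow FROM date '2016-10-01');   -- day of week, a valid use
    SELECT date 'doy';                            -- expected to fail cleanly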
Improve planner's performance for UPDATE/DELETE on large inheritance sets (Tom Lane, Dean Rasheed).

During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove the postmaster.pid file (Tom Lane). This avoids race-condition failures if an external script attempts to start a new postmaster as soon as pg_ctl stop returns.

Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane).

VACUUM attempted to recycle such pages, but did so in a way that wasn't crash-safe.

Fix off-by-one error that led to otherwise-harmless warnings about "apparent wraparound" in subtrans/multixact truncation (Thomas Munro).

Fix misreporting of CONTINUE and MOVE statement types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane).

Fix PL/Perl to handle non-ASCII error message texts correctly (Alex Hunsaker).

Fix PL/Python crash when returning the string representation of a record result (Tom Lane).

Fix some places in PL/Tcl that neglected to check for failure of malloc() calls (Michael Paquier, Álvaro Herrera).

In contrib/isn, fix output of ISBN-13 numbers that begin with 979 (Fabien Coelho).

Improve contrib/postgres_fdw's handling of collation-related decisions (Tom Lane). The main user-visible effect is expected to be that comparisons involving varchar columns will be sent to the remote server for execution in more cases than before.

Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas).

Fix memory leaks and missing out-of-memory checks in ecpg (Michael Paquier).

Fix psql's code for locale-aware formatting of numeric output (Tom Lane). The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point.
It could also mangle already-localized - output from the money data type. + output from the money data type. - Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) - Make pg_dump handle inherited NOT VALID + Make pg_dump handle inherited NOT VALID check constraints correctly (Tom Lane) - Fix selection of default zlib compression level - in pg_dump's directory output format (Andrew Dunstan) + Fix selection of default zlib compression level + in pg_dump's directory output format (Andrew Dunstan) - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) @@ -4015,11 +4015,11 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 When dumping data types from pre-9.2 servers, and when dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. @@ -4028,18 +4028,18 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. - Fix assorted minor memory leaks in pg_dump and other + Fix assorted minor memory leaks in pg_dump and other client-side programs (Michael Paquier) @@ -4047,11 +4047,11 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. 
@@ -4059,14 +4059,14 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -4078,38 +4078,38 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. @@ -4141,7 +4141,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 However, if you are upgrading an installation that was previously - upgraded using a pg_upgrade version between 9.3.0 and + upgraded using a pg_upgrade version between 9.3.0 and 9.3.4 inclusive, see the first changelog entry below. @@ -4164,52 +4164,52 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Recent PostgreSQL releases introduced mechanisms to + Recent PostgreSQL releases introduced mechanisms to protect against multixact wraparound, but some of that code did not account for the possibility that it would need to run during crash recovery, when the database may not be in a consistent state. This could result in failure to restart after a crash, or failure to start up a secondary server. The lingering effects of a previously-fixed - bug in pg_upgrade could also cause such a failure, in - installations that had used pg_upgrade versions + bug in pg_upgrade could also cause such a failure, in + installations that had used pg_upgrade versions between 9.3.0 and 9.3.4. - The pg_upgrade bug in question was that it would - set oldestMultiXid to 1 in pg_control even + The pg_upgrade bug in question was that it would + set oldestMultiXid to 1 in pg_control even if the true value should be higher. With the fixes introduced in this release, such a situation will result in immediate emergency - autovacuuming until a correct oldestMultiXid value can be + autovacuuming until a correct oldestMultiXid value can be determined. If that would pose a hardship, users can avoid it by - doing manual vacuuming before upgrading to this release. + doing manual vacuuming before upgrading to this release. In detail: - Check whether pg_controldata reports Latest - checkpoint's oldestMultiXid to be 1. If not, there's nothing + Check whether pg_controldata reports Latest + checkpoint's oldestMultiXid to be 1. 
If not, there's nothing to do. - Look in PGDATA/pg_multixact/offsets to see if there's a - file named 0000. If there is, there's nothing to do. + Look in PGDATA/pg_multixact/offsets to see if there's a + file named 0000. If there is, there's nothing to do. Otherwise, for each table that has - pg_class.relminmxid equal to 1, - VACUUM that table with + pg_class.relminmxid equal to 1, + VACUUM that table with both and set to zero. (You can use the vacuum cost delay parameters described in to reduce the performance consequences for concurrent sessions.) You must - use PostgreSQL 9.3.5 or later to perform this step. + use PostgreSQL 9.3.5 or later to perform this step. @@ -4223,7 +4223,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -4234,13 +4234,13 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -4302,12 +4302,12 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. @@ -4319,28 +4319,28 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Also apply the same rules in initdb --sync-only. + Also apply the same rules in initdb --sync-only. This case is less critical but it should act similarly. - Fix pg_get_functiondef() to show - functions' LEAKPROOF property, if set (Jeevan Chalke) + Fix pg_get_functiondef() to show + functions' LEAKPROOF property, if set (Jeevan Chalke) - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. @@ -4355,15 +4355,15 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Allow libpq to use TLS protocol versions beyond v1 + Allow libpq to use TLS protocol versions beyond v1 (Noah Misch) - For a long time, libpq was coded so that the only SSL + For a long time, libpq was coded so that the only SSL protocol it would allow was TLS v1. Now that newer TLS versions are becoming popular, allow it to negotiate the highest commonly-supported - TLS version with the server. 
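A minimal SQL sketch of the pre-upgrade check laid out in the steps above (the pg_controldata step is a shell command and is not shown). The two settings forced to zero are assumed to be vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age, since their names are not legible in this hunk; the table name in the final VACUUM is hypothetical:

    -- Is there an offsets segment named 0000?  If so, there is nothing to do.
    SELECT EXISTS (SELECT 1
                   FROM pg_ls_dir('pg_multixact/offsets') AS d(file)
                   WHERE file = '0000') AS has_segment_0000;

    -- Otherwise, list ordinary tables whose relminmxid is 1 ...
    SELECT oid::regclass AS table_to_vacuum
    FROM pg_class
    WHERE relminmxid = 1 AND relkind = 'r';

    -- ... and VACUUM each of them with the multixact freeze ages forced to zero
    -- (assumed parameter names, see above).
    SET vacuum_multixact_freeze_min_age = 0;
    SET vacuum_multixact_freeze_table_age = 0;
    VACUUM some_table;   -- hypothetical name; repeat for each table listed above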
(PostgreSQL servers were + TLS version with the server. (PostgreSQL servers were already capable of such negotiation, so no change is needed on the server side.) This is a back-patch of a change already released in 9.4.0. @@ -4397,8 +4397,8 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - However, if you use contrib/citext's - regexp_matches() functions, see the changelog entry below + However, if you use contrib/citext's + regexp_matches() functions, see the changelog entry below about that. @@ -4436,7 +4436,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. In the worst case this might lead to information exposure, due to our @@ -4446,7 +4446,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -4456,15 +4456,15 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -4479,7 +4479,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 Under certain usage patterns, the existing defenses against this might - be insufficient, allowing pg_multixact/members files to be + be insufficient, allowing pg_multixact/members files to be removed too early, resulting in data loss. The fix for this includes modifying the server to fail transactions that would result in overwriting old multixact member ID data, and @@ -4491,16 +4491,16 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Fix incorrect declaration of contrib/citext's - regexp_matches() functions (Tom Lane) + Fix incorrect declaration of contrib/citext's + regexp_matches() functions (Tom Lane) - These functions should return setof text[], like the core + These functions should return setof text[], like the core functions they are wrappers for; but they were incorrectly declared as - returning just text[]. This mistake had two results: first, + returning just text[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you - should get is an empty set (zero rows). Second, the g flag + should get is an empty set (zero rows). Second, the g flag was effectively ignored, since you would get only one result array even if there were multiple matches. 
@@ -4508,16 +4508,16 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 While the latter behavior is clearly a bug, there might be applications depending on the former behavior; therefore the function declarations - will not be changed by default until PostgreSQL 9.5. + will not be changed by default until PostgreSQL 9.5. In pre-9.5 branches, the old behavior exists in version 1.0 of - the citext extension, while we have provided corrected - declarations in version 1.1 (which is not installed by + the citext extension, while we have provided corrected + declarations in version 1.1 (which is not installed by default). To adopt the fix in pre-9.5 branches, execute - ALTER EXTENSION citext UPDATE TO '1.1' in each database in - which citext is installed. (You can also update + ALTER EXTENSION citext UPDATE TO '1.1' in each database in + which citext is installed. (You can also update back to 1.0 if you need to undo that.) Be aware that either update direction will require dropping and recreating any views or rules that - use citext's regexp_matches() functions. + use citext's regexp_matches() functions. @@ -4559,7 +4559,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -4587,7 +4587,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -4595,7 +4595,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -4609,7 +4609,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -4629,19 +4629,19 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Cope with unexpected signals in LockBufferForCleanup() + Cope with unexpected signals in LockBufferForCleanup() (Andres Freund) This oversight could result in spurious errors about multiple - backends attempting to wait for pincount 1. + backends attempting to wait for pincount 1. - Fix crash when doing COPY IN to a table with check + Fix crash when doing COPY IN to a table with check constraints that contain whole-row references (Tom Lane) @@ -4688,18 +4688,18 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - ANALYZE executes index expressions many times; if there are + ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to - cancel the ANALYZE before that loop finishes. + cancel the ANALYZE before that loop finishes. 
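Following the contrib/citext entries above, the update itself is one command per database; a short sketch (remember that views or rules using these functions must be dropped and recreated around the update):

    ALTER EXTENSION citext UPDATE TO '1.1';

    -- with the 1.1 declarations the 'g' flag yields one row per match,
    -- matching the core regexp_matches() behavior:
    SELECT regexp_matches('fooBARfoo'::citext, '(foo)', 'g');

    -- ALTER EXTENSION citext UPDATE TO '1.0' restores the old behavior if needed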
- Ensure tableoid of a foreign table is reported - correctly when a READ COMMITTED recheck occurs after - locking rows in SELECT FOR UPDATE, UPDATE, - or DELETE (Etsuro Fujita) + Ensure tableoid of a foreign table is reported + correctly when a READ COMMITTED recheck occurs after + locking rows in SELECT FOR UPDATE, UPDATE, + or DELETE (Etsuro Fujita) @@ -4719,20 +4719,20 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Recommend setting include_realm to 1 when using + Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost) Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but - it will become the default setting in PostgreSQL 9.5. + it will become the default setting in PostgreSQL 9.5. - Remove code for matching IPv4 pg_hba.conf entries to + Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane) @@ -4745,20 +4745,20 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 crashes on some systems, so let's just remove it rather than fix it. (Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of - IPv4 pg_hba.conf entries, which does not seem like a good + IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.) - Report WAL flush, not insert, position in IDENTIFY_SYSTEM + Report WAL flush, not insert, position in IDENTIFY_SYSTEM replication command (Heikki Linnakangas) This avoids a possible startup failure - in pg_receivexlog. + in pg_receivexlog. @@ -4766,14 +4766,14 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the - service too soon; and ensure that pg_ctl will wait for + service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj) - Reduce risk of network deadlock when using libpq's + Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas) @@ -4782,32 +4782,32 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM - STDIN.) This worked properly in the normal blocking mode, but not - so much in non-blocking mode. We've modified libpq + STDIN.) This worked properly in the normal blocking mode, but not + so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, - and be sure to call PQconsumeInput() upon read-ready. + and be sure to call PQconsumeInput() upon read-ready. 
- In libpq, fix misparsing of empty values in URI + In libpq, fix misparsing of empty values in URI connection strings (Thomas Fanghaenel) - Fix array handling in ecpg (Michael Meskes) + Fix array handling in ecpg (Michael Meskes) - Fix psql to sanely handle URIs and conninfo strings as - the first parameter to \connect + Fix psql to sanely handle URIs and conninfo strings as + the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera) @@ -4820,38 +4820,38 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Suppress incorrect complaints from psql on some - platforms that it failed to write ~/.psql_history at exit + Suppress incorrect complaints from psql on some + platforms that it failed to write ~/.psql_history at exit (Tom Lane) This misbehavior was caused by a workaround for a bug in very old - (pre-2006) versions of libedit. We fixed it by + (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear - for anyone still using such versions of libedit. - Recommendation: upgrade that library, or use libreadline. + for anyone still using such versions of libedit. + Recommendation: upgrade that library, or use libreadline. - Fix pg_dump's rule for deciding which casts are + Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane) - In pg_dump, fix failure to honor -Z - compression level option together with -Fd + In pg_dump, fix failure to honor -Z + compression level option together with -Fd (Michael Paquier) - Make pg_dump consider foreign key relationships + Make pg_dump consider foreign key relationships between extension configuration tables while choosing dump order (Gilles Darold, Michael Paquier, Stephen Frost) @@ -4864,21 +4864,21 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Avoid possible pg_dump failure when concurrent sessions + Avoid possible pg_dump failure when concurrent sessions are creating and dropping temporary functions (Tom Lane) - Fix dumping of views that are just VALUES(...) but have + Fix dumping of views that are just VALUES(...) but have column aliases (Tom Lane) - In pg_upgrade, force timeline 1 in the new cluster + In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian) @@ -4890,7 +4890,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - In pg_upgrade, check for improperly non-connectable + In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian) @@ -4898,28 +4898,28 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - In pg_upgrade, quote directory paths - properly in the generated delete_old_cluster script + In pg_upgrade, quote directory paths + properly in the generated delete_old_cluster script (Bruce Momjian) - In pg_upgrade, preserve database-level freezing info + In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian) This oversight could cause missing-clog-file errors for tables within - the postgres and template1 databases. + the postgres and template1 databases. 
- Run pg_upgrade and pg_resetxlog with + Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem) @@ -4927,15 +4927,15 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Improve handling of readdir() failures when scanning - directories in initdb and pg_basebackup + Improve handling of readdir() failures when scanning + directories in initdb and pg_basebackup (Marco Nenciarini) - Fix slow sorting algorithm in contrib/intarray (Tom Lane) + Fix slow sorting algorithm in contrib/intarray (Tom Lane) @@ -4953,7 +4953,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Update time zone data files to tzdata release 2015d + Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT). @@ -4988,11 +4988,11 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 However, if you are a Windows user and are using the Norwegian - (Bokmål) locale, manual action is needed after the upgrade to - replace any Norwegian (Bokmål)_Norway locale names stored - in PostgreSQL system catalogs with the plain-ASCII - alias Norwegian_Norway. For details see - + (Bokmål) locale, manual action is needed after the upgrade to + replace any Norwegian (Bokmål)_Norway locale names stored + in PostgreSQL system catalogs with the plain-ASCII + alias Norwegian_Norway. For details see + @@ -5026,15 +5026,15 @@ Branch: REL9_0_STABLE [56b970f2e] 2015-02-02 10:00:52 -0500 - Fix buffer overruns in to_char() + Fix buffer overruns in to_char() (Bruce Momjian) - When to_char() processes a numeric formatting template - calling for a large number of digits, PostgreSQL + When to_char() processes a numeric formatting template + calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted - timestamp formatting template, PostgreSQL would write + timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely. @@ -5054,27 +5054,27 @@ Branch: REL9_0_STABLE [9e05c5063] 2015-02-02 10:00:52 -0500 - Fix buffer overrun in replacement *printf() functions + Fix buffer overrun in replacement *printf() functions (Tom Lane) - PostgreSQL includes a replacement implementation - of printf and related functions. This code will overrun + PostgreSQL includes a replacement implementation + of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion - specifiers e, E, f, F, - g or G) with requested precision greater than + specifiers e, E, f, F, + g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through - the to_char() SQL function. While that is the only - affected core PostgreSQL functionality, extension + the to_char() SQL function. While that is the only + affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. - This issue primarily affects PostgreSQL on Windows. 
- PostgreSQL uses the system implementation of these + This issue primarily affects PostgreSQL on Windows. + PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. (CVE-2015-0242) @@ -5099,12 +5099,12 @@ Branch: REL9_0_STABLE [0a3ee8a5f] 2015-02-02 10:00:52 -0500 - Fix buffer overruns in contrib/pgcrypto + Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch) - Errors in memory size tracking within the pgcrypto + Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. The buffer overrun cases can crash the server, and we have not ruled out the possibility of @@ -5165,7 +5165,7 @@ Branch: REL9_0_STABLE [3a2063369] 2015-01-28 12:33:29 -0500 Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have - SELECT privilege on all columns of the table, this could + SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user. @@ -5214,14 +5214,14 @@ Branch: REL9_2_STABLE [6bf343c6e] 2015-01-16 13:10:23 +0200 - Cope with the Windows locale named Norwegian (Bokmål) + Cope with the Windows locale named Norwegian (Bokmål) (Heikki Linnakangas) Non-ASCII locale names are problematic since it's not clear what encoding they should be represented in. Map the troublesome locale - name to a plain-ASCII alias, Norwegian_Norway. + name to a plain-ASCII alias, Norwegian_Norway. @@ -5236,7 +5236,7 @@ Branch: REL9_0_STABLE [45a607d5c] 2014-11-04 13:24:26 -0500 Avoid possible data corruption if ALTER DATABASE SET - TABLESPACE is used to move a database to a new tablespace and then + TABLESPACE is used to move a database to a new tablespace and then shortly later move it back to its original tablespace (Tom Lane) @@ -5256,14 +5256,14 @@ Branch: REL9_0_STABLE [73f950fc8] 2014-10-30 13:03:39 -0400 - Avoid corrupting tables when ANALYZE inside a transaction + Avoid corrupting tables when ANALYZE inside a transaction is rolled back (Andres Freund, Tom Lane, Michael Paquier) If the failing transaction had earlier removed the last index, rule, or trigger from the table, the table would be left in a corrupted state - with the relevant pg_class flags not set though they + with the relevant pg_class flags not set though they should be. @@ -5278,8 +5278,8 @@ Branch: REL9_1_STABLE [d5fef87e9] 2014-10-20 23:47:45 +0200 Ensure that unlogged tables are copied correctly - during CREATE DATABASE or ALTER DATABASE SET - TABLESPACE (Pavan Deolasee, Andres Freund) + during CREATE DATABASE or ALTER DATABASE SET + TABLESPACE (Pavan Deolasee, Andres Freund) @@ -5291,12 +5291,12 @@ Branch: REL9_3_STABLE [e35db342a] 2014-09-22 16:19:59 -0400 Fix incorrect processing - of CreateEventTrigStmt.eventname (Petr + of CreateEventTrigStmt.eventname (Petr Jelinek) - This could result in misbehavior if CREATE EVENT TRIGGER + This could result in misbehavior if CREATE EVENT TRIGGER were executed as a prepared query, or via extended query protocol. 
@@ -5310,7 +5310,7 @@ Branch: REL9_1_STABLE [94d5d57d5] 2014-11-11 17:00:28 -0500 - Fix DROP's dependency searching to correctly handle the + Fix DROP's dependency searching to correctly handle the case where a table column is recursively visited before its table (Petr Jelinek, Tom Lane) @@ -5318,7 +5318,7 @@ Branch: REL9_1_STABLE [94d5d57d5] 2014-11-11 17:00:28 -0500 This case is only known to arise when an extension creates both a datatype and a table using that datatype. The faulty code might - refuse a DROP EXTENSION unless CASCADE is + refuse a DROP EXTENSION unless CASCADE is specified, which should not be required. @@ -5340,7 +5340,7 @@ Branch: REL9_0_STABLE [5308e085b] 2015-01-15 18:52:38 -0500 - In READ COMMITTED mode, queries that lock or update + In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug. @@ -5369,8 +5369,8 @@ Branch: REL9_3_STABLE [54a8abc2b] 2015-01-04 15:48:29 -0300 Fix failure to wait when a transaction tries to acquire a FOR - NO KEY EXCLUSIVE tuple lock, while multiple other transactions - currently hold FOR SHARE locks (Álvaro Herrera) + NO KEY EXCLUSIVE tuple lock, while multiple other transactions + currently hold FOR SHARE locks (Álvaro Herrera) @@ -5384,15 +5384,15 @@ Branch: REL9_0_STABLE [662eebdc6] 2014-12-11 21:02:41 -0500 - Fix planning of SELECT FOR UPDATE when using a partial + Fix planning of SELECT FOR UPDATE when using a partial index on a child table (Kyotaro Horiguchi) - In READ COMMITTED mode, SELECT FOR UPDATE must - also recheck the partial index's WHERE condition when + In READ COMMITTED mode, SELECT FOR UPDATE must + also recheck the partial index's WHERE condition when rechecking a recently-updated row to see if it still satisfies the - query's WHERE condition. This requirement was missed if the + query's WHERE condition. This requirement was missed if the index belonged to an inheritance child table, so that it was possible to incorrectly return rows that no longer satisfy the query condition. @@ -5408,12 +5408,12 @@ Branch: REL9_0_STABLE [f5e4e92fb] 2014-12-11 19:37:17 -0500 - Fix corner case wherein SELECT FOR UPDATE could return a row + Fix corner case wherein SELECT FOR UPDATE could return a row twice, and possibly miss returning other rows (Tom Lane) - In READ COMMITTED mode, a SELECT FOR UPDATE + In READ COMMITTED mode, a SELECT FOR UPDATE that is scanning an inheritance tree could incorrectly return a row from a prior child table instead of the one it should return from a later child table. @@ -5429,7 +5429,7 @@ Branch: REL9_3_STABLE [939f0fb67] 2015-01-15 13:18:19 -0500 - Improve performance of EXPLAIN with large range tables + Improve performance of EXPLAIN with large range tables (Tom Lane) @@ -5445,7 +5445,7 @@ Branch: REL9_0_STABLE [4ff49746e] 2014-08-09 13:46:52 -0400 Reject duplicate column names in the referenced-columns list of - a FOREIGN KEY declaration (David Rowley) + a FOREIGN KEY declaration (David Rowley) @@ -5462,7 +5462,7 @@ Branch: REL9_3_STABLE [6306d0712] 2014-07-22 13:30:14 -0400 - Re-enable error for SELECT ... OFFSET -1 (Tom Lane) + Re-enable error for SELECT ... 
OFFSET -1 (Tom Lane) @@ -5499,7 +5499,7 @@ Branch: REL9_3_STABLE [8571ecb24] 2014-12-02 15:02:43 -0500 - Fix json_agg() to not return extra trailing right + Fix json_agg() to not return extra trailing right brackets in its result (Tom Lane) @@ -5514,7 +5514,7 @@ Branch: REL9_0_STABLE [26f8a4691] 2014-09-11 23:31:06 -0400 - Fix bugs in raising a numeric value to a large integral power + Fix bugs in raising a numeric value to a large integral power (Tom Lane) @@ -5535,19 +5535,19 @@ Branch: REL9_0_STABLE [e6550626c] 2014-12-01 15:25:18 -0500 - In numeric_recv(), truncate away any fractional digits - that would be hidden according to the value's dscale field + In numeric_recv(), truncate away any fractional digits + that would be hidden according to the value's dscale field (Tom Lane) - A numeric value's display scale (dscale) should + A numeric value's display scale (dscale) should never be less than the number of nonzero fractional digits; but apparently there's at least one broken client application that - transmits binary numeric values in which that's true. + transmits binary numeric values in which that's true. This leads to strange behavior since the extra digits are taken into account by arithmetic operations even though they aren't printed. - The least risky fix seems to be to truncate away such hidden + The least risky fix seems to be to truncate away such hidden digits on receipt, so that the value is indeed what it prints as. @@ -5566,7 +5566,7 @@ Branch: REL9_2_STABLE [3359a818c] 2014-09-23 20:25:39 -0400 Matching would often fail when the number of allowed iterations is - limited by a ? quantifier or a bound expression. + limited by a ? quantifier or a bound expression. @@ -5601,7 +5601,7 @@ Branch: REL9_0_STABLE [10059c2da] 2014-10-27 10:51:38 +0200 - Fix bugs in tsquery @> tsquery + Fix bugs in tsquery @> tsquery operator (Heikki Linnakangas) @@ -5658,14 +5658,14 @@ Branch: REL9_0_STABLE [cebb3f032] 2015-01-17 22:37:32 -0500 - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -5685,7 +5685,7 @@ Branch: REL9_2_STABLE [19ccaf9d4] 2014-11-10 15:21:26 -0500 - In some contexts, constructs like row_to_json(tab.*) may + In some contexts, constructs like row_to_json(tab.*) may not produce the expected column names. This is fixed properly as of 9.4; in older branches, just ensure that we produce some nonempty name. (In some cases this will be the underlying table's column name @@ -5703,7 +5703,7 @@ Branch: REL9_2_STABLE [906599f65] 2014-11-22 16:01:15 -0500 Fix mishandling of system columns, - particularly tableoid, in FDW queries (Etsuro Fujita) + particularly tableoid, in FDW queries (Etsuro Fujita) @@ -5721,7 +5721,7 @@ Branch: REL9_3_STABLE [527ff8baf] 2015-01-30 12:30:43 -0500 - This patch fixes corner-case unexpected operator NNNN planner + This patch fixes corner-case unexpected operator NNNN planner errors, and improves the selectivity estimates for some other cases. 
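A small sketch of the xpath() namespace fix noted above; the element names and namespace URI are hypothetical:

    -- The namespace is declared on the ancestor of the selected node; the
    -- returned fragment now carries that declaration, so it is correct in isolation.
    SELECT xpath('/x:root/x:child',
                 '<x:root xmlns:x="http://example.com/ns"><x:child>1</x:child></x:root>'::xml,
                 ARRAY[ARRAY['x', 'http://example.com/ns']]);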
@@ -5734,13 +5734,13 @@ Branch: REL9_2_STABLE [4586572d7] 2014-10-26 16:12:32 -0400 - Avoid doing indexed_column = ANY - (array) as an index qualifier if that leads + Avoid doing indexed_column = ANY + (array) as an index qualifier if that leads to an inferior plan (Andrew Gierth) - In some cases, = ANY conditions applied to non-first index + In some cases, = ANY conditions applied to non-first index columns would be done as index conditions even though it would be better to use them as simple filter conditions. @@ -5753,9 +5753,9 @@ Branch: REL9_3_STABLE [4e54685d0] 2014-10-20 12:23:48 -0400 - Fix variable not found in subplan target list planner + Fix variable not found in subplan target list planner failure when an inline-able SQL function taking a composite argument - is used in a LATERAL subselect and the composite argument + is used in a LATERAL subselect and the composite argument is a lateral reference (Tom Lane) @@ -5771,7 +5771,7 @@ Branch: REL9_0_STABLE [288f15b7c] 2014-10-01 19:30:41 -0400 Fix planner problems with nested append relations, such as inherited - tables within UNION ALL subqueries (Tom Lane) + tables within UNION ALL subqueries (Tom Lane) @@ -5800,8 +5800,8 @@ Branch: REL9_0_STABLE [50a757698] 2014-10-03 13:01:27 -0300 - Exempt tables that have per-table cost_limit - and/or cost_delay settings from autovacuum's global cost + Exempt tables that have per-table cost_limit + and/or cost_delay settings from autovacuum's global cost balancing rules (Álvaro Herrera) @@ -5835,7 +5835,7 @@ Branch: REL9_0_STABLE [91b4a881c] 2014-07-30 14:42:12 -0400 the target database, if they met the usual thresholds for autovacuuming. This is at best pretty unexpected; at worst it delays response to the wraparound threat. Fix it so that if autovacuum is - turned off, workers only do anti-wraparound vacuums and + turned off, workers only do anti-wraparound vacuums and not any other work. @@ -5899,12 +5899,12 @@ Branch: REL9_0_STABLE [804983961] 2014-07-29 11:58:17 +0300 Fix several cases where recovery logic improperly ignored WAL records - for COMMIT/ABORT PREPARED (Heikki Linnakangas) + for COMMIT/ABORT PREPARED (Heikki Linnakangas) The most notable oversight was - that recovery_target_xid could not be used to stop at + that recovery_target_xid could not be used to stop at a two-phase commit. @@ -5932,7 +5932,7 @@ Branch: REL9_0_STABLE [83c7bfb9a] 2014-11-06 21:26:21 +0900 - Avoid creating unnecessary .ready marker files for + Avoid creating unnecessary .ready marker files for timeline history files (Fujii Masao) @@ -5948,8 +5948,8 @@ Branch: REL9_0_STABLE [857a5d6b5] 2014-09-05 02:19:57 +0900 Fix possible null pointer dereference when an empty prepared statement - is used and the log_statement setting is mod - or ddl (Fujii Masao) + is used and the log_statement setting is mod + or ddl (Fujii Masao) @@ -5965,7 +5965,7 @@ Branch: REL9_0_STABLE [a1a8d0249] 2015-01-19 23:01:46 -0500 - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -5974,7 +5974,7 @@ Branch: REL9_0_STABLE [a1a8d0249] 2015-01-19 23:01:46 -0500 case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. 
@@ -6018,7 +6018,7 @@ Branch: REL9_0_STABLE [2e4946169] 2015-01-07 22:46:20 -0500 - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) @@ -6033,13 +6033,13 @@ Branch: REL9_0_STABLE [9880fea4f] 2014-11-25 17:39:09 +0200 - Fix processing of repeated dbname parameters - in PQconnectdbParams() (Alex Shulgin) + Fix processing of repeated dbname parameters + in PQconnectdbParams() (Alex Shulgin) Unexpected behavior ensued if the first occurrence - of dbname contained a connection string or URI to be + of dbname contained a connection string or URI to be expanded. @@ -6054,12 +6054,12 @@ Branch: REL9_0_STABLE [ac6e87537] 2014-10-22 18:42:01 -0400 - Ensure that libpq reports a suitable error message on + Ensure that libpq reports a suitable error message on unexpected socket EOF (Marko Tiikkaja, Tom Lane) - Depending on kernel behavior, libpq might return an + Depending on kernel behavior, libpq might return an empty error string rather than something useful when the server unexpectedly closed the socket. @@ -6075,14 +6075,14 @@ Branch: REL9_0_STABLE [49ef4eba2] 2014-10-29 14:35:39 +0200 - Clear any old error message during PQreset() + Clear any old error message during PQreset() (Heikki Linnakangas) - If PQreset() is called repeatedly, and the connection + If PQreset() is called repeatedly, and the connection cannot be re-established, error messages from the failed connection - attempts kept accumulating in the PGconn's error + attempts kept accumulating in the PGconn's error string. @@ -6098,7 +6098,7 @@ Branch: REL9_0_STABLE [1f3517039] 2014-11-25 14:10:54 +0200 Properly handle out-of-memory conditions while parsing connection - options in libpq (Alex Shulgin, Heikki Linnakangas) + options in libpq (Alex Shulgin, Heikki Linnakangas) @@ -6112,8 +6112,8 @@ Branch: REL9_0_STABLE [d9a1e9de5] 2014-10-06 21:23:50 -0400 - Fix array overrun in ecpg's version - of ParseDateTime() (Michael Paquier) + Fix array overrun in ecpg's version + of ParseDateTime() (Michael Paquier) @@ -6127,7 +6127,7 @@ Branch: REL9_0_STABLE [d67be559e] 2014-12-05 14:30:55 +0200 - In initdb, give a clearer error message if a password + In initdb, give a clearer error message if a password file is specified but is empty (Mats Erik Andersson) @@ -6142,12 +6142,12 @@ Branch: REL9_0_STABLE [44c518328] 2014-09-08 16:10:05 -0400 - Fix psql's \s command to work nicely with + Fix psql's \s command to work nicely with libedit, and add pager support (Stepan Rutz, Tom Lane) - When using libedit rather than readline, \s printed the + When using libedit rather than readline, \s printed the command history in a fairly unreadable encoded format, and on recent libedit versions might fail altogether. Fix that by printing the history ourselves rather than having the library do it. A pleasant @@ -6157,7 +6157,7 @@ Branch: REL9_0_STABLE [44c518328] 2014-09-08 16:10:05 -0400 This patch also fixes a bug that caused newline encoding to be applied inconsistently when saving the command history with libedit. - Multiline history entries written by older psql + Multiline history entries written by older psql versions will be read cleanly with this patch, but perhaps not vice versa, depending on the exact libedit versions involved. 
@@ -6175,17 +6175,17 @@ Branch: REL9_0_STABLE [2600e4436] 2014-12-31 12:17:12 -0500 - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. @@ -6198,8 +6198,8 @@ Branch: REL9_3_STABLE [4b1953079] 2014-11-28 02:44:40 +0900 - Make psql's \watch command display - nulls as specified by \pset null (Fujii Masao) + Make psql's \watch command display + nulls as specified by \pset null (Fujii Masao) @@ -6213,9 +6213,9 @@ Branch: REL9_0_STABLE [1f89fc218] 2014-09-12 11:24:39 -0400 - Fix psql's expanded-mode display to work - consistently when using border = 3 - and linestyle = ascii or unicode + Fix psql's expanded-mode display to work + consistently when using border = 3 + and linestyle = ascii or unicode (Stephen Frost) @@ -6229,7 +6229,7 @@ Branch: REL9_3_STABLE [bb1e2426b] 2015-01-05 19:27:09 -0500 - Fix pg_dump to handle comments on event triggers + Fix pg_dump to handle comments on event triggers without failing (Tom Lane) @@ -6243,8 +6243,8 @@ Branch: REL9_3_STABLE [cc609c46f] 2015-01-30 09:01:36 -0600 - Allow parallel pg_dump to - use (Kevin Grittner) @@ -6257,7 +6257,7 @@ Branch: REL9_1_STABLE [40c333c39] 2014-07-25 19:48:54 -0400 - Improve performance of pg_dump when the database + Improve performance of pg_dump when the database contains many instances of multiple dependency paths between the same two objects (Tom Lane) @@ -6271,7 +6271,7 @@ Branch: REL9_2_STABLE [3c5ce5102] 2014-11-13 18:19:35 -0500 - Fix pg_dumpall to restore its ability to dump from + Fix pg_dumpall to restore its ability to dump from pre-8.1 servers (Gilles Darold) @@ -6301,7 +6301,7 @@ Branch: REL9_0_STABLE [31021e7ba] 2014-10-17 12:49:15 -0400 - Fix core dump in pg_dump --binary-upgrade on zero-column + Fix core dump in pg_dump --binary-upgrade on zero-column composite type (Rushabh Lathia) @@ -6314,7 +6314,7 @@ Branch: REL9_3_STABLE [26a4e0ed7] 2014-11-15 01:21:11 +0100 Fix failure to fsync tables in nondefault tablespaces - during pg_upgrade (Abhijit Menon-Sen, Andres Freund) + during pg_upgrade (Abhijit Menon-Sen, Andres Freund) @@ -6330,7 +6330,7 @@ Branch: REL9_3_STABLE [fca9f349b] 2014-08-07 14:56:13 -0400 - In pg_upgrade, cope with cases where the new cluster + In pg_upgrade, cope with cases where the new cluster creates a TOAST table for a table that didn't previously have one (Bruce Momjian) @@ -6347,8 +6347,8 @@ Branch: REL9_3_STABLE [24ae44914] 2014-08-04 11:45:45 -0400 - In pg_upgrade, don't try to - set autovacuum_multixact_freeze_max_age for the old cluster + In pg_upgrade, don't try to + set autovacuum_multixact_freeze_max_age for the old cluster (Bruce Momjian) @@ -6365,12 +6365,12 @@ Branch: REL9_3_STABLE [5724f491d] 2014-09-11 18:39:46 -0400 - In pg_upgrade, preserve the transaction ID epoch + In pg_upgrade, preserve the transaction ID epoch (Bruce Momjian) - This oversight did not bother PostgreSQL proper, + This 
oversight did not bother PostgreSQL proper, but could confuse some external replication tools. @@ -6386,7 +6386,7 @@ Branch: REL9_1_STABLE [2a0bfa4d6] 2015-01-03 20:54:13 +0100 - Prevent WAL files created by pg_basebackup -x/-X from + Prevent WAL files created by pg_basebackup -x/-X from being archived again when the standby is promoted (Andres Freund) @@ -6398,7 +6398,7 @@ Branch: REL9_3_STABLE [9747a9898] 2014-08-02 15:19:45 +0900 - Fix memory leak in pg_receivexlog (Fujii Masao) + Fix memory leak in pg_receivexlog (Fujii Masao) @@ -6409,7 +6409,7 @@ Branch: REL9_3_STABLE [39217ce41] 2014-08-02 14:59:10 +0900 - Fix unintended suppression of pg_receivexlog verbose + Fix unintended suppression of pg_receivexlog verbose messages (Fujii Masao) @@ -6422,8 +6422,8 @@ Branch: REL9_2_STABLE [5ff8c2d7d] 2014-09-19 13:19:05 -0400 - Fix failure of contrib/auto_explain to print per-node - timing information when doing EXPLAIN ANALYZE (Tom Lane) + Fix failure of contrib/auto_explain to print per-node + timing information when doing EXPLAIN ANALYZE (Tom Lane) @@ -6436,7 +6436,7 @@ Branch: REL9_1_STABLE [9807c8220] 2014-08-28 18:21:20 -0400 - Fix upgrade-from-unpackaged script for contrib/citext + Fix upgrade-from-unpackaged script for contrib/citext (Tom Lane) @@ -6449,7 +6449,7 @@ Branch: REL9_3_STABLE [f44290b7b] 2014-11-04 16:54:59 -0500 Avoid integer overflow and buffer overrun - in contrib/hstore's hstore_to_json() + in contrib/hstore's hstore_to_json() (Heikki Linnakangas) @@ -6461,7 +6461,7 @@ Branch: REL9_3_STABLE [55c880797] 2014-12-01 11:44:48 -0500 - Fix recognition of numbers in hstore_to_json_loose(), + Fix recognition of numbers in hstore_to_json_loose(), so that JSON numbers and strings are correctly distinguished (Andrew Dunstan) @@ -6478,7 +6478,7 @@ Branch: REL9_0_STABLE [9dc2a3fd0] 2014-07-22 11:46:04 -0400 Fix block number checking - in contrib/pageinspect's get_raw_page() + in contrib/pageinspect's get_raw_page() (Tom Lane) @@ -6498,7 +6498,7 @@ Branch: REL9_0_STABLE [ef5a3b957] 2014-11-11 17:22:58 -0500 - Fix contrib/pgcrypto's pgp_sym_decrypt() + Fix contrib/pgcrypto's pgp_sym_decrypt() to not fail on messages whose length is 6 less than a power of 2 (Marko Tiikkaja) @@ -6513,7 +6513,7 @@ Branch: REL9_1_STABLE [a855c90a7] 2014-11-19 12:26:06 -0500 - Fix file descriptor leak in contrib/pg_test_fsync + Fix file descriptor leak in contrib/pg_test_fsync (Jeff Janes) @@ -6535,12 +6535,12 @@ Branch: REL9_0_STABLE [dc9a506e6] 2015-01-29 20:18:46 -0500 Handle unexpected query results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. @@ -6555,12 +6555,12 @@ Branch: REL9_0_STABLE [6a694bbab] 2014-11-27 11:13:03 -0500 - Avoid a possible crash in contrib/xml2's - xslt_process() (Mark Simonetti) + Avoid a possible crash in contrib/xml2's + xslt_process() (Mark Simonetti) - libxslt seems to have an undocumented dependency on + libxslt seems to have an undocumented dependency on the order in which resources are freed; reorder our calls to avoid a crash. 
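A quick sketch of the hstore_to_json_loose() entry above (requires the hstore extension; the key/value pairs are made up):

    SELECT hstore_to_json_loose('a=>1, b=>2.5, c=>high5'::hstore);
    -- roughly {"a": 1, "b": 2.5, "c": "high5"}: numeric-looking values become
    -- JSON numbers, while values that merely contain digits stay strings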
@@ -6575,7 +6575,7 @@ Branch: REL9_1_STABLE [7225abf00] 2014-11-05 11:34:25 -0500 - Mark some contrib I/O functions with correct volatility + Mark some contrib I/O functions with correct volatility properties (Tom Lane) @@ -6696,10 +6696,10 @@ Branch: REL9_0_STABLE [4c6d0abde] 2014-07-22 11:02:25 -0400 With OpenLDAP versions 2.4.24 through 2.4.31, - inclusive, PostgreSQL backends can crash at exit. - Raise a warning during configure based on the + inclusive, PostgreSQL backends can crash at exit. + Raise a warning during configure based on the compile-time OpenLDAP version number, and test the crashing scenario - in the contrib/dblink regression test. + in the contrib/dblink regression test. @@ -6713,7 +6713,7 @@ Branch: REL9_0_STABLE [e6841c4d6] 2014-08-18 23:01:23 -0400 - In non-MSVC Windows builds, ensure libpq.dll is installed + In non-MSVC Windows builds, ensure libpq.dll is installed with execute permissions (Noah Misch) @@ -6730,13 +6730,13 @@ Branch: REL9_0_STABLE [338ff75fc] 2015-01-19 23:44:33 -0500 - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. @@ -6756,15 +6756,15 @@ Branch: REL9_0_STABLE [870a980aa] 2014-10-16 15:22:26 -0400 - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. @@ -6789,9 +6789,9 @@ Branch: REL9_0_STABLE [8b70023af] 2014-12-24 16:35:54 -0500 Add CST (China Standard Time) to our lists. - Remove references to ADT as Arabia Daylight Time, an + Remove references to ADT as Arabia Daylight Time, an abbreviation that's been out of use since 2007; therefore, claiming - there is a conflict with Atlantic Daylight Time doesn't seem + there is a conflict with Atlantic Daylight Time doesn't seem especially helpful. Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, and FJST (Fiji); we didn't even have them on the proper side of the date line. @@ -6818,21 +6818,21 @@ Branch: REL9_0_STABLE [b6391f587] 2014-10-04 14:18:43 -0400 - Update time zone data files to tzdata release 2015a. + Update time zone data files to tzdata release 2015a. The IANA timezone database has adopted abbreviations of the form - AxST/AxDT + AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into - our Default timezone abbreviation set. 
- The Australia abbreviation set now contains only CST, EAST, + our Default timezone abbreviation set. + The Australia abbreviation set now contains only CST, EAST, EST, SAST, SAT, and WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South - Africa Standard Time in the Default abbreviation set. + Africa Standard Time in the Default abbreviation set. @@ -6873,7 +6873,7 @@ Branch: REL9_0_STABLE [b6391f587] 2014-10-04 14:18:43 -0400 However, this release corrects a logic error - in pg_upgrade, as well as an index corruption problem in + in pg_upgrade, as well as an index corruption problem in some GiST indexes. See the first two changelog entries below to find out whether your installation has been affected and what steps you should take if so. @@ -6900,15 +6900,15 @@ Branch: REL9_3_STABLE [cc5841809] 2014-06-24 16:11:06 -0400 - In pg_upgrade, remove pg_multixact files - left behind by initdb (Bruce Momjian) + In pg_upgrade, remove pg_multixact files + left behind by initdb (Bruce Momjian) - If you used a pre-9.3.5 version of pg_upgrade to + If you used a pre-9.3.5 version of pg_upgrade to upgrade a database cluster to 9.3, it might have left behind a file - $PGDATA/pg_multixact/offsets/0000 that should not be - there and will eventually cause problems in VACUUM. + $PGDATA/pg_multixact/offsets/0000 that should not be + there and will eventually cause problems in VACUUM. However, in common cases this file is actually valid and must not be removed. To determine whether your installation has this problem, run this @@ -6921,9 +6921,9 @@ SELECT EXISTS (SELECT * FROM list WHERE file = '0000') AND EXISTS (SELECT * FROM list WHERE file != '0000') AS file_0000_removal_required; - If this query returns t, manually remove the file - $PGDATA/pg_multixact/offsets/0000. - Do nothing if the query returns f. + If this query returns t, manually remove the file + $PGDATA/pg_multixact/offsets/0000. + Do nothing if the query returns f. @@ -6939,15 +6939,15 @@ Branch: REL8_4_STABLE [e31d77c96] 2014-05-13 15:27:43 +0300 - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. - Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -7032,7 +7032,7 @@ Branch: REL9_3_STABLE [167a2535f] 2014-06-09 15:17:23 -0400 - Fix wraparound handling for pg_multixact/members + Fix wraparound handling for pg_multixact/members (Álvaro Herrera) @@ -7046,12 +7046,12 @@ Branch: REL9_3_STABLE [9a28c3752] 2014-06-27 14:43:52 -0400 - Truncate pg_multixact during checkpoints, not - during VACUUM (Álvaro Herrera) + Truncate pg_multixact during checkpoints, not + during VACUUM (Álvaro Herrera) - This change ensures that pg_multixact segments can't be + This change ensures that pg_multixact segments can't be removed if they'd still be needed during WAL replay after a crash. 
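Per the contrib/btree_gist entry above, affected indexes should be rebuilt after updating; a sketch with a hypothetical index name, plus a catalog query to list GiST indexes as candidates:

    REINDEX INDEX my_bit_gist_idx;

    -- candidates: all GiST indexes (narrow this down to btree_gist indexes on
    -- bit / bit varying columns by inspecting their definitions)
    SELECT indexrelid::regclass AS index_name, indrelid::regclass AS table_name
    FROM pg_index
    JOIN pg_class c ON c.oid = indexrelid
    JOIN pg_am am ON am.oid = c.relam
    WHERE am.amname = 'gist';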
@@ -7082,7 +7082,7 @@ Branch: REL8_4_STABLE [3ada1fab8] 2014-05-05 14:43:55 -0400 Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) @@ -7108,8 +7108,8 @@ Branch: REL9_1_STABLE [555d0b200] 2014-06-26 10:42:08 -0700 - Fix could not find pathkey item to sort planner failures - with UNION ALL over subqueries reading from tables with + Fix could not find pathkey item to sort planner failures + with UNION ALL over subqueries reading from tables with inheritance children (Tom Lane) @@ -7148,7 +7148,7 @@ Branch: REL9_2_STABLE [0901dbab3] 2014-04-29 13:12:33 -0400 Improve planner to drop constant-NULL inputs - of AND/OR when possible (Tom Lane) + of AND/OR when possible (Tom Lane) @@ -7166,8 +7166,8 @@ Branch: REL9_3_STABLE [d359f71ac] 2014-04-03 22:02:27 -0400 - Ensure that the planner sees equivalent VARIADIC and - non-VARIADIC function calls as equivalent (Tom Lane) + Ensure that the planner sees equivalent VARIADIC and + non-VARIADIC function calls as equivalent (Tom Lane) @@ -7188,13 +7188,13 @@ Branch: REL9_3_STABLE [a1fc36495] 2014-06-24 21:22:47 -0700 - Fix handling of nested JSON objects - in json_populate_recordset() and friends + Fix handling of nested JSON objects + in json_populate_recordset() and friends (Michael Paquier, Tom Lane) - A nested JSON object could result in previous fields of the + A nested JSON object could result in previous fields of the parent object not being shown in the output. @@ -7208,13 +7208,13 @@ Branch: REL9_2_STABLE [25c933c5c] 2014-05-09 12:55:06 -0400 - Fix identification of input type category in to_json() + Fix identification of input type category in to_json() and friends (Tom Lane) - This is known to have led to inadequate quoting of money - fields in the JSON result, and there may have been wrong + This is known to have led to inadequate quoting of money + fields in the JSON result, and there may have been wrong results for other data types as well. @@ -7239,7 +7239,7 @@ Branch: REL8_4_STABLE [70debcf09] 2014-05-01 15:19:23 -0400 This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. @@ -7256,7 +7256,7 @@ Branch: REL8_4_STABLE [a81fbcfb3] 2014-07-11 19:12:56 -0400 - Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -7292,7 +7292,7 @@ Branch: REL8_4_STABLE [d297c91d4] 2014-06-19 22:14:00 -0400 Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -7327,7 +7327,7 @@ Branch: REL8_4_STABLE [f3f40434b] 2014-06-10 22:49:08 -0400 - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -7367,7 +7367,7 @@ Branch: REL8_4_STABLE [80d45ae4e] 2014-06-04 23:27:38 +0200 This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. 
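A sketch of the json_populate_recordset() entry above, using a made-up composite type; the comments describe the failure mode as stated in that entry:

    CREATE TYPE jpr_demo AS (a int, b int);
    SELECT *
    FROM json_populate_recordset(NULL::jpr_demo,
                                 '[{"a": 1, "ignored": {"x": 2}, "b": 3}]');
    -- expected result: a = 1, b = 3; with the bug, fields seen before the
    -- nested object could be missing from the output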
@@ -7384,12 +7384,12 @@ Branch: REL8_4_STABLE [82fbd88a7] 2014-04-24 13:30:14 -0400 - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -7408,7 +7408,7 @@ Branch: REL8_4_STABLE [4b767789d] 2014-07-15 13:24:07 -0400 - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -7422,8 +7422,8 @@ Branch: REL9_3_STABLE [e86cfc4bb] 2014-06-27 14:43:45 -0400 - Prevent pg_class.relminmxid values from - going backwards during VACUUM FULL (Álvaro Herrera) + Prevent pg_class.relminmxid values from + going backwards during VACUUM FULL (Álvaro Herrera) @@ -7461,7 +7461,7 @@ Branch: REL9_3_STABLE [e31193d49] 2014-05-01 20:22:39 -0400 Fix dumping of rules/views when subsequent addition of a column has - resulted in multiple input columns matching a USING + resulted in multiple input columns matching a USING specification (Tom Lane) @@ -7476,7 +7476,7 @@ Branch: REL9_3_STABLE [b978ab5f6] 2014-07-19 14:29:05 -0400 Repair view printing for some cases involving functions - in FROM that return a composite type containing dropped + in FROM that return a composite type containing dropped columns (Tom Lane) @@ -7498,7 +7498,7 @@ Branch: REL8_4_STABLE [969735cf1] 2014-04-05 18:16:24 -0400 This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. @@ -7513,7 +7513,7 @@ Branch: REL9_1_STABLE [b7a424371] 2014-04-02 17:11:34 -0400 - Fix client host name lookup when processing pg_hba.conf + Fix client host name lookup when processing pg_hba.conf entries that specify host names instead of IP addresses (Tom Lane) @@ -7534,14 +7534,14 @@ Branch: REL9_2_STABLE [6d25eb314] 2014-04-04 22:03:42 -0400 - Allow the root user to use postgres -C variable and - postgres --describe-config (MauMau) + Allow the root user to use postgres -C variable and + postgres --describe-config (MauMau) The prohibition on starting the server as root does not need to extend to these operations, and relaxing it prevents failure - of pg_ctl in some scenarios. + of pg_ctl in some scenarios. @@ -7559,7 +7559,7 @@ Branch: REL8_4_STABLE [95cefd30e] 2014-06-14 09:41:18 -0400 Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -7568,16 +7568,16 @@ Branch: REL8_4_STABLE [95cefd30e] 2014-06-14 09:41:18 -0400 the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. 
Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. @@ -7651,9 +7651,9 @@ Branch: REL8_4_STABLE [e3f273ff6] 2014-04-30 10:39:03 +0300 - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. @@ -7669,7 +7669,7 @@ Branch: REL8_4_STABLE [ae41bb4be] 2014-05-30 18:18:32 -0400 - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -7690,17 +7690,17 @@ Branch: REL8_4_STABLE [664ac3de7] 2014-05-07 21:38:50 -0400 - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -7718,7 +7718,7 @@ Branch: REL8_4_STABLE [b4ae2e37d] 2014-04-16 18:59:48 +0200 - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) @@ -7741,8 +7741,8 @@ Branch: REL9_0_STABLE [0c2eb989e] 2014-04-09 12:12:32 +0200 - Fix ecpg to do the right thing when an array - of char * is the target for a FETCH statement returning more + Fix ecpg to do the right thing when an array + of char * is the target for a FETCH statement returning more than one row, as well as some other array-handling fixes (Ashutosh Bapat) @@ -7756,13 +7756,13 @@ Branch: REL9_3_STABLE [3080bbaa9] 2014-03-29 17:34:03 -0400 - Fix pg_dump to cope with a materialized view that + Fix pg_dump to cope with a materialized view that depends on a table's primary key (Tom Lane) This occurs if the view's query relies on functional dependency to - abbreviate a GROUP BY list. pg_dump got + abbreviate a GROUP BY list. pg_dump got sufficiently confused that it dumped the materialized view as a regular view. @@ -7776,7 +7776,7 @@ Branch: REL9_3_STABLE [63817f86b] 2014-03-18 10:38:38 -0400 - Fix parsing of pg_dumpall's switch (Tom Lane) @@ -7794,13 +7794,13 @@ Branch: REL8_4_STABLE [6adddac8a] 2014-06-12 20:14:55 -0400 - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. @@ -7815,12 +7815,12 @@ Branch: REL9_2_STABLE [759c9fb63] 2014-07-07 13:24:08 -0400 - Fix pg_upgrade for cases where the new server creates + Fix pg_upgrade for cases where the new server creates a TOAST table but the old version did not (Bruce Momjian) - This rare situation would manifest as relation OID mismatch + This rare situation would manifest as relation OID mismatch errors. 
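The pg_dump materialized-view problem described above arises when a GROUP BY list is abbreviated through functional dependency on a primary key; a hypothetical schema of that shape:

CREATE TABLE t (id int PRIMARY KEY, val text);
CREATE MATERIALIZED VIEW mv AS
    SELECT id, val, count(*) AS cnt
    FROM t
    GROUP BY id;   -- "val" may be omitted from GROUP BY because it depends on the primary key
-- affected pg_dump versions emitted "mv" as a regular view in this situation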
@@ -7839,9 +7839,9 @@ Branch: REL9_3_STABLE [e7984cca0] 2014-07-21 11:42:05 -0400 - In pg_upgrade, - preserve pg_database.datminmxid - and pg_class.relminmxid values from the + In pg_upgrade, + preserve pg_database.datminmxid + and pg_class.relminmxid values from the old cluster, or insert reasonable values when upgrading from pre-9.3; also defend against unreasonable values in the core server (Bruce Momjian, Álvaro Herrera, Tom Lane) @@ -7864,13 +7864,13 @@ Branch: REL9_2_STABLE [31f579f09] 2014-05-20 12:20:57 -0400 - Prevent contrib/auto_explain from changing the output of - a user's EXPLAIN (Tom Lane) + Prevent contrib/auto_explain from changing the output of + a user's EXPLAIN (Tom Lane) - If auto_explain is active, it could cause - an EXPLAIN (ANALYZE, TIMING OFF) command to nonetheless + If auto_explain is active, it could cause + an EXPLAIN (ANALYZE, TIMING OFF) command to nonetheless print timing information. @@ -7885,7 +7885,7 @@ Branch: REL9_2_STABLE [3e2cfa42f] 2014-06-20 12:27:04 -0700 - Fix query-lifespan memory leak in contrib/dblink + Fix query-lifespan memory leak in contrib/dblink (MauMau, Joe Conway) @@ -7902,7 +7902,7 @@ Branch: REL8_4_STABLE [df2e62603] 2014-04-17 12:37:53 -0400 - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -7919,7 +7919,7 @@ Branch: REL9_2_STABLE [f6d6b7b1e] 2014-06-30 17:00:40 -0400 Prevent use of already-freed memory in - contrib/pgstattuple's pgstat_heap() + contrib/pgstattuple's pgstat_heap() (Noah Misch) @@ -7936,13 +7936,13 @@ Branch: REL8_4_STABLE [fd785441f] 2014-05-29 13:51:18 -0400 - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. @@ -7960,7 +7960,7 @@ Branch: REL8_4_STABLE [c51da696b] 2014-07-19 15:01:45 -0400 - Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. 
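The contrib/auto_explain interaction noted above is easy to reproduce in a single session, assuming the contrib module is installed; a brief sketch:

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;   -- log the plan of every statement
SET auto_explain.log_analyze = on;
EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM pg_class;
-- before the fix, per-node timing could appear in this output despite TIMING OFF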
@@ -8071,7 +8071,7 @@ Branch: REL9_0_STABLE [7aea1050e] 2014-03-13 12:03:07 -0400 Avoid race condition in checking transaction commit status during - receipt of a NOTIFY message (Marko Tiikkaja) + receipt of a NOTIFY message (Marko Tiikkaja) @@ -8089,8 +8089,8 @@ Branch: REL9_3_STABLE [3973034e6] 2014-03-06 11:37:04 -0500 - Allow materialized views to be referenced in UPDATE - and DELETE commands (Michael Paquier) + Allow materialized views to be referenced in UPDATE + and DELETE commands (Michael Paquier) @@ -8133,7 +8133,7 @@ Branch: REL8_4_STABLE [dd378dd1e] 2014-02-18 12:44:36 -0500 - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -8156,17 +8156,17 @@ Branch: REL8_4_STABLE [f043bddfe] 2014-03-06 19:31:22 -0500 - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. @@ -8201,9 +8201,9 @@ Branch: REL9_3_STABLE [e8655a77f] 2014-02-21 17:10:49 -0500 Use non-default selectivity estimates for - value IN (list) and - value operator ANY - (array) + value IN (list) and + value operator ANY + (array) expressions when the righthand side is a stable expression (Tom Lane) @@ -8217,16 +8217,16 @@ Branch: REL9_3_STABLE [13ea43ab8] 2014-03-05 13:03:29 -0300 Remove the correct per-database statistics file during DROP - DATABASE (Tomas Vondra) + DATABASE (Tomas Vondra) This fix prevents a permanent leak of statistics file space. - Users who have done many DROP DATABASE commands since - upgrading to PostgreSQL 9.3 may wish to check their + Users who have done many DROP DATABASE commands since + upgrading to PostgreSQL 9.3 may wish to check their statistics directory and delete statistics files that do not correspond to any existing database. Please note - that db_0.stat should not be removed. + that db_0.stat should not be removed. @@ -8238,12 +8238,12 @@ Branch: REL9_3_STABLE [dcd1131c8] 2014-03-06 21:40:50 +0200 - Fix walsender ping logic to avoid inappropriate + Fix walsender ping logic to avoid inappropriate disconnects under continuous load (Andres Freund, Heikki Linnakangas) - walsender failed to send ping messages to the client + walsender failed to send ping messages to the client if it was constantly busy sending WAL data; but it expected to see ping responses despite that, and would therefore disconnect once elapsed. @@ -8260,8 +8260,8 @@ Branch: REL9_1_STABLE [65e8dbb18] 2014-03-17 20:42:35 +0900 - Fix walsender's failure to shut down cleanly when client - is pg_receivexlog (Fujii Masao) + Fix walsender's failure to shut down cleanly when client + is pg_receivexlog (Fujii Masao) @@ -8324,13 +8324,13 @@ Branch: REL8_4_STABLE [172c53e92] 2014-03-13 20:59:57 -0400 - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. 
+ entry to syslog(), and perhaps other related problems. @@ -8358,13 +8358,13 @@ Branch: REL9_2_STABLE [b315b767f] 2014-03-10 15:47:13 -0400 - Fix tracking of psql script line numbers - during \copy from out-of-line data + Fix tracking of psql script line numbers + during \copy from out-of-line data (Kumar Rajeev Rastogi, Amit Khandekar) - \copy ... from incremented the script file line number + \copy ... from incremented the script file line number for each data line, even if the data was not coming from the script file. This mistake resulted in wrong line numbers being reported for any errors occurring later in the same script file. @@ -8379,12 +8379,12 @@ Branch: REL9_3_STABLE [73f0483fd] 2014-03-07 16:36:50 -0500 - Fix contrib/postgres_fdw to handle multiple join + Fix contrib/postgres_fdw to handle multiple join conditions properly (Tom Lane) - This oversight could result in sending WHERE clauses to + This oversight could result in sending WHERE clauses to the remote server for execution even though the clauses are not known to have the same semantics on the remote server (for example, clauses that use non-built-in operators). The query might succeed anyway, @@ -8404,7 +8404,7 @@ Branch: REL9_0_STABLE [665515539] 2014-03-16 11:47:37 +0100 - Prevent intermittent could not reserve shared memory region + Prevent intermittent could not reserve shared memory region failures on recent Windows versions (MauMau) @@ -8421,7 +8421,7 @@ Branch: REL8_4_STABLE [6e6c2c2e1] 2014-03-15 13:36:57 -0400 - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. @@ -8494,19 +8494,19 @@ Branch: REL8_4_STABLE [ff35425c8] 2014-02-17 09:33:38 -0500 - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. (CVE-2014-0060) @@ -8529,7 +8529,7 @@ Branch: REL8_4_STABLE [823b9dc25] 2014-02-17 09:33:38 -0500 The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -8559,7 +8559,7 @@ Branch: REL8_4_STABLE [e46476133] 2014-02-17 09:33:38 -0500 If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. 
At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. @@ -8583,12 +8583,12 @@ Branch: REL8_4_STABLE [d0ed1a6c0] 2014-02-17 09:33:39 -0500 - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -8635,7 +8635,7 @@ Branch: REL8_4_STABLE [69d2bc14a] 2014-02-17 11:20:38 -0500 - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -8657,16 +8657,16 @@ Branch: REL8_4_STABLE [69d2bc14a] 2014-02-17 11:20:38 -0500 - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). (CVE-2014-0066) @@ -8683,19 +8683,19 @@ Branch: REL8_4_STABLE [f58663ab1] 2014-02-17 11:24:51 -0500 - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -8716,7 +8716,7 @@ Branch: REL9_3_STABLE [8e9a16ab8] 2013-12-16 11:29:51 -0300 The logic for tuple freezing was unable to handle some cases involving freezing of - multixact + multixact IDs, with the practical effect that shared row-level locks might be forgotten once old enough. @@ -8725,7 +8725,7 @@ Branch: REL9_3_STABLE [8e9a16ab8] 2013-12-16 11:29:51 -0300 Fixing this required changing the WAL record format for tuple freezing. While this is no issue for standalone servers, when using replication it means that standby servers must be upgraded - to 9.3.3 or later before their masters are. 
An older standby will + to 9.3.3 or later before their masters are. An older standby will be unable to interpret freeze records generated by a newer master, and will fail with a PANIC message. (In such a case, upgrading the standby should be sufficient to let it resume execution.) @@ -8783,8 +8783,8 @@ Branch: REL9_3_STABLE [db1014bc4] 2013-12-18 13:31:27 -0300 This oversight could allow referential integrity checks to give false positives (for instance, allow deletes that should have been rejected). - Applications using the new commands SELECT FOR KEY SHARE - and SELECT FOR NO KEY UPDATE might also have suffered + Applications using the new commands SELECT FOR KEY SHARE + and SELECT FOR NO KEY UPDATE might also have suffered locking failures of this kind. @@ -8797,7 +8797,7 @@ Branch: REL9_3_STABLE [c6cd27e36] 2013-12-05 12:21:55 -0300 - Prevent forgetting valid row locks when one of several + Prevent forgetting valid row locks when one of several holders of a row lock aborts (Álvaro Herrera) @@ -8822,8 +8822,8 @@ Branch: REL9_3_STABLE [2dcc48c35] 2013-12-05 17:47:51 -0300 This mistake could result in spurious could not serialize access - due to concurrent update errors in REPEATABLE READ - and SERIALIZABLE transaction isolation modes. + due to concurrent update errors in REPEATABLE READ + and SERIALIZABLE transaction isolation modes. @@ -8836,7 +8836,7 @@ Branch: REL9_3_STABLE [03db79459] 2014-01-02 18:17:07 -0300 Handle wraparound correctly during extension or truncation - of pg_multixact/members + of pg_multixact/members (Andres Freund, Álvaro Herrera) @@ -8849,7 +8849,7 @@ Branch: REL9_3_STABLE [948a3dfbb] 2014-01-02 18:17:29 -0300 - Fix handling of 5-digit filenames in pg_multixact/members + Fix handling of 5-digit filenames in pg_multixact/members (Álvaro Herrera) @@ -8886,7 +8886,7 @@ Branch: REL9_3_STABLE [85d3b3c3a] 2013-12-19 16:39:59 -0300 This fixes a performance regression from pre-9.3 versions when doing - SELECT FOR UPDATE followed by UPDATE/DELETE. + SELECT FOR UPDATE followed by UPDATE/DELETE. @@ -8900,7 +8900,7 @@ Branch: REL9_3_STABLE [762bd379a] 2014-02-14 15:18:34 +0200 During archive recovery, prefer highest timeline number when WAL segments with the same ID are present in both the archive - and pg_xlog/ (Kyotaro Horiguchi) + and pg_xlog/ (Kyotaro Horiguchi) @@ -8929,7 +8929,7 @@ Branch: REL8_4_STABLE [9620fede9] 2014-02-12 14:52:32 -0500 The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -8958,7 +8958,7 @@ Branch: REL9_0_STABLE [5301c8395] 2014-01-08 14:34:21 +0200 was already consistent at the start of replay, thus possibly allowing hot-standby queries before the database was really consistent. Other symptoms such as PANIC: WAL contains references to invalid - pages were also possible. + pages were also possible. 
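The SELECT FOR UPDATE followed by UPDATE/DELETE pattern mentioned in the fixes above is the usual pessimistic-locking idiom; a hypothetical example (table name invented):

BEGIN;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;      -- row lock taken here
UPDATE accounts SET balance = balance - 100 WHERE id = 42;  -- same row updated under that lock
COMMIT;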
@@ -8986,13 +8986,13 @@ Branch: REL9_0_STABLE [5d742b9ce] 2014-01-14 17:35:00 -0500 Fix improper locking of btree index pages while replaying - a VACUUM operation in hot-standby mode (Andres Freund, + a VACUUM operation in hot-standby mode (Andres Freund, Heikki Linnakangas, Tom Lane) This error could result in PANIC: WAL contains references to - invalid pages failures. + invalid pages failures. @@ -9028,8 +9028,8 @@ Branch: REL9_1_STABLE [0402f2441] 2014-01-08 23:31:01 +0200 - When pause_at_recovery_target - and recovery_target_inclusive are both set, ensure the + When pause_at_recovery_target + and recovery_target_inclusive are both set, ensure the target record is applied before pausing, not after (Heikki Linnakangas) @@ -9058,14 +9058,14 @@ Branch: REL9_3_STABLE [478af9b79] 2013-12-13 11:50:25 -0500 Prevent timeout interrupts from taking control away from mainline - code unless ImmediateInterruptOK is set + code unless ImmediateInterruptOK is set (Andres Freund, Tom Lane) This is a serious issue for any application making use of statement timeouts, as it could cause all manner of strange failures after a - timeout occurred. We have seen reports of stuck spinlocks, + timeout occurred. We have seen reports of stuck spinlocks, ERRORs being unexpectedly promoted to PANICs, unkillable backends, and other misbehaviors. @@ -9088,7 +9088,7 @@ Branch: REL8_4_STABLE [458b20f2d] 2014-01-31 21:41:09 -0500 Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. @@ -9119,13 +9119,13 @@ Branch: REL8_4_STABLE [01b882fd8] 2014-01-29 20:04:14 -0500 - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. @@ -9141,7 +9141,7 @@ Branch: REL8_4_STABLE [d0070ac81] 2014-01-11 16:35:44 -0500 - Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -9185,7 +9185,7 @@ Branch: REL8_4_STABLE [a8a46d846] 2014-02-13 14:24:58 -0500 - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -9227,7 +9227,7 @@ Branch: REL9_0_STABLE [f2eede9b5] 2014-01-21 23:01:40 -0500 A previous patch allowed such keywords to be used without quoting in places such as role identifiers; but it missed cases where a - list of role identifiers was permitted, such as DROP ROLE. + list of role identifiers was permitted, such as DROP ROLE. @@ -9259,7 +9259,7 @@ Branch: REL8_4_STABLE [884c6384a] 2013-12-10 16:10:36 -0500 Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) (Tom Lane) @@ -9272,13 +9272,13 @@ Branch: REL9_3_STABLE [a4aa854ca] 2014-01-30 14:51:19 -0500 - Fix mishandling of WHERE conditions pulled up from - a LATERAL subquery (Tom Lane) + Fix mishandling of WHERE conditions pulled up from + a LATERAL subquery (Tom Lane) The typical symptom of this bug was a JOIN qualification - cannot refer to other relations error, though subtle logic + cannot refer to other relations error, though subtle logic errors in created plans seem possible as well. 
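The U&"..." identifier syntax covered by the length-checking fix above lets Unicode escapes stand in for characters. On a UTF8-encoded database, the following identifier resolves to "data" (a generic illustration, not a reproduction of the bug's trigger conditions):

SELECT 1 AS U&"d\0061t\+000061";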
@@ -9291,8 +9291,8 @@ Branch: REL9_3_STABLE [27ff4cfe7] 2014-01-11 19:03:15 -0500 - Disallow LATERAL references to the target table of - an UPDATE/DELETE (Tom Lane) + Disallow LATERAL references to the target table of + an UPDATE/DELETE (Tom Lane) @@ -9310,12 +9310,12 @@ Branch: REL9_2_STABLE [5d545b7ed] 2013-12-14 17:34:00 -0500 - Fix UPDATE/DELETE of an inherited target table - that has UNION ALL subqueries (Tom Lane) + Fix UPDATE/DELETE of an inherited target table + that has UNION ALL subqueries (Tom Lane) - Without this fix, UNION ALL subqueries aren't correctly + Without this fix, UNION ALL subqueries aren't correctly inserted into the update plans for inheritance child tables after the first one, typically resulting in no update happening for those child table(s). @@ -9330,7 +9330,7 @@ Branch: REL9_3_STABLE [663f8419b] 2013-12-23 22:18:23 -0500 - Fix ANALYZE to not fail on a column that's a domain over + Fix ANALYZE to not fail on a column that's a domain over a range type (Tom Lane) @@ -9347,12 +9347,12 @@ Branch: REL8_4_STABLE [00b77771a] 2014-01-11 13:42:11 -0500 - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -9370,14 +9370,14 @@ Branch: REL8_4_STABLE [0fb4e3ceb] 2014-01-18 18:50:47 -0500 - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. @@ -9405,8 +9405,8 @@ Branch: REL8_4_STABLE [57ac7d8a7] 2014-01-08 20:18:24 -0500 - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -9487,12 +9487,12 @@ Branch: REL8_4_STABLE [6141983fb] 2014-02-10 10:00:50 +0200 - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. @@ -9508,8 +9508,8 @@ Branch: REL9_1_STABLE [026a91f86] 2014-01-07 18:00:36 +0100 - Fix placement of permissions checks in pg_start_backup() - and pg_stop_backup() (Andres Freund, Magnus Hagander) + Fix placement of permissions checks in pg_start_backup() + and pg_stop_backup() (Andres Freund, Magnus Hagander) @@ -9530,7 +9530,7 @@ Branch: REL8_4_STABLE [69f77d756] 2013-12-15 11:11:11 +0900 - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) @@ -9544,14 +9544,14 @@ Branch: REL9_2_STABLE [888b56570] 2014-02-03 14:46:57 -0500 - Fix *-qualification of named parameters in SQL-language + Fix *-qualification of named parameters in SQL-language functions (Tom Lane) Given a composite-type parameter - named foo, $1.* worked fine, - but foo.* not so much. 
+ named foo, $1.* worked fine, + but foo.* not so much. @@ -9567,11 +9567,11 @@ Branch: REL8_4_STABLE [5525529db] 2014-01-23 23:02:30 +0900 - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. @@ -9587,14 +9587,14 @@ Branch: REL8_4_STABLE [7644a7bd8] 2014-02-13 18:45:32 -0500 - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. @@ -9609,7 +9609,7 @@ Branch: REL9_2_STABLE [fa28f9cba] 2014-01-04 16:05:23 -0500 Fix incorrect translation handling in - some psql \d commands + some psql \d commands (Peter Eisentraut, Tom Lane) @@ -9623,7 +9623,7 @@ Branch: REL9_2_STABLE [0ae288d2d] 2014-02-12 14:51:00 +0100 - Ensure pg_basebackup's background process is killed + Ensure pg_basebackup's background process is killed when exiting its foreground process (Magnus Hagander) @@ -9639,7 +9639,7 @@ Branch: REL9_1_STABLE [c6e5c4dd1] 2014-02-09 12:09:55 +0100 Fix possible incorrect printing of filenames - in pg_basebackup's verbose mode (Magnus Hagander) + in pg_basebackup's verbose mode (Magnus Hagander) @@ -9670,7 +9670,7 @@ Branch: REL8_4_STABLE [d68a65b01] 2014-01-09 15:58:37 +0100 - Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) @@ -9686,7 +9686,7 @@ Branch: REL8_4_STABLE [96de4939c] 2014-01-01 12:44:58 +0100 - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) @@ -9703,7 +9703,7 @@ Branch: REL8_4_STABLE [6c8b16e30] 2013-12-07 16:56:34 -0800 - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -9724,7 +9724,7 @@ Branch: REL8_4_STABLE [492b68541] 2014-01-13 15:44:14 +0200 - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) @@ -9737,7 +9737,7 @@ Branch: REL9_3_STABLE [27902bc91] 2013-12-12 19:07:53 +0900 - Fix contrib/pgbench's progress logging to avoid overflow + Fix contrib/pgbench's progress logging to avoid overflow when the scale factor is large (Tatsuo Ishii) @@ -9751,8 +9751,8 @@ Branch: REL9_2_STABLE [27ab1eb7e] 2014-01-21 16:34:35 -0500 - Fix contrib/pg_stat_statement's handling - of CURRENT_DATE and related constructs (Kyotaro + Fix contrib/pg_stat_statement's handling + of CURRENT_DATE and related constructs (Kyotaro Horiguchi) @@ -9766,7 +9766,7 @@ Branch: REL9_3_STABLE [eb3d350db] 2014-02-03 21:30:28 -0500 Improve lost-connection error handling - in contrib/postgres_fdw (Tom Lane) + in contrib/postgres_fdw (Tom Lane) @@ -9799,13 +9799,13 @@ Branch: REL8_4_STABLE [ae3c98b9b] 2014-02-01 15:16:52 -0500 - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. 
It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. @@ -9821,7 +9821,7 @@ Branch: REL9_0_STABLE [1c0bf372f] 2014-02-01 16:14:15 -0500 - Avoid using the deprecated dllwrap tool in Cygwin builds + Avoid using the deprecated dllwrap tool in Cygwin builds (Marco Atzeri) @@ -9850,8 +9850,8 @@ Branch: REL8_4_STABLE [432735cbf] 2014-02-10 20:48:30 -0500 - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -9860,7 +9860,7 @@ Branch: REL8_4_STABLE [432735cbf] 2014-02-10 20:48:30 -0500 the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. @@ -9877,13 +9877,13 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -9935,19 +9935,19 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would need to happen before actual loss occurs, but it's not zero. In 9.2.0 and later, the probability of loss is higher, and it's also possible - to get could not access status of transaction errors as a + to get could not access status of transaction errors as a consequence of this bug. Users upgrading from releases 9.0.4 or 8.4.8 or earlier are not affected, but all later versions contain the bug. @@ -9955,12 +9955,12 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). @@ -9972,14 +9972,14 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 These bugs could lead to could not access status of - transaction errors, or to duplicate or vanishing rows. + transaction errors, or to duplicate or vanishing rows. 
Users upgrading from releases prior to 9.3.0 are not affected. The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix latent corruption but will not be able to fix all pre-existing data errors. @@ -9995,7 +9995,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Fix initialization of pg_clog and pg_subtrans + Fix initialization of pg_clog and pg_subtrans during hot standby startup (Andres Freund, Heikki Linnakangas) @@ -10028,7 +10028,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 These bugs could result in incorrect behavior, such as locking or even updating the wrong row, in the presence of concurrent updates. - Spurious unable to fetch updated version of tuple errors + Spurious unable to fetch updated version of tuple errors were also possible. @@ -10040,7 +10040,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 This could lead to corruption of the lock data structures in shared - memory, causing lock already held and other odd errors. + memory, causing lock already held and other odd errors. @@ -10057,7 +10057,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Truncate pg_multixact contents during WAL replay + Truncate pg_multixact contents during WAL replay (Andres Freund) @@ -10068,14 +10068,14 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Ensure an anti-wraparound VACUUM counts a page as scanned + Ensure an anti-wraparound VACUUM counts a page as scanned when it's only verified that no tuples need freezing (Sergey Burladyan, Jeff Janes) This bug could result in failing to - advance relfrozenxid, so that the table would still be + advance relfrozenxid, so that the table would still be thought to need another anti-wraparound vacuum. In the worst case the database might even shut down to prevent wraparound. @@ -10104,7 +10104,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Fix unexpected spgdoinsert() failure error during SP-GiST + Fix unexpected spgdoinsert() failure error during SP-GiST index creation (Teodor Sigaev) @@ -10122,12 +10122,12 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Historically PostgreSQL has accepted queries like + Historically PostgreSQL has accepted queries like SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z although a strict reading of the SQL standard would forbid the - duplicate usage of table alias x. A misguided change in + duplicate usage of table alias x. A misguided change in 9.3.0 caused it to reject some such cases that were formerly accepted. Restore the previous behavior. @@ -10135,8 +10135,8 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -10153,14 +10153,14 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. Fix incorrect planning in cases where the same non-strict expression - appears in multiple WHERE and outer JOIN + appears in multiple WHERE and outer JOIN equality clauses (Tom Lane) @@ -10248,20 +10248,20 @@ SELECT ... 
FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. - Return a valid JSON value when converting an empty hstore value - to json + Return a valid JSON value when converting an empty hstore value + to json (Oskari Saarenmaa) @@ -10276,7 +10276,7 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. @@ -10290,7 +10290,7 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -10301,10 +10301,10 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -10314,19 +10314,19 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Fix pg_isready to handle its option properly (Fabrízio de Royes Mello and Fujii Masao) - Fix parsing of WAL file names in pg_receivexlog + Fix parsing of WAL file names in pg_receivexlog (Heikki Linnakangas) - This error made pg_receivexlog unable to restart + This error made pg_receivexlog unable to restart streaming after stopping, once at least 4 GB of WAL had been written. @@ -10334,34 +10334,34 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z Report out-of-disk-space failures properly - in pg_upgrade (Peter Eisentraut) + in pg_upgrade (Peter Eisentraut) - Make ecpg search for quoted cursor names + Make ecpg search for quoted cursor names case-sensitively (Zoltán Böszörményi) - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. @@ -10395,7 +10395,7 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - However, if you use the hstore extension, see the + However, if you use the hstore extension, see the first changelog entry. @@ -10408,18 +10408,18 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Ensure new-in-9.3 JSON functionality is added to the hstore + Ensure new-in-9.3 JSON functionality is added to the hstore extension during an update (Andrew Dunstan) - Users who upgraded a pre-9.3 database containing hstore + Users who upgraded a pre-9.3 database containing hstore should execute ALTER EXTENSION hstore UPDATE; after installing 9.3.1, to add two new JSON functions and a cast. 
- (If hstore is already up to date, this command does + (If hstore is already up to date, this command does nothing.) @@ -10452,14 +10452,14 @@ ALTER EXTENSION hstore UPDATE; - Fix timeline handling bugs in pg_receivexlog + Fix timeline handling bugs in pg_receivexlog (Heikki Linnakangas, Andrew Gierth) - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) @@ -10488,7 +10488,7 @@ ALTER EXTENSION hstore UPDATE; Overview - Major enhancements in PostgreSQL 9.3 include: + Major enhancements in PostgreSQL 9.3 include: @@ -10511,17 +10511,17 @@ ALTER EXTENSION hstore UPDATE; - Add many features for the JSON data type, + Add many features for the JSON data type, including operators and functions - to extract elements from JSON values + to extract elements from JSON values - Implement SQL-standard LATERAL option for - FROM-clause subqueries and function calls + Implement SQL-standard LATERAL option for + FROM-clause subqueries and function calls @@ -10535,9 +10535,9 @@ ALTER EXTENSION hstore UPDATE; - Add a Postgres foreign + Add a Postgres foreign data wrapper to allow access to - other Postgres servers + other Postgres servers @@ -10582,8 +10582,8 @@ ALTER EXTENSION hstore UPDATE; A dump/restore using pg_dumpall, or use - of pg_upgrade, is + linkend="APP-PG-DUMPALL">pg_dumpall, or use + of pg_upgrade, is required for those wishing to migrate data from any previous release. @@ -10599,21 +10599,21 @@ ALTER EXTENSION hstore UPDATE; - Rename replication_timeout to wal_sender_timeout + Rename replication_timeout to wal_sender_timeout (Amit Kapila) This setting controls the WAL sender timeout. + linkend="wal">WAL sender timeout. Require superuser privileges to set commit_delay + linkend="guc-commit-delay">commit_delay because it can now potentially delay other sessions (Simon Riggs) @@ -10625,7 +10625,7 @@ ALTER EXTENSION hstore UPDATE; Users who have set work_mem based on the + linkend="guc-work-mem">work_mem based on the previous behavior may need to revisit that setting. @@ -10642,7 +10642,7 @@ ALTER EXTENSION hstore UPDATE; Throw an error if a tuple to be updated or deleted has already been - updated or deleted by a BEFORE trigger (Kevin Grittner) + updated or deleted by a BEFORE trigger (Kevin Grittner) @@ -10652,7 +10652,7 @@ ALTER EXTENSION hstore UPDATE; Now an error is thrown to prevent the inconsistent results from being committed. If this change affects your application, the best solution is usually to move the data-propagation actions to - an AFTER trigger. + an AFTER trigger. @@ -10665,15 +10665,15 @@ ALTER EXTENSION hstore UPDATE; Change multicolumn ON UPDATE - SET NULL/SET DEFAULT foreign key actions to affect + SET NULL/SET DEFAULT foreign key actions to affect all columns of the constraint, not just those changed in the - UPDATE (Tom Lane) + UPDATE (Tom Lane) Previously, we would set only those referencing columns that correspond to referenced columns that were changed by - the UPDATE. This was what was required by SQL-92, + the UPDATE. This was what was required by SQL-92, but more recent editions of the SQL standard specify the new behavior. 
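The multicolumn foreign-key behavior change described above is easiest to see with a two-column constraint; a hypothetical pair of tables:

CREATE TABLE parent (a int, b int, PRIMARY KEY (a, b));
CREATE TABLE child (
    a int,
    b int,
    FOREIGN KEY (a, b) REFERENCES parent (a, b) ON UPDATE SET NULL
);
UPDATE parent SET a = a + 1;
-- matching child rows now get both child.a and child.b set to NULL,
-- not just child.a as in previous releases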
@@ -10681,35 +10681,35 @@ ALTER EXTENSION hstore UPDATE; Force cached plans to be replanned if the search_path changes + linkend="guc-search-path">search_path changes (Tom Lane) Previously, cached plans already generated in the current session were not redone if the query was re-executed with a - new search_path setting, resulting in surprising behavior. + new search_path setting, resulting in surprising behavior. Fix to_number() + linkend="functions-formatting-table">to_number() to properly handle a period used as a thousands separator (Tom Lane) Previously, a period was considered to be a decimal point even when - the locale says it isn't and the D format code is used to + the locale says it isn't and the D format code is used to specify use of the locale-specific decimal point. This resulted in - wrong answers if FM format was also used. + wrong answers if FM format was also used. - Fix STRICT non-set-returning functions that have + Fix STRICT non-set-returning functions that have set-returning functions in their arguments to properly return null rows (Tom Lane) @@ -10722,14 +10722,14 @@ ALTER EXTENSION hstore UPDATE; - Store WAL in a continuous + Store WAL in a continuous stream, rather than skipping the last 16MB segment every 4GB (Heikki Linnakangas) - Previously, WAL files with names ending in FF - were not used because of this skipping. If you have WAL + Previously, WAL files with names ending in FF + were not used because of this skipping. If you have WAL backup or restore scripts that took this behavior into account, they will need to be adjusted. @@ -10738,15 +10738,15 @@ ALTER EXTENSION hstore UPDATE; In pg_constraint.confmatchtype, - store the default foreign key match type (non-FULL, - non-PARTIAL) as s for simple + linkend="catalog-pg-constraint">pg_constraint.confmatchtype, + store the default foreign key match type (non-FULL, + non-PARTIAL) as s for simple (Tom Lane) - Previously this case was represented by u - for unspecified. + Previously this case was represented by u + for unspecified. @@ -10783,10 +10783,10 @@ ALTER EXTENSION hstore UPDATE; This change improves concurrency and reduces the probability of deadlocks when updating tables involved in a foreign-key constraint. - UPDATEs that do not change any columns referenced in a - foreign key now take the new NO KEY UPDATE lock mode on - the row, while foreign key checks use the new KEY SHARE - lock mode, which does not conflict with NO KEY UPDATE. + UPDATEs that do not change any columns referenced in a + foreign key now take the new NO KEY UPDATE lock mode on + the row, while foreign key checks use the new KEY SHARE + lock mode, which does not conflict with NO KEY UPDATE. So there is no blocking unless a foreign-key column is changed. 
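The row-level lock modes summarized above can also be taken explicitly; a sketch with a hypothetical orders table:

BEGIN;
SELECT * FROM orders WHERE id = 7 FOR NO KEY UPDATE;   -- the lock an UPDATE takes when no key column changes
-- a concurrent foreign-key check needs only the weaker lock and is not blocked:
-- SELECT 1 FROM orders WHERE id = 7 FOR KEY SHARE;
COMMIT;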
@@ -10794,7 +10794,7 @@ ALTER EXTENSION hstore UPDATE; Add configuration variable lock_timeout to + linkend="guc-lock-timeout">lock_timeout to allow limiting how long a session will wait to acquire any one lock (Zoltán Böszörményi) @@ -10811,21 +10811,21 @@ ALTER EXTENSION hstore UPDATE; - Add SP-GiST + Add SP-GiST support for range data types (Alexander Korotkov) - Allow GiST indexes to be + Allow GiST indexes to be unlogged (Jeevan Chalke) - Improve performance of GiST index insertion by randomizing + Improve performance of GiST index insertion by randomizing the choice of which page to descend to when there are multiple equally good alternatives (Heikki Linnakangas) @@ -10863,7 +10863,7 @@ ALTER EXTENSION hstore UPDATE; Improve optimizer's hash table size estimate for - doing DISTINCT via hash aggregation (Tom Lane) + doing DISTINCT via hash aggregation (Tom Lane) @@ -10893,7 +10893,7 @@ ALTER EXTENSION hstore UPDATE; - Add COPY FREEZE + Add COPY FREEZE option to avoid the overhead of marking tuples as frozen later (Simon Riggs, Jeff Davis) @@ -10902,7 +10902,7 @@ ALTER EXTENSION hstore UPDATE; Improve performance of NUMERIC calculations + linkend="datatype-numeric">NUMERIC calculations (Kyotaro Horiguchi) @@ -10910,12 +10910,12 @@ ALTER EXTENSION hstore UPDATE; Improve synchronization of sessions waiting for commit_delay + linkend="guc-commit-delay">commit_delay (Peter Geoghegan) - This greatly improves the usefulness of commit_delay. + This greatly improves the usefulness of commit_delay. @@ -10923,7 +10923,7 @@ ALTER EXTENSION hstore UPDATE; Improve performance of the CREATE TEMPORARY TABLE ... ON - COMMIT DELETE ROWS option by not truncating such temporary + COMMIT DELETE ROWS option by not truncating such temporary tables in transactions that haven't touched any temporary tables (Heikki Linnakangas) @@ -10948,7 +10948,7 @@ ALTER EXTENSION hstore UPDATE; This speeds up lock bookkeeping at statement completion in multi-statement transactions that hold many locks; it is particularly - useful for pg_dump. + useful for pg_dump. @@ -10960,7 +10960,7 @@ ALTER EXTENSION hstore UPDATE; This speeds up sessions that create many tables in successive - small transactions, such as a pg_restore run. + small transactions, such as a pg_restore run. @@ -11042,7 +11042,7 @@ ALTER EXTENSION hstore UPDATE; When an authentication failure occurs, log the relevant - pg_hba.conf + pg_hba.conf line, to ease debugging of unintended failures (Magnus Hagander) @@ -11050,23 +11050,23 @@ ALTER EXTENSION hstore UPDATE; - Improve LDAP error + Improve LDAP error reporting and documentation (Peter Eisentraut) - Add support for specifying LDAP authentication parameters - in URL format, per RFC 4516 (Peter Eisentraut) + Add support for specifying LDAP authentication parameters + in URL format, per RFC 4516 (Peter Eisentraut) Change the ssl_ciphers parameter - to start with DEFAULT, rather than ALL, + linkend="guc-ssl-ciphers">ssl_ciphers parameter + to start with DEFAULT, rather than ALL, then remove insecure ciphers (Magnus Hagander) @@ -11078,12 +11078,12 @@ ALTER EXTENSION hstore UPDATE; Parse and load pg_ident.conf + linkend="auth-username-maps">pg_ident.conf once, not during each connection (Amit Kapila) - This is similar to how pg_hba.conf is processed. + This is similar to how pg_hba.conf is processed. @@ -11103,8 +11103,8 @@ ALTER EXTENSION hstore UPDATE; - On Unix-like systems, mmap() is now used for most - of PostgreSQL's shared memory. 
For most users, this + On Unix-like systems, mmap() is now used for most + of PostgreSQL's shared memory. For most users, this will eliminate any need to adjust kernel parameters for shared memory. @@ -11117,8 +11117,8 @@ ALTER EXTENSION hstore UPDATE; The configuration parameter - unix_socket_directory is replaced by unix_socket_directories, + unix_socket_directory is replaced by unix_socket_directories, which accepts a list of directories. @@ -11131,7 +11131,7 @@ ALTER EXTENSION hstore UPDATE; Such a directory is specified with include_dir in the server + linkend="config-includes">include_dir in the server configuration file. @@ -11140,13 +11140,13 @@ ALTER EXTENSION hstore UPDATE; Increase the maximum initdb-configured value for shared_buffers + linkend="guc-shared-buffers">shared_buffers to 128MB (Robert Haas) This is the maximum value that initdb will attempt to set in postgresql.conf; + linkend="config-setting-configuration-file">postgresql.conf; the previous maximum was 32MB. @@ -11154,7 +11154,7 @@ ALTER EXTENSION hstore UPDATE; Remove the external - PID file, if any, on postmaster exit + PID file, if any, on postmaster exit (Peter Eisentraut) @@ -11186,10 +11186,10 @@ ALTER EXTENSION hstore UPDATE; - Add SQL functions pg_is_in_backup() + Add SQL functions pg_is_in_backup() and pg_backup_start_time() + linkend="functions-admin-backup">pg_backup_start_time() (Gilles Darold) @@ -11201,7 +11201,7 @@ ALTER EXTENSION hstore UPDATE; Improve performance of streaming log shipping with synchronous_commit + linkend="guc-synchronous-commit">synchronous_commit disabled (Andres Freund) @@ -11216,12 +11216,12 @@ ALTER EXTENSION hstore UPDATE; Add the last checkpoint's redo location to pg_controldata's + linkend="APP-PGCONTROLDATA">pg_controldata's output (Fujii Masao) - This information is useful for determining which WAL + This information is useful for determining which WAL files are needed for restore. @@ -11229,7 +11229,7 @@ ALTER EXTENSION hstore UPDATE; Allow tools like pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog to run on computers with different architectures (Heikki Linnakangas) @@ -11245,9 +11245,9 @@ ALTER EXTENSION hstore UPDATE; Make pg_basebackup - @@ -11259,10 +11259,10 @@ ALTER EXTENSION hstore UPDATE; Allow pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog and pg_basebackup - to handle streaming timeline switches (Heikki Linnakangas) @@ -11270,8 +11270,8 @@ ALTER EXTENSION hstore UPDATE; Add wal_receiver_timeout - parameter to control the WAL receiver's timeout + linkend="guc-wal-receiver-timeout">wal_receiver_timeout + parameter to control the WAL receiver's timeout (Amit Kapila) @@ -11282,7 +11282,7 @@ ALTER EXTENSION hstore UPDATE; - Change the WAL record format to + Change the WAL record format to allow splitting the record header across pages (Heikki Linnakangas) @@ -11303,23 +11303,23 @@ ALTER EXTENSION hstore UPDATE; - Implement SQL-standard LATERAL option for - FROM-clause subqueries and function calls (Tom Lane) + Implement SQL-standard LATERAL option for + FROM-clause subqueries and function calls (Tom Lane) - This feature allows subqueries and functions in FROM to - reference columns from other tables in the FROM - clause. The LATERAL keyword is optional for functions. + This feature allows subqueries and functions in FROM to + reference columns from other tables in the FROM + clause. The LATERAL keyword is optional for functions. 
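A short example of the LATERAL feature summarized above, using hypothetical tables:

SELECT c.name, recent.total
FROM customers c,
     LATERAL (SELECT sum(o.amount) AS total
              FROM orders o
              WHERE o.customer_id = c.id) AS recent;   -- the subquery may reference c, which precedes it in FROM

For set-returning functions in FROM the keyword can be left out, as in FROM tab t, generate_series(1, t.n) AS g.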
Add support for piping COPY and psql \copy + linkend="SQL-COPY">COPY and psql \copy data to/from an external program (Etsuro Fujita) @@ -11327,8 +11327,8 @@ ALTER EXTENSION hstore UPDATE; Allow a multirow VALUES clause in a rule - to reference OLD/NEW (Tom Lane) + linkend="SQL-VALUES">VALUES clause in a rule + to reference OLD/NEW (Tom Lane) @@ -11364,14 +11364,14 @@ ALTER EXTENSION hstore UPDATE; Add CREATE SCHEMA ... IF - NOT EXISTS clause (Fabrízio de Royes Mello) + NOT EXISTS clause (Fabrízio de Royes Mello) Make REASSIGN - OWNED also change ownership of shared objects + OWNED also change ownership of shared objects (Álvaro Herrera) @@ -11379,7 +11379,7 @@ ALTER EXTENSION hstore UPDATE; Make CREATE - AGGREGATE complain if the given initial value string is not + AGGREGATE complain if the given initial value string is not valid input for the transition datatype (Tom Lane) @@ -11387,12 +11387,12 @@ ALTER EXTENSION hstore UPDATE; Suppress CREATE - TABLE's messages about implicit index and sequence creation + TABLE's messages about implicit index and sequence creation (Robert Haas) - These messages now appear at DEBUG1 verbosity, so that + These messages now appear at DEBUG1 verbosity, so that they will not be shown by default. @@ -11400,7 +11400,7 @@ ALTER EXTENSION hstore UPDATE; Allow DROP TABLE IF - EXISTS to succeed when a non-existent schema is specified + EXISTS to succeed when a non-existent schema is specified in the table name (Bruce Momjian) @@ -11427,14 +11427,14 @@ ALTER EXTENSION hstore UPDATE; - <command>ALTER</> + <command>ALTER</command> - Support IF NOT EXISTS option in ALTER TYPE ... ADD VALUE + Support IF NOT EXISTS option in ALTER TYPE ... ADD VALUE (Andrew Dunstan) @@ -11446,21 +11446,21 @@ ALTER EXTENSION hstore UPDATE; Add ALTER ROLE ALL - SET to establish settings for all users (Peter Eisentraut) + SET to establish settings for all users (Peter Eisentraut) This allows settings to apply to all users in all databases. ALTER DATABASE SET + linkend="SQL-ALTERDATABASE">ALTER DATABASE SET already allowed addition of settings for all users in a single - database. postgresql.conf has a similar effect. + database. postgresql.conf has a similar effect. Add support for ALTER RULE - ... RENAME (Ali Dar) + ... RENAME (Ali Dar) @@ -11469,7 +11469,7 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="rules-views"><command>VIEWs</></link> + <link linkend="rules-views"><command>VIEWs</command></link> @@ -11499,20 +11499,20 @@ ALTER EXTENSION hstore UPDATE; Simple views that reference some or all columns from a single base table are now updatable by default. More complex views can be made updatable using INSTEAD OF triggers - or INSTEAD rules. + linkend="SQL-CREATETRIGGER">INSTEAD OF triggers + or INSTEAD rules. Add CREATE RECURSIVE - VIEW syntax (Peter Eisentraut) + VIEW syntax (Peter Eisentraut) Internally this is translated into CREATE VIEW ... WITH - RECURSIVE .... + RECURSIVE .... @@ -11558,8 +11558,8 @@ ALTER EXTENSION hstore UPDATE; Allow text timezone - designations, e.g. America/Chicago, in the - T field of ISO-format timestamptz + designations, e.g. 
America/Chicago, in the + T field of ISO-format timestamptz input (Bruce Momjian) @@ -11567,20 +11567,20 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="datatype-json"><type>JSON</></link> + <link linkend="datatype-json"><type>JSON</type></link> Add operators and functions - to extract elements from JSON values (Andrew Dunstan) + to extract elements from JSON values (Andrew Dunstan) - Allow JSON values to be JSON values to be converted into records (Andrew Dunstan) @@ -11589,7 +11589,7 @@ ALTER EXTENSION hstore UPDATE; Add functions to convert - scalars, records, and hstore values to JSON (Andrew + scalars, records, and hstore values to JSON (Andrew Dunstan) @@ -11609,9 +11609,9 @@ ALTER EXTENSION hstore UPDATE; Add array_remove() + linkend="array-functions-table">array_remove() and array_replace() + linkend="array-functions-table">array_replace() functions (Marco Nenciarini, Gabriele Bartolini) @@ -11619,10 +11619,10 @@ ALTER EXTENSION hstore UPDATE; Allow concat() + linkend="functions-string-other">concat() and format() - to properly expand VARIADIC-labeled arguments + linkend="functions-string-format">format() + to properly expand VARIADIC-labeled arguments (Pavel Stehule) @@ -11630,7 +11630,7 @@ ALTER EXTENSION hstore UPDATE; Improve format() + linkend="functions-string-format">format() to provide field width and left/right alignment options (Pavel Stehule) @@ -11638,29 +11638,29 @@ ALTER EXTENSION hstore UPDATE; Make to_char(), + linkend="functions-formatting-table">to_char(), to_date(), + linkend="functions-formatting-table">to_date(), and to_timestamp() + linkend="functions-formatting-table">to_timestamp() handle negative (BC) century values properly (Bruce Momjian) Previously the behavior was either wrong or inconsistent - with positive/AD handling, e.g. with the format mask - IYYY-IW-DY. + with positive/AD handling, e.g. with the format mask + IYYY-IW-DY. Make to_date() + linkend="functions-formatting-table">to_date() and to_timestamp() - return proper results when mixing ISO and Gregorian + linkend="functions-formatting-table">to_timestamp() + return proper results when mixing ISO and Gregorian week/day designations (Bruce Momjian) @@ -11668,27 +11668,27 @@ ALTER EXTENSION hstore UPDATE; Cause pg_get_viewdef() - to start a new line by default after each SELECT target - list entry and FROM entry (Marko Tiikkaja) + linkend="functions-info-catalog-table">pg_get_viewdef() + to start a new line by default after each SELECT target + list entry and FROM entry (Marko Tiikkaja) This reduces line length in view printing, for instance in pg_dump output. + linkend="APP-PGDUMP">pg_dump output. - Fix map_sql_value_to_xml_value() to print values of + Fix map_sql_value_to_xml_value() to print values of domain types the same way their base type would be printed (Pavel Stehule) There are special formatting rules for certain built-in types such as - boolean; these rules now also apply to domains over these + boolean; these rules now also apply to domains over these types. @@ -11707,13 +11707,13 @@ ALTER EXTENSION hstore UPDATE; - Allow PL/pgSQL to use RETURN with a composite-type + Allow PL/pgSQL to use RETURN with a composite-type expression (Asif Rehman) Previously, in a function returning a composite type, - RETURN could only reference a variable of that type. + RETURN could only reference a variable of that type. 
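A minimal sketch of the relaxed PL/pgSQL RETURN rule described above, using an illustrative composite type:

CREATE TYPE pair AS (a int, b text);

CREATE FUNCTION make_pair(i int) RETURNS pair AS $$
BEGIN
    -- 9.3 accepts any composite-valued expression here, not just a
    -- variable declared with the function's return type
    RETURN ROW(i, 'value ' || i)::pair;
END;
$$ LANGUAGE plpgsql;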
@@ -11728,14 +11728,14 @@ ALTER EXTENSION hstore UPDATE; Allow PL/pgSQL to access the number of rows processed by - COPY (Pavel Stehule) + COPY (Pavel Stehule) - A COPY executed in a PL/pgSQL function now updates the + A COPY executed in a PL/pgSQL function now updates the value retrieved by GET DIAGNOSTICS - x = ROW_COUNT. + x = ROW_COUNT. @@ -11779,9 +11779,9 @@ ALTER EXTENSION hstore UPDATE; - Handle SPI errors raised - explicitly (with PL/Python's RAISE) the same as - internal SPI errors (Oskari Saarenmaa and Jan Urbanski) + Handle SPI errors raised + explicitly (with PL/Python's RAISE) the same as + internal SPI errors (Oskari Saarenmaa and Jan Urbanski) @@ -11798,7 +11798,7 @@ ALTER EXTENSION hstore UPDATE; - Prevent leakage of SPI tuple tables during subtransaction + Prevent leakage of SPI tuple tables during subtransaction abort (Tom Lane) @@ -11809,7 +11809,7 @@ ALTER EXTENSION hstore UPDATE; of such tuple tables and release them manually in error-recovery code. Failure to do so caused a number of transaction-lifespan memory leakage issues in PL/pgSQL and perhaps other SPI clients. SPI_freetuptable() + linkend="spi-spi-freetupletable">SPI_freetuptable() now protects itself against multiple freeing requests, so any existing code that did take care to clean up shouldn't be broken by this change. @@ -11817,8 +11817,8 @@ ALTER EXTENSION hstore UPDATE; - Allow SPI functions to access the number of rows processed - by COPY (Pavel Stehule) + Allow SPI functions to access the number of rows processed + by COPY (Pavel Stehule) @@ -11834,35 +11834,35 @@ ALTER EXTENSION hstore UPDATE; Add command-line utility pg_isready to + linkend="app-pg-isready">pg_isready to check if the server is ready to accept connections (Phil Sorber) - Support multiple This is similar to the way pg_dump's - option works. - Add @@ -11870,7 +11870,7 @@ ALTER EXTENSION hstore UPDATE; Add libpq function PQconninfo() + linkend="libpq-pqconninfo">PQconninfo() to return connection information (Zoltán Böszörményi, Magnus Hagander) @@ -11879,27 +11879,27 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> - Adjust function cost settings so psql tab + Adjust function cost settings so psql tab completion and pattern searching are more efficient (Tom Lane) - Improve psql's tab completion coverage (Jeff Janes, + Improve psql's tab completion coverage (Jeff Janes, Dean Rasheed, Peter Eisentraut, Magnus Hagander) - Allow the psql mode to work when reading from standard input (Fabien Coelho, Robert Haas) @@ -11911,13 +11911,13 @@ ALTER EXTENSION hstore UPDATE; - Remove psql warning when connecting to an older + Remove psql warning when connecting to an older server (Peter Eisentraut) A warning is still issued when connecting to a server of a newer major - version than psql's. + version than psql's. 
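To illustrate the PL/pgSQL COPY row-count change noted above, a sketch with an illustrative table and file name:

DO $$
DECLARE
    n bigint;
BEGIN
    COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv);
    GET DIAGNOSTICS n = ROW_COUNT;   -- now reflects the rows copied
    RAISE NOTICE 'loaded % rows', n;
END;
$$;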
@@ -11930,42 +11930,42 @@ ALTER EXTENSION hstore UPDATE; - Add psql command \watch to repeatedly + Add psql command \watch to repeatedly execute a SQL command (Will Leinweber) - Add psql command \gset to store query - results in psql variables (Pavel Stehule) + Add psql command \gset to store query + results in psql variables (Pavel Stehule) - Add SSL information to psql's - \conninfo command (Alastair Turner) + Add SSL information to psql's + \conninfo command (Alastair Turner) - Add Security column to psql's - \df+ output (Jon Erdman) + Add Security column to psql's + \df+ output (Jon Erdman) - Allow psql command \l to accept a database + Allow psql command \l to accept a database name pattern (Peter Eisentraut) - In psql, do not allow \connect to + In psql, do not allow \connect to use defaults if there is no active connection (Bruce Momjian) @@ -11977,7 +11977,7 @@ ALTER EXTENSION hstore UPDATE; Properly reset state after failure of a SQL command executed with - psql's \g file + psql's \g file (Tom Lane) @@ -11998,8 +11998,8 @@ ALTER EXTENSION hstore UPDATE; - Add a latex-longtable output format to - psql (Bruce Momjian) + Add a latex-longtable output format to + psql (Bruce Momjian) @@ -12009,21 +12009,21 @@ ALTER EXTENSION hstore UPDATE; - Add a border=3 output mode to the psql - latex format (Bruce Momjian) + Add a border=3 output mode to the psql + latex format (Bruce Momjian) - In psql's tuples-only and expanded output modes, no - longer emit (No rows) for zero rows (Peter Eisentraut) + In psql's tuples-only and expanded output modes, no + longer emit (No rows) for zero rows (Peter Eisentraut) - In psql's unaligned, expanded output mode, no longer + In psql's unaligned, expanded output mode, no longer print an empty line for zero rows (Peter Eisentraut) @@ -12035,34 +12035,34 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Add pg_dump option to dump tables in parallel (Joachim Wieland) - Make pg_dump output functions in a more predictable + Make pg_dump output functions in a more predictable order (Joel Jacobson) - Fix tar files emitted by pg_dump - to be POSIX conformant (Brian Weaver, Tom Lane) + Fix tar files emitted by pg_dump + to be POSIX conformant (Brian Weaver, Tom Lane) - Add @@ -12076,7 +12076,7 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-INITDB"><application>initdb</></link> + <link linkend="APP-INITDB"><application>initdb</application></link> @@ -12087,19 +12087,19 @@ ALTER EXTENSION hstore UPDATE; This insures data integrity in event of a system crash shortly after - initdb. This can be disabled by using . - Add initdb option to sync the data directory to durable storage (Bruce Momjian) This is used by pg_upgrade. + linkend="pgupgrade">pg_upgrade. 
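The new psql \watch command noted above might be used for a simple monitoring loop, for example (the query is illustrative):

SELECT pid, state, now() - query_start AS runtime
FROM pg_stat_activity
WHERE state <> 'idle' \watch 5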
@@ -12131,14 +12131,14 @@ ALTER EXTENSION hstore UPDATE; - Create a centralized timeout API (Zoltán + Create a centralized timeout API (Zoltán Böszörményi) - Create libpgcommon and move pg_malloc() and other + Create libpgcommon and move pg_malloc() and other functions there (Álvaro Herrera, Andres Freund) @@ -12155,15 +12155,15 @@ ALTER EXTENSION hstore UPDATE; - Use SA_RESTART for all signals, - including SIGALRM (Tom Lane) + Use SA_RESTART for all signals, + including SIGALRM (Tom Lane) Ensure that the correct text domain is used when - translating errcontext() messages + translating errcontext() messages (Heikki Linnakangas) @@ -12176,7 +12176,7 @@ ALTER EXTENSION hstore UPDATE; - Provide support for static assertions that will fail at + Provide support for static assertions that will fail at compile time if some compile-time-constant condition is not met (Andres Freund, Tom Lane) @@ -12184,14 +12184,14 @@ ALTER EXTENSION hstore UPDATE; - Support Assert() in client-side code (Andrew Dunstan) + Support Assert() in client-side code (Andrew Dunstan) - Add decoration to inform the C compiler that some ereport() - and elog() calls do not return (Peter Eisentraut, + Add decoration to inform the C compiler that some ereport() + and elog() calls do not return (Peter Eisentraut, Andres Freund, Tom Lane, Heikki Linnakangas) @@ -12200,7 +12200,7 @@ ALTER EXTENSION hstore UPDATE; Allow options to be passed to the regression test output comparison utility via PG_REGRESS_DIFF_OPTS + linkend="regress-evaluation">PG_REGRESS_DIFF_OPTS (Peter Eisentraut) @@ -12209,42 +12209,42 @@ ALTER EXTENSION hstore UPDATE; Add isolation tests for CREATE INDEX - CONCURRENTLY (Abhijit Menon-Sen) + CONCURRENTLY (Abhijit Menon-Sen) - Remove typedefs for int2/int4 as they are better - represented as int16/int32 (Peter Eisentraut) + Remove typedefs for int2/int4 as they are better + represented as int16/int32 (Peter Eisentraut) Fix install-strip on Mac OS - X (Peter Eisentraut) + X (Peter Eisentraut) Remove configure flag - , as it is no longer supported (Bruce Momjian) - Rewrite pgindent in Perl (Andrew Dunstan) + Rewrite pgindent in Perl (Andrew Dunstan) Provide Emacs macro to set Perl formatting to - match PostgreSQL's perltidy settings (Peter Eisentraut) + match PostgreSQL's perltidy settings (Peter Eisentraut) @@ -12257,25 +12257,25 @@ ALTER EXTENSION hstore UPDATE; - Change the way UESCAPE is lexed, to significantly reduce + Change the way UESCAPE is lexed, to significantly reduce the size of the lexer tables (Heikki Linnakangas) - Centralize flex and bison - make rules (Peter Eisentraut) + Centralize flex and bison + make rules (Peter Eisentraut) - This is useful for pgxs authors. + This is useful for pgxs authors. 
- Change many internal backend functions to return object OIDs + Change many internal backend functions to return object OIDs rather than void (Dimitri Fontaine) @@ -12299,7 +12299,7 @@ ALTER EXTENSION hstore UPDATE; Add function pg_identify_object() + linkend="functions-info-catalog-table">pg_identify_object() to produce a machine-readable description of a database object (Álvaro Herrera) @@ -12307,7 +12307,7 @@ ALTER EXTENSION hstore UPDATE; - Add post-ALTER-object server hooks (KaiGai Kohei) + Add post-ALTER-object server hooks (KaiGai Kohei) @@ -12321,28 +12321,28 @@ ALTER EXTENSION hstore UPDATE; Provide a tool to help detect timezone abbreviation changes when - updating the src/timezone/data files + updating the src/timezone/data files (Tom Lane) - Add pkg-config support for libpq - and ecpg libraries (Peter Eisentraut) + Add pkg-config support for libpq + and ecpg libraries (Peter Eisentraut) - Remove src/tools/backend, now that the content is on - the PostgreSQL wiki (Bruce Momjian) + Remove src/tools/backend, now that the content is on + the PostgreSQL wiki (Bruce Momjian) - Split out WAL reading as + Split out WAL reading as an independent facility (Heikki Linnakangas, Andres Freund) @@ -12350,13 +12350,13 @@ ALTER EXTENSION hstore UPDATE; Use a 64-bit integer to represent WAL positions - (XLogRecPtr) instead of two 32-bit integers + linkend="wal">WAL positions + (XLogRecPtr) instead of two 32-bit integers (Heikki Linnakangas) - Generally, tools that need to read the WAL format + Generally, tools that need to read the WAL format will need to be adjusted. @@ -12371,7 +12371,7 @@ ALTER EXTENSION hstore UPDATE; Allow PL/Python on OS - X to build against custom versions of Python + X to build against custom versions of Python (Peter Eisentraut) @@ -12387,9 +12387,9 @@ ALTER EXTENSION hstore UPDATE; - Add a Postgres foreign + Add a Postgres foreign data wrapper contrib module to allow access to - other Postgres servers (Shigeru Hanada) + other Postgres servers (Shigeru Hanada) @@ -12399,7 +12399,7 @@ ALTER EXTENSION hstore UPDATE; - Add pg_xlogdump + Add pg_xlogdump contrib program (Andres Freund) @@ -12407,46 +12407,46 @@ ALTER EXTENSION hstore UPDATE; Add support for indexing of regular-expression searches in - pg_trgm + pg_trgm (Alexander Korotkov) - Improve pg_trgm's + Improve pg_trgm's handling of multibyte characters (Tom Lane) On a platform that does not have the wcstombs() or towlower() library functions, this could result in an incompatible change in the contents - of pg_trgm indexes for non-ASCII data. In such cases, - REINDEX those indexes to ensure correct search results. + of pg_trgm indexes for non-ASCII data. In such cases, + REINDEX those indexes to ensure correct search results. 
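A sketch of the new pg_trgm regular-expression index support mentioned above (the docs table is illustrative):

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX docs_body_trgm ON docs USING gin (body gin_trgm_ops);
-- this regular-expression search can now use the trigram index
SELECT id FROM docs WHERE body ~ 'relea[sz]e notes';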
Add a pgstattuple function to report - the size of the pending-insertions list of a GIN index + the size of the pending-insertions list of a GIN index (Fujii Masao) - Make oid2name, - pgbench, and - vacuumlo set - fallback_application_name (Amit Kapila) + Make oid2name, + pgbench, and + vacuumlo set + fallback_application_name (Amit Kapila) Improve output of pg_test_timing + linkend="pgtesttiming">pg_test_timing (Bruce Momjian) @@ -12454,7 +12454,7 @@ ALTER EXTENSION hstore UPDATE; Improve output of pg_test_fsync + linkend="pgtestfsync">pg_test_fsync (Peter Geoghegan) @@ -12466,9 +12466,9 @@ ALTER EXTENSION hstore UPDATE; - When using this FDW to define the target of a dblink + When using this FDW to define the target of a dblink connection, instead of using a hard-wired list of connection options, - the underlying libpq library is consulted to see what + the underlying libpq library is consulted to see what connection options it supports. @@ -12476,26 +12476,26 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="pgupgrade"><application>pg_upgrade</></link> + <link linkend="pgupgrade"><application>pg_upgrade</application></link> - Allow pg_upgrade to do dumps and restores in + Allow pg_upgrade to do dumps and restores in parallel (Bruce Momjian, Andrew Dunstan) This allows parallel schema dump/restore of databases, as well as parallel copy/link of data files per tablespace. Use the - option to specify the level of parallelism. - Make pg_upgrade create Unix-domain sockets in + Make pg_upgrade create Unix-domain sockets in the current directory (Bruce Momjian, Tom Lane) @@ -12507,7 +12507,7 @@ ALTER EXTENSION hstore UPDATE; - Make pg_upgrade mode properly detect the location of non-default socket directories (Bruce Momjian, Tom Lane) @@ -12515,21 +12515,21 @@ ALTER EXTENSION hstore UPDATE; - Improve performance of pg_upgrade for databases + Improve performance of pg_upgrade for databases with many tables (Bruce Momjian) - Improve pg_upgrade's logs by showing + Improve pg_upgrade's logs by showing executed commands (Álvaro Herrera) - Improve pg_upgrade's status display during + Improve pg_upgrade's status display during copy/link (Bruce Momjian) @@ -12539,33 +12539,33 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="pgbench"><application>pgbench</></link> + <link linkend="pgbench"><application>pgbench</application></link> - Add This adds foreign key constraints to the standard tables created by - pgbench, for use in foreign key performance testing. + pgbench, for use in foreign key performance testing. 
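The new pgstattuple facility for GIN pending lists mentioned above might be used like this (the pgstatginindex() function is the one shipped with pgstattuple; the index name reuses the illustrative sketch above):

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT version, pending_pages, pending_tuples
FROM pgstatginindex('docs_body_trgm');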
- Allow pgbench to aggregate performance statistics - and produce output every seconds (Tomas Vondra) - Add pgbench option to control the percentage of transactions logged (Tomas Vondra) @@ -12573,29 +12573,29 @@ ALTER EXTENSION hstore UPDATE; Reduce and improve the status message output of - pgbench's initialization mode (Robert Haas, + pgbench's initialization mode (Robert Haas, Peter Eisentraut) - Add pgbench mode to print one output line every five seconds (Tomas Vondra) - Output pgbench elapsed and estimated remaining + Output pgbench elapsed and estimated remaining time during initialization (Tomas Vondra) - Allow pgbench to use much larger scale factors, - by changing relevant columns from integer to bigint + Allow pgbench to use much larger scale factors, + by changing relevant columns from integer to bigint when the requested scale factor exceeds 20000 (Greg Smith) @@ -12614,21 +12614,21 @@ ALTER EXTENSION hstore UPDATE; - Allow EPUB-format documentation to be created + Allow EPUB-format documentation to be created (Peter Eisentraut) - Update FreeBSD kernel configuration documentation + Update FreeBSD kernel configuration documentation (Brad Davis) - Improve WINDOW + Improve WINDOW function documentation (Bruce Momjian, Florian Pflug) @@ -12636,7 +12636,7 @@ ALTER EXTENSION hstore UPDATE; Add instructions for setting - up the documentation tool chain on macOS + up the documentation tool chain on macOS (Peter Eisentraut) @@ -12644,7 +12644,7 @@ ALTER EXTENSION hstore UPDATE; Improve commit_delay + linkend="guc-commit-delay">commit_delay documentation (Peter Geoghegan) diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index c665f90ca1..deb74b4e1c 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -53,20 +53,20 @@ Branch: REL9_4_STABLE [b51c8efc6] 2017-08-24 15:21:32 -0700 Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -105,21 +105,21 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -136,7 +136,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. 
Previously, @@ -148,7 +148,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Fix crash in pg_restore when using parallel mode and + Fix crash in pg_restore when using parallel mode and using a list file to select a subset of items to restore (Fabrízio de Royes Mello) @@ -156,13 +156,13 @@ CREATE OR REPLACE VIEW table_privileges AS - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -174,12 +174,12 @@ CREATE OR REPLACE VIEW table_privileges AS This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. + assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. @@ -228,7 +228,7 @@ CREATE OR REPLACE VIEW table_privileges AS Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -236,11 +236,11 @@ CREATE OR REPLACE VIEW table_privileges AS The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -255,15 +255,15 @@ CREATE OR REPLACE VIEW table_privileges AS Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) - In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -294,15 +294,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. 
- In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -316,7 +316,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -330,16 +330,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. - However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -347,13 +347,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Make lo_put() check for UPDATE privilege on + Make lo_put() check for UPDATE privilege on the target large object (Tom Lane, Michael Paquier) - lo_put() should surely require the same permissions - as lowrite(), but the check was missing, allowing any + lo_put() should surely require the same permissions + as lowrite(), but the check was missing, allowing any user to change the data in a large object. (CVE-2017-7548) @@ -460,21 +460,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) - Fix walsender to exit promptly when client requests + Fix walsender to exit promptly when client requests shutdown (Tom Lane) - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) @@ -488,7 +488,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) @@ -505,7 +505,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Logical decoding crashed on tuples that are wider than 64KB (after compression, but with all data in-line). The case arises only - when REPLICA IDENTITY FULL is enabled for a table + when REPLICA IDENTITY FULL is enabled for a table containing such tuples. 
@@ -553,7 +553,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -561,7 +561,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Allow window functions to be used in sub-SELECTs that + Allow window functions to be used in sub-SELECTs that are within the arguments of an aggregate function (Tom Lane) @@ -569,56 +569,56 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... SET does (Peter Eisentraut) Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. - Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -629,20 +629,20 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. 
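As an illustration of the ALTER USER ... SET fix noted above, a variant that now behaves the same as its ALTER ROLE counterpart (role and database names are illustrative):

ALTER USER app_user IN DATABASE app_db SET work_mem = '64MB';
ALTER USER app_user IN DATABASE app_db RESET work_mem;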
- In libpq, reset GSS/SASL and SSPI authentication + In libpq, reset GSS/SASL and SSPI authentication state properly after a failed connection attempt (Michael Paquier) @@ -655,9 +655,9 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -668,8 +668,8 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_dump and pg_restore to - emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) + Fix pg_dump and pg_restore to + emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) @@ -680,15 +680,15 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Improve pg_dump/pg_restore's - reporting of error conditions originating in zlib + Improve pg_dump/pg_restore's + reporting of error conditions originating in zlib (Vladimir Kunschikov, Álvaro Herrera) - Fix pg_dump with the option to drop event triggers as expected (Tom Lane) @@ -701,14 +701,14 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -719,14 +719,14 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
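For context on the pg_get_ruledef() fix above, a sketch of inspecting a view's ON SELECT rule after renaming a column (names are illustrative):

CREATE VIEW v AS SELECT 1 AS a;
ALTER TABLE v RENAME COLUMN a TO b;   -- renames a view column
SELECT pg_get_ruledef(r.oid, true)
FROM pg_rewrite r
WHERE r.ev_class = 'v'::regclass AND r.rulename = '_RETURN';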
@@ -734,13 +734,13 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Fix dumping of outer joins with empty constraints, such as the result - of a NATURAL LEFT JOIN with no common columns (Tom Lane) + of a NATURAL LEFT JOIN with no common columns (Tom Lane) - Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -748,7 +748,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -760,8 +760,8 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -773,9 +773,9 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In postgres_fdw, re-establish connections to remote - servers after ALTER SERVER or ALTER USER - MAPPING commands (Kyotaro Horiguchi) + In postgres_fdw, re-establish connections to remote + servers after ALTER SERVER or ALTER USER + MAPPING commands (Kyotaro Horiguchi) @@ -786,7 +786,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In postgres_fdw, allow cancellation of remote + In postgres_fdw, allow cancellation of remote transaction control commands (Robert Haas, Rafia Sabih) @@ -798,14 +798,14 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Increase MAX_SYSCACHE_CALLBACKS to provide more room for + Increase MAX_SYSCACHE_CALLBACKS to provide more room for extensions (Tom Lane) - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -825,34 +825,34 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) - In MSVC builds, honor PROVE_FLAGS settings - on vcregress.pl's command line (Andrew Dunstan) + In MSVC builds, honor PROVE_FLAGS settings + on vcregress.pl's command line (Andrew Dunstan) @@ -889,7 +889,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Also, if you are using third-party replication tools that depend - on logical decoding, see the fourth changelog entry below. + on logical decoding, see the fourth changelog entry below. @@ -906,18 +906,18 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. 
Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. @@ -941,7 +941,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. To fix, @@ -955,17 +955,17 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Restore libpq's recognition of - the PGREQUIRESSL environment variable (Daniel Gustafsson) + Restore libpq's recognition of + the PGREQUIRESSL environment variable (Daniel Gustafsson) Processing of this environment variable was unintentionally dropped - in PostgreSQL 9.3, but its documentation remained. + in PostgreSQL 9.3, but its documentation remained. This creates a security hazard, since users might be relying on the environment variable to force SSL-encrypted connections, but that would no longer be guaranteed. Restore handling of the variable, - but give it lower priority than PGSSLMODE, to avoid + but give it lower priority than PGSSLMODE, to avoid breaking configurations that work correctly with post-9.3 code. (CVE-2017-7485) @@ -996,7 +996,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -1009,7 +1009,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -1017,21 +1017,21 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. - Avoid possible crash in walsender due to failure + Avoid possible crash in walsender due to failure to initialize a string buffer (Stas Kelvich, Fujii Masao) - Fix postmaster's handling of fork() failure for a + Fix postmaster's handling of fork() failure for a background worker process (Tom Lane) @@ -1052,19 +1052,19 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. 
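A sketch of the NO INHERIT case covered by the ALTER TABLE ... VALIDATE CONSTRAINT fix above (table and constraint names are illustrative):

ALTER TABLE measurement
    ADD CONSTRAINT measurement_positive CHECK (reading > 0) NO INHERIT NOT VALID;
-- with the fix, validation no longer recurses to child tables,
-- which have no constraint of this name
ALTER TABLE measurement VALIDATE CONSTRAINT measurement_positive;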
That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -1072,27 +1072,27 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. @@ -1106,12 +1106,12 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix integer-overflow problems in interval comparison (Kyotaro + Fix integer-overflow problems in interval comparison (Kyotaro Horiguchi, Tom Lane) - The comparison operators for type interval could yield wrong + The comparison operators for type interval could yield wrong answers for intervals larger than about 296000 years. Indexes on columns containing such large values should be reindexed, since they may be corrupt. @@ -1120,21 +1120,21 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. - Fix roundoff problems in float8_timestamptz() - and make_interval() (Tom Lane) + Fix roundoff problems in float8_timestamptz() + and make_interval() (Tom Lane) @@ -1146,7 +1146,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) @@ -1160,13 +1160,13 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. 
@@ -1184,21 +1184,21 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -1213,20 +1213,20 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -1238,26 +1238,26 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). - In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) - Fix contrib/pg_trgm's extraction of trigrams from regular + Fix contrib/pg_trgm's extraction of trigrams from regular expressions (Tom Lane) @@ -1270,7 +1270,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In contrib/postgres_fdw, + In contrib/postgres_fdw, transmit query cancellation requests to the remote server (Michael Paquier, Etsuro Fujita) @@ -1320,7 +1320,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -1334,9 +1334,9 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -1349,15 +1349,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). 
With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1410,15 +1410,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. @@ -1435,19 +1435,19 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Backends failed to account for this snapshot when advertising their oldest xmin, potentially allowing concurrent vacuuming operations to remove data that was still needed. This led to transient failures - along the lines of cache lookup failed for relation 1255. + along the lines of cache lookup failed for relation 1255. - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1513,7 +1513,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -1528,7 +1528,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Fix incorrect updating of trigger function properties when changing a foreign-key constraint's deferrability properties with ALTER - TABLE ... ALTER CONSTRAINT (Tom Lane) + TABLE ... ALTER CONSTRAINT (Tom Lane) @@ -1544,15 +1544,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... 
INHERIT (Amit Langote) @@ -1565,7 +1565,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix CREATE OR REPLACE VIEW to update the view query + Fix CREATE OR REPLACE VIEW to update the view query before attempting to apply the new view options (Dean Rasheed) @@ -1578,7 +1578,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Report correct object identity during ALTER TEXT SEARCH - CONFIGURATION (Artur Zakirov) + CONFIGURATION (Artur Zakirov) @@ -1608,13 +1608,13 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Prevent multicolumn expansion of foo.* in - an UPDATE source expression (Tom Lane) + Prevent multicolumn expansion of foo.* in + an UPDATE source expression (Tom Lane) This led to UPDATE target count mismatch --- internal - error. Now the syntax is understood as a whole-row variable, + error. Now the syntax is understood as a whole-row variable, as it would be in other contexts. @@ -1622,12 +1622,12 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -1642,15 +1642,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -1661,33 +1661,33 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -1699,8 +1699,8 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -1712,15 +1712,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Avoid discarding interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. 
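The interval-cast behavior described above, shown directly:

SELECT CAST(INTERVAL '14 months' AS INTERVAL YEAR);
-- now yields '1 year'; previously the cast was discarded as a no-op,
-- so the extra months survived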
@@ -1733,28 +1733,28 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) - Fix pg_restore with to behave more sanely if an archive contains - unrecognized DROP commands (Tom Lane) + unrecognized DROP commands (Tom Lane) This doesn't fix any live bug, but it may improve the behavior in - future if pg_restore is used with an archive - generated by a later pg_dump version. + future if pg_restore is used with an archive + generated by a later pg_dump version. - Fix pg_basebackup's rate limiting in the presence of + Fix pg_basebackup's rate limiting in the presence of slow I/O (Antonin Houska) @@ -1767,15 +1767,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix pg_basebackup's handling of - symlinked pg_stat_tmp and pg_replslot + Fix pg_basebackup's handling of + symlinked pg_stat_tmp and pg_replslot subdirectories (Magnus Hagander, Michael Paquier) - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -1794,21 +1794,21 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. - Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -1820,23 +1820,23 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -1847,22 +1847,22 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) - Teach contrib/dblink to ignore irrelevant server options - when it uses a contrib/postgres_fdw foreign server as + Teach contrib/dblink to ignore irrelevant server options + when it uses a contrib/postgres_fdw foreign server as the source of connection options (Corey Huinker) Previously, if the foreign server object had options that were not - also libpq connection options, an error occurred. + also libpq connection options, an error occurred. 
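A sketch of the dblink-over-foreign-server usage addressed above (server, host, and query are illustrative; the dblink extension and a matching user mapping are assumed):

CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'replica.example.com', dbname 'appdb', use_remote_estimate 'true');
-- with the fix, dblink skips options such as use_remote_estimate
-- that are not libpq connection options
SELECT * FROM dblink('remote_pg', 'SELECT current_database()') AS t(db text);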
@@ -1888,7 +1888,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -1951,7 +1951,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 crash recovery, or to be written incorrectly on a standby server. Bogus entries in a free space map could lead to attempts to access pages that have been truncated away from the relation itself, typically - producing errors like could not read block XXX: + producing errors like could not read block XXX: read only 0 of 8192 bytes. Checksum failures in the visibility map are also possible, if checksumming is enabled. @@ -1959,7 +1959,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Procedures for determining whether there is a problem and repairing it if so are discussed at - . + . @@ -1970,20 +1970,20 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - The typical symptom was unexpected GIN leaf action errors + The typical symptom was unexpected GIN leaf action errors during WAL replay. - Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that + Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that have been updated by a subsequently-aborted transaction (Álvaro Herrera) - In 9.5 and later, the SELECT would sometimes fail to + In 9.5 and later, the SELECT would sometimes fail to return such tuples at all. A failure has not been proven to occur in earlier releases, but might be possible with concurrent updates. @@ -2017,79 +2017,79 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix query-lifespan memory leak in a bulk UPDATE on a table - with a PRIMARY KEY or REPLICA IDENTITY index + Fix query-lifespan memory leak in a bulk UPDATE on a table + with a PRIMARY KEY or REPLICA IDENTITY index (Tom Lane) - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. Suppress printing of zeroes for unmeasured times - in EXPLAIN (Maksim Milyutin) + in EXPLAIN (Maksim Milyutin) Certain option combinations resulted in printing zero values for times that actually aren't ever measured in that combination. Our general - policy in EXPLAIN is not to print such fields at all, so + policy in EXPLAIN is not to print such fields at all, so do that consistently in all cases. - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. 
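The EXPLAIN entry above (valid XML output) only matters when I/O timing is being collected, which is controlled by track_io_timing. A minimal sketch, assuming superuser privileges for the SET:

<programlisting>
SET track_io_timing = on;   -- superuser-only setting
-- With FORMAT XML, the I/O read time is now emitted under the well-formed
-- tag name I-O-Read-Time instead of the invalid I/O-Read-Time.
EXPLAIN (ANALYZE, BUFFERS, FORMAT XML) SELECT count(*) FROM pg_class;
</programlisting>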
Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. + INHERIT child constraint with an inherited constraint. Remove artificial restrictions on the values accepted - by numeric_in() and numeric_recv() + by numeric_in() and numeric_recv() (Tom Lane) We allow numeric values up to the limit of the storage format (more - than 1e100000), so it seems fairly pointless - that numeric_in() rejected scientific-notation exponents - above 1000. Likewise, it was silly for numeric_recv() to + than 1e100000), so it seems fairly pointless + that numeric_in() rejected scientific-notation exponents + above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. @@ -2134,7 +2134,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Disallow starting a standalone backend with standby_mode + Disallow starting a standalone backend with standby_mode turned on (Michael Paquier) @@ -2153,7 +2153,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 This failure to reset all of the fields of the slot could - prevent VACUUM from removing dead tuples. + prevent VACUUM from removing dead tuples. @@ -2164,7 +2164,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - This avoids possible failures during munmap() on systems + This avoids possible failures during munmap() on systems with atypical default huge page sizes. Except in crash-recovery cases, there were no ill effects other than a log message. @@ -2178,7 +2178,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Previously, the same value would be chosen every time, because it was - derived from random() but srandom() had not + derived from random() but srandom() had not yet been called. While relatively harmless, this was not the intended behavior. @@ -2191,8 +2191,8 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Windows sometimes returns ERROR_ACCESS_DENIED rather - than ERROR_ALREADY_EXISTS when there is an existing + Windows sometimes returns ERROR_ACCESS_DENIED rather + than ERROR_ALREADY_EXISTS when there is an existing segment. This led to postmaster startup failure due to believing that the former was an unrecoverable error. @@ -2201,7 +2201,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -2212,30 +2212,30 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. + during PQreset(), but there might be related cases. - Make ecpg's and options work consistently with our other executables (Haribabu Kommi) - Fix pgbench's calculation of average latency + Fix pgbench's calculation of average latency (Fabien Coelho) - The calculation was incorrect when there were \sleep + The calculation was incorrect when there were \sleep commands in the script, or when the test duration was specified in number of transactions rather than total time. 
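The numeric_in() entry above can be exercised directly; a minimal sketch, assuming the post-fix behavior in which scientific-notation exponents above 1000 are accepted as long as the value still fits in numeric's storage format:

<programlisting>
-- Previously rejected only because the exponent exceeds 1000; now accepted.
SELECT '1e5000'::numeric;
-- Values beyond the storage format's limit are still rejected.
SELECT '1e200000'::numeric;   -- error: value overflows numeric format
</programlisting>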
@@ -2243,12 +2243,12 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - In pg_dump, never dump range constructor functions + In pg_dump, never dump range constructor functions (Tom Lane) - This oversight led to pg_upgrade failures with + This oversight led to pg_upgrade failures with extensions containing range types, due to duplicate creation of the constructor functions. @@ -2256,8 +2256,8 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - In pg_xlogdump, retry opening new WAL segments when - using option (Magnus Hagander) @@ -2268,7 +2268,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Fix pg_xlogdump to cope with a WAL file that begins + Fix pg_xlogdump to cope with a WAL file that begins with a continuation record spanning more than one page (Pavan Deolasee) @@ -2276,15 +2276,15 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Fix contrib/pg_buffercache to work - when shared_buffers exceeds 256GB (KaiGai Kohei) + Fix contrib/pg_buffercache to work + when shared_buffers exceeds 256GB (KaiGai Kohei) - Fix contrib/intarray/bench/bench.pl to print the results - of the EXPLAIN it does when given the option (Daniel Gustafsson) @@ -2296,17 +2296,17 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - When PostgreSQL has been configured - with - In MSVC builds, include pg_recvlogical in a + In MSVC builds, include pg_recvlogical in a client-only installation (MauMau) @@ -2327,17 +2327,17 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from - their time zone database, as they did in tzdata + their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused - the pg_timezone_abbrevs view to fail altogether. + the pg_timezone_abbrevs view to fail altogether. - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -2350,15 +2350,15 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. - In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. 
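The pg_dump entry above about range constructor functions refers to the functions that CREATE TYPE ... AS RANGE generates automatically; a short sketch (the type name is hypothetical) of why dumping them explicitly led to duplicate-creation failures in pg_upgrade:

<programlisting>
-- Creating a range type implicitly creates its constructor functions,
-- so a dump that re-created them explicitly failed with duplicates.
CREATE TYPE floatrange AS RANGE (subtype = float8);
SELECT floatrange(1.5, 2.5);         -- constructor created automatically
SELECT floatrange(1.5, 2.5, '[]');   -- three-argument form with bounds spec
</programlisting>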
@@ -2403,17 +2403,17 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Fix possible mis-evaluation of - nested CASE-WHEN expressions (Heikki + nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane) - A CASE expression appearing within the test value - subexpression of another CASE could become confused about + A CASE expression appearing within the test value + subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by - a CASE expression could result in passing the wrong test - value to functions called within a CASE expression in the + a CASE expression could result in passing the wrong test + value to functions called within a CASE expression in the SQL function's body. If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423) @@ -2427,7 +2427,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Numerous places in vacuumdb and other client programs + Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. Also, ensure that when a conninfo string is used as a database name @@ -2436,22 +2436,22 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Fix handling of paired double quotes - in psql's \connect - and \password commands to match the documentation. + in psql's \connect + and \password commands to match the documentation. - Introduce a new - pg_dumpall now refuses to deal with database and role + pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet. @@ -2461,40 +2461,40 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser - executes pg_dumpall or other routine maintenance + executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424) - Fix corner-case misbehaviors for IS NULL/IS NOT - NULL applied to nested composite values (Andrew Gierth, Tom Lane) + Fix corner-case misbehaviors for IS NULL/IS NOT + NULL applied to nested composite values (Andrew Gierth, Tom Lane) - The SQL standard specifies that IS NULL should return + The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS - NULL yields TRUE), but this is not meant to apply recursively - (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). + NULL yields TRUE), but this is not meant to apply recursively + (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), - and contrib/postgres_fdw could produce remote queries + and contrib/postgres_fdw could produce remote queries that misbehaved similarly. 
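The IS NULL entry above states the expected results explicitly; they can be checked with two one-line queries:

<programlisting>
SELECT ROW(NULL, NULL) IS NULL;              -- true: every field is null
SELECT ROW(NULL, ROW(NULL, NULL)) IS NULL;   -- false: the test is not recursive
</programlisting>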
- Make the inet and cidr data types properly reject + Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane) - Prevent crash in close_ps() - (the point ## lseg operator) + Prevent crash in close_ps() + (the point ## lseg operator) for NaN input coordinates (Tom Lane) @@ -2505,19 +2505,19 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Avoid possible crash in pg_get_expr() when inconsistent + Avoid possible crash in pg_get_expr() when inconsistent values are passed to it (Michael Paquier, Thomas Munro) - Fix several one-byte buffer over-reads in to_number() + Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut) - In several cases the to_number() function would read one + In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory. @@ -2527,8 +2527,8 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Do not run the planner on the query contained in CREATE - MATERIALIZED VIEW or CREATE TABLE AS - when WITH NO DATA is specified (Michael Paquier, + MATERIALIZED VIEW or CREATE TABLE AS + when WITH NO DATA is specified (Michael Paquier, Tom Lane) @@ -2542,7 +2542,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Avoid unsafe intermediate state during expensive paths - through heap_update() (Masahiko Sawada, Andres Freund) + through heap_update() (Masahiko Sawada, Andres Freund) @@ -2568,15 +2568,15 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Avoid unnecessary could not serialize access errors when - acquiring FOR KEY SHARE row locks in serializable mode + Avoid unnecessary could not serialize access errors when + acquiring FOR KEY SHARE row locks in serializable mode (Álvaro Herrera) - Avoid crash in postgres -C when the specified variable + Avoid crash in postgres -C when the specified variable has a null string value (Michael Paquier) @@ -2619,12 +2619,12 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Avoid consuming a transaction ID during VACUUM + Avoid consuming a transaction ID during VACUUM (Alexander Korotkov) - Some cases in VACUUM unnecessarily caused an XID to be + Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing. @@ -2640,12 +2640,12 @@ Branch: REL9_2_STABLE [294509ea9] 2016-05-25 19:39:49 -0400 Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 --> - Avoid canceling hot-standby queries during VACUUM FREEZE + Avoid canceling hot-standby queries during VACUUM FREEZE (Simon Riggs, Álvaro Herrera) - VACUUM FREEZE on an otherwise-idle master server could + VACUUM FREEZE on an otherwise-idle master server could result in unnecessary cancellations of queries on its standby servers. @@ -2660,15 +2660,15 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 The usual symptom of this bug is errors - like MultiXactId NNN has not been created + like MultiXactId NNN has not been created yet -- apparent wraparound. 
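The inet/cidr entry above concerns IPv6 literals with too many colon-separated groups; a minimal sketch of the tightened validation:

<programlisting>
SELECT '1:2:3:4:5:6:7:8'::inet;     -- eight groups: valid IPv6, accepted
SELECT '1:2:3:4:5:6:7:8:9'::inet;   -- nine groups: now rejected with
                                    -- invalid input syntax
</programlisting>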
- When a manual ANALYZE specifies a column list, don't - reset the table's changes_since_analyze counter + When a manual ANALYZE specifies a column list, don't + reset the table's changes_since_analyze counter (Tom Lane) @@ -2680,7 +2680,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix ANALYZE's overestimation of n_distinct + Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane) @@ -2713,7 +2713,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - This mistake prevented VACUUM from completing in some + This mistake prevented VACUUM from completing in some cases involving corrupt b-tree indexes. @@ -2727,8 +2727,8 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix contrib/btree_gin to handle the smallest - possible bigint value correctly (Peter Eisentraut) + Fix contrib/btree_gin to handle the smallest + possible bigint value correctly (Peter Eisentraut) @@ -2741,53 +2741,53 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure - that PQserverVersion() returns the correct value for + that PQserverVersion() returns the correct value for such cases. - Fix ecpg's code for unsigned long long + Fix ecpg's code for unsigned long long array elements (Michael Meskes) - In pg_dump with both - Improve handling of SIGTERM/control-C in - parallel pg_dump and pg_restore (Tom + Improve handling of SIGTERM/control-C in + parallel pg_dump and pg_restore (Tom Lane) Make sure that the worker processes will exit promptly, and also arrange to send query-cancel requests to the connected backends, in case they - are doing something long-running such as a CREATE INDEX. + are doing something long-running such as a CREATE INDEX. - Fix error reporting in parallel pg_dump - and pg_restore (Tom Lane) + Fix error reporting in parallel pg_dump + and pg_restore (Tom Lane) - Previously, errors reported by pg_dump - or pg_restore worker processes might never make it to + Previously, errors reported by pg_dump + or pg_restore worker processes might never make it to the user's console, because the messages went through the master process, and there were various deadlock scenarios that would prevent the master process from passing on the messages. Instead, just print - everything to stderr. In some cases this will result in + everything to stderr. In some cases this will result in duplicate messages (for instance, if all the workers report a server shutdown), but that seems better than no message. 
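The ANALYZE entry above involves the per-table counter that autovacuum consults; a sketch using hypothetical table and column names, with the counter visible as n_mod_since_analyze in the statistics views:

<programlisting>
-- A column-list ANALYZE no longer resets the table's
-- changes_since_analyze counter, so autovacuum's scheduling of a
-- full analyze for the remaining columns is not disturbed.
ANALYZE orders (customer_id);
SELECT relname, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname = 'orders';
</programlisting>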
@@ -2795,8 +2795,8 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Ensure that parallel pg_dump - or pg_restore on Windows will shut down properly + Ensure that parallel pg_dump + or pg_restore on Windows will shut down properly after an error (Kyotaro Horiguchi) @@ -2808,7 +2808,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Make pg_dump behave better when built without zlib + Make pg_dump behave better when built without zlib support (Kyotaro Horiguchi) @@ -2820,7 +2820,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Make pg_basebackup accept -Z 0 as + Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao) @@ -2841,13 +2841,13 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Be more predictable about reporting statement timeout - versus lock timeout (Tom Lane) + Be more predictable about reporting statement timeout + versus lock timeout (Tom Lane) On heavily loaded machines, the regression tests sometimes failed due - to reporting lock timeout even though the statement timeout + to reporting lock timeout even though the statement timeout should have occurred first. @@ -2867,7 +2867,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Update our copy of the timezone code to match - IANA's tzcode release 2016c (Tom Lane) + IANA's tzcode release 2016c (Tom Lane) @@ -2879,7 +2879,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Update time zone data files to tzdata release 2016f + Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco. @@ -2934,7 +2934,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. Failures have been reported specifically when a client application - uses SSL connections in libpq concurrently with + uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection. @@ -2943,7 +2943,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix failed to build any N-way joins + Fix failed to build any N-way joins planner error with a full join enclosed in the right-hand side of a left join (Tom Lane) @@ -2957,10 +2957,10 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Given a three-or-more-way equivalence class of variables, such - as X.X = Y.Y = Z.Z, it was possible for the planner to omit + as X.X = Y.Y = Z.Z, it was possible for the planner to omit some of the tests needed to enforce that all the variables are actually equal, leading to join rows being output that didn't satisfy - the WHERE clauses. For various reasons, erroneous plans + the WHERE clauses. For various reasons, erroneous plans were seldom selected in practice, so that this bug has gone undetected for a long time. @@ -2981,14 +2981,14 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 The memory leak would typically not amount to much in simple queries, but it could be very substantial during a large GIN index build with - high maintenance_work_mem. + high maintenance_work_mem. 
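The failed to build any N-way joins entry above involves a particular join shape; a sketch with hypothetical tables showing a full join nested in the right-hand side of a left join, which the planner now handles:

<programlisting>
-- a, b, and c are hypothetical tables with an integer id column.
SELECT *
FROM a
LEFT JOIN (b FULL JOIN c ON b.id = c.id) ON a.id = b.id;
</programlisting>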
- Fix possible misbehavior of TH, th, - and Y,YYY format codes in to_timestamp() + Fix possible misbehavior of TH, th, + and Y,YYY format codes in to_timestamp() (Tom Lane) @@ -3000,29 +3000,29 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix dumping of rules and views in which the array - argument of a value operator - ANY (array) construct is a sub-SELECT + Fix dumping of rules and views in which the array + argument of a value operator + ANY (array) construct is a sub-SELECT (Tom Lane) - Disallow newlines in ALTER SYSTEM parameter values + Disallow newlines in ALTER SYSTEM parameter values (Tom Lane) The configuration-file parser doesn't support embedded newlines in string literals, so we mustn't allow them in values to be inserted - by ALTER SYSTEM. + by ALTER SYSTEM. - Fix ALTER TABLE ... REPLICA IDENTITY USING INDEX to + Fix ALTER TABLE ... REPLICA IDENTITY USING INDEX to work properly if an index on OID is selected (David Rowley) @@ -3048,19 +3048,19 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Make pg_regress use a startup timeout from the - PGCTLTIMEOUT environment variable, if that's set (Tom Lane) + Make pg_regress use a startup timeout from the + PGCTLTIMEOUT environment variable, if that's set (Tom Lane) This is for consistency with a behavior recently added - to pg_ctl; it eases automated testing on slow machines. + to pg_ctl; it eases automated testing on slow machines. - Fix pg_upgrade to correctly restore extension + Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane) @@ -3068,20 +3068,20 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no - immediate ill effects, but would cause later pg_dump + immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore. - Fix pg_upgrade to not fail when new-cluster TOAST rules + Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old (Tom Lane) - pg_upgrade had special-case code to handle the - situation where the new PostgreSQL version thinks that + pg_upgrade had special-case code to handle the + situation where the new PostgreSQL version thinks that a table should have a TOAST table while the old version did not. 
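The ALTER SYSTEM entry above can be shown with a value containing a line break; a minimal sketch, assuming the post-fix behavior of rejecting the value outright:

<programlisting>
-- postgresql.auto.conf cannot represent embedded newlines, so this
-- now fails with an error instead of writing an unparseable file.
ALTER SYSTEM SET application_name = 'first line
second line';
</programlisting>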
That code was broken, so remove it, and instead do nothing in such cases; there seems no reason to believe that we can't get along fine without @@ -3092,22 +3092,22 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Reduce the number of SysV semaphores used by a build configured with - (Tom Lane) - Rename internal function strtoi() - to strtoint() to avoid conflict with a NetBSD library + Rename internal function strtoi() + to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro) - Fix reporting of errors from bind() - and listen() system calls on Windows (Tom Lane) + Fix reporting of errors from bind() + and listen() system calls on Windows (Tom Lane) @@ -3120,19 +3120,19 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix putenv() to work properly with Visual Studio 2013 + Fix putenv() to work properly with Visual Studio 2013 (Michael Paquier) - Avoid possibly-unsafe use of Windows' FormatMessage() + Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich) - Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where + Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful. @@ -3140,9 +3140,9 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Update time zone data files to tzdata release 2016d + Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone - names Europe/Kirov and Asia/Tomsk to reflect + names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions. @@ -3188,29 +3188,29 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Fix incorrect handling of NULL index entries in - indexed ROW() comparisons (Tom Lane) + indexed ROW() comparisons (Tom Lane) An index search using a row comparison such as ROW(a, b) > - ROW('x', 'y') would stop upon reaching a NULL entry in - the b column, ignoring the fact that there might be - non-NULL b values associated with later values - of a. + ROW('x', 'y') would stop upon reaching a NULL entry in + the b column, ignoring the fact that there might be + non-NULL b values associated with later values + of a. Avoid unlikely data-loss scenarios due to renaming files without - adequate fsync() calls before and after (Michael Paquier, + adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund) - Fix bug in json_to_record() when a field of its input + Fix bug in json_to_record() when a field of its input object contains a sub-object with a field name matching one of the requested output column names (Tom Lane) @@ -3219,7 +3219,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Fix misformatting of negative time zone offsets - by to_char()'s OF format code + by to_char()'s OF format code (Thomas Munro, Tom Lane) @@ -3232,7 +3232,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Previously, standby servers would delay application of WAL records in - response to recovery_min_apply_delay even while replaying + response to recovery_min_apply_delay even while replaying the initial portion of WAL needed to make their database state valid. Since the standby is useless until it's reached a consistent database state, this was deemed unhelpful. 
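The to_char() entry above concerns the OF (offset from UTC) output pattern; a short sketch, assuming a session time zone with a negative offset:

<programlisting>
SET TIME ZONE 'America/New_York';
-- The OF pattern now formats negative offsets correctly (e.g. -05).
SELECT to_char(current_timestamp, 'YYYY-MM-DD HH24:MI OF');
</programlisting>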
@@ -3241,7 +3241,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Correctly handle cases where pg_subtrans is close to XID + Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes) @@ -3253,44 +3253,44 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Trouble cases included tuples larger than one page when replica - identity is FULL, UPDATEs that change a + identity is FULL, UPDATEs that change a primary key within a transaction large enough to be spooled to disk, incorrect reports of subxact logged without previous toplevel - record, and incorrect reporting of a transaction's commit time. + record, and incorrect reporting of a transaction's commit time. Fix planner error with nested security barrier views when the outer - view has a WHERE clause containing a correlated subquery + view has a WHERE clause containing a correlated subquery (Dean Rasheed) - Fix corner-case crash due to trying to free localeconv() + Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane) - Fix parsing of affix files for ispell dictionaries + Fix parsing of affix files for ispell dictionaries (Tom Lane) The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for - example I in Turkish UTF8 locales. + example I in Turkish UTF8 locales. - Avoid use of sscanf() to parse ispell + Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov) @@ -3316,27 +3316,27 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix psql's tab completion logic to handle multibyte + Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas) - Fix psql's tab completion for - SECURITY LABEL (Tom Lane) + Fix psql's tab completion for + SECURITY LABEL (Tom Lane) - Pressing TAB after SECURITY LABEL might cause a crash + Pressing TAB after SECURITY LABEL might cause a crash or offering of inappropriate keywords. - Make pg_ctl accept a wait timeout from the - PGCTLTIMEOUT environment variable, if none is specified on + Make pg_ctl accept a wait timeout from the + PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch) @@ -3350,26 +3350,26 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Fix incorrect test for Windows service status - in pg_ctl (Manuel Mathar) + in pg_ctl (Manuel Mathar) The previous set of minor releases attempted to - fix pg_ctl to properly determine whether to send log + fix pg_ctl to properly determine whether to send log messages to Window's Event Log, but got the test backwards. 
- Fix pgbench to correctly handle the combination - of -C and -M prepared options (Tom Lane) + Fix pgbench to correctly handle the combination + of -C and -M prepared options (Tom Lane) - In pg_upgrade, skip creating a deletion script when + In pg_upgrade, skip creating a deletion script when the new data directory is inside the old data directory (Bruce Momjian) @@ -3397,21 +3397,21 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Fix multiple mistakes in the statistics returned - by contrib/pgstattuple's pgstatindex() + by contrib/pgstattuple's pgstatindex() function (Tom Lane) - Remove dependency on psed in MSVC builds, since it's no + Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan) - Update time zone data files to tzdata release 2016c + Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia @@ -3447,7 +3447,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 However, if you are upgrading an installation that contains any GIN - indexes that use the (non-default) jsonb_path_ops operator + indexes that use the (non-default) jsonb_path_ops operator class, see the first changelog entry below. @@ -3471,19 +3471,19 @@ Branch: REL9_4_STABLE [788e35ac0] 2015-11-05 18:15:48 -0500 - Fix inconsistent hash calculations in jsonb_path_ops GIN + Fix inconsistent hash calculations in jsonb_path_ops GIN indexes (Tom Lane) - When processing jsonb values that contain both scalars and + When processing jsonb values that contain both scalars and sub-objects at the same nesting level, for example an array containing both scalars and sub-arrays, key hash values could be calculated differently than they would be for the same key in a different context. This could result in queries not finding entries that they should find. Fixing this means that existing indexes may now be inconsistent with the new hash calculation code. Users - should REINDEX jsonb_path_ops GIN indexes after + should REINDEX jsonb_path_ops GIN indexes after installing this update to make sure that all searches work as expected. @@ -3513,18 +3513,18 @@ Branch: REL9_1_STABLE [dea6da132] 2015-10-06 17:15:27 -0400 - Perform an immediate shutdown if the postmaster.pid file + Perform an immediate shutdown if the postmaster.pid file is removed (Tom Lane) The postmaster now checks every minute or so - that postmaster.pid is still there and still contains its + that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had - received SIGQUIT. The main motivation for this change + received SIGQUIT. The main motivation for this change is to ensure that failed buildfarm runs will get cleaned up without manual intervention; but it also serves to limit the bad effects if a - DBA forcibly removes postmaster.pid and then starts a new + DBA forcibly removes postmaster.pid and then starts a new postmaster. 
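The jsonb_path_ops entry above ends with a REINDEX recommendation; a sketch with hypothetical table and index names:

<programlisting>
-- An existing jsonb_path_ops GIN index may be inconsistent with the
-- corrected hash calculation; rebuild it after installing the update.
CREATE INDEX documents_doc_idx ON documents USING gin (doc jsonb_path_ops);
REINDEX INDEX documents_doc_idx;
</programlisting>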
@@ -3541,7 +3541,7 @@ Branch: REL9_1_STABLE [08322daed] 2015-10-31 14:36:58 -0500 - In SERIALIZABLE transaction isolation mode, serialization + In SERIALIZABLE transaction isolation mode, serialization anomalies could be missed due to race conditions during insertions (Kevin Grittner, Thomas Munro) @@ -3560,7 +3560,7 @@ Branch: REL9_1_STABLE [5f9a86b35] 2015-12-12 14:19:29 +0100 Fix failure to emit appropriate WAL records when doing ALTER - TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, + TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, Andres Freund) @@ -3614,7 +3614,7 @@ Branch: REL9_1_STABLE [60ba32cb5] 2015-11-20 14:55:29 -0500 - Fix ALTER COLUMN TYPE to reconstruct inherited check + Fix ALTER COLUMN TYPE to reconstruct inherited check constraints properly (Tom Lane) @@ -3629,7 +3629,7 @@ Branch: REL9_1_STABLE [7e29e7f55] 2015-12-21 19:49:15 -0300 - Fix REASSIGN OWNED to change ownership of composite types + Fix REASSIGN OWNED to change ownership of composite types properly (Álvaro Herrera) @@ -3644,7 +3644,7 @@ Branch: REL9_1_STABLE [ab14c1383] 2015-12-21 19:16:15 -0300 - Fix REASSIGN OWNED and ALTER OWNER to correctly + Fix REASSIGN OWNED and ALTER OWNER to correctly update granted-permissions lists when changing owners of data types, foreign data wrappers, or foreign servers (Bruce Momjian, Álvaro Herrera) @@ -3663,7 +3663,7 @@ Branch: REL9_1_STABLE [f44c5203b] 2015-12-11 18:39:09 -0300 - Fix REASSIGN OWNED to ignore foreign user mappings, + Fix REASSIGN OWNED to ignore foreign user mappings, rather than fail (Álvaro Herrera) @@ -3697,13 +3697,13 @@ Branch: REL9_3_STABLE [0a34ff7e9] 2015-12-07 17:41:45 -0500 - Fix planner's handling of LATERAL references (Tom + Fix planner's handling of LATERAL references (Tom Lane) This fixes some corner cases that led to failed to build any - N-way joins or could not devise a query plan planner + N-way joins or could not devise a query plan planner failures. @@ -3753,9 +3753,9 @@ Branch: REL9_3_STABLE [faf18a905] 2015-11-16 13:45:17 -0500 - Speed up generation of unique table aliases in EXPLAIN and + Speed up generation of unique table aliases in EXPLAIN and rule dumping, and ensure that generated aliases do not - exceed NAMEDATALEN (Tom Lane) + exceed NAMEDATALEN (Tom Lane) @@ -3771,8 +3771,8 @@ Branch: REL9_1_STABLE [7b21d1bca] 2015-11-15 14:41:09 -0500 - Fix dumping of whole-row Vars in ROW() - and VALUES() lists (Tom Lane) + Fix dumping of whole-row Vars in ROW() + and VALUES() lists (Tom Lane) @@ -3785,8 +3785,8 @@ Branch: REL9_4_STABLE [4f33572ee] 2015-10-20 11:06:24 -0700 - Translation of minus-infinity dates and timestamps to json - or jsonb incorrectly rendered them as plus-infinity (Tom Lane) + Translation of minus-infinity dates and timestamps to json + or jsonb incorrectly rendered them as plus-infinity (Tom Lane) @@ -3802,7 +3802,7 @@ Branch: REL9_1_STABLE [728a2ac21] 2015-11-17 15:47:12 -0500 - Fix possible internal overflow in numeric division + Fix possible internal overflow in numeric division (Dean Rasheed) @@ -3894,7 +3894,7 @@ Branch: REL9_1_STABLE [b94c2b6a6] 2015-10-16 15:36:17 -0400 This causes the code to emit regular expression is too - complex errors in some cases that previously used unreasonable + complex errors in some cases that previously used unreasonable amounts of time and memory. 
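The json/jsonb entry above about minus-infinity timestamps can be checked directly; a minimal sketch of the corrected rendering:

<programlisting>
-- Previously rendered as "infinity"; now correctly "-infinity".
SELECT to_json('-infinity'::timestamptz);
</programlisting>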
@@ -3929,14 +3929,14 @@ Branch: REL9_1_STABLE [b00c79b5b] 2015-10-16 14:43:18 -0400 - Make %h and %r escapes - in log_line_prefix work for messages emitted due - to log_connections (Tom Lane) + Make %h and %r escapes + in log_line_prefix work for messages emitted due + to log_connections (Tom Lane) - Previously, %h/%r started to work just after a - new session had emitted the connection received log message; + Previously, %h/%r started to work just after a + new session had emitted the connection received log message; now they work for that message too. @@ -3959,7 +3959,7 @@ Branch: REL9_1_STABLE [b0d858359] 2015-10-13 11:21:33 -0400 This oversight resulted in failure to recover from crashes - whenever logging_collector is turned on. + whenever logging_collector is turned on. @@ -4009,13 +4009,13 @@ Branch: REL9_1_STABLE [db462a44e] 2015-12-17 16:55:51 -0500 - In psql, ensure that libreadline's idea + In psql, ensure that libreadline's idea of the screen size is updated when the terminal window size changes (Merlin Moncure) - Previously, libreadline did not notice if the window + Previously, libreadline did not notice if the window was resized during query output, leading to strange behavior during later input of multiline queries. @@ -4023,8 +4023,8 @@ Branch: REL9_1_STABLE [db462a44e] 2015-12-17 16:55:51 -0500 - Fix psql's \det command to interpret its - pattern argument the same way as other \d commands with + Fix psql's \det command to interpret its + pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart) @@ -4041,7 +4041,7 @@ Branch: REL9_1_STABLE [6430a11fa] 2015-11-25 17:31:54 -0500 - Avoid possible crash in psql's \c command + Avoid possible crash in psql's \c command when previous connection was via Unix socket and command specifies a new hostname and same username (Tom Lane) @@ -4059,21 +4059,21 @@ Branch: REL9_1_STABLE [c869a7d5b] 2015-10-12 18:30:37 -0400 - In pg_ctl start -w, test child process status directly + In pg_ctl start -w, test child process status directly rather than relying on heuristics (Tom Lane, Michael Paquier) - Previously, pg_ctl relied on an assumption that the new - postmaster would always create postmaster.pid within five + Previously, pg_ctl relied on an assumption that the new + postmaster would always create postmaster.pid within five seconds. But that can fail on heavily-loaded systems, - causing pg_ctl to report incorrectly that the + causing pg_ctl to report incorrectly that the postmaster failed to start. Except on Windows, this change also means that a pg_ctl start - -w done immediately after another such command will now reliably + -w done immediately after another such command will now reliably fail, whereas previously it would report success if done within two seconds of the first command. @@ -4091,23 +4091,23 @@ Branch: REL9_1_STABLE [87deb55a4] 2015-11-08 17:31:24 -0500 - In pg_ctl start -w, don't attempt to use a wildcard listen + In pg_ctl start -w, don't attempt to use a wildcard listen address to connect to the postmaster (Kondo Yuta) - On Windows, pg_ctl would fail to detect postmaster - startup if listen_addresses is set to 0.0.0.0 - or ::, because it would try to use that value verbatim as + On Windows, pg_ctl would fail to detect postmaster + startup if listen_addresses is set to 0.0.0.0 + or ::, because it would try to use that value verbatim as the address to connect to, which doesn't work. 
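The log_line_prefix entry above affects the connection received message; a sketch of one way to set the relevant parameters (any file-based configuration of these settings behaves the same):

<programlisting>
-- %r (remote host and port) and %h (remote host) now also appear on the
-- "connection received" line that log_connections emits.
ALTER SYSTEM SET log_line_prefix = '%m [%p] %r %q%u@%d ';
ALTER SYSTEM SET log_connections = on;
SELECT pg_reload_conf();
</programlisting>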
Instead assume - that 127.0.0.1 or ::1, respectively, is the + that 127.0.0.1 or ::1, respectively, is the right thing to use. - In pg_ctl on Windows, check service status to decide + In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier) @@ -4127,18 +4127,18 @@ Branch: REL9_1_STABLE [6df62ef43] 2015-11-23 00:32:01 -0500 - In pg_dump and pg_basebackup, adopt + In pg_dump and pg_basebackup, adopt the GNU convention for handling tar-archive members exceeding 8GB (Tom Lane) - The POSIX standard for tar file format does not allow + The POSIX standard for tar file format does not allow archive member files to exceed 8GB, but most modern implementations - of tar support an extension that fixes that. Adopt - this extension so that pg_dump with no longer fails on tables with more than 8GB of data, and so - that pg_basebackup can handle files larger than 8GB. + that pg_basebackup can handle files larger than 8GB. In addition, fix some portability issues that could cause failures for members between 4GB and 8GB on some platforms. Potentially these problems could cause unrecoverable data loss due to unreadable backup @@ -4148,16 +4148,16 @@ Branch: REL9_1_STABLE [6df62ef43] 2015-11-23 00:32:01 -0500 - Fix assorted corner-case bugs in pg_dump's processing + Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane) - Make pg_dump mark a view's triggers as needing to be + Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during - parallel pg_restore (Tom Lane) + parallel pg_restore (Tom Lane) @@ -4180,14 +4180,14 @@ Branch: REL9_1_STABLE [e4959fb5c] 2016-01-02 19:04:45 -0500 Ensure that relation option values are properly quoted - in pg_dump (Kouhei Sutou, Tom Lane) + in pg_dump (Kouhei Sutou, Tom Lane) A reloption value that isn't a simple identifier or number could lead to dump/reload failures due to syntax errors in CREATE statements - issued by pg_dump. This is not an issue with any - reloption currently supported by core PostgreSQL, but + issued by pg_dump. This is not an issue with any + reloption currently supported by core PostgreSQL, but extensions could allow reloptions that cause the problem. 
@@ -4202,7 +4202,7 @@ Branch: REL9_3_STABLE [534a4159c] 2015-12-23 14:25:31 -0500 - Avoid repeated password prompts during parallel pg_dump + Avoid repeated password prompts during parallel pg_dump (Zeus Kronion) @@ -4225,14 +4225,14 @@ Branch: REL9_1_STABLE [c36064e43] 2015-11-24 17:18:27 -0500 - Fix pg_upgrade's file-copying code to handle errors + Fix pg_upgrade's file-copying code to handle errors properly on Windows (Bruce Momjian) - Install guards in pgbench against corner-case overflow + Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier) @@ -4250,22 +4250,22 @@ Branch: REL9_2_STABLE [4fb9e6109] 2015-12-28 10:50:35 -0300 Fix failure to localize messages emitted - by pg_receivexlog and pg_recvlogical + by pg_receivexlog and pg_recvlogical (Ioseph Kim) - Avoid dump/reload problems when using both plpython2 - and plpython3 (Tom Lane) + Avoid dump/reload problems when using both plpython2 + and plpython3 (Tom Lane) - In principle, both versions of PL/Python can be used in + In principle, both versions of PL/Python can be used in the same database, though not in the same session (because the two - versions of libpython cannot safely be used concurrently). - However, pg_restore and pg_upgrade both + versions of libpython cannot safely be used concurrently). + However, pg_restore and pg_upgrade both do things that can fall foul of the same-session restriction. Work around that by changing the timing of the check. @@ -4273,7 +4273,7 @@ Branch: REL9_2_STABLE [4fb9e6109] 2015-12-28 10:50:35 -0300 - Fix PL/Python regression tests to pass with Python 3.5 + Fix PL/Python regression tests to pass with Python 3.5 (Peter Eisentraut) @@ -4288,29 +4288,29 @@ Branch: REL9_3_STABLE [db6e8e162] 2015-11-12 13:03:53 -0500 - Fix premature clearing of libpq's input buffer when + Fix premature clearing of libpq's input buffer when socket EOF is seen (Tom Lane) - This mistake caused libpq to sometimes not report the + This mistake caused libpq to sometimes not report the backend's final error message before reporting server closed the - connection unexpectedly. + connection unexpectedly. - Prevent certain PL/Java parameters from being set by + Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch) - This change mitigates a PL/Java security bug - (CVE-2016-0766), which was fixed in PL/Java by marking + This change mitigates a PL/Java security bug + (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for - sites that update PostgreSQL more frequently - than PL/Java, make the core code aware of them also. + sites that update PostgreSQL more frequently + than PL/Java, make the core code aware of them also. 
@@ -4326,7 +4326,7 @@ Branch: REL9_1_STABLE [4b58ded74] 2015-12-14 18:48:49 +0200 - Improve libpq's handling of out-of-memory situations + Improve libpq's handling of out-of-memory situations (Michael Paquier, Amit Kapila, Heikki Linnakangas) @@ -4343,7 +4343,7 @@ Branch: REL9_1_STABLE [a9bcd8370] 2015-10-18 10:17:12 +0200 Fix order of arguments - in ecpg-generated typedef statements + in ecpg-generated typedef statements (Michael Meskes) @@ -4360,29 +4360,29 @@ Branch: REL9_1_STABLE [84387496f] 2015-12-01 11:42:52 -0500 - Use %g not %f format - in ecpg's PGTYPESnumeric_from_double() + Use %g not %f format + in ecpg's PGTYPESnumeric_from_double() (Tom Lane) - Fix ecpg-supplied header files to not contain comments + Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes) - Such a comment is rejected by ecpg. It's not yet clear - whether ecpg itself should be changed. + Such a comment is rejected by ecpg. It's not yet clear + whether ecpg itself should be changed. - Fix hstore_to_json_loose()'s test for whether - an hstore value can be converted to a JSON number (Tom Lane) + Fix hstore_to_json_loose()'s test for whether + an hstore value can be converted to a JSON number (Tom Lane) @@ -4403,15 +4403,15 @@ Branch: REL9_1_STABLE [1b6102eb7] 2015-12-27 13:03:19 -0300 - Ensure that contrib/pgcrypto's crypt() + Ensure that contrib/pgcrypto's crypt() function can be interrupted by query cancel (Andreas Karlsson) - In contrib/postgres_fdw, fix bugs triggered by use - of tableoid in data-modifying commands (Etsuro Fujita, + In contrib/postgres_fdw, fix bugs triggered by use + of tableoid in data-modifying commands (Etsuro Fujita, Robert Haas) @@ -4433,7 +4433,7 @@ Branch: REL9_2_STABLE [7f94a5c10] 2015-12-10 10:19:31 -0500 - Accept flex versions later than 2.5.x + Accept flex versions later than 2.5.x (Tom Lane, Michael Paquier) @@ -4467,19 +4467,19 @@ Branch: REL9_1_STABLE [2a37a103b] 2015-12-11 16:14:48 -0500 - Install our missing script where PGXS builds can find it + Install our missing script where PGXS builds can find it (Jim Nasby) This allows sane behavior in a PGXS build done on a machine where build - tools such as bison are missing. + tools such as bison are missing. - Ensure that dynloader.h is included in the installed + Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier) @@ -4497,11 +4497,11 @@ Branch: REL9_1_STABLE [386dcd539] 2015-12-11 19:08:40 -0500 Add variant regression test expected-output file to match behavior of - current libxml2 (Tom Lane) + current libxml2 (Tom Lane) - The fix for libxml2's CVE-2015-7499 causes it not to + The fix for libxml2's CVE-2015-7499 causes it not to output error context reports in some cases where it used to do so. This seems to be a bug, but we'll probably have to live with it for some time, so work around it. @@ -4510,7 +4510,7 @@ Branch: REL9_1_STABLE [386dcd539] 2015-12-11 19:08:40 -0500 - Update time zone data files to tzdata release 2016a for + Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan. 
@@ -4563,13 +4563,13 @@ Branch: REL9_3_STABLE [f8862172e] 2015-10-05 10:06:34 -0400 - Guard against stack overflows in json parsing + Guard against stack overflows in json parsing (Oskari Saarenmaa) - If an application constructs PostgreSQL json - or jsonb values from arbitrary user input, the application's + If an application constructs PostgreSQL json + or jsonb values from arbitrary user input, the application's users can reliably crash the PostgreSQL server, causing momentary denial of service. (CVE-2015-5289) @@ -4588,8 +4588,8 @@ Branch: REL9_0_STABLE [188e081ef] 2015-10-05 10:06:36 -0400 - Fix contrib/pgcrypto to detect and report - too-short crypt() salts (Josh Kupershmidt) + Fix contrib/pgcrypto to detect and report + too-short crypt() salts (Josh Kupershmidt) @@ -4634,7 +4634,7 @@ Branch: REL9_4_STABLE [bab959906] 2015-08-02 20:09:05 +0300 Fix possible deadlock during WAL insertion - when commit_delay is set (Heikki Linnakangas) + when commit_delay is set (Heikki Linnakangas) @@ -4665,13 +4665,13 @@ Branch: REL9_0_STABLE [45c69178b] 2015-06-25 14:39:06 -0400 - Fix insertion of relations into the relation cache init file + Fix insertion of relations into the relation cache init file (Tom Lane) An oversight in a patch in the most recent minor releases - caused pg_trigger_tgrelid_tgname_index to be omitted + caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing @@ -4711,7 +4711,7 @@ Branch: REL9_0_STABLE [2d4336cf8] 2015-09-30 23:32:23 -0400 - Improve LISTEN startup time when there are many unread + Improve LISTEN startup time when there are many unread notifications (Matt Newell) @@ -4731,7 +4731,7 @@ Branch: REL9_3_STABLE [1bcc9e60a] 2015-09-25 13:16:31 -0400 - This was seen primarily when restoring pg_dump output + This was seen primarily when restoring pg_dump output for databases with many thousands of tables. @@ -4755,7 +4755,7 @@ Branch: REL9_0_STABLE [444b2ebee] 2015-07-28 22:06:32 +0200 too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value - of ssl_renegotiation_limit to zero (disabled). + of ssl_renegotiation_limit to zero (disabled). 
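The contrib/pgcrypto entry above concerns salt validation in crypt(); a minimal sketch, generating the salt with gen_salt() so it is always well-formed:

<programlisting>
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- A hand-built salt that is too short now draws an error instead of
-- silently producing a weak or bogus hash; gen_salt() avoids the issue.
SELECT crypt('s3cret-passphrase', gen_salt('bf'));
</programlisting>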
@@ -4779,7 +4779,7 @@ Branch: REL9_0_STABLE [eeb0b7830] 2015-10-05 11:57:25 +0200 - Lower the minimum values of the *_freeze_max_age parameters + Lower the minimum values of the *_freeze_max_age parameters (Andres Freund) @@ -4802,7 +4802,7 @@ Branch: REL9_0_STABLE [b09446ed7] 2015-08-04 13:12:03 -0400 - Limit the maximum value of wal_buffers to 2GB to avoid + Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus) @@ -4816,8 +4816,8 @@ Branch: REL9_3_STABLE [5a56c2545] 2015-06-28 18:38:06 -0400 Avoid logging complaints when a parameter that can only be set at - server start appears multiple times in postgresql.conf, - and fix counting of line numbers after an include_dir + server start appears multiple times in postgresql.conf, + and fix counting of line numbers after an include_dir directive (Tom Lane) @@ -4835,7 +4835,7 @@ Branch: REL9_0_STABLE [a89781e34] 2015-09-21 12:12:16 -0400 - Fix rare internal overflow in multiplication of numeric values + Fix rare internal overflow in multiplication of numeric values (Dean Rasheed) @@ -4862,8 +4862,8 @@ Branch: REL9_2_STABLE [8dacb29ca] 2015-10-05 10:06:35 -0400 Guard against hard-to-reach stack overflows involving record types, - range types, json, jsonb, tsquery, - ltxtquery and query_int (Noah Misch) + range types, json, jsonb, tsquery, + ltxtquery and query_int (Noah Misch) @@ -4887,14 +4887,14 @@ Branch: REL9_0_STABLE [92d956f51] 2015-09-07 20:47:06 +0100 - Fix handling of DOW and DOY in datetime input + Fix handling of DOW and DOY in datetime input (Greg Stark) These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather - than invalid input syntax. + than invalid input syntax. @@ -4936,7 +4936,7 @@ Branch: REL9_0_STABLE [b875ca09f] 2015-10-02 15:00:52 -0400 Add recursion depth protections to regular expression, SIMILAR - TO, and LIKE matching (Tom Lane) + TO, and LIKE matching (Tom Lane) @@ -5052,8 +5052,8 @@ Branch: REL9_0_STABLE [bd327627f] 2015-08-04 18:18:47 -0400 - Fix unexpected out-of-memory situation during sort errors - when using tuplestores with small work_mem settings (Tom + Fix unexpected out-of-memory situation during sort errors + when using tuplestores with small work_mem settings (Tom Lane) @@ -5071,7 +5071,7 @@ Branch: REL9_0_STABLE [36522d627] 2015-07-16 22:57:46 -0400 - Fix very-low-probability stack overrun in qsort (Tom Lane) + Fix very-low-probability stack overrun in qsort (Tom Lane) @@ -5093,8 +5093,8 @@ Branch: REL9_0_STABLE [d637a899c] 2015-10-04 15:55:07 -0400 - Fix invalid memory alloc request size failure in hash joins - with large work_mem settings (Tomas Vondra, Tom Lane) + Fix invalid memory alloc request size failure in hash joins + with large work_mem settings (Tomas Vondra, Tom Lane) @@ -5172,9 +5172,9 @@ Branch: REL9_0_STABLE [7b4b57fc4] 2015-08-12 21:19:10 -0400 These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as could not devise a query plan for the - given query, could not find pathkey item to - sort, plan should not reference subplan's variable, - or failed to assign all NestLoopParams to plan nodes. + given query, could not find pathkey item to + sort, plan should not reference subplan's variable, + or failed to assign all NestLoopParams to plan nodes. Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems. 
@@ -5190,7 +5190,7 @@ Branch: REL9_2_STABLE [e538e510e] 2015-06-22 18:53:27 -0400 - Improve planner's performance for UPDATE/DELETE + Improve planner's performance for UPDATE/DELETE on large inheritance sets (Tom Lane, Dean Rasheed) @@ -5232,12 +5232,12 @@ Branch: REL9_0_STABLE [8b53c087d] 2015-08-02 14:54:44 -0400 During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove - the postmaster.pid file (Tom Lane) + the postmaster.pid file (Tom Lane) This avoids race-condition failures if an external script attempts to - start a new postmaster as soon as pg_ctl stop returns. + start a new postmaster as soon as pg_ctl stop returns. @@ -5311,7 +5311,7 @@ Branch: REL9_0_STABLE [f527c0a2a] 2015-07-28 17:34:00 -0400 - Do not print a WARNING when an autovacuum worker is already + Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane) @@ -5389,7 +5389,7 @@ Branch: REL9_2_STABLE [f4297f8c5] 2015-07-27 12:32:48 +0300 - VACUUM attempted to recycle such pages, but did so in a + VACUUM attempted to recycle such pages, but did so in a way that wasn't crash-safe. @@ -5408,7 +5408,7 @@ Branch: REL9_0_STABLE [40ad78220] 2015-07-23 01:30:19 +0300 Fix off-by-one error that led to otherwise-harmless warnings - about apparent wraparound in subtrans/multixact truncation + about apparent wraparound in subtrans/multixact truncation (Thomas Munro) @@ -5426,8 +5426,8 @@ Branch: REL9_0_STABLE [e41718fa1] 2015-08-18 19:22:38 -0400 - Fix misreporting of CONTINUE and MOVE statement - types in PL/pgSQL's error context messages + Fix misreporting of CONTINUE and MOVE statement + types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane) @@ -5444,7 +5444,7 @@ Branch: REL9_1_STABLE [ca6c2f863] 2015-09-29 10:52:22 -0400 - Fix PL/Perl to handle non-ASCII error + Fix PL/Perl to handle non-ASCII error message texts correctly (Alex Hunsaker) @@ -5467,8 +5467,8 @@ Branch: REL9_1_STABLE [1d190d095] 2015-08-21 12:21:37 -0400 - Fix PL/Python crash when returning the string - representation of a record result (Tom Lane) + Fix PL/Python crash when returning the string + representation of a record result (Tom Lane) @@ -5485,8 +5485,8 @@ Branch: REL9_0_STABLE [4c11967e7] 2015-07-20 14:18:08 +0200 - Fix some places in PL/Tcl that neglected to check for - failure of malloc() calls (Michael Paquier, Álvaro + Fix some places in PL/Tcl that neglected to check for + failure of malloc() calls (Michael Paquier, Álvaro Herrera) @@ -5503,7 +5503,7 @@ Branch: REL9_1_STABLE [2d19a0e97] 2015-08-02 22:12:51 +0300 - In contrib/isn, fix output of ISBN-13 numbers that begin + In contrib/isn, fix output of ISBN-13 numbers that begin with 979 (Fabien Coelho) @@ -5522,7 +5522,7 @@ Branch: REL9_4_STABLE [93840f96c] 2015-10-04 17:58:30 -0400 - Improve contrib/pg_stat_statements' handling of + Improve contrib/pg_stat_statements' handling of query-text garbage collection (Peter Geoghegan) @@ -5543,13 +5543,13 @@ Branch: REL9_3_STABLE [b7dcb2dd4] 2015-09-24 12:47:30 -0400 - Improve contrib/postgres_fdw's handling of + Improve contrib/postgres_fdw's handling of collation-related decisions (Tom Lane) The main user-visible effect is expected to be that comparisons - involving varchar columns will be sent to the remote server + involving varchar columns will be sent to the remote server for execution in more cases than before. 
@@ -5567,7 +5567,7 @@ Branch: REL9_0_STABLE [2b189c7ec] 2015-07-07 18:45:31 +0300 - Improve libpq's handling of out-of-memory conditions + Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas) @@ -5603,7 +5603,7 @@ Branch: REL9_0_STABLE [d278ff3b2] 2015-06-15 14:27:39 +0200 Fix memory leaks and missing out-of-memory checks - in ecpg (Michael Paquier) + in ecpg (Michael Paquier) @@ -5634,15 +5634,15 @@ Branch: REL9_0_STABLE [98d8c75f9] 2015-09-25 12:20:46 -0400 - Fix psql's code for locale-aware formatting of numeric + Fix psql's code for locale-aware formatting of numeric output (Tom Lane) - The formatting code invoked by \pset numericlocale on + The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. It could also mangle already-localized - output from the money data type. + output from the money data type. @@ -5659,7 +5659,7 @@ Branch: REL9_0_STABLE [6087bf1a1] 2015-07-08 20:44:27 -0400 - Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) @@ -5675,7 +5675,7 @@ Branch: REL9_2_STABLE [3756c65a0] 2015-10-01 16:19:49 -0400 - Make pg_dump handle inherited NOT VALID + Make pg_dump handle inherited NOT VALID check constraints correctly (Tom Lane) @@ -5692,8 +5692,8 @@ Branch: REL9_1_STABLE [af225551e] 2015-07-25 17:16:39 -0400 - Fix selection of default zlib compression level - in pg_dump's directory output format (Andrew Dunstan) + Fix selection of default zlib compression level + in pg_dump's directory output format (Andrew Dunstan) @@ -5710,8 +5710,8 @@ Branch: REL9_0_STABLE [24aed2124] 2015-09-20 20:44:34 -0400 - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) @@ -5729,8 +5729,8 @@ Branch: REL9_0_STABLE [52b07779d] 2015-09-11 15:51:10 -0400 - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) @@ -5748,7 +5748,7 @@ Branch: REL9_0_STABLE [298d1f808] 2015-08-10 20:10:16 -0400 - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) @@ -5756,11 +5756,11 @@ Branch: REL9_0_STABLE [298d1f808] 2015-08-10 20:10:16 -0400 When dumping data types from pre-9.2 servers, and when dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. 
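A rough sketch of the privilege-dumping difference described above, with object and role names entirely hypothetical; the exact commands pg_dump emits may differ.

<programlisting>
-- Old behavior when dumping a function from a pre-7.3 server: revoke the
-- owner's grantable privileges and grant everything to PUBLIC.
REVOKE ALL ON FUNCTION legacy_fn() FROM legacy_owner;
GRANT ALL ON FUNCTION legacy_fn() TO PUBLIC;
-- Fixed behavior: emit no GRANT/REVOKE at all, leaving the default
-- USAGE/EXECUTE privilege state untouched.
</programlisting>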
@@ -5780,12 +5780,12 @@ Branch: REL9_0_STABLE [5d175be17] 2015-08-04 19:34:12 -0400 - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. @@ -5801,7 +5801,7 @@ Branch: REL9_1_STABLE [e9a859b54] 2015-07-12 16:25:52 -0400 - Fix assorted minor memory leaks in pg_dump and other + Fix assorted minor memory leaks in pg_dump and other client-side programs (Michael Paquier) @@ -5815,8 +5815,8 @@ Branch: REL9_4_STABLE [9d6352aaa] 2015-07-03 11:15:27 +0300 - Fix pgbench's progress-report behavior when a query, - or pgbench itself, gets stuck (Fabien Coelho) + Fix pgbench's progress-report behavior when a query, + or pgbench itself, gets stuck (Fabien Coelho) @@ -5845,11 +5845,11 @@ Branch: REL9_0_STABLE [b5a22d8bb] 2015-08-29 16:09:25 -0400 Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. @@ -5868,7 +5868,7 @@ Branch: REL9_0_STABLE [cdf596b1c] 2015-07-17 03:02:46 -0400 - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) @@ -5886,7 +5886,7 @@ Branch: REL9_0_STABLE [7803d5720] 2015-07-15 21:00:31 -0400 - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -5909,7 +5909,7 @@ Branch: REL9_0_STABLE [2d8c136e7] 2015-07-29 22:54:08 -0400 Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) @@ -5925,7 +5925,7 @@ Branch: REL9_0_STABLE [b185c42c1] 2015-06-30 14:20:37 -0300 - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) @@ -5939,7 +5939,7 @@ Branch: REL9_4_STABLE [a0104e080] 2015-08-14 20:23:42 -0400 - Translate encoding UHC as Windows code page 949 + Translate encoding UHC as Windows code page 949 (Noah Misch) @@ -5972,12 +5972,12 @@ Branch: REL9_4_STABLE [b2ed1682d] 2015-06-20 12:10:56 -0400 Fix postmaster startup failure due to not - copying setlocale()'s return value (Noah Misch) + copying setlocale()'s return value (Noah Misch) This has been reported on Windows systems with the ANSI code page set - to CP936 (Chinese (Simplified, PRC)), and may occur with + to CP936 (Chinese (Simplified, PRC)), and may occur with other multibyte code pages. 
@@ -5995,7 +5995,7 @@ Branch: REL9_0_STABLE [341b877d3] 2015-07-07 16:39:25 +0300 - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) @@ -6013,9 +6013,9 @@ Branch: REL9_0_STABLE [29ff43adf] 2015-07-05 12:01:02 -0400 - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) @@ -6032,10 +6032,10 @@ Branch: REL9_0_STABLE [47ac95f37] 2015-10-02 19:16:37 -0400 - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. @@ -6067,7 +6067,7 @@ Branch: REL9_0_STABLE [47ac95f37] 2015-10-02 19:16:37 -0400 However, if you are upgrading an installation that was previously - upgraded using a pg_upgrade version between 9.3.0 and + upgraded using a pg_upgrade version between 9.3.0 and 9.3.4 inclusive, see the first changelog entry below. @@ -6096,46 +6096,46 @@ Branch: REL9_3_STABLE [2a9b01928] 2015-06-05 09:34:15 -0400 - Recent PostgreSQL releases introduced mechanisms to + Recent PostgreSQL releases introduced mechanisms to protect against multixact wraparound, but some of that code did not account for the possibility that it would need to run during crash recovery, when the database may not be in a consistent state. This could result in failure to restart after a crash, or failure to start up a secondary server. The lingering effects of a previously-fixed - bug in pg_upgrade could also cause such a failure, in - installations that had used pg_upgrade versions + bug in pg_upgrade could also cause such a failure, in + installations that had used pg_upgrade versions between 9.3.0 and 9.3.4. - The pg_upgrade bug in question was that it would - set oldestMultiXid to 1 in pg_control even + The pg_upgrade bug in question was that it would + set oldestMultiXid to 1 in pg_control even if the true value should be higher. With the fixes introduced in this release, such a situation will result in immediate emergency - autovacuuming until a correct oldestMultiXid value can + autovacuuming until a correct oldestMultiXid value can be determined. If that would pose a hardship, users can avoid it by - doing manual vacuuming before upgrading to this release. + doing manual vacuuming before upgrading to this release. In detail: - Check whether pg_controldata reports Latest - checkpoint's oldestMultiXid to be 1. If not, there's nothing + Check whether pg_controldata reports Latest + checkpoint's oldestMultiXid to be 1. If not, there's nothing to do. - Look in PGDATA/pg_multixact/offsets to see if there's a - file named 0000. If there is, there's nothing to do. + Look in PGDATA/pg_multixact/offsets to see if there's a + file named 0000. If there is, there's nothing to do. Otherwise, for each table that has - pg_class.relminmxid equal to 1, - VACUUM that table with + pg_class.relminmxid equal to 1, + VACUUM that table with both and set to zero. 
(You can use the vacuum cost delay parameters described @@ -6164,7 +6164,7 @@ Branch: REL9_0_STABLE [2fe1939b0] 2015-06-07 15:32:09 -0400 With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -6185,13 +6185,13 @@ Branch: REL9_0_STABLE [dbd99c7f0] 2015-06-05 13:22:27 -0400 Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -6284,12 +6284,12 @@ Branch: REL9_3_STABLE [c2b68b1f7] 2015-05-29 17:02:58 -0400 - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. @@ -6301,7 +6301,7 @@ Branch: REL9_3_STABLE [c2b68b1f7] 2015-05-29 17:02:58 -0400 - Also apply the same rules in initdb --sync-only. + Also apply the same rules in initdb --sync-only. This case is less critical but it should act similarly. @@ -6316,8 +6316,8 @@ Branch: REL9_2_STABLE [f3c67aad4] 2015-05-28 11:24:37 -0400 - Fix pg_get_functiondef() to show - functions' LEAKPROOF property, if set (Jeevan Chalke) + Fix pg_get_functiondef() to show + functions' LEAKPROOF property, if set (Jeevan Chalke) @@ -6329,7 +6329,7 @@ Branch: REL9_4_STABLE [9b74f32cd] 2015-05-22 10:31:29 -0400 - Fix pushJsonbValue() to unpack jbvBinary + Fix pushJsonbValue() to unpack jbvBinary objects (Andrew Dunstan) @@ -6351,14 +6351,14 @@ Branch: REL9_0_STABLE [b06649b7f] 2015-05-26 22:15:00 -0400 - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. @@ -6390,8 +6390,8 @@ Branch: REL9_0_STABLE [b06649b7f] 2015-05-26 22:15:00 -0400 - However, if you use contrib/citext's - regexp_matches() functions, see the changelog entry below + However, if you use contrib/citext's + regexp_matches() functions, see the changelog entry below about that. @@ -6469,7 +6469,7 @@ Branch: REL9_0_STABLE [cf893530a] 2015-05-19 18:18:56 -0400 - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. 
In the worst case this might lead to information exposure, due to our @@ -6479,7 +6479,7 @@ Branch: REL9_0_STABLE [cf893530a] 2015-05-19 18:18:56 -0400 - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -6499,15 +6499,15 @@ Branch: REL9_0_STABLE [b84e5c017] 2015-05-18 10:02:39 -0400 - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -6557,7 +6557,7 @@ Branch: REL9_3_STABLE [ddebd2119] 2015-05-11 12:16:51 -0400 Under certain usage patterns, the existing defenses against this might - be insufficient, allowing pg_multixact/members files to be + be insufficient, allowing pg_multixact/members files to be removed too early, resulting in data loss. The fix for this includes modifying the server to fail transactions that would result in overwriting old multixact member ID data, and @@ -6578,16 +6578,16 @@ Branch: REL9_1_STABLE [801e250a8] 2015-05-05 15:50:53 -0400 - Fix incorrect declaration of contrib/citext's - regexp_matches() functions (Tom Lane) + Fix incorrect declaration of contrib/citext's + regexp_matches() functions (Tom Lane) - These functions should return setof text[], like the core + These functions should return setof text[], like the core functions they are wrappers for; but they were incorrectly declared as - returning just text[]. This mistake had two results: first, + returning just text[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you - should get is an empty set (zero rows). Second, the g flag + should get is an empty set (zero rows). Second, the g flag was effectively ignored, since you would get only one result array even if there were multiple matches. @@ -6595,16 +6595,16 @@ Branch: REL9_1_STABLE [801e250a8] 2015-05-05 15:50:53 -0400 While the latter behavior is clearly a bug, there might be applications depending on the former behavior; therefore the function declarations - will not be changed by default until PostgreSQL 9.5. + will not be changed by default until PostgreSQL 9.5. In pre-9.5 branches, the old behavior exists in version 1.0 of - the citext extension, while we have provided corrected - declarations in version 1.1 (which is not installed by + the citext extension, while we have provided corrected + declarations in version 1.1 (which is not installed by default). To adopt the fix in pre-9.5 branches, execute - ALTER EXTENSION citext UPDATE TO '1.1' in each database in - which citext is installed. (You can also update + ALTER EXTENSION citext UPDATE TO '1.1' in each database in + which citext is installed. (You can also update back to 1.0 if you need to undo that.) 
Be aware that either update direction will require dropping and recreating any views or rules that - use citext's regexp_matches() functions. + use citext's regexp_matches() functions. @@ -6616,8 +6616,8 @@ Branch: REL9_4_STABLE [79afe6e66] 2015-02-26 12:34:43 -0500 - Render infinite dates and timestamps as infinity when - converting to json, rather than throwing an error + Render infinite dates and timestamps as infinity when + converting to json, rather than throwing an error (Andrew Dunstan) @@ -6630,8 +6630,8 @@ Branch: REL9_4_STABLE [997066f44] 2015-05-04 12:43:16 -0400 - Fix json/jsonb's populate_record() - and to_record() functions to handle empty input properly + Fix json/jsonb's populate_record() + and to_record() functions to handle empty input properly (Andrew Dunstan) @@ -6671,7 +6671,7 @@ Branch: REL9_4_STABLE [79edb2981] 2015-05-03 11:30:24 -0400 Fix behavior when changing foreign key constraint deferrability status - with ALTER TABLE ... ALTER CONSTRAINT (Tom Lane) + with ALTER TABLE ... ALTER CONSTRAINT (Tom Lane) @@ -6720,7 +6720,7 @@ Branch: REL9_0_STABLE [985da346e] 2015-04-25 16:44:27 -0400 This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -6768,7 +6768,7 @@ Branch: REL9_0_STABLE [72bbca27e] 2015-02-10 20:37:31 -0500 This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -6782,7 +6782,7 @@ Branch: REL9_4_STABLE [f16270ade] 2015-02-25 21:36:40 -0500 Ensure that row locking occurs properly when the target of - an UPDATE or DELETE is a security-barrier view + an UPDATE or DELETE is a security-barrier view (Stephen Frost) @@ -6801,7 +6801,7 @@ Branch: REL9_4_STABLE [fd3dfc236] 2015-04-28 00:18:04 +0200 On some platforms, the previous coding could result in errors like - could not fsync file "pg_replslot/...": Bad file descriptor. + could not fsync file "pg_replslot/...": Bad file descriptor. @@ -6818,7 +6818,7 @@ Branch: REL9_0_STABLE [223a94680] 2015-04-23 21:37:09 +0300 Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -6859,7 +6859,7 @@ Branch: REL9_0_STABLE [262fbcb9d] 2015-05-05 09:30:07 -0400 - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -6901,7 +6901,7 @@ Branch: REL9_4_STABLE [ee0d06c0b] 2015-04-03 00:07:29 -0400 This oversight could result in failures in sessions that start - concurrently with a VACUUM FULL on a system catalog. + concurrently with a VACUUM FULL on a system catalog. 
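Returning to the contrib/citext regexp_matches() entry above, a minimal sketch of adopting the corrected declarations; the sample strings and pattern are made up.

<programlisting>
-- Adopt the corrected setof text[] declarations (run in each database
-- where citext is installed):
ALTER EXTENSION citext UPDATE TO '1.1';

-- With version 1.1, a non-matching pattern yields an empty set (zero rows)
-- rather than a single scalar NULL, and the 'g' flag returns one row per match.
SELECT regexp_matches('barbeque'::citext, 'beer'::citext);
SELECT regexp_matches('barbeque'::citext, 'b.'::citext, 'g');
</programlisting>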
@@ -6913,7 +6913,7 @@ Branch: REL9_4_STABLE [2897e069c] 2015-03-30 13:05:35 -0400 - Fix crash in BackendIdGetTransactionIds() when trying + Fix crash in BackendIdGetTransactionIds() when trying to get status for a backend process that just exited (Tom Lane) @@ -6930,13 +6930,13 @@ Branch: REL9_0_STABLE [87b7fcc87] 2015-02-23 16:14:16 +0100 - Cope with unexpected signals in LockBufferForCleanup() + Cope with unexpected signals in LockBufferForCleanup() (Andres Freund) This oversight could result in spurious errors about multiple - backends attempting to wait for pincount 1. + backends attempting to wait for pincount 1. @@ -6950,7 +6950,7 @@ Branch: REL9_2_STABLE [effcaa4c2] 2015-02-15 23:26:46 -0500 - Fix crash when doing COPY IN to a table with check + Fix crash when doing COPY IN to a table with check constraints that contain whole-row references (Tom Lane) @@ -6995,7 +6995,7 @@ Branch: REL9_4_STABLE [16be9737c] 2015-03-23 16:52:17 +0100 - Avoid busy-waiting with short recovery_min_apply_delay + Avoid busy-waiting with short recovery_min_apply_delay values (Andres Freund) @@ -7061,9 +7061,9 @@ Branch: REL9_0_STABLE [152c94632] 2015-03-29 15:04:38 -0400 - ANALYZE executes index expressions many times; if there are + ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to - cancel the ANALYZE before that loop finishes. + cancel the ANALYZE before that loop finishes. @@ -7078,10 +7078,10 @@ Branch: REL9_1_STABLE [4a4fd2b0c] 2015-03-12 13:38:49 -0400 - Ensure tableoid of a foreign table is reported - correctly when a READ COMMITTED recheck occurs after - locking rows in SELECT FOR UPDATE, UPDATE, - or DELETE (Etsuro Fujita) + Ensure tableoid of a foreign table is reported + correctly when a READ COMMITTED recheck occurs after + locking rows in SELECT FOR UPDATE, UPDATE, + or DELETE (Etsuro Fujita) @@ -7127,14 +7127,14 @@ Branch: REL9_0_STABLE [c981e5999] 2015-05-08 19:40:15 -0400 - Recommend setting include_realm to 1 when using + Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost) Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but - it will become the default setting in PostgreSQL 9.5. + it will become the default setting in PostgreSQL 9.5. @@ -7157,7 +7157,7 @@ Branch: REL9_0_STABLE [e48ce4f33] 2015-02-17 12:49:18 -0500 - Remove code for matching IPv4 pg_hba.conf entries to + Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane) @@ -7170,7 +7170,7 @@ Branch: REL9_0_STABLE [e48ce4f33] 2015-02-17 12:49:18 -0500 crashes on some systems, so let's just remove it rather than fix it. (Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of - IPv4 pg_hba.conf entries, which does not seem like a good + IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.) 
@@ -7197,7 +7197,7 @@ Branch: REL9_4_STABLE [a1f4ade01] 2015-04-02 14:39:18 -0400 After a database crash, don't restart background workers that are - marked BGW_NEVER_RESTART (Amit Khandekar) + marked BGW_NEVER_RESTART (Amit Khandekar) @@ -7212,13 +7212,13 @@ Branch: REL9_1_STABLE [0d36d9f2b] 2015-02-06 11:32:42 +0200 - Report WAL flush, not insert, position in IDENTIFY_SYSTEM + Report WAL flush, not insert, position in IDENTIFY_SYSTEM replication command (Heikki Linnakangas) This avoids a possible startup failure - in pg_receivexlog. + in pg_receivexlog. @@ -7236,7 +7236,7 @@ Branch: REL9_0_STABLE [78ce2dc8e] 2015-05-07 15:10:01 +0200 While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the - service too soon; and ensure that pg_ctl will wait for + service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj) @@ -7253,7 +7253,7 @@ Branch: REL9_0_STABLE [8878eaaa8] 2015-02-23 13:32:53 +0200 - Reduce risk of network deadlock when using libpq's + Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas) @@ -7262,12 +7262,12 @@ Branch: REL9_0_STABLE [8878eaaa8] 2015-02-23 13:32:53 +0200 buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM - STDIN.) This worked properly in the normal blocking mode, but not - so much in non-blocking mode. We've modified libpq + STDIN.) This worked properly in the normal blocking mode, but not + so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, - and be sure to call PQconsumeInput() upon read-ready. + and be sure to call PQconsumeInput() upon read-ready. @@ -7281,7 +7281,7 @@ Branch: REL9_2_STABLE [83c3115dd] 2015-02-21 12:59:43 -0500 - In libpq, fix misparsing of empty values in URI + In libpq, fix misparsing of empty values in URI connection strings (Thomas Fanghaenel) @@ -7298,7 +7298,7 @@ Branch: REL9_0_STABLE [ce2fcc58e] 2015-02-11 11:30:11 +0100 - Fix array handling in ecpg (Michael Meskes) + Fix array handling in ecpg (Michael Meskes) @@ -7314,8 +7314,8 @@ Branch: REL9_0_STABLE [557fcfae3] 2015-04-01 20:00:07 -0300 - Fix psql to sanely handle URIs and conninfo strings as - the first parameter to \connect + Fix psql to sanely handle URIs and conninfo strings as + the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera) @@ -7338,17 +7338,17 @@ Branch: REL9_0_STABLE [396ef6fd8] 2015-03-14 13:43:26 -0400 - Suppress incorrect complaints from psql on some - platforms that it failed to write ~/.psql_history at exit + Suppress incorrect complaints from psql on some + platforms that it failed to write ~/.psql_history at exit (Tom Lane) This misbehavior was caused by a workaround for a bug in very old - (pre-2006) versions of libedit. We fixed it by + (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear - for anyone still using such versions of libedit. - Recommendation: upgrade that library, or use libreadline. + for anyone still using such versions of libedit. + Recommendation: upgrade that library, or use libreadline. 
@@ -7364,7 +7364,7 @@ Branch: REL9_0_STABLE [8e70f3c40] 2015-02-10 22:38:29 -0500 - Fix pg_dump's rule for deciding which casts are + Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane) @@ -7380,8 +7380,8 @@ Branch: REL9_1_STABLE [b0d53b2e3] 2015-02-18 11:43:00 -0500 - In pg_dump, fix failure to honor -Z - compression level option together with -Fd + In pg_dump, fix failure to honor -Z + compression level option together with -Fd (Michael Paquier) @@ -7397,7 +7397,7 @@ Branch: REL9_1_STABLE [dcb467b8e] 2015-03-02 14:12:43 -0500 - Make pg_dump consider foreign key relationships + Make pg_dump consider foreign key relationships between extension configuration tables while choosing dump order (Gilles Darold, Michael Paquier, Stephen Frost) @@ -7417,7 +7417,7 @@ Branch: REL9_3_STABLE [d645273cf] 2015-03-06 13:27:46 -0500 - Avoid possible pg_dump failure when concurrent sessions + Avoid possible pg_dump failure when concurrent sessions are creating and dropping temporary functions (Tom Lane) @@ -7434,7 +7434,7 @@ Branch: REL9_0_STABLE [7a501bcbf] 2015-02-25 12:01:12 -0500 - Fix dumping of views that are just VALUES(...) but have + Fix dumping of views that are just VALUES(...) but have column aliases (Tom Lane) @@ -7448,7 +7448,7 @@ Branch: REL9_4_STABLE [70fac4844] 2015-05-01 13:03:23 -0400 Ensure that a view's replication identity is correctly set - to nothing during dump/restore (Marko Tiikkaja) + to nothing during dump/restore (Marko Tiikkaja) @@ -7472,7 +7472,7 @@ Branch: REL9_3_STABLE [4e9935979] 2015-05-16 15:16:28 -0400 - In pg_upgrade, force timeline 1 in the new cluster + In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian) @@ -7494,7 +7494,7 @@ Branch: REL9_0_STABLE [2194aa92b] 2015-05-16 00:10:03 -0400 - In pg_upgrade, check for improperly non-connectable + In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian) @@ -7512,8 +7512,8 @@ Branch: REL9_0_STABLE [4ae178f60] 2015-02-11 22:06:04 -0500 - In pg_upgrade, quote directory paths - properly in the generated delete_old_cluster script + In pg_upgrade, quote directory paths + properly in the generated delete_old_cluster script (Bruce Momjian) @@ -7530,14 +7530,14 @@ Branch: REL9_0_STABLE [85dac37ee] 2015-02-11 21:02:06 -0500 - In pg_upgrade, preserve database-level freezing info + In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian) This oversight could cause missing-clog-file errors for tables within - the postgres and template1 databases. + the postgres and template1 databases. 
@@ -7553,7 +7553,7 @@ Branch: REL9_0_STABLE [bf22a8e58] 2015-03-30 17:18:10 -0400 - Run pg_upgrade and pg_resetxlog with + Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem) @@ -7571,8 +7571,8 @@ Branch: REL9_1_STABLE [d7d294f59] 2015-02-17 11:08:40 -0500 - Improve handling of readdir() failures when scanning - directories in initdb and pg_basebackup + Improve handling of readdir() failures when scanning + directories in initdb and pg_basebackup (Marco Nenciarini) @@ -7589,7 +7589,7 @@ Branch: REL9_0_STABLE [40b0c10b7] 2015-03-15 23:22:03 -0400 - Fix slow sorting algorithm in contrib/intarray (Tom Lane) + Fix slow sorting algorithm in contrib/intarray (Tom Lane) @@ -7637,7 +7637,7 @@ Branch: REL9_0_STABLE [3c3749a3b] 2015-05-15 19:36:20 -0400 - Update time zone data files to tzdata release 2015d + Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT). @@ -7672,12 +7672,12 @@ Branch: REL9_0_STABLE [3c3749a3b] 2015-05-15 19:36:20 -0400 However, if you are a Windows user and are using the Norwegian - (Bokmål) locale, manual action is needed after the upgrade to - replace any Norwegian (Bokmål)_Norway - or norwegian-bokmal locale names stored - in PostgreSQL system catalogs with the plain-ASCII - alias Norwegian_Norway. For details see - + (Bokmål) locale, manual action is needed after the upgrade to + replace any Norwegian (Bokmål)_Norway + or norwegian-bokmal locale names stored + in PostgreSQL system catalogs with the plain-ASCII + alias Norwegian_Norway. For details see + @@ -7705,15 +7705,15 @@ Branch: REL9_0_STABLE [56b970f2e] 2015-02-02 10:00:52 -0500 - Fix buffer overruns in to_char() + Fix buffer overruns in to_char() (Bruce Momjian) - When to_char() processes a numeric formatting template - calling for a large number of digits, PostgreSQL + When to_char() processes a numeric formatting template + calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted - timestamp formatting template, PostgreSQL would write + timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely. @@ -7733,27 +7733,27 @@ Branch: REL9_0_STABLE [9e05c5063] 2015-02-02 10:00:52 -0500 - Fix buffer overrun in replacement *printf() functions + Fix buffer overrun in replacement *printf() functions (Tom Lane) - PostgreSQL includes a replacement implementation - of printf and related functions. This code will overrun + PostgreSQL includes a replacement implementation + of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion - specifiers e, E, f, F, - g or G) with requested precision greater than + specifiers e, E, f, F, + g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through - the to_char() SQL function. While that is the only - affected core PostgreSQL functionality, extension + the to_char() SQL function. 
While that is the only + affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. - This issue primarily affects PostgreSQL on Windows. - PostgreSQL uses the system implementation of these + This issue primarily affects PostgreSQL on Windows. + PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. (CVE-2015-0242) @@ -7778,12 +7778,12 @@ Branch: REL9_0_STABLE [0a3ee8a5f] 2015-02-02 10:00:52 -0500 - Fix buffer overruns in contrib/pgcrypto + Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch) - Errors in memory size tracking within the pgcrypto + Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. The buffer overrun cases can crash the server, and we have not ruled out the possibility of @@ -7844,7 +7844,7 @@ Branch: REL9_0_STABLE [3a2063369] 2015-01-28 12:33:29 -0500 Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have - SELECT privilege on all columns of the table, this could + SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user. @@ -7893,20 +7893,20 @@ Branch: REL9_2_STABLE [6bf343c6e] 2015-01-16 13:10:23 +0200 - Cope with the Windows locale named Norwegian (Bokmål) + Cope with the Windows locale named Norwegian (Bokmål) (Heikki Linnakangas) Non-ASCII locale names are problematic since it's not clear what encoding they should be represented in. Map the troublesome locale - name to a plain-ASCII alias, Norwegian_Norway. + name to a plain-ASCII alias, Norwegian_Norway. - 9.4.0 mapped the troublesome name to norwegian-bokmal, + 9.4.0 mapped the troublesome name to norwegian-bokmal, but that turns out not to work on all Windows configurations. - Norwegian_Norway is now recommended instead. + Norwegian_Norway is now recommended instead. @@ -7927,7 +7927,7 @@ Branch: REL9_0_STABLE [5308e085b] 2015-01-15 18:52:38 -0500 - In READ COMMITTED mode, queries that lock or update + In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug. 
@@ -7956,8 +7956,8 @@ Branch: REL9_3_STABLE [54a8abc2b] 2015-01-04 15:48:29 -0300 Fix failure to wait when a transaction tries to acquire a FOR - NO KEY EXCLUSIVE tuple lock, while multiple other transactions - currently hold FOR SHARE locks (Álvaro Herrera) + NO KEY EXCLUSIVE tuple lock, while multiple other transactions + currently hold FOR SHARE locks (Álvaro Herrera) @@ -7970,7 +7970,7 @@ Branch: REL9_3_STABLE [939f0fb67] 2015-01-15 13:18:19 -0500 - Improve performance of EXPLAIN with large range tables + Improve performance of EXPLAIN with large range tables (Tom Lane) @@ -7983,41 +7983,41 @@ Branch: REL9_4_STABLE [4cbf390d5] 2015-01-30 14:44:49 -0500 - Fix jsonb Unicode escape processing, and in consequence - disallow \u0000 (Tom Lane) + Fix jsonb Unicode escape processing, and in consequence + disallow \u0000 (Tom Lane) - Previously, the JSON Unicode escape \u0000 was accepted + Previously, the JSON Unicode escape \u0000 was accepted and was stored as those six characters; but that is indistinguishable - from what is stored for the input \\u0000, resulting in + from what is stored for the input \\u0000, resulting in ambiguity. Moreover, in cases where de-escaped textual output is - expected, such as the ->> operator, the sequence was - printed as \u0000, which does not meet the expectation + expected, such as the ->> operator, the sequence was + printed as \u0000, which does not meet the expectation that JSON escaping would be removed. (Consistent behavior would - require emitting a zero byte, but PostgreSQL does not + require emitting a zero byte, but PostgreSQL does not support zero bytes embedded in text strings.) 9.4.0 included an ill-advised attempt to improve this situation by adjusting JSON output conversion rules; but of course that could not fix the fundamental ambiguity, and it turned out to break other usages of Unicode escape sequences. Revert that, and to avoid the core problem, - reject \u0000 in jsonb input. + reject \u0000 in jsonb input. - If a jsonb column contains a \u0000 value stored + If a jsonb column contains a \u0000 value stored with 9.4.0, it will henceforth read out as though it - were \\u0000, which is the other valid interpretation of + were \\u0000, which is the other valid interpretation of the data stored by 9.4.0 for this case. - The json type did not have the storage-ambiguity problem, but + The json type did not have the storage-ambiguity problem, but it did have the problem of inconsistent de-escaped textual output. - Therefore \u0000 will now also be rejected - in json values when conversion to de-escaped form is + Therefore \u0000 will now also be rejected + in json values when conversion to de-escaped form is required. This change does not break the ability to - store \u0000 in json columns so long as no + store \u0000 in json columns so long as no processing is done on the values. This is exactly parallel to the cases in which non-ASCII Unicode escapes are allowed when the database encoding is not UTF8. 
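A brief sketch of the \u0000 behavior just described; the comments paraphrase the entry rather than quoting exact server messages.

<programlisting>
SELECT '"\u0000"'::jsonb;                 -- now rejected at jsonb input
SELECT '{"a":"\u0000"}'::json;            -- still accepted: no processing is done
SELECT ('{"a":"\u0000"}'::json) ->> 'a';  -- rejected: de-escaped text output
                                          -- would be required
</programlisting>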
@@ -8036,14 +8036,14 @@ Branch: REL9_0_STABLE [cebb3f032] 2015-01-17 22:37:32 -0500 - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -8063,7 +8063,7 @@ Branch: REL9_3_STABLE [527ff8baf] 2015-01-30 12:30:43 -0500 - This patch fixes corner-case unexpected operator NNNN planner + This patch fixes corner-case unexpected operator NNNN planner errors, and improves the selectivity estimates for some other cases. @@ -8081,7 +8081,7 @@ Branch: REL9_4_STABLE [4e241f7cd] 2014-12-30 14:53:03 +0200 - 9.4.0 could fail with index row size exceeds maximum errors + 9.4.0 could fail with index row size exceeds maximum errors for data that previous versions would accept. @@ -8111,7 +8111,7 @@ Branch: REL9_1_STABLE [37e0f13f2] 2015-01-29 19:37:22 +0200 Fix possible crash when using - nonzero gin_fuzzy_search_limit (Heikki Linnakangas) + nonzero gin_fuzzy_search_limit (Heikki Linnakangas) @@ -8139,7 +8139,7 @@ Branch: REL9_4_STABLE [b337d9657] 2015-01-15 20:52:18 +0200 Fix incorrect replay of WAL parameter change records that report - changes in the wal_log_hints setting (Petr Jelinek) + changes in the wal_log_hints setting (Petr Jelinek) @@ -8155,7 +8155,7 @@ Branch: REL9_0_STABLE [a1a8d0249] 2015-01-19 23:01:46 -0500 - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -8164,7 +8164,7 @@ Branch: REL9_0_STABLE [a1a8d0249] 2015-01-19 23:01:46 -0500 case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. @@ -8180,7 +8180,7 @@ Branch: REL9_0_STABLE [2e4946169] 2015-01-07 22:46:20 -0500 - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) @@ -8193,18 +8193,18 @@ Branch: REL9_4_STABLE [733728ff3] 2015-01-11 12:35:47 -0500 - Fix libpq's behavior when /etc/passwd + Fix libpq's behavior when /etc/passwd isn't readable (Tom Lane) - While doing PQsetdbLogin(), libpq + While doing PQsetdbLogin(), libpq attempts to ascertain the user's operating system name, which on most - Unix platforms involves reading /etc/passwd. As of 9.4, + Unix platforms involves reading /etc/passwd. As of 9.4, failure to do that was treated as a hard error. Restore the previous behavior, which was to fail only if the application does not provide a database role name to connect as. This supports operation in chroot - environments that lack an /etc/passwd file. + environments that lack an /etc/passwd file. 
@@ -8220,17 +8220,17 @@ Branch: REL9_0_STABLE [2600e4436] 2014-12-31 12:17:12 -0500 - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. @@ -8245,7 +8245,7 @@ Branch: REL9_3_STABLE [bb1e2426b] 2015-01-05 19:27:09 -0500 - Fix pg_dump to handle comments on event triggers + Fix pg_dump to handle comments on event triggers without failing (Tom Lane) @@ -8259,8 +8259,8 @@ Branch: REL9_3_STABLE [cc609c46f] 2015-01-30 09:01:36 -0600 - Allow parallel pg_dump to - use (Kevin Grittner) @@ -8275,7 +8275,7 @@ Branch: REL9_1_STABLE [2a0bfa4d6] 2015-01-03 20:54:13 +0100 - Prevent WAL files created by pg_basebackup -x/-X from + Prevent WAL files created by pg_basebackup -x/-X from being archived again when the standby is promoted (Andres Freund) @@ -8293,12 +8293,12 @@ Branch: REL9_0_STABLE [dc9a506e6] 2015-01-29 20:18:46 -0500 Handle unexpected query results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. @@ -8392,14 +8392,14 @@ Branch: REL9_4_STABLE [adb355106] 2015-01-14 11:08:17 -0500 - Allow CFLAGS from configure's environment - to override automatically-supplied CFLAGS (Tom Lane) + Allow CFLAGS from configure's environment + to override automatically-supplied CFLAGS (Tom Lane) - Previously, configure would add any switches that it + Previously, configure would add any switches that it chose of its own accord to the end of the - user-specified CFLAGS string. Since most compilers + user-specified CFLAGS string. Since most compilers process switches left-to-right, this meant that configure's choices would override the user-specified flags in case of conflicts. That should work the other way around, so adjust the logic to put the @@ -8419,13 +8419,13 @@ Branch: REL9_0_STABLE [338ff75fc] 2015-01-19 23:44:33 -0500 - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. @@ -8451,7 +8451,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Update time zone data files to tzdata release 2015a + Update time zone data files to tzdata release 2015a for DST law changes in Chile and Mexico, plus historical changes in Iceland. 
@@ -8474,7 +8474,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Overview - Major enhancements in PostgreSQL 9.4 include: + Major enhancements in PostgreSQL 9.4 include: @@ -8483,15 +8483,15 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add jsonb, a more - capable and efficient data type for storing JSON data + Add jsonb, a more + capable and efficient data type for storing JSON data - Add new SQL command - for changing postgresql.conf configuration file entries + Add new SQL command + for changing postgresql.conf configuration file entries @@ -8504,14 +8504,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow materialized views + Allow materialized views to be refreshed without blocking concurrent reads - Add support for logical decoding + Add support for logical decoding of WAL data, to allow database changes to be streamed out in a customizable format @@ -8519,7 +8519,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow background worker processes + Allow background worker processes to be dynamically registered, started and terminated @@ -8558,14 +8558,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Previously, an input array string that started with a single-element sub-array could later contain multi-element sub-arrays, - e.g. '{{1}, {2,3}}'::int[] would be accepted. + e.g. '{{1}, {2,3}}'::int[] would be accepted. - When converting values of type date, timestamp - or timestamptz + When converting values of type date, timestamp + or timestamptz to JSON, render the values in a format compliant with ISO 8601 (Andrew Dunstan) @@ -8575,7 +8575,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 setting; but many JSON processors require timestamps to be in ISO 8601 format. If necessary, the previous behavior can be obtained by explicitly casting the datetime - value to text before passing it to the JSON conversion + value to text before passing it to the JSON conversion function. @@ -8583,15 +8583,15 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 The json - #> text[] path extraction operator now + #> text[] path extraction operator now returns its lefthand input, not NULL, if the array is empty (Tom Lane) This is consistent with the notion that this represents zero applications of the simple field/element extraction - operator ->. Similarly, json - #>> text[] with an empty array merely + operator ->. Similarly, json + #>> text[] with an empty array merely coerces its lefthand input to text. @@ -8616,26 +8616,26 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Cause consecutive whitespace in to_timestamp() - and to_date() format strings to consume a corresponding + linkend="functions-formatting-table">to_timestamp() + and to_date() format strings to consume a corresponding number of characters in the input string (whitespace or not), then - conditionally consume adjacent whitespace, if not in FX + conditionally consume adjacent whitespace, if not in FX mode (Jeevan Chalke) - Previously, consecutive whitespace characters in a non-FX + Previously, consecutive whitespace characters in a non-FX format string behaved like a single whitespace character and consumed all adjacent whitespace in the input string. For example, previously a format string of three spaces would consume only the first space in - ' 12', but it will now consume all three characters. + ' 12', but it will now consume all three characters. 
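An illustrative call mirroring the ' 12' example above; MM is added only to make the effect visible, and the comment paraphrases the entry rather than asserting a particular result.

<programlisting>
-- Non-FX mode: the three spaces in the format previously consumed only the
-- leading space of ' 12', but now consume all three input characters,
-- leaving nothing behind for MM to match.
SELECT to_timestamp(' 12', '   MM');
</programlisting>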
Fix ts_rank_cd() + linkend="textsearch-functions-table">ts_rank_cd() to ignore stripped lexemes (Alex Hill) @@ -8649,15 +8649,15 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 For functions declared to take VARIADIC - "any", an actual parameter marked as VARIADIC + "any", an actual parameter marked as VARIADIC must be of a determinable array type (Pavel Stehule) Such parameters can no longer be written as an undecorated string - literal or NULL; a cast to an appropriate array data type + literal or NULL; a cast to an appropriate array data type will now be required. Note that this does not affect parameters not - marked VARIADIC. + marked VARIADIC. @@ -8669,8 +8669,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Constructs like row_to_json(tab.*) now always emit column - names that match the column aliases visible for table tab + Constructs like row_to_json(tab.*) now always emit column + names that match the column aliases visible for table tab at the point of the call. In previous releases the emitted column names would sometimes be the table's actual column names regardless of any aliases assigned in the query. @@ -8687,7 +8687,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Rename EXPLAIN - ANALYZE's total runtime output + ANALYZE's total runtime output to execution time (Tom Lane) @@ -8699,15 +8699,15 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - SHOW TIME ZONE now - outputs simple numeric UTC offsets in POSIX timezone + SHOW TIME ZONE now + outputs simple numeric UTC offsets in POSIX timezone format (Tom Lane) Previously, such timezone settings were displayed as interval values. - The new output is properly interpreted by SET TIME ZONE + linkend="datatype-interval-output">interval values. + The new output is properly interpreted by SET TIME ZONE when passed as a simple string, whereas the old output required special treatment to be re-parsed correctly. @@ -8716,25 +8716,25 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Foreign data wrappers that support updating foreign tables must - consider the possible presence of AFTER ROW triggers + consider the possible presence of AFTER ROW triggers (Noah Misch) - When an AFTER ROW trigger is present, all columns of the + When an AFTER ROW trigger is present, all columns of the table must be returned by updating actions, since the trigger might inspect any or all of them. Previously, foreign tables never had triggers, so the FDW might optimize away fetching columns not mentioned - in the RETURNING clause (if any). + in the RETURNING clause (if any). Prevent CHECK + linkend="ddl-constraints-check-constraints">CHECK constraints from referencing system columns, except - tableoid (Amit Kapila) + tableoid (Amit Kapila) @@ -8752,7 +8752,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Previously, there was an undocumented precedence order among - the recovery_target_xxx parameters. + the recovery_target_xxx parameters. @@ -8766,14 +8766,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 User commands that did their own quote preservation might need adjustment. This is likely to be an issue for commands used in , , - and COPY TO/FROM PROGRAM. + and COPY TO/FROM PROGRAM. 
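As an aside on the VARIADIC "any" change listed earlier in this section, format() is one such function; a minimal sketch of the cast that is now required, using made-up arguments.

<programlisting>
SELECT format('%s and %s', VARIADIC ARRAY['a','b']);  -- determinable array type: fine
SELECT format('%s and %s', VARIADIC NULL);            -- undecorated NULL: now rejected
SELECT format('%s and %s', VARIADIC NULL::text[]);    -- cast to an array type instead
</programlisting>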
Remove catalog column pg_class.reltoastidxid + linkend="catalog-pg-class">pg_class.reltoastidxid (Michael Paquier) @@ -8781,33 +8781,33 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Remove catalog column pg_rewrite.ev_attr + linkend="catalog-pg-rewrite">pg_rewrite.ev_attr (Kevin Grittner) Per-column rules have not been supported since - PostgreSQL 7.3. + PostgreSQL 7.3. - Remove native support for Kerberos authentication - (, etc) (Magnus Hagander) - The supported way to use Kerberos authentication is - with GSSAPI. The native code has been deprecated since - PostgreSQL 8.3. + The supported way to use Kerberos authentication is + with GSSAPI. The native code has been deprecated since + PostgreSQL 8.3. - In PL/Python, handle domains over arrays like the + In PL/Python, handle domains over arrays like the underlying array type (Rodolfo Campero) @@ -8819,9 +8819,9 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Make libpq's PQconnectdbParams() + linkend="libpq-pqconnectdbparams">PQconnectdbParams() and PQpingParams() + linkend="libpq-pqpingparams">PQpingParams() functions process zero-length strings as defaults (Adrian Vondendriesch) @@ -8841,20 +8841,20 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Previously, empty arrays were returned as zero-length one-dimensional arrays, whose text representation looked the same as zero-dimensional - arrays ({}), but they acted differently in array - operations. intarray's behavior in this area now + arrays ({}), but they acted differently in array + operations. intarray's behavior in this area now matches the built-in array operators. - now uses - Previously this option was spelled or , but that was inconsistent with other tools. @@ -8884,7 +8884,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - The new worker_spi module shows an example of use + The new worker_spi module shows an example of use of this feature. @@ -8904,7 +8904,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 During crash recovery or immediate shutdown, send uncatchable - termination signals (SIGKILL) to child processes + termination signals (SIGKILL) to child processes that do not shut down promptly (MauMau, Álvaro Herrera) @@ -8912,7 +8912,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This reduces the likelihood of leaving orphaned child processes behind after shutdown, as well as ensuring that crash recovery can proceed if some child processes - have become stuck. + have become stuck. @@ -8942,13 +8942,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Reduce GIN index size + Reduce GIN index size (Alexander Korotkov, Heikki Linnakangas) Indexes upgraded via will work fine - but will still be in the old, larger GIN format. + but will still be in the old, larger GIN format. Use to recreate old GIN indexes in the new format. 
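A minimal sketch for the GIN note above, assuming idx_docs_fts is a hypothetical GIN index carried over by pg_upgrade; the elided command reference is presumably REINDEX.

<programlisting>
-- Rebuild a pre-9.4 GIN index so it uses the new, smaller on-disk format.
REINDEX INDEX idx_docs_fts;
</programlisting>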
@@ -8957,16 +8957,16 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Improve speed of multi-key GIN lookups (Alexander Korotkov, + linkend="GIN">GIN lookups (Alexander Korotkov, Heikki Linnakangas) - Add GiST index support - for inet and - cidr data types + Add GiST index support + for inet and + cidr data types (Emre Hasegeli) @@ -9002,7 +9002,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow multiple backends to insert - into WAL buffers + into WAL buffers concurrently (Heikki Linnakangas) @@ -9014,7 +9014,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Conditionally write only the modified portion of updated rows to - WAL (Amit Kapila) + WAL (Amit Kapila) @@ -9029,7 +9029,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Improve speed of aggregates that - use numeric state + use numeric state values (Hadi Moshayedi) @@ -9039,7 +9039,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Attempt to freeze tuples when tables are rewritten with or VACUUM FULL (Robert Haas, + linkend="SQL-VACUUM">VACUUM FULL (Robert Haas, Andres Freund) @@ -9051,7 +9051,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Improve speed of with default nextval() + linkend="functions-sequence-table">nextval() columns (Simon Riggs) @@ -9073,7 +9073,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Reduce memory allocated by PL/pgSQL + Reduce memory allocated by PL/pgSQL blocks (Tom Lane) @@ -9081,18 +9081,18 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Make the planner more aggressive about extracting restriction clauses - from mixed AND/OR clauses (Tom Lane) + from mixed AND/OR clauses (Tom Lane) - Disallow pushing volatile WHERE clauses down - into DISTINCT subqueries (Tom Lane) + Disallow pushing volatile WHERE clauses down + into DISTINCT subqueries (Tom Lane) - Pushing down a WHERE clause can produce a more + Pushing down a WHERE clause can produce a more efficient plan overall, but at the cost of evaluating the clause more often than is implied by the text of the query; so don't do it if the clause contains any volatile functions. @@ -9122,14 +9122,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add system view to - report WAL archiver activity + report WAL archiver activity (Gabriele Bartolini) - Add n_mod_since_analyze columns to + Add n_mod_since_analyze columns to and related system views (Mark Kirkwood) @@ -9143,9 +9143,9 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add backend_xid and backend_xmin + Add backend_xid and backend_xmin columns to the system view , - and a backend_xmin column to + and a backend_xmin column to (Christian Kruse) @@ -9155,22 +9155,22 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <acronym>SSL</> + <acronym>SSL</acronym> - Add support for SSL ECDH key exchange + Add support for SSL ECDH key exchange (Marko Kreen) This allows use of Elliptic Curve keys for server authentication. - Such keys are faster and have better security than RSA + Such keys are faster and have better security than RSA keys. The new configuration parameter - controls which curve is used for ECDH. + controls which curve is used for ECDH. 
@@ -9184,14 +9184,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 By default, the server not the client now controls the preference - order of SSL ciphers + order of SSL ciphers (Marko Kreen) Previously, the order specified by was usually ignored in favor of client-side defaults, which are not - configurable in most PostgreSQL clients. If + configurable in most PostgreSQL clients. If desired, the old behavior can be restored via the new configuration parameter . @@ -9199,14 +9199,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make show SSL + Make show SSL encryption information (Andreas Kunert) - Improve SSL renegotiation handling (Álvaro + Improve SSL renegotiation handling (Álvaro Herrera) @@ -9222,14 +9222,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add new SQL command - for changing postgresql.conf configuration file entries + Add new SQL command + for changing postgresql.conf configuration file entries (Amit Kapila) Previously such settings could only be changed by manually - editing postgresql.conf. + editing postgresql.conf. @@ -9274,7 +9274,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 In contrast to , this parameter can load any shared library, not just those in - the $libdir/plugins directory. + the $libdir/plugins directory. @@ -9287,7 +9287,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Hint bit changes are not normally logged, except when checksums are enabled. This is useful for external tools - like pg_rewind. + like pg_rewind. @@ -9320,14 +9320,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow terabyte units (TB) to be used when specifying + Allow terabyte units (TB) to be used when specifying configuration variable values (Simon Riggs) - Show PIDs of lock holders and waiters and improve + Show PIDs of lock holders and waiters and improve information about relations in log messages (Christian Kruse) @@ -9340,14 +9340,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - The previous level was LOG, which was too verbose + The previous level was LOG, which was too verbose for libraries loaded per-session. - On Windows, make SQL_ASCII-encoded databases and server + On Windows, make SQL_ASCII-encoded databases and server processes (e.g., ) emit messages in the character encoding of the server's Windows user locale (Alexander Law, Noah Misch) @@ -9355,7 +9355,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Previously these messages were output in the Windows - ANSI code page. + ANSI code page. @@ -9379,7 +9379,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Replication slots allow preservation of resources like - WAL files on the primary until they are no longer + WAL files on the primary until they are no longer needed by standby servers. @@ -9400,8 +9400,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add - option @@ -9413,7 +9413,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 The timestamp reported - by pg_last_xact_replay_timestamp() + by pg_last_xact_replay_timestamp() now reflects already-committed records, not transactions about to be committed. Recovering to a restore point now replays the restore point, rather than stopping just before the restore point. 
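A minimal sketch of the new configuration-editing SQL command noted above (ALTER SYSTEM, assuming the 9.4 spelling); the parameter and value are illustrative.

-- Writes the setting to postgresql.auto.conf instead of hand-editing postgresql.conf.
ALTER SYSTEM SET log_min_duration_statement = '250ms';
-- Re-read the configuration; settings that require a restart still need one.
SELECT pg_reload_conf();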
@@ -9423,34 +9423,34 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 pg_switch_xlog() - now clears any unused trailing space in the old WAL file + linkend="functions-admin-backup-table">pg_switch_xlog() + now clears any unused trailing space in the old WAL file (Heikki Linnakangas) - This improves the compression ratio for WAL files. + This improves the compression ratio for WAL files. Report failure return codes from external recovery commands + linkend="archive-recovery-settings">external recovery commands (Peter Eisentraut) - Reduce spinlock contention during WAL replay (Heikki + Reduce spinlock contention during WAL replay (Heikki Linnakangas) - Write WAL records of running transactions more + Write WAL records of running transactions more frequently (Andres Freund) @@ -9463,12 +9463,12 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <link linkend="logicaldecoding">Logical Decoding</> + <link linkend="logicaldecoding">Logical Decoding</link> Logical decoding allows database changes to be streamed in a configurable format. The data is read from - the WAL and transformed into the + the WAL and transformed into the desired target format. To implement this feature, the following changes were made: @@ -9477,7 +9477,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add support for logical decoding + Add support for logical decoding of WAL data, to allow database changes to be streamed out in a customizable format (Andres Freund) @@ -9486,8 +9486,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add new setting @@ -9495,7 +9495,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add table-level parameter REPLICA IDENTITY + linkend="catalog-pg-class">REPLICA IDENTITY to control logical replication (Andres Freund) @@ -9503,7 +9503,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add relation option to identify user-created tables involved in logical change-set encoding (Andres Freund) @@ -9519,7 +9519,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add module to illustrate logical - decoding at the SQL level (Andres Freund) + decoding at the SQL level (Andres Freund) @@ -9537,22 +9537,22 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add WITH - ORDINALITY syntax to number the rows returned from a - set-returning function in the FROM clause + ORDINALITY syntax to number the rows returned from a + set-returning function in the FROM clause (Andrew Gierth, David Fetter) This is particularly useful for functions like - unnest(). + unnest(). Add ROWS - FROM() syntax to allow horizontal concatenation of - set-returning functions in the FROM clause (Andrew Gierth) + FROM() syntax to allow horizontal concatenation of + set-returning functions in the FROM clause (Andrew Gierth) @@ -9571,7 +9571,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Ensure that SELECT ... FOR UPDATE - NOWAIT does not wait in corner cases involving + NOWAIT does not wait in corner cases involving already-concurrently-updated tuples (Craig Ringer and Thomas Munro) @@ -9588,21 +9588,21 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add DISCARD - SEQUENCES command to discard cached sequence-related state + SEQUENCES command to discard cached sequence-related state (Fabrízio de Royes Mello, Robert Haas) - DISCARD ALL will now also discard such information. + DISCARD ALL will now also discard such information. 
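A minimal sketch of the two FROM-clause additions described above, using throwaway values.

-- WITH ORDINALITY numbers the rows produced by a set-returning function.
SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS t(val, n);

-- ROWS FROM() lines up several set-returning functions side by side,
-- padding the shorter result with NULLs.
SELECT *
FROM ROWS FROM (generate_series(1, 3), unnest(ARRAY['x','y'])) AS r(i, s);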
- Add FORCE NULL option - to COPY FROM, which + Add FORCE NULL option + to COPY FROM, which causes quoted strings matching the specified null string to be - converted to NULLs in CSV mode (Ian Barwick, Michael + converted to NULLs in CSV mode (Ian Barwick, Michael Paquier) @@ -9620,8 +9620,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 New warnings are issued for SET - LOCAL, SET CONSTRAINTS, SET TRANSACTION and - ABORT when used outside a transaction block. + LOCAL, SET CONSTRAINTS, SET TRANSACTION and + ABORT when used outside a transaction block. @@ -9634,21 +9634,21 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make EXPLAIN ANALYZE show planning time (Andreas + Make EXPLAIN ANALYZE show planning time (Andreas Karlsson) - Make EXPLAIN show the grouping columns in Agg and + Make EXPLAIN show the grouping columns in Agg and Group nodes (Tom Lane) - Make EXPLAIN ANALYZE show exact and lossy + Make EXPLAIN ANALYZE show exact and lossy block counts in bitmap heap scans (Etsuro Fujita) @@ -9664,7 +9664,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow a materialized view + Allow a materialized view to be refreshed without blocking other sessions from reading the view meanwhile (Kevin Grittner) @@ -9672,7 +9672,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This is done with REFRESH MATERIALIZED - VIEW CONCURRENTLY. + VIEW CONCURRENTLY. @@ -9687,28 +9687,28 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Previously the presence of non-updatable output columns such as expressions, literals, and function calls prevented automatic - updates. Now INSERTs, UPDATEs and - DELETEs are supported, provided that they do not + updates. Now INSERTs, UPDATEs and + DELETEs are supported, provided that they do not attempt to assign new values to any of the non-updatable columns. - Allow control over whether INSERTs and - UPDATEs can add rows to an auto-updatable view that + Allow control over whether INSERTs and + UPDATEs can add rows to an auto-updatable view that would not appear in the view (Dean Rasheed) This is controlled with the new - clause WITH CHECK OPTION. + clause WITH CHECK OPTION. - Allow security barrier views + Allow security barrier views to be automatically updatable (Dean Rasheed) @@ -9727,14 +9727,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Support triggers on foreign - tables (Ronan Dunklau) + tables (Ronan Dunklau) Allow moving groups of objects from one tablespace to another - using the ALL IN TABLESPACE ... SET TABLESPACE form of + using the ALL IN TABLESPACE ... SET TABLESPACE form of , , or (Stephen Frost) @@ -9744,7 +9744,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow changing foreign key constraint deferrability via ... 
ALTER - CONSTRAINT (Simon Riggs) + CONSTRAINT (Simon Riggs) @@ -9756,12 +9756,12 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Specifically, VALIDATE CONSTRAINT, CLUSTER - ON, SET WITHOUT CLUSTER, ALTER COLUMN - SET STATISTICS, ALTER COLUMN SET - @@ -9791,7 +9791,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Fix DROP IF EXISTS to avoid errors for non-existent + Fix DROP IF EXISTS to avoid errors for non-existent objects in more cases (Pavel Stehule, Dean Rasheed) @@ -9803,7 +9803,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Previously, relations once moved into the pg_catalog + Previously, relations once moved into the pg_catalog schema could no longer be modified or dropped. @@ -9820,14 +9820,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Fully implement the line data type (Peter + linkend="datatype-line">line data type (Peter Eisentraut) - The line segment data type (lseg) has always been - fully supported. The previous line data type (which was + The line segment data type (lseg) has always been + fully supported. The previous line data type (which was enabled only via a compile-time option) is not binary or dump-compatible with the new implementation. @@ -9835,17 +9835,17 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add pg_lsn - data type to represent a WAL log sequence number - (LSN) (Robert Haas, Michael Paquier) + Add pg_lsn + data type to represent a WAL log sequence number + (LSN) (Robert Haas, Michael Paquier) Allow single-point polygons to be converted - to circles + linkend="datatype-polygon">polygons to be converted + to circles (Bruce Momjian) @@ -9857,31 +9857,31 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. - Allow 5+ digit years for non-ISO timestamp and - date strings, where appropriate (Bruce Momjian) + Allow 5+ digit years for non-ISO timestamp and + date strings, where appropriate (Bruce Momjian) Add checks for overflow/underflow of interval values + linkend="datatype-datetime">interval values (Bruce Momjian) @@ -9889,14 +9889,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <link linkend="datatype-json"><acronym>JSON</></link> + <link linkend="datatype-json"><acronym>JSON</acronym></link> - Add jsonb, a more - capable and efficient data type for storing JSON data + Add jsonb, a more + capable and efficient data type for storing JSON data (Oleg Bartunov, Teodor Sigaev, Alexander Korotkov, Peter Geoghegan, Andrew Dunstan) @@ -9904,9 +9904,9 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This new type allows faster access to values within a JSON document, and faster and more useful indexing of JSON columns. 
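A minimal sketch of the new jsonb type described above; the table, index, and document are illustrative.

CREATE TABLE events (id serial PRIMARY KEY, payload jsonb);
-- GIN indexing is one of the benefits called out above.
CREATE INDEX events_payload_idx ON events USING gin (payload);
INSERT INTO events (payload) VALUES ('{"user": "alice", "action": "login"}');
-- Containment test that can use the index.
SELECT id FROM events WHERE payload @> '{"action": "login"}';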
- Scalar values in jsonb documents are stored as appropriate + Scalar values in jsonb documents are stored as appropriate scalar SQL types, and the JSON document structure is pre-parsed - rather than being stored as text as in the original json + rather than being stored as text as in the original json data type. @@ -9919,18 +9919,18 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 New functions include json_array_elements_text(), - json_build_array(), json_object(), - json_object_agg(), json_to_record(), - and json_to_recordset(). + linkend="functions-json-processing-table">json_array_elements_text(), + json_build_array(), json_object(), + json_object_agg(), json_to_record(), + and json_to_recordset(). Add json_typeof() - to return the data type of a json value (Andrew Tipton) + linkend="functions-json-processing-table">json_typeof() + to return the data type of a json value (Andrew Tipton) @@ -9948,13 +9948,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add pg_sleep_for(interval) - and pg_sleep_until(timestamp) to specify + linkend="functions-datetime-delay">pg_sleep_for(interval) + and pg_sleep_until(timestamp) to specify delays more flexibly (Vik Fearing, Julien Rouhaud) - The existing pg_sleep() function only supports delays + The existing pg_sleep() function only supports delays specified in seconds. @@ -9962,7 +9962,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add cardinality() + linkend="array-functions-table">cardinality() function for arrays (Marko Tiikkaja) @@ -9974,7 +9974,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add SQL functions to allow large + Add SQL functions to allow large object reads/writes at arbitrary offsets (Pavel Stehule) @@ -9982,7 +9982,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow unnest() + linkend="array-functions-table">unnest() to take multiple arguments, which are individually unnested then horizontally concatenated (Andrew Gierth) @@ -9990,36 +9990,36 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add functions to construct times, dates, - timestamps, timestamptzs, and intervals + Add functions to construct times, dates, + timestamps, timestamptzs, and intervals from individual values, rather than strings (Pavel Stehule) - These functions' names are prefixed with make_, - e.g. make_date(). + These functions' names are prefixed with make_, + e.g. make_date(). Make to_char()'s - TZ format specifier return a useful value for simple + linkend="functions-formatting-table">to_char()'s + TZ format specifier return a useful value for simple numeric time zone offsets (Tom Lane) - Previously, to_char(CURRENT_TIMESTAMP, 'TZ') returned - an empty string if the timezone was set to a constant - like -4. + Previously, to_char(CURRENT_TIMESTAMP, 'TZ') returned + an empty string if the timezone was set to a constant + like -4. 
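A minimal sketch of the make_* constructors and cardinality() mentioned above; the values are arbitrary.

SELECT make_date(2014, 12, 18)                AS d,
       make_timestamp(2014, 12, 18, 9, 30, 0) AS ts,
       cardinality(ARRAY[1, 2, 3])            AS n_elems;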
- Add timezone offset format specifier OF to to_char() + Add timezone offset format specifier OF to to_char() (Bruce Momjian) @@ -10027,7 +10027,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Improve the random seed used for random() + linkend="functions-math-random-table">random() (Honza Horak) @@ -10035,7 +10035,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Tighten validity checking for Unicode code points in chr(int) + linkend="functions-string-other">chr(int) (Tom Lane) @@ -10054,18 +10054,18 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add functions for looking up objects in pg_class, - pg_proc, pg_type, and - pg_operator that do not generate errors for + Add functions for looking up objects in pg_class, + pg_proc, pg_type, and + pg_operator that do not generate errors for non-existent objects (Yugo Nagata, Nozomi Anzai, Robert Haas) For example, to_regclass() - does a lookup in pg_class similarly to - the regclass input function, but it returns NULL for a + linkend="functions-info-catalog-table">to_regclass() + does a lookup in pg_class similarly to + the regclass input function, but it returns NULL for a non-existent object instead of failing. @@ -10073,7 +10073,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add function pg_filenode_relation() + linkend="functions-admin-dblocation">pg_filenode_relation() to allow for more efficient lookup of relation names from filenodes (Andres Freund) @@ -10081,8 +10081,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add parameter_default column to information_schema.parameters + Add parameter_default column to information_schema.parameters view (Peter Eisentraut) @@ -10090,7 +10090,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Make information_schema.schemata + linkend="infoschema-schemata">information_schema.schemata show all accessible schemas (Peter Eisentraut) @@ -10112,7 +10112,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add control over which rows are passed into aggregate functions via the FILTER clause + linkend="syntax-aggregates">FILTER clause (David Fetter) @@ -10120,7 +10120,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Support ordered-set (WITHIN GROUP) + linkend="syntax-aggregates">WITHIN GROUP) aggregates (Atri Sharma, Andrew Gierth, Tom Lane) @@ -10128,11 +10128,11 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add standard ordered-set aggregates percentile_cont(), - percentile_disc(), mode(), rank(), - dense_rank(), percent_rank(), and - cume_dist() + linkend="functions-orderedset-table">percentile_cont(), + percentile_disc(), mode(), rank(), + dense_rank(), percent_rank(), and + cume_dist() (Atri Sharma, Andrew Gierth) @@ -10140,7 +10140,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Support VARIADIC + linkend="xfunc-sql-variadic-functions">VARIADIC aggregate functions (Tom Lane) @@ -10152,7 +10152,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This allows proper declaration in SQL of aggregates like the built-in - aggregate array_agg(). + aggregate array_agg(). 
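A minimal sketch combining the aggregate FILTER clause and an ordered-set (WITHIN GROUP) aggregate from the entries above; the table and columns are hypothetical.

SELECT count(*) FILTER (WHERE status = 'error')             AS error_count,
       percentile_cont(0.5) WITHIN GROUP (ORDER BY latency) AS median_latency
FROM requests;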
@@ -10169,20 +10169,20 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add event trigger support to PL/Perl - and PL/Tcl (Dimitri Fontaine) + Add event trigger support to PL/Perl + and PL/Tcl (Dimitri Fontaine) - Convert numeric - values to decimal in PL/Python + Convert numeric + values to decimal in PL/Python (Szymon Guz, Ronan Dunklau) - Previously such values were converted to Python float values, + Previously such values were converted to Python float values, risking loss of precision. @@ -10198,7 +10198,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add ability to retrieve the current PL/pgSQL call stack using GET - DIAGNOSTICS + DIAGNOSTICS (Pavel Stehule, Stephen Frost) @@ -10206,17 +10206,17 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add option to display the parameters passed to a query that violated a - STRICT constraint (Marko Tiikkaja) + STRICT constraint (Marko Tiikkaja) Add variables plpgsql.extra_warnings - and plpgsql.extra_errors to enable additional PL/pgSQL + linkend="plpgsql-extra-checks">plpgsql.extra_warnings + and plpgsql.extra_errors to enable additional PL/pgSQL warnings and errors (Marko Tiikkaja, Petr Jelinek) @@ -10232,13 +10232,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> Make libpq's PQconndefaults() + linkend="libpq-pqconndefaults">PQconndefaults() function ignore invalid service files (Steve Singer, Bruce Momjian) @@ -10250,7 +10250,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Accept TLS protocol versions beyond TLSv1 + Accept TLS protocol versions beyond TLSv1 in libpq (Marko Kreen) @@ -10266,7 +10266,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add option @@ -10274,7 +10274,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add - option to analyze in stages of increasing granularity (Peter Eisentraut) @@ -10285,8 +10285,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make pg_resetxlog - with option output current and potentially changed values (Rajeev Rastogi) @@ -10301,19 +10301,19 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make return exit code 4 for + Make return exit code 4 for an inaccessible data directory (Amit Kapila, Bruce Momjian) This behavior more closely matches the Linux Standard Base - (LSB) Core Specification. + (LSB) Core Specification. 
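A minimal sketch of retrieving the PL/pgSQL call stack via GET DIAGNOSTICS, assuming the PG_CONTEXT diagnostics item added in 9.4; shown in a throwaway DO block.

DO $$
DECLARE
    stack text;
BEGIN
    GET DIAGNOSTICS stack = PG_CONTEXT;
    RAISE NOTICE 'call stack: %', stack;
END;
$$;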
- On Windows, ensure that a non-absolute path specification is interpreted relative to 's current directory (Kumar Rajeev Rastogi) @@ -10327,7 +10327,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow sizeof() in ECPG + Allow sizeof() in ECPG C array definitions (Michael Meskes) @@ -10335,7 +10335,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Make ECPG properly handle nesting - of C-style comments in both C and SQL text + of C-style comments in both C and SQL text (Michael Meskes) @@ -10349,15 +10349,15 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Suppress No rows output in psql mode when the footer is disabled (Bruce Momjian) - Allow Control-C to abort psql when it's hung at + Allow Control-C to abort psql when it's hung at connection startup (Peter Eisentraut) @@ -10371,22 +10371,22 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make psql's \db+ show tablespace options + Make psql's \db+ show tablespace options (Magnus Hagander) - Make \do+ display the functions + Make \do+ display the functions that implement the operators (Marko Tiikkaja) - Make \d+ output an - OID line only if an oid column + Make \d+ output an + OID line only if an oid column exists in the table (Bruce Momjian) @@ -10398,7 +10398,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make \d show disabled system triggers (Bruce + Make \d show disabled system triggers (Bruce Momjian) @@ -10410,55 +10410,55 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Fix \copy to no longer require - a space between stdin and a semicolon (Etsuro Fujita) + Fix \copy to no longer require + a space between stdin and a semicolon (Etsuro Fujita) - Output the row count at the end of \copy, just - like COPY already did (Kumar Rajeev Rastogi) + Output the row count at the end of \copy, just + like COPY already did (Kumar Rajeev Rastogi) - Fix \conninfo to display the - server's IP address for connections using - hostaddr (Fujii Masao) + Fix \conninfo to display the + server's IP address for connections using + hostaddr (Fujii Masao) - Previously \conninfo could not display the server's - IP address in such cases. + Previously \conninfo could not display the server's + IP address in such cases. - Show the SSL protocol version in - \conninfo (Marko Kreen) + Show the SSL protocol version in + \conninfo (Marko Kreen) - Add tab completion for \pset + Add tab completion for \pset (Pavel Stehule) - Allow \pset with no arguments + Allow \pset with no arguments to show all settings (Gilles Darold) - Make \s display the name of the history file it wrote + Make \s display the name of the history file it wrote without converting it to an absolute path (Tom Lane) @@ -10482,7 +10482,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow options - , , and to be specified multiple times (Heikki Linnakangas) @@ -10493,17 +10493,17 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Optionally add IF EXISTS clauses to the DROP + Optionally add IF EXISTS clauses to the DROP commands emitted when removing old objects during a restore (Pavel Stehule) This change prevents unnecessary errors when removing old objects. - The new option for , , and is only available - when is also specified. 
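The restore-time IF EXISTS behavior described above amounts to emitting drops of this form, which succeed even when the object is already gone; the names are illustrative.

DROP TABLE IF EXISTS public.orders_archive;
DROP VIEW IF EXISTS public.orders_summary;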
@@ -10518,20 +10518,20 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add pg_basebackup option - Allow pg_basebackup to relocate tablespaces in + Allow pg_basebackup to relocate tablespaces in the backup copy (Steeve Lennmark) - This is particularly useful for using pg_basebackup + This is particularly useful for using pg_basebackup on the same machine as the primary. @@ -10542,8 +10542,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - This can be controlled with the pg_basebackup - parameter. @@ -10574,13 +10574,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 No longer require function prototypes for functions marked with the - PG_FUNCTION_INFO_V1 + PG_FUNCTION_INFO_V1 macro (Peter Eisentraut) This change eliminates the need to write boilerplate prototypes. - Note that the PG_FUNCTION_INFO_V1 macro must appear + Note that the PG_FUNCTION_INFO_V1 macro must appear before the corresponding function definition to avoid compiler warnings. @@ -10588,41 +10588,41 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Remove SnapshotNow and - HeapTupleSatisfiesNow() (Robert Haas) + Remove SnapshotNow and + HeapTupleSatisfiesNow() (Robert Haas) All existing uses have been switched to more appropriate snapshot - types. Catalog scans now use MVCC snapshots. + types. Catalog scans now use MVCC snapshots. - Add an API to allow memory allocations over one gigabyte + Add an API to allow memory allocations over one gigabyte (Noah Misch) - Add psprintf() to simplify memory allocation during + Add psprintf() to simplify memory allocation during string composition (Peter Eisentraut, Tom Lane) - Support printf() size modifier z to - print size_t values (Andres Freund) + Support printf() size modifier z to + print size_t values (Andres Freund) - Change API of appendStringInfoVA() - to better use vsnprintf() (David Rowley, Tom Lane) + Change API of appendStringInfoVA() + to better use vsnprintf() (David Rowley, Tom Lane) @@ -10642,7 +10642,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve spinlock speed on x86_64 CPUs (Heikki + Improve spinlock speed on x86_64 CPUs (Heikki Linnakangas) @@ -10650,56 +10650,56 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Remove spinlock support for unsupported platforms - SINIX, Sun3, and - NS32K (Robert Haas) + SINIX, Sun3, and + NS32K (Robert Haas) - Remove IRIX port (Robert Haas) + Remove IRIX port (Robert Haas) Reduce the number of semaphores required by - builds (Robert Haas) - Rewrite duplicate_oids Unix shell script in - Perl (Andrew Dunstan) + Rewrite duplicate_oids Unix shell script in + Perl (Andrew Dunstan) - Add Test Anything Protocol (TAP) tests for client + Add Test Anything Protocol (TAP) tests for client programs (Peter Eisentraut) - Currently, these tests are run by make check-world - only if the - Add make targets and + , which allow selection of individual tests to be run (Andrew Dunstan) - Remove makefile rule (Peter Eisentraut) @@ -10709,7 +10709,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve support for VPATH builds of PGXS + Improve support for VPATH builds of PGXS modules (Cédric Villemain, Andrew Dunstan, Peter Eisentraut) @@ -10722,8 +10722,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add a configure flag that appends custom text to the - PG_VERSION string (Oskari Saarenmaa) + Add a configure flag that appends custom text to the + PG_VERSION string (Oskari Saarenmaa) @@ -10733,46 +10733,46 @@ 
Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve DocBook XML validity (Peter Eisentraut) + Improve DocBook XML validity (Peter Eisentraut) Fix various minor security and sanity issues reported by the - Coverity scanner (Stephen Frost) + Coverity scanner (Stephen Frost) Improve detection of invalid memory usage when testing - PostgreSQL with Valgrind + PostgreSQL with Valgrind (Noah Misch) - Improve sample Emacs configuration file - emacs.samples (Peter Eisentraut) + Improve sample Emacs configuration file + emacs.samples (Peter Eisentraut) - Also add .dir-locals.el to the top of the source tree. + Also add .dir-locals.el to the top of the source tree. - Allow pgindent to accept a command-line list + Allow pgindent to accept a command-line list of typedefs (Bruce Momjian) - Make pgindent smarter about blank lines + Make pgindent smarter about blank lines around preprocessor conditionals (Bruce Momjian) @@ -10780,14 +10780,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Avoid most uses of dlltool - in Cygwin and - Mingw builds (Marco Atzeri, Hiroshi Inoue) + in Cygwin and + Mingw builds (Marco Atzeri, Hiroshi Inoue) - Support client-only installs in MSVC (Windows) builds + Support client-only installs in MSVC (Windows) builds (MauMau) @@ -10814,13 +10814,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add UUID random number generator - gen_random_uuid() to + Add UUID random number generator + gen_random_uuid() to (Oskari Saarenmaa) - This allows creation of version 4 UUIDs without + This allows creation of version 4 UUIDs without requiring installation of . @@ -10828,12 +10828,12 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow to work with - the BSD or e2fsprogs UUID libraries, - not only the OSSP UUID library (Matteo Beccati) + the BSD or e2fsprogs UUID libraries, + not only the OSSP UUID library (Matteo Beccati) - This improves the uuid-ossp module's portability + This improves the uuid-ossp module's portability since it no longer has to have the increasingly-obsolete OSSP library. The module's name is now rather a misnomer, but we won't change it. @@ -10887,8 +10887,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow pg_xlogdump - to report a live log stream with (Heikki Linnakangas) @@ -10920,7 +10920,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Pass 's user name ( @@ -10934,31 +10934,31 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Remove line length limit for pgbench scripts (Sawada + Remove line length limit for pgbench scripts (Sawada Masahiko) - The previous line limit was BUFSIZ. + The previous line limit was BUFSIZ. 
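A minimal sketch of the pgcrypto addition described above.

CREATE EXTENSION IF NOT EXISTS pgcrypto;
SELECT gen_random_uuid();  -- returns a version-4 UUID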
- Add long option names to pgbench (Fabien Coelho) + Add long option names to pgbench (Fabien Coelho) - Add pgbench option to control the transaction rate (Fabien Coelho) - Add pgbench option to print periodic progress reports (Fabien Coelho) @@ -10975,7 +10975,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make pg_stat_statements use a file, rather than + Make pg_stat_statements use a file, rather than shared memory, for query text storage (Peter Geoghegan) @@ -10987,7 +10987,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow reporting of pg_stat_statements's internal + Allow reporting of pg_stat_statements's internal query hash identifier (Daniel Farina, Sameer Thakur, Peter Geoghegan) @@ -10995,7 +10995,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add the ability to retrieve all pg_stat_statements + Add the ability to retrieve all pg_stat_statements information except the query text (Peter Geoghegan) @@ -11008,20 +11008,20 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make pg_stat_statements ignore DEALLOCATE + Make pg_stat_statements ignore DEALLOCATE commands (Fabien Coelho) - It already ignored PREPARE, as well as planning time in + It already ignored PREPARE, as well as planning time in general, so this seems more consistent. - Save the statistics file into $PGDATA/pg_stat at server - shutdown, rather than $PGDATA/global (Fujii Masao) + Save the statistics file into $PGDATA/pg_stat at server + shutdown, rather than $PGDATA/global (Fujii Masao) diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index 0f700dd5d3..2f23abe329 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -36,20 +36,20 @@ Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -88,21 +88,21 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -119,7 +119,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. 
Previously, @@ -131,7 +131,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Fix crash in pg_restore when using parallel mode and + Fix crash in pg_restore when using parallel mode and using a list file to select a subset of items to restore (Fabrízio de Royes Mello) @@ -139,13 +139,13 @@ CREATE OR REPLACE VIEW table_privileges AS - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -157,18 +157,18 @@ CREATE OR REPLACE VIEW table_privileges AS This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. + assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. - Fix make check to behave correctly when invoked via a + Fix make check to behave correctly when invoked via a non-GNU make program (Thomas Munro) @@ -218,7 +218,7 @@ CREATE OR REPLACE VIEW table_privileges AS Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -226,11 +226,11 @@ CREATE OR REPLACE VIEW table_privileges AS The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -245,15 +245,15 @@ CREATE OR REPLACE VIEW table_privileges AS Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) - In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -284,15 +284,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. 
- In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -306,7 +306,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -320,16 +320,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. - However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -337,13 +337,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Make lo_put() check for UPDATE privilege on + Make lo_put() check for UPDATE privilege on the target large object (Tom Lane, Michael Paquier) - lo_put() should surely require the same permissions - as lowrite(), but the check was missing, allowing any + lo_put() should surely require the same permissions + as lowrite(), but the check was missing, allowing any user to change the data in a large object. (CVE-2017-7548) @@ -352,12 +352,12 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Correct the documentation about the process for upgrading standby - servers with pg_upgrade (Bruce Momjian) + servers with pg_upgrade (Bruce Momjian) The previous documentation instructed users to start/stop the primary - server after running pg_upgrade but before syncing + server after running pg_upgrade but before syncing the standby servers. This sequence is unsafe. 
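A minimal sketch of the large-object permission model referenced in the lo_put() entry above; the OID and role name are illustrative.

-- Grant a role permission to modify an existing large object.
GRANT UPDATE ON LARGE OBJECT 16401 TO report_writer;
-- lo_put() now performs the same UPDATE-privilege check as lowrite().
SELECT lo_put(16401, 0, '\x68656c6c6f');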
@@ -463,21 +463,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) - Fix walsender to exit promptly when client requests + Fix walsender to exit promptly when client requests shutdown (Tom Lane) - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) @@ -491,7 +491,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) @@ -539,7 +539,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -547,7 +547,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Allow window functions to be used in sub-SELECTs that + Allow window functions to be used in sub-SELECTs that are within the arguments of an aggregate function (Tom Lane) @@ -555,19 +555,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. - Fix dangling pointer in ALTER TABLE when there is a + Fix dangling pointer in ALTER TABLE when there is a comment on a constraint belonging to the table (David Rowley) @@ -579,44 +579,44 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... SET does (Peter Eisentraut) Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. 
- Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -627,20 +627,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. - In libpq, reset GSS/SASL and SSPI authentication + In libpq, reset GSS/SASL and SSPI authentication state properly after a failed connection attempt (Michael Paquier) @@ -653,9 +653,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -666,8 +666,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump and pg_restore to - emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) + Fix pg_dump and pg_restore to + emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) @@ -678,15 +678,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Improve pg_dump/pg_restore's - reporting of error conditions originating in zlib + Improve pg_dump/pg_restore's + reporting of error conditions originating in zlib (Vladimir Kunschikov, Álvaro Herrera) - Fix pg_dump with the option to drop event triggers as expected (Tom Lane) @@ -699,14 +699,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -717,14 +717,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
@@ -732,13 +732,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix dumping of outer joins with empty constraints, such as the result - of a NATURAL LEFT JOIN with no common columns (Tom Lane) + of a NATURAL LEFT JOIN with no common columns (Tom Lane) - Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -746,7 +746,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -758,20 +758,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_rewind to correctly handle files exceeding 2GB + Fix pg_rewind to correctly handle files exceeding 2GB (Kuntal Ghosh, Michael Paquier) - Ordinarily such files won't appear in PostgreSQL data + Ordinarily such files won't appear in PostgreSQL data directories, but they could be present in some cases. - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -783,16 +783,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_xlogdump's computation of WAL record length + Fix pg_xlogdump's computation of WAL record length (Andres Freund) - In postgres_fdw, re-establish connections to remote - servers after ALTER SERVER or ALTER USER - MAPPING commands (Kyotaro Horiguchi) + In postgres_fdw, re-establish connections to remote + servers after ALTER SERVER or ALTER USER + MAPPING commands (Kyotaro Horiguchi) @@ -803,7 +803,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In postgres_fdw, allow cancellation of remote + In postgres_fdw, allow cancellation of remote transaction control commands (Robert Haas, Rafia Sabih) @@ -815,14 +815,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Increase MAX_SYSCACHE_CALLBACKS to provide more room for + Increase MAX_SYSCACHE_CALLBACKS to provide more room for extensions (Tom Lane) - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -849,34 +849,34 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) - In MSVC builds, honor PROVE_FLAGS settings - on vcregress.pl's command line (Andrew Dunstan) + In MSVC builds, honor PROVE_FLAGS settings + on vcregress.pl's command line (Andrew Dunstan) @@ -913,7 +913,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 Also, if you are using third-party replication tools that depend - on logical decoding, see the fourth changelog entry below. + on logical decoding, see the fourth changelog entry below. 
@@ -930,18 +930,18 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. @@ -965,7 +965,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. To fix, @@ -979,17 +979,17 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - Restore libpq's recognition of - the PGREQUIRESSL environment variable (Daniel Gustafsson) + Restore libpq's recognition of + the PGREQUIRESSL environment variable (Daniel Gustafsson) Processing of this environment variable was unintentionally dropped - in PostgreSQL 9.3, but its documentation remained. + in PostgreSQL 9.3, but its documentation remained. This creates a security hazard, since users might be relying on the environment variable to force SSL-encrypted connections, but that would no longer be guaranteed. Restore handling of the variable, - but give it lower priority than PGSSLMODE, to avoid + but give it lower priority than PGSSLMODE, to avoid breaking configurations that work correctly with post-9.3 code. (CVE-2017-7485) @@ -1020,7 +1020,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -1033,7 +1033,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -1041,14 +1041,14 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. 
- Avoid possible crash in walsender due to failure + Avoid possible crash in walsender due to failure to initialize a string buffer (Stas Kelvich, Fujii Masao) @@ -1062,7 +1062,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - Fix postmaster's handling of fork() failure for a + Fix postmaster's handling of fork() failure for a background worker process (Tom Lane) @@ -1079,14 +1079,14 @@ Author: Andrew Gierth Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 --> - Fix crash or wrong answers when a GROUPING SETS column's + Fix crash or wrong answers when a GROUPING SETS column's data type is hashable but not sortable (Pavan Deolasee) - Avoid applying physical targetlist optimization to custom + Avoid applying physical targetlist optimization to custom scans (Dmitry Ivanov, Tom Lane) @@ -1099,13 +1099,13 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Use the correct sub-expression when applying a FOR ALL + Use the correct sub-expression when applying a FOR ALL row-level-security policy (Stephen Frost) - In some cases the WITH CHECK restriction would be applied - when the USING restriction is more appropriate. + In some cases the WITH CHECK restriction would be applied + when the USING restriction is more appropriate. @@ -1119,19 +1119,19 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -1139,20 +1139,20 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. - Avoid dangling pointer in COPY ... TO when row-level + Avoid dangling pointer in COPY ... TO when row-level security is active for the source table (Tom Lane) @@ -1164,8 +1164,8 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Avoid accessing an already-closed relcache entry in CLUSTER - and VACUUM FULL (Tom Lane) + Avoid accessing an already-closed relcache entry in CLUSTER + and VACUUM FULL (Tom Lane) @@ -1176,14 +1176,14 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. 
@@ -1197,12 +1197,12 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix integer-overflow problems in interval comparison (Kyotaro + Fix integer-overflow problems in interval comparison (Kyotaro Horiguchi, Tom Lane) - The comparison operators for type interval could yield wrong + The comparison operators for type interval could yield wrong answers for intervals larger than about 296000 years. Indexes on columns containing such large values should be reindexed, since they may be corrupt. @@ -1211,21 +1211,21 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. - Fix roundoff problems in float8_timestamptz() - and make_interval() (Tom Lane) + Fix roundoff problems in float8_timestamptz() + and make_interval() (Tom Lane) @@ -1237,14 +1237,14 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix pg_get_object_address() to handle members of operator + Fix pg_get_object_address() to handle members of operator families correctly (Álvaro Herrera) - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) @@ -1258,13 +1258,13 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. @@ -1282,21 +1282,21 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -1311,20 +1311,20 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -1336,26 +1336,26 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). 
- In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) - Fix contrib/pg_trgm's extraction of trigrams from regular + Fix contrib/pg_trgm's extraction of trigrams from regular expressions (Tom Lane) @@ -1374,7 +1374,7 @@ Branch: REL9_4_STABLE [f14bf0a8f] 2017-05-06 22:19:56 -0400 Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 --> - In contrib/postgres_fdw, + In contrib/postgres_fdw, transmit query cancellation requests to the remote server (Michael Paquier, Etsuro Fujita) @@ -1405,7 +1405,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -1419,9 +1419,9 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -1434,15 +1434,15 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1495,15 +1495,15 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. 
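For the CREATE INDEX CONCURRENTLY issue above, one way to rebuild a possibly affected index without blocking writes is sketched below (index and table names are hypothetical):

    CREATE INDEX CONCURRENTLY orders_customer_idx_new ON orders (customer_id);
    DROP INDEX CONCURRENTLY orders_customer_idx;
    ALTER INDEX orders_customer_idx_new RENAME TO orders_customer_idx;

A plain REINDEX also works if an exclusive lock on the table is acceptable.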
@@ -1520,7 +1520,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 Backends failed to account for this snapshot when advertising their oldest xmin, potentially allowing concurrent vacuuming operations to remove data that was still needed. This led to transient failures - along the lines of cache lookup failed for relation 1255. + along the lines of cache lookup failed for relation 1255. @@ -1530,7 +1530,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 - The WAL record emitted for a BRIN revmap page when moving an + The WAL record emitted for a BRIN revmap page when moving an index tuple to a different page was incorrect. Replay would make the related portion of the index useless, forcing it to be recomputed. @@ -1538,13 +1538,13 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1615,7 +1615,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -1630,7 +1630,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 Fix incorrect updating of trigger function properties when changing a foreign-key constraint's deferrability properties with ALTER - TABLE ... ALTER CONSTRAINT (Tom Lane) + TABLE ... ALTER CONSTRAINT (Tom Lane) @@ -1646,29 +1646,29 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. - Fix ALTER TABLE ... SET DATA TYPE ... USING when child + Fix ALTER TABLE ... SET DATA TYPE ... USING when child table has different column ordering than the parent (Álvaro Herrera) - Failure to adjust the column numbering in the USING + Failure to adjust the column numbering in the USING expression led to errors, - typically attribute N has wrong type. + typically attribute N has wrong type. Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... INHERIT (Amit Langote) @@ -1681,7 +1681,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 - Fix CREATE OR REPLACE VIEW to update the view query + Fix CREATE OR REPLACE VIEW to update the view query before attempting to apply the new view options (Dean Rasheed) @@ -1694,7 +1694,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 Report correct object identity during ALTER TEXT SEARCH - CONFIGURATION (Artur Zakirov) + CONFIGURATION (Artur Zakirov) @@ -1706,8 +1706,8 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 Fix commit timestamp mechanism to not fail when queried about - the special XIDs FrozenTransactionId - and BootstrapTransactionId (Craig Ringer) + the special XIDs FrozenTransactionId + and BootstrapTransactionId (Craig Ringer) @@ -1745,28 +1745,28 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 The symptom was spurious ON CONFLICT is not supported on table - ... 
used as a catalog table errors when the target - of INSERT ... ON CONFLICT is a view with cascade option. + ... used as a catalog table errors when the target + of INSERT ... ON CONFLICT is a view with cascade option. - Fix incorrect target lists can have at most N - entries complaint when using ON CONFLICT with + Fix incorrect target lists can have at most N + entries complaint when using ON CONFLICT with wide tables (Tom Lane) - Prevent multicolumn expansion of foo.* in - an UPDATE source expression (Tom Lane) + Prevent multicolumn expansion of foo.* in + an UPDATE source expression (Tom Lane) This led to UPDATE target count mismatch --- internal - error. Now the syntax is understood as a whole-row variable, + error. Now the syntax is understood as a whole-row variable, as it would be in other contexts. @@ -1774,12 +1774,12 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -1794,15 +1794,15 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -1813,20 +1813,20 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) @@ -1834,27 +1834,27 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Improve speed of user-defined aggregates that - use array_append() as transition function (Tom Lane) + use array_append() as transition function (Tom Lane) - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) - Fix possible crash in array_position() - or array_positions() when processing arrays of records + Fix possible crash in array_position() + or array_positions() when processing arrays of records (Junseok Yang) - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -1866,8 +1866,8 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -1880,7 +1880,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Disable transform that attempted to remove no-op AT TIME - ZONE conversions (Tom Lane) + ZONE conversions (Tom Lane) @@ -1891,15 +1891,15 @@ Branch: 
REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Avoid discarding interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. @@ -1919,28 +1919,28 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) - Fix pg_restore with to behave more sanely if an archive contains - unrecognized DROP commands (Tom Lane) + unrecognized DROP commands (Tom Lane) This doesn't fix any live bug, but it may improve the behavior in - future if pg_restore is used with an archive - generated by a later pg_dump version. + future if pg_restore is used with an archive + generated by a later pg_dump version. - Fix pg_basebackup's rate limiting in the presence of + Fix pg_basebackup's rate limiting in the presence of slow I/O (Antonin Houska) @@ -1953,15 +1953,15 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Fix pg_basebackup's handling of - symlinked pg_stat_tmp and pg_replslot + Fix pg_basebackup's handling of + symlinked pg_stat_tmp and pg_replslot subdirectories (Magnus Hagander, Michael Paquier) - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -1969,7 +1969,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Fix possible mishandling of expanded arrays in domain check - constraints and CASE execution (Tom Lane) + constraints and CASE execution (Tom Lane) @@ -2001,21 +2001,21 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. 
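The interval-cast entry above concerns casts between constrained interval types; the expected semantics are that low-order fields get zeroed rather than the cast being silently dropped, for example:

    -- 17 months restricted to whole years leaves '1 year'
    SELECT CAST(interval '1 year 5 months' AS INTERVAL YEAR);
    -- the problematic case was a cast applied on top of an already
    -- month-restricted value:
    SELECT CAST(CAST(interval '1 year 5 months' AS INTERVAL MONTH)
                AS INTERVAL YEAR);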
- Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -2027,23 +2027,23 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -2054,28 +2054,28 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) - Teach contrib/dblink to ignore irrelevant server options - when it uses a contrib/postgres_fdw foreign server as + Teach contrib/dblink to ignore irrelevant server options + when it uses a contrib/postgres_fdw foreign server as the source of connection options (Corey Huinker) Previously, if the foreign server object had options that were not - also libpq connection options, an error occurred. + also libpq connection options, an error occurred. - Fix portability problems in contrib/pageinspect's + Fix portability problems in contrib/pageinspect's functions for GIN indexes (Peter Eisentraut, Tom Lane) @@ -2102,7 +2102,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -2165,7 +2165,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 crash recovery, or to be written incorrectly on a standby server. Bogus entries in a free space map could lead to attempts to access pages that have been truncated away from the relation itself, typically - producing errors like could not read block XXX: + producing errors like could not read block XXX: read only 0 of 8192 bytes. Checksum failures in the visibility map are also possible, if checksumming is enabled. @@ -2173,7 +2173,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Procedures for determining whether there is a problem and repairing it if so are discussed at - . + . @@ -2191,7 +2191,7 @@ Branch: REL9_4_STABLE [a69443564] 2016-09-03 13:28:53 -0400 - The typical symptom was unexpected GIN leaf action errors + The typical symptom was unexpected GIN leaf action errors during WAL replay. @@ -2206,13 +2206,13 @@ Branch: REL9_4_STABLE [8778da2af] 2016-09-09 15:54:29 -0300 Branch: REL9_3_STABLE [dfe7121df] 2016-09-09 15:54:29 -0300 --> - Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that + Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that have been updated by a subsequently-aborted transaction (Álvaro Herrera) - In 9.5 and later, the SELECT would sometimes fail to + In 9.5 and later, the SELECT would sometimes fail to return such tuples at all. A failure has not been proven to occur in earlier releases, but might be possible with concurrent updates. 
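The dblink/postgres_fdw entry above can be sketched as follows (server name, host, and database are hypothetical; updatable is a postgres_fdw option that is not a libpq connection option, so dblink now skips it instead of erroring out):

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE EXTENSION IF NOT EXISTS dblink;
    CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote.example.com', dbname 'appdb', updatable 'true');
    -- dblink can name the foreign server as its connection source:
    SELECT dblink_connect('conn1', 'remote_srv');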
@@ -2248,13 +2248,13 @@ Branch: REL9_5_STABLE [94bc30725] 2016-08-17 17:03:36 -0700 --> Fix deletion of speculatively inserted TOAST tuples when backing out - of INSERT ... ON CONFLICT (Oskari Saarenmaa) + of INSERT ... ON CONFLICT (Oskari Saarenmaa) In the race condition where two transactions try to insert conflicting tuples at about the same time, the loser would fail with - an attempted to delete invisible tuple error if its + an attempted to delete invisible tuple error if its insertion included any TOAST'ed fields. @@ -2262,7 +2262,7 @@ Branch: REL9_5_STABLE [94bc30725] 2016-08-17 17:03:36 -0700 Don't throw serialization errors for self-conflicting insertions - in INSERT ... ON CONFLICT (Thomas Munro, Peter Geoghegan) + in INSERT ... ON CONFLICT (Thomas Munro, Peter Geoghegan) @@ -2300,29 +2300,29 @@ Branch: REL9_5_STABLE [46bd14a10] 2016-08-24 22:20:01 -0400 Branch: REL9_4_STABLE [566afa15c] 2016-08-24 22:20:01 -0400 --> - Fix query-lifespan memory leak in a bulk UPDATE on a table - with a PRIMARY KEY or REPLICA IDENTITY index + Fix query-lifespan memory leak in a bulk UPDATE on a table + with a PRIMARY KEY or REPLICA IDENTITY index (Tom Lane) - Fix COPY with a column name list from a table that has + Fix COPY with a column name list from a table that has row-level security enabled (Adam Brightwell) - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. @@ -2337,20 +2337,20 @@ Branch: REL9_2_STABLE [ceb005319] 2016-08-12 12:13:04 -0400 --> Suppress printing of zeroes for unmeasured times - in EXPLAIN (Maksim Milyutin) + in EXPLAIN (Maksim Milyutin) Certain option combinations resulted in printing zero values for times that actually aren't ever measured in that combination. Our general - policy in EXPLAIN is not to print such fields at all, so + policy in EXPLAIN is not to print such fields at all, so do that consistently in all cases. - Fix statistics update for TRUNCATE in a prepared + Fix statistics update for TRUNCATE in a prepared transaction (Stas Kelvich) @@ -2367,37 +2367,37 @@ Branch: REL9_2_STABLE [eaf6fe7fa] 2016-09-09 11:45:40 +0100 Branch: REL9_1_STABLE [3ed7f54bc] 2016-09-09 11:46:03 +0100 --> - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. + INHERIT child constraint with an inherited constraint. 
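The EXPLAIN XML entry above can be reproduced with something like the following (track_io_timing requires superuser and is what makes the I/O timing fields appear):

    SET track_io_timing = on;
    -- the I/O read time property is now emitted as <I-O-Read-Time>
    EXPLAIN (ANALYZE, BUFFERS, FORMAT XML) SELECT count(*) FROM pg_class;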
Show a sensible value - in pg_settings.unit - for min_wal_size and max_wal_size (Tom Lane) + in pg_settings.unit + for min_wal_size and max_wal_size (Tom Lane) @@ -2413,15 +2413,15 @@ Branch: REL9_1_STABLE [7e01c8ef3] 2016-08-14 15:06:02 -0400 --> Remove artificial restrictions on the values accepted - by numeric_in() and numeric_recv() + by numeric_in() and numeric_recv() (Tom Lane) We allow numeric values up to the limit of the storage format (more - than 1e100000), so it seems fairly pointless - that numeric_in() rejected scientific-notation exponents - above 1000. Likewise, it was silly for numeric_recv() to + than 1e100000), so it seems fairly pointless + that numeric_in() rejected scientific-notation exponents + above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. @@ -2467,7 +2467,7 @@ Branch: REL9_5_STABLE [da9659f87] 2016-08-22 15:30:37 -0400 In the worst case, this could result in a corrupt btree index, which - would need to be rebuilt using REINDEX. However, the + would need to be rebuilt using REINDEX. However, the situation is believed to be rare. @@ -2501,7 +2501,7 @@ Branch: REL9_2_STABLE [823df401d] 2016-08-31 08:52:13 -0400 Branch: REL9_1_STABLE [e3439a455] 2016-08-31 08:52:13 -0400 --> - Disallow starting a standalone backend with standby_mode + Disallow starting a standalone backend with standby_mode turned on (Michael Paquier) @@ -2527,7 +2527,7 @@ Branch: REL9_4_STABLE [690a2fb90] 2016-08-17 13:15:04 -0700 This failure to reset all of the fields of the slot could - prevent VACUUM from removing dead tuples. + prevent VACUUM from removing dead tuples. @@ -2538,7 +2538,7 @@ Branch: REL9_4_STABLE [690a2fb90] 2016-08-17 13:15:04 -0700 - This avoids possible failures during munmap() on systems + This avoids possible failures during munmap() on systems with atypical default huge page sizes. Except in crash-recovery cases, there were no ill effects other than a log message. @@ -2564,7 +2564,7 @@ Branch: REL9_4_STABLE [32cdf680f] 2016-09-23 09:54:11 -0400 Previously, the same value would be chosen every time, because it was - derived from random() but srandom() had not + derived from random() but srandom() had not yet been called. While relatively harmless, this was not the intended behavior. @@ -2584,8 +2584,8 @@ Branch: REL9_4_STABLE [c23b2523d] 2016-09-20 12:12:36 -0400 - Windows sometimes returns ERROR_ACCESS_DENIED rather - than ERROR_ALREADY_EXISTS when there is an existing + Windows sometimes returns ERROR_ACCESS_DENIED rather + than ERROR_ALREADY_EXISTS when there is an existing segment. This led to postmaster startup failure due to believing that the former was an unrecoverable error. 
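The pg_settings.unit fix above can be checked directly:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('min_wal_size', 'max_wal_size');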
@@ -2599,8 +2599,8 @@ Branch: REL9_6_STABLE Release: REL9_6_0 [c81c71d88] 2016-08-18 14:48:51 -0400 Branch: REL9_5_STABLE [a8fc19505] 2016-08-18 14:48:51 -0400 --> - Fix PL/pgSQL to not misbehave with parameters and - local variables of type int2vector or oidvector + Fix PL/pgSQL to not misbehave with parameters and + local variables of type int2vector or oidvector (Tom Lane) @@ -2608,7 +2608,7 @@ Branch: REL9_5_STABLE [a8fc19505] 2016-08-18 14:48:51 -0400 Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -2619,12 +2619,12 @@ Branch: REL9_5_STABLE [a8fc19505] 2016-08-18 14:48:51 -0400 - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. + during PQreset(), but there might be related cases. @@ -2640,7 +2640,7 @@ Branch: REL9_2_STABLE [a4a3fac16] 2016-09-18 14:00:13 +0300 Branch: REL9_1_STABLE [ed29d2de2] 2016-09-18 14:07:30 +0300 --> - Make ecpg's and options work consistently with our other executables (Haribabu Kommi) @@ -2658,12 +2658,12 @@ Branch: REL9_5_STABLE [b93d37474] 2016-09-21 13:16:20 +0300 Branch: REL9_4_STABLE [f16d4a241] 2016-09-21 13:16:24 +0300 --> - Fix pgbench's calculation of average latency + Fix pgbench's calculation of average latency (Fabien Coelho) - The calculation was incorrect when there were \sleep + The calculation was incorrect when there were \sleep commands in the script, or when the test duration was specified in number of transactions rather than total time. @@ -2671,7 +2671,7 @@ Branch: REL9_4_STABLE [f16d4a241] 2016-09-21 13:16:24 +0300 - In pg_upgrade, check library loadability in name order + In pg_upgrade, check library loadability in name order (Tom Lane) @@ -2693,12 +2693,12 @@ Branch: REL9_3_STABLE [f39bb487d] 2016-09-23 13:49:27 -0400 Branch: REL9_2_STABLE [53b29d986] 2016-09-23 13:49:27 -0400 --> - In pg_dump, never dump range constructor functions + In pg_dump, never dump range constructor functions (Tom Lane) - This oversight led to pg_upgrade failures with + This oversight led to pg_upgrade failures with extensions containing range types, due to duplicate creation of the constructor functions. @@ -2712,9 +2712,9 @@ Branch: REL9_6_STABLE Release: REL9_6_0 [a88cee90f] 2016-09-08 10:48:03 -0400 Branch: REL9_5_STABLE [142a110b3] 2016-09-08 10:48:03 -0400 --> - In pg_dump with @@ -2727,27 +2727,27 @@ Branch: REL9_5_STABLE [9050e5c89] 2016-08-29 12:18:57 +0100 Branch: REL9_5_STABLE [3aa233f82] 2016-08-29 18:12:04 -0300 --> - Make pg_receivexlog work correctly - with without slots (Gabriele Bartolini) - Disallow specifying both - Make pg_rewind turn off synchronous_commit + Make pg_rewind turn off synchronous_commit in its session on the source server (Michael Banck, Michael Paquier) - This allows pg_rewind to work even when the source + This allows pg_rewind to work even when the source server is using synchronous replication that is not working for some reason. 
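For the range-constructor entry above, recall that defining a range type implicitly creates constructor functions of the same name; those are what pg_dump must not dump separately. A minimal sketch (type name is hypothetical):

    CREATE TYPE floatrange AS RANGE (subtype = float8);
    SELECT floatrange(1.5, 2.5);   -- implicitly created constructor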
@@ -2755,8 +2755,8 @@ Branch: REL9_5_STABLE [3aa233f82] 2016-08-29 18:12:04 -0300 - In pg_xlogdump, retry opening new WAL segments when - using option (Magnus Hagander) @@ -2775,7 +2775,7 @@ Branch: REL9_4_STABLE [314a25fb3] 2016-08-29 14:38:17 +0900 Branch: REL9_3_STABLE [5833306dd] 2016-08-29 15:51:30 +0900 --> - Fix pg_xlogdump to cope with a WAL file that begins + Fix pg_xlogdump to cope with a WAL file that begins with a continuation record spanning more than one page (Pavan Deolasee) @@ -2790,8 +2790,8 @@ Branch: REL9_5_STABLE [60b6d99da] 2016-09-15 09:30:36 -0400 Branch: REL9_4_STABLE [1336bd986] 2016-09-15 09:22:52 -0400 --> - Fix contrib/pg_buffercache to work - when shared_buffers exceeds 256GB (KaiGai Kohei) + Fix contrib/pg_buffercache to work + when shared_buffers exceeds 256GB (KaiGai Kohei) @@ -2807,8 +2807,8 @@ Branch: REL9_2_STABLE [60bb1bb12] 2016-08-17 15:51:11 -0400 Branch: REL9_1_STABLE [9942376a5] 2016-08-17 15:51:11 -0400 --> - Fix contrib/intarray/bench/bench.pl to print the results - of the EXPLAIN it does when given the option (Daniel Gustafsson) @@ -2842,11 +2842,11 @@ Branch: REL9_4_STABLE [5d41f27a9] 2016-09-23 15:50:00 -0400 - When PostgreSQL has been configured - with @@ -2859,7 +2859,7 @@ Branch: REL9_5_STABLE [52acf020a] 2016-09-19 14:27:08 -0400 Branch: REL9_4_STABLE [ca93b816f] 2016-09-19 14:27:13 -0400 --> - In MSVC builds, include pg_recvlogical in a + In MSVC builds, include pg_recvlogical in a client-only installation (MauMau) @@ -2899,17 +2899,17 @@ Branch: REL9_1_STABLE [380dad29d] 2016-09-02 17:29:32 -0400 If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from - their time zone database, as they did in tzdata + their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused - the pg_timezone_abbrevs view to fail altogether. + the pg_timezone_abbrevs view to fail altogether. - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -2922,15 +2922,15 @@ Branch: REL9_1_STABLE [380dad29d] 2016-09-02 17:29:32 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. - In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. 
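The abbreviation change above can be observed as follows, assuming the Default timezone_abbreviations set is active:

    SELECT * FROM pg_timezone_abbrevs WHERE abbrev = 'AMT';
    -- AMT in timestamp input is now read as Amazon Time (UTC-4):
    SELECT '2016-10-01 12:00 AMT'::timestamptz;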
@@ -2984,17 +2984,17 @@ Branch: REL9_1_STABLE [5327b764a] 2016-08-08 10:33:47 -0400 --> Fix possible mis-evaluation of - nested CASE-WHEN expressions (Heikki + nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane) - A CASE expression appearing within the test value - subexpression of another CASE could become confused about + A CASE expression appearing within the test value + subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by - a CASE expression could result in passing the wrong test - value to functions called within a CASE expression in the + a CASE expression could result in passing the wrong test + value to functions called within a CASE expression in the SQL function's body. If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423) @@ -3055,7 +3055,7 @@ Branch: REL9_1_STABLE [aed766ab5] 2016-08-08 10:07:53 -0400 - Numerous places in vacuumdb and other client programs + Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. Also, ensure that when a conninfo string is used as a database name @@ -3064,22 +3064,22 @@ Branch: REL9_1_STABLE [aed766ab5] 2016-08-08 10:07:53 -0400 Fix handling of paired double quotes - in psql's \connect - and \password commands to match the documentation. + in psql's \connect + and \password commands to match the documentation. - Introduce a new - pg_dumpall now refuses to deal with database and role + pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet. @@ -3089,7 +3089,7 @@ Branch: REL9_1_STABLE [aed766ab5] 2016-08-08 10:07:53 -0400 These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser - executes pg_dumpall or other routine maintenance + executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424) @@ -3111,18 +3111,18 @@ Branch: REL9_2_STABLE [7b8526e5d] 2016-07-28 16:09:15 -0400 Branch: REL9_1_STABLE [c0e5096fc] 2016-07-28 16:09:15 -0400 --> - Fix corner-case misbehaviors for IS NULL/IS NOT - NULL applied to nested composite values (Andrew Gierth, Tom Lane) + Fix corner-case misbehaviors for IS NULL/IS NOT + NULL applied to nested composite values (Andrew Gierth, Tom Lane) - The SQL standard specifies that IS NULL should return + The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS - NULL yields TRUE), but this is not meant to apply recursively - (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). + NULL yields TRUE), but this is not meant to apply recursively + (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), - and contrib/postgres_fdw could produce remote queries + and contrib/postgres_fdw could produce remote queries that misbehaved similarly. 
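The IS NULL behavior described above is easy to state as queries; all evaluation paths now agree on these results:

    SELECT ROW(NULL, NULL) IS NULL;             -- true: every field is null
    SELECT ROW(NULL, ROW(NULL, NULL)) IS NULL;  -- false: the test does not recurse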
@@ -3134,8 +3134,8 @@ Branch: master [eae1ad9b6] 2016-05-23 19:23:36 -0400 Branch: REL9_5_STABLE [e504d915b] 2016-05-23 19:23:36 -0400 --> - Fix unrecognized node type error for INSERT ... ON - CONFLICT within a recursive CTE (a WITH item) (Peter + Fix unrecognized node type error for INSERT ... ON + CONFLICT within a recursive CTE (a WITH item) (Peter Geoghegan) @@ -3147,7 +3147,7 @@ Branch: master [26e66184d] 2016-05-11 16:20:23 -0400 Branch: REL9_5_STABLE [58d802410] 2016-05-11 16:20:03 -0400 --> - Fix INSERT ... ON CONFLICT to successfully match index + Fix INSERT ... ON CONFLICT to successfully match index expressions or index predicates that are simplified during the planner's expression preprocessing phase (Tom Lane) @@ -3161,7 +3161,7 @@ Branch: REL9_5_STABLE [31ce32ade] 2016-07-04 16:09:11 -0400 --> Correctly handle violations of exclusion constraints that apply to - the target table of an INSERT ... ON CONFLICT command, + the target table of an INSERT ... ON CONFLICT command, but are not one of the selected arbiter indexes (Tom Lane) @@ -3178,7 +3178,7 @@ Branch: master [8a13d5e6d] 2016-05-11 17:06:53 -0400 Branch: REL9_5_STABLE [428484ce1] 2016-05-11 17:06:53 -0400 --> - Fix INSERT ... ON CONFLICT to not fail if the target + Fix INSERT ... ON CONFLICT to not fail if the target table has a unique index on OID (Tom Lane) @@ -3194,7 +3194,7 @@ Branch: REL9_2_STABLE [f66e0fec3] 2016-06-16 17:16:53 -0400 Branch: REL9_1_STABLE [7b97dafa2] 2016-06-16 17:16:58 -0400 --> - Make the inet and cidr data types properly reject + Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane) @@ -3210,8 +3210,8 @@ Branch: REL9_2_STABLE [89b301104] 2016-07-16 14:42:37 -0400 Branch: REL9_1_STABLE [608cc0c41] 2016-07-16 14:42:37 -0400 --> - Prevent crash in close_ps() - (the point ## lseg operator) + Prevent crash in close_ps() + (the point ## lseg operator) for NaN input coordinates (Tom Lane) @@ -3229,7 +3229,7 @@ Branch: REL9_4_STABLE [b25d87f91] 2016-07-01 11:40:22 -0400 Branch: REL9_3_STABLE [b0f20c2ea] 2016-07-01 11:40:22 -0400 --> - Avoid possible crash in pg_get_expr() when inconsistent + Avoid possible crash in pg_get_expr() when inconsistent values are passed to it (Michael Paquier, Thomas Munro) @@ -3245,12 +3245,12 @@ Branch: REL9_2_STABLE [b0134fe84] 2016-08-08 11:13:45 -0400 Branch: REL9_1_STABLE [d555d2642] 2016-08-08 11:13:51 -0400 --> - Fix several one-byte buffer over-reads in to_number() + Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut) - In several cases the to_number() function would read one + In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory. 
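For the inet/cidr entry above, a short sketch of the tightened validation:

    SELECT '1:2:3:4:5:6:7:8'::inet;     -- eight groups: accepted
    SELECT '1:2:3:4:5:6:7:8:9'::inet;   -- nine groups: now rejected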
@@ -3267,8 +3267,8 @@ Branch: REL9_3_STABLE [17bfef80e] 2016-06-27 15:57:21 -0400 --> Do not run the planner on the query contained in CREATE - MATERIALIZED VIEW or CREATE TABLE AS - when WITH NO DATA is specified (Michael Paquier, + MATERIALIZED VIEW or CREATE TABLE AS + when WITH NO DATA is specified (Michael Paquier, Tom Lane) @@ -3291,7 +3291,7 @@ Branch: REL9_1_STABLE [37276017f] 2016-07-15 17:49:49 -0700 --> Avoid unsafe intermediate state during expensive paths - through heap_update() (Masahiko Sawada, Andres Freund) + through heap_update() (Masahiko Sawada, Andres Freund) @@ -3331,8 +3331,8 @@ Branch: REL9_4_STABLE [166873dd0] 2016-07-15 14:17:20 -0400 Branch: REL9_3_STABLE [6c243f90a] 2016-07-15 14:17:20 -0400 --> - Avoid unnecessary could not serialize access errors when - acquiring FOR KEY SHARE row locks in serializable mode + Avoid unnecessary could not serialize access errors when + acquiring FOR KEY SHARE row locks in serializable mode (Álvaro Herrera) @@ -3346,14 +3346,14 @@ Branch: master [9eaf5be50] 2016-06-03 18:07:14 -0400 Branch: REL9_5_STABLE [8355897ff] 2016-06-03 18:07:14 -0400 --> - Make sure expanded datums returned by a plan node are + Make sure expanded datums returned by a plan node are read-only (Tom Lane) This avoids failures in some cases where the result of a lower plan node is referenced in multiple places in upper nodes. So far as - core PostgreSQL is concerned, only array values + core PostgreSQL is concerned, only array values returned by PL/pgSQL functions are at risk; but extensions might use expanded datums for other things. @@ -3374,7 +3374,7 @@ Branch: REL9_3_STABLE [dafdcbb6c] 2016-06-22 11:55:32 -0400 Branch: REL9_2_STABLE [dd41661d2] 2016-06-22 11:55:35 -0400 --> - Avoid crash in postgres -C when the specified variable + Avoid crash in postgres -C when the specified variable has a null string value (Michael Paquier) @@ -3470,12 +3470,12 @@ Branch: REL9_2_STABLE [4cf0978ea] 2016-05-24 15:47:51 -0400 Branch: REL9_1_STABLE [5551dac59] 2016-05-24 15:47:51 -0400 --> - Avoid consuming a transaction ID during VACUUM + Avoid consuming a transaction ID during VACUUM (Alexander Korotkov) - Some cases in VACUUM unnecessarily caused an XID to be + Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing. @@ -3498,7 +3498,7 @@ Branch: REL9_3_STABLE [28f294afd] 2016-06-24 18:29:28 -0400 The usual symptom of this bug is errors - like MultiXactId NNN has not been created + like MultiXactId NNN has not been created yet -- apparent wraparound. 
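The WITH NO DATA entry above applies to definitions such as the following (table and column names are hypothetical); the contained query is no longer planned at creation time:

    CREATE MATERIALIZED VIEW mv_daily_sales AS
        SELECT date_trunc('day', sold_at) AS day, sum(amount) AS total
        FROM sales
        GROUP BY 1
    WITH NO DATA;
    REFRESH MATERIALIZED VIEW mv_daily_sales;   -- populate it later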
@@ -3514,8 +3514,8 @@ Branch: REL9_2_STABLE [3201709de] 2016-06-06 17:44:18 -0400 Branch: REL9_1_STABLE [32ceb8dfb] 2016-06-06 17:44:18 -0400 --> - When a manual ANALYZE specifies a column list, don't - reset the table's changes_since_analyze counter + When a manual ANALYZE specifies a column list, don't + reset the table's changes_since_analyze counter (Tom Lane) @@ -3536,7 +3536,7 @@ Branch: REL9_2_STABLE [127d73009] 2016-08-07 18:52:02 -0400 Branch: REL9_1_STABLE [a449ad095] 2016-08-07 18:52:02 -0400 --> - Fix ANALYZE's overestimation of n_distinct + Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane) @@ -3600,7 +3600,7 @@ Branch: REL9_4_STABLE [98d5f366b] 2016-08-06 14:28:38 -0400 - This mistake prevented VACUUM from completing in some + This mistake prevented VACUUM from completing in some cases involving corrupt b-tree indexes. @@ -3612,7 +3612,7 @@ Branch: master [8cf739de8] 2016-06-24 16:57:36 -0400 Branch: REL9_5_STABLE [07f69137b] 2016-06-24 16:57:36 -0400 --> - Fix building of large (bigger than shared_buffers) + Fix building of large (bigger than shared_buffers) hash indexes (Tom Lane) @@ -3646,9 +3646,9 @@ Branch: master [8a859691d] 2016-06-05 11:53:06 -0400 Branch: REL9_5_STABLE [a7aa61ffe] 2016-06-05 11:53:06 -0400 --> - Fix possible crash during a nearest-neighbor (ORDER BY - distance) indexscan on a contrib/btree_gist index on - an interval column (Peter Geoghegan) + Fix possible crash during a nearest-neighbor (ORDER BY + distance) indexscan on a contrib/btree_gist index on + an interval column (Peter Geoghegan) @@ -3659,7 +3659,7 @@ Branch: master [975ad4e60] 2016-05-30 14:47:22 -0400 Branch: REL9_5_STABLE [2973d7d02] 2016-05-30 14:47:22 -0400 --> - Fix PANIC: failed to add BRIN tuple error when attempting + Fix PANIC: failed to add BRIN tuple error when attempting to update a BRIN index entry (Álvaro Herrera) @@ -3682,8 +3682,8 @@ Branch: master [baebab3ac] 2016-07-12 18:07:03 -0400 Branch: REL9_5_STABLE [a0943dbbe] 2016-07-12 18:06:50 -0400 --> - Fix PL/pgSQL's handling of the INTO clause - within IMPORT FOREIGN SCHEMA commands (Tom Lane) + Fix PL/pgSQL's handling of the INTO clause + within IMPORT FOREIGN SCHEMA commands (Tom Lane) @@ -3698,8 +3698,8 @@ Branch: REL9_2_STABLE [6c0be49b2] 2016-07-17 09:39:51 -0400 Branch: REL9_1_STABLE [84d679204] 2016-07-17 09:41:08 -0400 --> - Fix contrib/btree_gin to handle the smallest - possible bigint value correctly (Peter Eisentraut) + Fix contrib/btree_gin to handle the smallest + possible bigint value correctly (Peter Eisentraut) @@ -3721,7 +3721,7 @@ Branch: REL9_1_STABLE [1f63b0e09] 2016-08-05 18:58:36 -0400 It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure - that PQserverVersion() returns the correct value for + that PQserverVersion() returns the correct value for such cases. 
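For the column-list ANALYZE entry above, a sketch (table and column names are hypothetical); the counter in question is exposed as n_mod_since_analyze:

    ANALYZE orders (customer_id, created_at);
    SELECT relname, n_mod_since_analyze
    FROM pg_stat_user_tables
    WHERE relname = 'orders';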
@@ -3737,7 +3737,7 @@ Branch: REL9_2_STABLE [295edbecf] 2016-08-01 15:08:48 +0200 Branch: REL9_1_STABLE [c15f502b6] 2016-08-01 15:08:36 +0200 --> - Fix ecpg's code for unsigned long long + Fix ecpg's code for unsigned long long array elements (Michael Meskes) @@ -3752,8 +3752,8 @@ Branch: REL9_3_STABLE [6693c9d7b] 2016-08-02 12:49:09 -0400 Branch: REL9_2_STABLE [a5a7caaa1] 2016-08-02 12:49:15 -0400 --> - In pg_dump with both @@ -3771,15 +3771,15 @@ Branch: REL9_4_STABLE [53c2601a5] 2016-06-03 11:29:20 -0400 Branch: REL9_3_STABLE [4a21c6fd7] 2016-06-03 11:29:20 -0400 --> - Improve handling of SIGTERM/control-C in - parallel pg_dump and pg_restore (Tom + Improve handling of SIGTERM/control-C in + parallel pg_dump and pg_restore (Tom Lane) Make sure that the worker processes will exit promptly, and also arrange to send query-cancel requests to the connected backends, in case they - are doing something long-running such as a CREATE INDEX. + are doing something long-running such as a CREATE INDEX. @@ -3792,17 +3792,17 @@ Branch: REL9_4_STABLE [ea274b2f4] 2016-05-25 12:39:57 -0400 Branch: REL9_3_STABLE [1c8205159] 2016-05-25 12:39:57 -0400 --> - Fix error reporting in parallel pg_dump - and pg_restore (Tom Lane) + Fix error reporting in parallel pg_dump + and pg_restore (Tom Lane) - Previously, errors reported by pg_dump - or pg_restore worker processes might never make it to + Previously, errors reported by pg_dump + or pg_restore worker processes might never make it to the user's console, because the messages went through the master process, and there were various deadlock scenarios that would prevent the master process from passing on the messages. Instead, just print - everything to stderr. In some cases this will result in + everything to stderr. In some cases this will result in duplicate messages (for instance, if all the workers report a server shutdown), but that seems better than no message. @@ -3817,8 +3817,8 @@ Branch: REL9_4_STABLE [d32bc204c] 2016-05-26 10:50:42 -0400 Branch: REL9_3_STABLE [b9784e1f7] 2016-05-26 10:50:46 -0400 --> - Ensure that parallel pg_dump - or pg_restore on Windows will shut down properly + Ensure that parallel pg_dump + or pg_restore on Windows will shut down properly after an error (Kyotaro Horiguchi) @@ -3835,13 +3835,13 @@ Branch: master [d74048def] 2016-05-26 22:14:23 +0200 Branch: REL9_5_STABLE [47e596976] 2016-05-26 22:18:04 +0200 --> - Make parallel pg_dump fail cleanly when run against a + Make parallel pg_dump fail cleanly when run against a standby server (Magnus Hagander) This usage is not supported - unless is specified, but the error was not handled very well. 
@@ -3855,7 +3855,7 @@ Branch: REL9_4_STABLE [f2f18a37c] 2016-05-26 11:51:16 -0400 Branch: REL9_3_STABLE [99565a1ef] 2016-05-26 11:51:20 -0400 --> - Make pg_dump behave better when built without zlib + Make pg_dump behave better when built without zlib support (Kyotaro Horiguchi) @@ -3876,7 +3876,7 @@ Branch: REL9_2_STABLE [a21617759] 2016-08-01 17:38:00 +0900 Branch: REL9_1_STABLE [366f4a962] 2016-08-01 17:38:05 +0900 --> - Make pg_basebackup accept -Z 0 as + Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao) @@ -3922,13 +3922,13 @@ Branch: REL9_4_STABLE [c2651cd24] 2016-05-27 10:40:20 -0400 Branch: REL9_3_STABLE [1f1e70a87] 2016-05-27 10:40:20 -0400 --> - Be more predictable about reporting statement timeout - versus lock timeout (Tom Lane) + Be more predictable about reporting statement timeout + versus lock timeout (Tom Lane) On heavily loaded machines, the regression tests sometimes failed due - to reporting lock timeout even though the statement timeout + to reporting lock timeout even though the statement timeout should have occurred first. @@ -3981,7 +3981,7 @@ Branch: REL9_1_STABLE [d70df7867] 2016-07-19 17:53:31 -0400 --> Update our copy of the timezone code to match - IANA's tzcode release 2016c (Tom Lane) + IANA's tzcode release 2016c (Tom Lane) @@ -4002,7 +4002,7 @@ Branch: REL9_2_STABLE [7822792f7] 2016-08-05 12:58:58 -0400 Branch: REL9_1_STABLE [a44388ffe] 2016-08-05 12:59:02 -0400 --> - Update time zone data files to tzdata release 2016f + Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco. @@ -4066,7 +4066,7 @@ Branch: REL9_1_STABLE [9b676fd49] 2016-05-07 00:09:37 -0400 using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. Failures have been reported specifically when a client application - uses SSL connections in libpq concurrently with + uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection. @@ -4084,7 +4084,7 @@ Branch: REL9_2_STABLE [ad2d32b57] 2016-04-21 20:05:58 -0400 Branch: REL9_1_STABLE [6882dbd34] 2016-04-21 20:05:58 -0400 --> - Fix failed to build any N-way joins + Fix failed to build any N-way joins planner error with a full join enclosed in the right-hand side of a left join (Tom Lane) @@ -4106,10 +4106,10 @@ Branch: REL9_2_STABLE [f02cb8c9a] 2016-04-29 20:19:38 -0400 Given a three-or-more-way equivalence class of variables, such - as X.X = Y.Y = Z.Z, it was possible for the planner to omit + as X.X = Y.Y = Z.Z, it was possible for the planner to omit some of the tests needed to enforce that all the variables are actually equal, leading to join rows being output that didn't satisfy - the WHERE clauses. For various reasons, erroneous plans + the WHERE clauses. For various reasons, erroneous plans were seldom selected in practice, so that this bug has gone undetected for a long time. @@ -4128,7 +4128,7 @@ Branch: REL9_5_STABLE [81deadd31] 2016-04-21 23:17:36 -0400 - An example is that SELECT (ARRAY[])::text[] gave an error, + An example is that SELECT (ARRAY[])::text[] gave an error, though it worked without the parentheses. 
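The empty-array entry above boils down to:

    SELECT (ARRAY[])::text[];   -- previously failed when parenthesized
    SELECT ARRAY[]::text[];     -- always worked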
@@ -4160,7 +4160,7 @@ Branch: REL9_4_STABLE [ef35afa35] 2016-04-20 14:25:15 -0400 The memory leak would typically not amount to much in simple queries, but it could be very substantial during a large GIN index build with - high maintenance_work_mem. + high maintenance_work_mem. @@ -4175,8 +4175,8 @@ Branch: REL9_2_STABLE [11247dd99] 2016-05-06 12:09:20 -0400 Branch: REL9_1_STABLE [7bad282c3] 2016-05-06 12:09:20 -0400 --> - Fix possible misbehavior of TH, th, - and Y,YYY format codes in to_timestamp() + Fix possible misbehavior of TH, th, + and Y,YYY format codes in to_timestamp() (Tom Lane) @@ -4197,9 +4197,9 @@ Branch: REL9_2_STABLE [c7c145e4f] 2016-04-21 14:20:18 -0400 Branch: REL9_1_STABLE [663624e60] 2016-04-21 14:20:18 -0400 --> - Fix dumping of rules and views in which the array - argument of a value operator - ANY (array) construct is a sub-SELECT + Fix dumping of rules and views in which the array + argument of a value operator + ANY (array) construct is a sub-SELECT (Tom Lane) @@ -4212,14 +4212,14 @@ Branch: REL9_5_STABLE [f3d17491c] 2016-04-04 18:05:23 -0400 Branch: REL9_4_STABLE [28148e258] 2016-04-04 18:05:24 -0400 --> - Disallow newlines in ALTER SYSTEM parameter values + Disallow newlines in ALTER SYSTEM parameter values (Tom Lane) The configuration-file parser doesn't support embedded newlines in string literals, so we mustn't allow them in values to be inserted - by ALTER SYSTEM. + by ALTER SYSTEM. @@ -4231,7 +4231,7 @@ Branch: REL9_5_STABLE [8f8e65d34] 2016-04-15 12:11:27 -0400 Branch: REL9_4_STABLE [8eed31ffb] 2016-04-15 12:11:27 -0400 --> - Fix ALTER TABLE ... REPLICA IDENTITY USING INDEX to + Fix ALTER TABLE ... REPLICA IDENTITY USING INDEX to work properly if an index on OID is selected (David Rowley) @@ -4290,13 +4290,13 @@ Branch: REL9_2_STABLE [1b22368ff] 2016-04-20 23:48:13 -0400 Branch: REL9_1_STABLE [4c1c9f80b] 2016-04-20 23:48:13 -0400 --> - Make pg_regress use a startup timeout from the - PGCTLTIMEOUT environment variable, if that's set (Tom Lane) + Make pg_regress use a startup timeout from the + PGCTLTIMEOUT environment variable, if that's set (Tom Lane) This is for consistency with a behavior recently added - to pg_ctl; it eases automated testing on slow machines. + to pg_ctl; it eases automated testing on slow machines. @@ -4311,7 +4311,7 @@ Branch: REL9_2_STABLE [6bb42d520] 2016-04-13 18:57:52 -0400 Branch: REL9_1_STABLE [3ef1f3a3e] 2016-04-13 18:57:52 -0400 --> - Fix pg_upgrade to correctly restore extension + Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane) @@ -4319,7 +4319,7 @@ Branch: REL9_1_STABLE [3ef1f3a3e] 2016-04-13 18:57:52 -0400 In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no - immediate ill effects, but would cause later pg_dump + immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore. 
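For the ALTER SYSTEM entry above, the configuration-file format cannot represent an embedded newline, so a value like the first one below is now rejected outright (the parameter and value are arbitrary):

    ALTER SYSTEM SET search_path = 'schema_one,
    schema_two';                                                -- now an error
    ALTER SYSTEM SET search_path = 'schema_one, schema_two';   -- fine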
@@ -4333,13 +4333,13 @@ Branch: REL9_4_STABLE [e1aecebc0] 2016-05-06 22:05:51 -0400 Branch: REL9_3_STABLE [e1d88f983] 2016-05-06 22:05:51 -0400 --> - Fix pg_upgrade to not fail when new-cluster TOAST rules + Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old (Tom Lane) - pg_upgrade had special-case code to handle the - situation where the new PostgreSQL version thinks that + pg_upgrade had special-case code to handle the + situation where the new PostgreSQL version thinks that a table should have a TOAST table while the old version did not. That code was broken, so remove it, and instead do nothing in such cases; there seems no reason to believe that we can't get along fine without @@ -4369,7 +4369,7 @@ Branch: REL9_2_STABLE [b24f7e280] 2016-04-18 13:33:07 -0400 --> Reduce the number of SysV semaphores used by a build configured with - (Tom Lane) @@ -4384,8 +4384,8 @@ Branch: REL9_2_STABLE [0f5491283] 2016-04-23 16:53:15 -0400 Branch: REL9_1_STABLE [cbff4b708] 2016-04-23 16:53:15 -0400 --> - Rename internal function strtoi() - to strtoint() to avoid conflict with a NetBSD library + Rename internal function strtoi() + to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro) @@ -4407,8 +4407,8 @@ Branch: REL9_2_STABLE [b5ebc513d] 2016-04-21 16:59:13 -0400 Branch: REL9_1_STABLE [9028f404e] 2016-04-21 16:59:17 -0400 --> - Fix reporting of errors from bind() - and listen() system calls on Windows (Tom Lane) + Fix reporting of errors from bind() + and listen() system calls on Windows (Tom Lane) @@ -4463,7 +4463,7 @@ Branch: REL9_4_STABLE [c238a4101] 2016-04-22 05:20:07 -0400 Branch: REL9_3_STABLE [ab5c6d01f] 2016-04-22 05:20:18 -0400 --> - Fix putenv() to work properly with Visual Studio 2013 + Fix putenv() to work properly with Visual Studio 2013 (Michael Paquier) @@ -4479,12 +4479,12 @@ Branch: REL9_2_STABLE [b4b06931e] 2016-03-29 11:54:58 -0400 Branch: REL9_1_STABLE [6cd30292b] 2016-03-29 11:54:58 -0400 --> - Avoid possibly-unsafe use of Windows' FormatMessage() + Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich) - Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where + Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful. @@ -4501,9 +4501,9 @@ Branch: REL9_2_STABLE [29d154e36] 2016-05-05 20:09:27 -0400 Branch: REL9_1_STABLE [bfc39da64] 2016-05-05 20:09:32 -0400 --> - Update time zone data files to tzdata release 2016d + Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone - names Europe/Kirov and Asia/Tomsk to reflect + names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions. @@ -4536,7 +4536,7 @@ Branch: REL9_1_STABLE [bfc39da64] 2016-05-05 20:09:32 -0400 - However, you may need to REINDEX some indexes after applying + However, you may need to REINDEX some indexes after applying the update, as per the first changelog entry below. @@ -4554,39 +4554,39 @@ Branch: REL9_5_STABLE [8aa6e9780] 2016-03-23 16:04:35 -0400 - Disable abbreviated keys for string sorting in non-C + Disable abbreviated keys for string sorting in non-C locales (Robert Haas) - PostgreSQL 9.5 introduced logic for speeding up + PostgreSQL 9.5 introduced logic for speeding up comparisons of string data types by using the standard C library - function strxfrm() as a substitute - for strcoll(). 
It now emerges that most versions of + function strxfrm() as a substitute + for strcoll(). It now emerges that most versions of glibc (Linux's implementation of the C library) have buggy - implementations of strxfrm() that, in some locales, + implementations of strxfrm() that, in some locales, can produce string comparison results that do not - match strcoll(). Until this problem can be better - characterized, disable the optimization in all non-C - locales. (C locale is safe since it uses - neither strcoll() nor strxfrm().) + match strcoll(). Until this problem can be better + characterized, disable the optimization in all non-C + locales. (C locale is safe since it uses + neither strcoll() nor strxfrm().) Unfortunately, this problem affects not only sorting but also entry ordering in B-tree indexes, which means that B-tree indexes - on text, varchar, or char columns may now + on text, varchar, or char columns may now be corrupt if they sort according to an affected locale and were - built or modified under PostgreSQL 9.5.0 or 9.5.1. - Users should REINDEX indexes that might be affected. + built or modified under PostgreSQL 9.5.0 or 9.5.1. + Users should REINDEX indexes that might be affected. It is not possible at this time to give an exhaustive list of - known-affected locales. C locale is known safe, and + known-affected locales. C locale is known safe, and there is no evidence of trouble in English-based locales such - as en_US, but some other popular locales such - as de_DE are affected in most glibc versions. + as en_US, but some other popular locales such + as de_DE are affected in most glibc versions. @@ -4619,14 +4619,14 @@ Branch: REL9_5_STABLE [bf78a6f10] 2016-03-28 10:57:46 -0300 Add must-be-superuser checks to some - new contrib/pageinspect functions (Andreas Seltenreich) + new contrib/pageinspect functions (Andreas Seltenreich) - Most functions in the pageinspect extension that - inspect bytea values disallow calls by non-superusers, - but brin_page_type() and brin_metapage_info() - failed to do so. Passing contrived bytea values to them might + Most functions in the pageinspect extension that + inspect bytea values disallow calls by non-superusers, + but brin_page_type() and brin_metapage_info() + failed to do so. Passing contrived bytea values to them might crash the server or disclose a few bytes of server memory. Add the missing permissions checks to prevent misuse. (CVE-2016-3065) @@ -4641,15 +4641,15 @@ Branch: REL9_5_STABLE [bf7ced5e2] 2016-03-03 09:50:38 +0000 - Fix incorrect handling of indexed ROW() comparisons + Fix incorrect handling of indexed ROW() comparisons (Simon Riggs) Flaws in a minor optimization introduced in 9.5 caused incorrect - results if the ROW() comparison matches the index ordering + results if the ROW() comparison matches the index ordering partially but not exactly (for example, differing column order, or the - index contains both ASC and DESC columns). + index contains both ASC and DESC columns). Pending a better solution, the optimization has been removed. @@ -4667,15 +4667,15 @@ Branch: REL9_1_STABLE [d485d9581] 2016-03-09 14:51:02 -0500 Fix incorrect handling of NULL index entries in - indexed ROW() comparisons (Tom Lane) + indexed ROW() comparisons (Tom Lane) An index search using a row comparison such as ROW(a, b) > - ROW('x', 'y') would stop upon reaching a NULL entry in - the b column, ignoring the fact that there might be - non-NULL b values associated with later values - of a. 
+ ROW('x', 'y') would stop upon reaching a NULL entry in + the b column, ignoring the fact that there might be + non-NULL b values associated with later values + of a. @@ -4698,7 +4698,7 @@ Branch: REL9_1_STABLE [d0e47bcd4] 2016-03-09 18:53:54 -0800 Avoid unlikely data-loss scenarios due to renaming files without - adequate fsync() calls before and after (Michael Paquier, + adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund) @@ -4712,14 +4712,14 @@ Branch: REL9_5_STABLE [d8d5a00b1] 2016-03-22 17:56:06 -0400 Fix incorrect behavior when rechecking a just-modified row in a query - that does SELECT FOR UPDATE/SHARE and contains some + that does SELECT FOR UPDATE/SHARE and contains some relations that need not be locked (Tom Lane) Rows from non-locked relations were incorrectly treated as containing all NULLs during the recheck, which could result in incorrectly - deciding that the updated row no longer passes the WHERE + deciding that the updated row no longer passes the WHERE condition, or in incorrectly outputting NULLs. @@ -4733,7 +4733,7 @@ Branch: REL9_4_STABLE [597e41e45] 2016-03-02 23:31:39 -0500 - Fix bug in json_to_record() when a field of its input + Fix bug in json_to_record() when a field of its input object contains a sub-object with a field name matching one of the requested output column names (Tom Lane) @@ -4748,7 +4748,7 @@ Branch: REL9_5_STABLE [68d68ff83] 2016-02-21 10:40:39 -0500 Fix nonsense result from two-argument form - of jsonb_object() when called with empty arrays + of jsonb_object() when called with empty arrays (Michael Paquier, Andrew Dunstan) @@ -4761,7 +4761,7 @@ Branch: REL9_5_STABLE [5f95521b3] 2016-03-23 10:43:24 -0400 - Fix misbehavior in jsonb_set() when converting a path + Fix misbehavior in jsonb_set() when converting a path array element into an integer for use as an array subscript (Michael Paquier) @@ -4777,7 +4777,7 @@ Branch: REL9_4_STABLE [17a250b18] 2016-03-17 15:50:33 -0400 Fix misformatting of negative time zone offsets - by to_char()'s OF format code + by to_char()'s OF format code (Thomas Munro, Tom Lane) @@ -4791,7 +4791,7 @@ Branch: REL9_5_STABLE [3f14d8d59] 2016-03-15 18:04:48 -0400 Fix possible incorrect logging of waits done by - INSERT ... ON CONFLICT (Peter Geoghegan) + INSERT ... ON CONFLICT (Peter Geoghegan) @@ -4815,7 +4815,7 @@ Branch: REL9_4_STABLE [a9613ee69] 2016-03-06 02:43:26 +0900 Previously, standby servers would delay application of WAL records in - response to recovery_min_apply_delay even while replaying + response to recovery_min_apply_delay even while replaying the initial portion of WAL needed to make their database state valid. Since the standby is useless until it's reached a consistent database state, this was deemed unhelpful. @@ -4834,7 +4834,7 @@ Branch: REL9_1_STABLE [ca32f125b] 2016-02-19 08:35:02 +0000 - Correctly handle cases where pg_subtrans is close to XID + Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes) @@ -4870,10 +4870,10 @@ Branch: REL9_5_STABLE [f8a75881f] 2016-03-02 23:43:42 -0800 Trouble cases included tuples larger than one page when replica - identity is FULL, UPDATEs that change a + identity is FULL, UPDATEs that change a primary key within a transaction large enough to be spooled to disk, incorrect reports of subxact logged without previous toplevel - record, and incorrect reporting of a transaction's commit time. + record, and incorrect reporting of a transaction's commit time. 
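     The jsonb fixes above are easiest to see with concrete calls. The following
     sketch is illustrative only (the literal arrays and values are invented for
     demonstration); it exercises the two-argument form of jsonb_object() and a
     jsonb_set() call whose path element must be converted to an integer array
     subscript:

       -- two-argument form pairs a key array with a value array
       SELECT jsonb_object('{a,b}'::text[], '{1,2}'::text[]);     -- {"a": "1", "b": "2"}
       -- the corner case fixed above: both input arrays empty
       SELECT jsonb_object('{}'::text[], '{}'::text[]);           -- {}
       -- the path element '1' is used as an array subscript
       SELECT jsonb_set('[0, 1, 2]'::jsonb, '{1}', '"replaced"'); -- [0, "replaced", 2]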
@@ -4887,7 +4887,7 @@ Branch: REL9_4_STABLE [9b69d5c1d] 2016-02-29 12:34:33 +0000 Fix planner error with nested security barrier views when the outer - view has a WHERE clause containing a correlated subquery + view has a WHERE clause containing a correlated subquery (Dean Rasheed) @@ -4916,7 +4916,7 @@ Branch: REL9_1_STABLE [7d6c58aa1] 2016-02-28 23:40:35 -0500 - Fix corner-case crash due to trying to free localeconv() + Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane) @@ -4933,14 +4933,14 @@ Branch: REL9_1_STABLE [fe747b741] 2016-03-06 19:21:03 -0500 - Fix parsing of affix files for ispell dictionaries + Fix parsing of affix files for ispell dictionaries (Tom Lane) The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for - example I in Turkish UTF8 locales. + example I in Turkish UTF8 locales. @@ -4956,7 +4956,7 @@ Branch: REL9_1_STABLE [e56acbe2a] 2016-02-10 19:30:12 -0500 - Avoid use of sscanf() to parse ispell + Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov) @@ -5020,7 +5020,7 @@ Branch: REL9_1_STABLE [b4895bf79] 2016-03-04 11:57:40 -0500 - Fix psql's tab completion logic to handle multibyte + Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas) @@ -5036,12 +5036,12 @@ Branch: REL9_1_STABLE [2d61d88d8] 2016-03-14 11:31:49 -0400 - Fix psql's tab completion for - SECURITY LABEL (Tom Lane) + Fix psql's tab completion for + SECURITY LABEL (Tom Lane) - Pressing TAB after SECURITY LABEL might cause a crash + Pressing TAB after SECURITY LABEL might cause a crash or offering of inappropriate keywords. @@ -5058,8 +5058,8 @@ Branch: REL9_1_STABLE [f97664cf5] 2016-02-10 20:34:48 -0500 - Make pg_ctl accept a wait timeout from the - PGCTLTIMEOUT environment variable, if none is specified on + Make pg_ctl accept a wait timeout from the + PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch) @@ -5083,12 +5083,12 @@ Branch: REL9_1_STABLE [5a39c7395] 2016-03-07 10:41:11 -0500 Fix incorrect test for Windows service status - in pg_ctl (Manuel Mathar) + in pg_ctl (Manuel Mathar) The previous set of minor releases attempted to - fix pg_ctl to properly determine whether to send log + fix pg_ctl to properly determine whether to send log messages to Window's Event Log, but got the test backwards. 
@@ -5105,8 +5105,8 @@ Branch: REL9_1_STABLE [1965a8ce1] 2016-03-16 23:18:08 -0400 - Fix pgbench to correctly handle the combination - of -C and -M prepared options (Tom Lane) + Fix pgbench to correctly handle the combination + of -C and -M prepared options (Tom Lane) @@ -5120,7 +5120,7 @@ Branch: REL9_3_STABLE [bf26c4f44] 2016-02-18 18:32:26 -0500 - In pg_upgrade, skip creating a deletion script when + In pg_upgrade, skip creating a deletion script when the new data directory is inside the old data directory (Bruce Momjian) @@ -5178,7 +5178,7 @@ Branch: REL9_1_STABLE [0f359c7de] 2016-02-18 15:40:36 -0500 Fix multiple mistakes in the statistics returned - by contrib/pgstattuple's pgstatindex() + by contrib/pgstattuple's pgstatindex() function (Tom Lane) @@ -5195,7 +5195,7 @@ Branch: REL9_1_STABLE [2aa9fd963] 2016-03-19 18:59:41 -0400 - Remove dependency on psed in MSVC builds, since it's no + Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan) @@ -5212,7 +5212,7 @@ Branch: REL9_1_STABLE [e5fd35cc5] 2016-03-25 19:03:54 -0400 - Update time zone data files to tzdata release 2016c + Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia @@ -5296,7 +5296,7 @@ Branch: REL9_5_STABLE [87dbc72a7] 2016-02-08 11:03:37 +0100 - Avoid pushdown of HAVING clauses when grouping sets are + Avoid pushdown of HAVING clauses when grouping sets are used (Andrew Gierth) @@ -5309,7 +5309,7 @@ Branch: REL9_5_STABLE [82406d6ff] 2016-02-07 14:57:24 -0500 - Fix deparsing of ON CONFLICT arbiter WHERE + Fix deparsing of ON CONFLICT arbiter WHERE clauses (Peter Geoghegan) @@ -5326,14 +5326,14 @@ Branch: REL9_1_STABLE [b043df093] 2016-01-26 15:38:33 -0500 - Make %h and %r escapes - in log_line_prefix work for messages emitted due - to log_connections (Tom Lane) + Make %h and %r escapes + in log_line_prefix work for messages emitted due + to log_connections (Tom Lane) - Previously, %h/%r started to work just after a - new session had emitted the connection received log message; + Previously, %h/%r started to work just after a + new session had emitted the connection received log message; now they work for that message too. 
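     As a purely illustrative example of the log_line_prefix entry above (the
     escape string shown is an arbitrary choice, not a recommendation), adding
     %h makes the client host appear on every log line, now including the
     connection received message itself:

       -- log_line_prefix is a server-wide setting; a reload is enough to apply it
       ALTER SYSTEM SET log_line_prefix = '%m [%p] %h ';
       SELECT pg_reload_conf();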
@@ -5367,8 +5367,8 @@ Branch: REL9_1_STABLE [ed5f57218] 2016-01-29 10:28:03 +0100 - Fix psql's \det command to interpret its - pattern argument the same way as other \d commands with + Fix psql's \det command to interpret its + pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart) @@ -5385,7 +5385,7 @@ Branch: REL9_1_STABLE [b96f6f444] 2016-01-07 11:59:08 -0300 - In pg_ctl on Windows, check service status to decide + In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier) @@ -5403,7 +5403,7 @@ Branch: REL9_1_STABLE [5108013db] 2016-01-13 18:55:27 -0500 - Fix assorted corner-case bugs in pg_dump's processing + Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane) @@ -5417,7 +5417,7 @@ Branch: REL9_5_STABLE [1e910cf5b] 2016-01-22 20:04:35 -0300 Fix improper quoting of domain constraint names - in pg_dump (Elvis Pranskevichus) + in pg_dump (Elvis Pranskevichus) @@ -5433,9 +5433,9 @@ Branch: REL9_1_STABLE [9c704632c] 2016-02-04 00:26:10 -0500 - Make pg_dump mark a view's triggers as needing to be + Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during - parallel pg_restore (Tom Lane) + parallel pg_restore (Tom Lane) @@ -5451,7 +5451,7 @@ Branch: REL9_1_STABLE [4c8b07d3c] 2016-02-03 09:25:34 -0500 - Install guards in pgbench against corner-case overflow + Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier) @@ -5465,7 +5465,7 @@ Branch: REL9_5_STABLE [7ef311eb4] 2016-01-05 17:25:12 -0300 - Suppress useless warning message when pg_receivexlog + Suppress useless warning message when pg_receivexlog connects to a pre-9.4 server (Marco Nenciarini) @@ -5483,15 +5483,15 @@ Branch: REL9_5_STABLE [5ef26b8de] 2016-01-11 20:06:47 -0500 - Avoid dump/reload problems when using both plpython2 - and plpython3 (Tom Lane) + Avoid dump/reload problems when using both plpython2 + and plpython3 (Tom Lane) - In principle, both versions of PL/Python can be used in + In principle, both versions of PL/Python can be used in the same database, though not in the same session (because the two - versions of libpython cannot safely be used concurrently). - However, pg_restore and pg_upgrade both + versions of libpython cannot safely be used concurrently). + However, pg_restore and pg_upgrade both do things that can fall foul of the same-session restriction. Work around that by changing the timing of the check. @@ -5508,7 +5508,7 @@ Branch: REL9_5_STABLE [a66c1fcdd] 2016-01-08 11:39:28 -0500 - Fix PL/Python regression tests to pass with Python 3.5 + Fix PL/Python regression tests to pass with Python 3.5 (Peter Eisentraut) @@ -5525,16 +5525,16 @@ Branch: REL9_1_STABLE [b1f591c50] 2016-02-05 20:23:19 -0500 - Prevent certain PL/Java parameters from being set by + Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch) - This change mitigates a PL/Java security bug - (CVE-2016-0766), which was fixed in PL/Java by marking + This change mitigates a PL/Java security bug + (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for - sites that update PostgreSQL more frequently - than PL/Java, make the core code aware of them also. 
+ sites that update PostgreSQL more frequently + than PL/Java, make the core code aware of them also. @@ -5551,14 +5551,14 @@ Branch: REL9_4_STABLE [33b26426e] 2016-02-08 11:10:14 +0100 - Fix ecpg-supplied header files to not contain comments + Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes) - Such a comment is rejected by ecpg. It's not yet clear - whether ecpg itself should be changed. + Such a comment is rejected by ecpg. It's not yet clear + whether ecpg itself should be changed. @@ -5572,8 +5572,8 @@ Branch: REL9_3_STABLE [1f2b195eb] 2016-02-03 01:39:08 -0500 - Fix hstore_to_json_loose()'s test for whether - an hstore value can be converted to a JSON number (Tom Lane) + Fix hstore_to_json_loose()'s test for whether + an hstore value can be converted to a JSON number (Tom Lane) @@ -5594,8 +5594,8 @@ Branch: REL9_4_STABLE [2099b911d] 2016-02-04 22:27:47 -0500 - In contrib/postgres_fdw, fix bugs triggered by use - of tableoid in data-modifying commands (Etsuro Fujita, + In contrib/postgres_fdw, fix bugs triggered by use + of tableoid in data-modifying commands (Etsuro Fujita, Robert Haas) @@ -5608,7 +5608,7 @@ Branch: REL9_5_STABLE [47acf3add] 2016-01-22 11:53:06 -0500 - Fix ill-advised restriction of NAMEDATALEN to be less + Fix ill-advised restriction of NAMEDATALEN to be less than 256 (Robert Haas, Tom Lane) @@ -5645,7 +5645,7 @@ Branch: REL9_1_STABLE [b1bc38144] 2016-01-19 23:30:28 -0500 - Ensure that dynloader.h is included in the installed + Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier) @@ -5662,7 +5662,7 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 - Update time zone data files to tzdata release 2016a for + Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan. @@ -5685,7 +5685,7 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 Overview - Major enhancements in PostgreSQL 9.5 include: + Major enhancements in PostgreSQL 9.5 include: @@ -5694,31 +5694,31 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 - Allow INSERTs + Allow INSERTs that would generate constraint conflicts to be turned into - UPDATEs or ignored + UPDATEs or ignored - Add GROUP BY analysis features GROUPING SETS, - CUBE and - ROLLUP + Add GROUP BY analysis features GROUPING SETS, + CUBE and + ROLLUP - Add row-level security control + Add row-level security control Create mechanisms for tracking - the progress of replication, + the progress of replication, including methods for identifying the origin of individual changes during logical replication @@ -5726,7 +5726,7 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 - Add Block Range Indexes (BRIN) + Add Block Range Indexes (BRIN) @@ -5772,21 +5772,21 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 2015-03-11 [c6b3c93] Tom Lane: Make operator precedence follow the SQL standar.. --> - Adjust operator precedence - to match the SQL standard (Tom Lane) + Adjust operator precedence + to match the SQL standard (Tom Lane) The precedence of <=, >= and <> has been reduced to match that of <, > - and =. The precedence of IS tests - (e.g., x IS NULL) has been reduced to be + and =. The precedence of IS tests + (e.g., x IS NULL) has been reduced to be just below these six comparison operators. 
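     A small, hypothetical example of the precedence change (the boolean
     literals are chosen only for illustration): because the six comparison
     operators now bind more tightly than IS tests, the expression below is
     parsed as a comparison whose result is then null-tested, rather than the
     IS test being applied to the right-hand operand first as in previous
     releases:

       -- parsed as (false = true) IS NULL under the new rules, yielding false
       SELECT false = true IS NULL;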
- Also, multi-keyword operators beginning with NOT now have + Also, multi-keyword operators beginning with NOT now have the precedence of their base operator (for example, NOT - BETWEEN now has the same precedence as BETWEEN) whereas - before they had inconsistent precedence, behaving like NOT + BETWEEN now has the same precedence as BETWEEN) whereas + before they had inconsistent precedence, behaving like NOT with respect to their left operand but like their base operator with respect to their right operand. The new configuration parameter can be @@ -5801,7 +5801,7 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 --> Change 's default shutdown mode from - smart to fast (Bruce Momjian) + smart to fast (Bruce Momjian) @@ -5816,18 +5816,18 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 --> Use assignment cast behavior for data type conversions - in PL/pgSQL assignments, rather than converting to and + in PL/pgSQL assignments, rather than converting to and from text (Tom Lane) This change causes conversions of Booleans to strings to - produce true or false, not t - or f. Other type conversions may succeed in more cases - than before; for example, assigning a numeric value 3.9 to + produce true or false, not t + or f. Other type conversions may succeed in more cases + than before; for example, assigning a numeric value 3.9 to an integer variable will now assign 4 rather than failing. If no assignment-grade cast is defined for the particular source and - destination types, PL/pgSQL will fall back to its old + destination types, PL/pgSQL will fall back to its old I/O conversion behavior. @@ -5838,13 +5838,13 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 --> Allow characters in server - command-line options to be escaped with a backslash (Andres Freund) + command-line options to be escaped with a backslash (Andres Freund) Formerly, spaces in the options string always separated options, so there was no way to include a space in an option value. Including - a backslash in an option value now requires writing \\. + a backslash in an option value now requires writing \\. @@ -5854,9 +5854,9 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 --> Change the default value of the GSSAPI include_realm parameter to 1, so - that by default the realm is not removed from a GSS - or SSPI principal name (Stephen Frost) + linkend="gssapi-auth">include_realm parameter to 1, so + that by default the realm is not removed from a GSS + or SSPI principal name (Stephen Frost) @@ -5867,7 +5867,7 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 2015-06-29 [d661532] Heikki..: Also trigger restartpoints based on max_wal_siz.. --> - Replace configuration parameter checkpoint_segments + Replace configuration parameter checkpoint_segments with and (Heikki Linnakangas) @@ -5889,13 +5889,13 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2014-06-18 [df8b7bc] Tom Lane: Improve our mechanism for controlling the Linux.. 
--> - Control the Linux OOM killer via new environment + Control the Linux OOM killer via new environment variables PG_OOM_ADJUST_FILE + linkend="linux-memory-overcommit">PG_OOM_ADJUST_FILE and PG_OOM_ADJUST_VALUE, - instead of compile-time options LINUX_OOM_SCORE_ADJ and - LINUX_OOM_ADJ + linkend="linux-memory-overcommit">PG_OOM_ADJUST_VALUE, + instead of compile-time options LINUX_OOM_SCORE_ADJ and + LINUX_OOM_ADJ (Gurjeet Singh) @@ -5907,7 +5907,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB --> Decommission server configuration - parameter ssl_renegotiation_limit, which was deprecated + parameter ssl_renegotiation_limit, which was deprecated in earlier releases (Andres Freund) @@ -5915,8 +5915,8 @@ max_wal_size = (3 * checkpoint_segments) * 16MB While SSL renegotiation is a good idea in theory, it has caused enough bugs to be considered a net negative in practice, and it is due to be removed from future versions of the relevant standards. We have - therefore removed support for it from PostgreSQL. - The ssl_renegotiation_limit parameter still exists, but + therefore removed support for it from PostgreSQL. + The ssl_renegotiation_limit parameter still exists, but cannot be set to anything but zero (disabled). It's not documented anymore, either. @@ -5927,7 +5927,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2014-11-05 [525a489] Tom Lane: Remove the last vestige of server-side autocomm.. --> - Remove server configuration parameter autocommit, which + Remove server configuration parameter autocommit, which was already deprecated and non-operational (Tom Lane) @@ -5937,8 +5937,8 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2015-03-06 [bb8582a] Peter ..: Remove rolcatupdate --> - Remove the pg_authid - catalog's rolcatupdate field, as it had no usefulness + Remove the pg_authid + catalog's rolcatupdate field, as it had no usefulness (Adam Brightwell) @@ -5949,8 +5949,8 @@ max_wal_size = (3 * checkpoint_segments) * 16MB --> The pg_stat_replication - system view's sent field is now NULL, not zero, when + linkend="monitoring-stats-views-table">pg_stat_replication + system view's sent field is now NULL, not zero, when it has no valid value (Magnus Hagander) @@ -5960,13 +5960,13 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2015-07-17 [89ddd29] Andrew..: Support JSON negative array subscripts everywh.. --> - Allow json and jsonb array extraction operators to + Allow json and jsonb array extraction operators to accept negative subscripts, which count from the end of JSON arrays (Peter Geoghegan, Andrew Dunstan) - Previously, these operators returned NULL for negative + Previously, these operators returned NULL for negative subscripts. @@ -5999,12 +5999,12 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2015-05-15 [b0b7be6] Alvaro..: Add BRIN infrastructure for "inclusion" opclasses --> - Add Block Range Indexes (BRIN) + Add Block Range Indexes (BRIN) (Álvaro Herrera) - BRIN indexes store only summary data (such as minimum + BRIN indexes store only summary data (such as minimum and maximum values) for ranges of heap blocks. They are therefore very compact and cheap to update; but if the data is naturally clustered, they can still provide substantial speedup of searches. 
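     A minimal sketch of creating a BRIN index, using a hypothetical table and
     column; the point is only that the access method is selected with
     USING brin and that it works best on naturally clustered data such as an
     append-only timestamp column:

       -- hypothetical append-only table whose rows arrive in timestamp order
       CREATE TABLE sensor_log (recorded_at timestamptz, reading numeric);
       -- the BRIN index stores only per-block-range summaries, so it stays very small
       CREATE INDEX sensor_log_recorded_at_brin ON sensor_log USING brin (recorded_at);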
@@ -6018,7 +6018,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB Allow queries to perform accurate distance filtering of bounding-box-indexed objects (polygons, circles) using GiST indexes (Alexander Korotkov, Heikki + linkend="GiST">GiST indexes (Alexander Korotkov, Heikki Linnakangas) @@ -6038,7 +6038,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2015-03-30 [0633a60] Heikki..: Add index-only scan support to range type GiST .. --> - Allow GiST indexes to perform index-only + Allow GiST indexes to perform index-only scans (Anastasia Lubennikova, Heikki Linnakangas, Andreas Karlsson) @@ -6049,14 +6049,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add configuration parameter - to control the size of GIN pending lists (Fujii Masao) + to control the size of GIN pending lists (Fujii Masao) This value can also be set on a per-index basis as an index storage parameter. Previously the pending-list size was controlled by , which was awkward because - appropriate values for work_mem are often much too large + appropriate values for work_mem are often much too large for this purpose. @@ -6067,7 +6067,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Issue a warning during the creation of hash indexes because they are not + linkend="indexes-types">hash indexes because they are not crash-safe (Bruce Momjian) @@ -6088,8 +6088,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-13 [78efd5c] Robert..: Extend abbreviated key infrastructure to datum .. --> - Improve the speed of sorting of varchar, text, - and numeric fields via abbreviated keys + Improve the speed of sorting of varchar, text, + and numeric fields via abbreviated keys (Peter Geoghegan, Andrew Gierth, Robert Haas) @@ -6101,8 +6101,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Extend the infrastructure that allows sorting to be performed by inlined, non-SQL-callable comparison functions to - cover CREATE INDEX, REINDEX, and - CLUSTER (Peter Geoghegan) + cover CREATE INDEX, REINDEX, and + CLUSTER (Peter Geoghegan) @@ -6163,7 +6163,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This particularly addresses scalability problems when running on - systems with multiple CPU sockets. + systems with multiple CPU sockets. @@ -6183,7 +6183,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow pushdown of query restrictions into subqueries with window functions, where appropriate + linkend="tutorial-window">window functions, where appropriate (David Rowley) @@ -6206,7 +6206,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Teach the planner to use statistics obtained from an expression index on a boolean-returning function, when a matching function call - appears in WHERE (Tom Lane) + appears in WHERE (Tom Lane) @@ -6215,7 +6215,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-09-23 [cfb2024] Tom Lane: Make ANALYZE compute basic statistics even for.. --> - Make ANALYZE compute basic statistics (null fraction and + Make ANALYZE compute basic statistics (null fraction and average column width) even for columns whose data type lacks an equality function (Oleksandr Shulgin) @@ -6229,7 +6229,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
--> - Speed up CRC (cyclic redundancy check) computations + Speed up CRC (cyclic redundancy check) computations and switch to CRC-32C (Abhijit Menon-Sen, Heikki Linnakangas) @@ -6249,7 +6249,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-01 [9f03ca9] Robert..: Avoid copying index tuples when building an ind.. --> - Speed up CREATE INDEX by avoiding unnecessary memory + Speed up CREATE INDEX by avoiding unnecessary memory copies (Robert Haas) @@ -6283,7 +6283,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add per-table autovacuum logging control via new - log_autovacuum_min_duration storage parameter + log_autovacuum_min_duration storage parameter (Michael Paquier) @@ -6299,7 +6299,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This string, typically set in postgresql.conf, + linkend="config-setting-configuration-file">postgresql.conf, allows clients to identify the cluster. This name also appears in the process title of all server processes, allowing for easier identification of processes belonging to the same cluster. @@ -6321,7 +6321,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <acronym>SSL</> + <acronym>SSL</acronym> @@ -6331,13 +6331,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Check Subject Alternative - Names in SSL server certificates, if present + Names in SSL server certificates, if present (Alexey Klyukin) When they are present, this replaces checks against the certificate's - Common Name. + Common Name. @@ -6347,8 +6347,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add system view pg_stat_ssl to report - SSL connection information (Magnus Hagander) + linkend="pg-stat-ssl-view">pg_stat_ssl to report + SSL connection information (Magnus Hagander) @@ -6357,22 +6357,22 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-02-03 [91fa7b4] Heikki..: Add API functions to libpq to interrogate SSL .. --> - Add libpq functions to return SSL + Add libpq functions to return SSL information in an implementation-independent way (Heikki Linnakangas) - While PQgetssl() can - still be used to call OpenSSL functions, it is now + While PQgetssl() can + still be used to call OpenSSL functions, it is now considered deprecated because future versions - of libpq might support other SSL + of libpq might support other SSL implementations. When possible, use the new functions PQsslAttribute(), PQsslAttributeNames(), - and PQsslInUse() - to obtain SSL information in - an SSL-implementation-independent way. + linkend="libpq-pqsslattribute">PQsslAttribute(), PQsslAttributeNames(), + and PQsslInUse() + to obtain SSL information in + an SSL-implementation-independent way. @@ -6381,7 +6381,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-09 [8a0d34e4] Peter ..: libpq: Don't overwrite existing OpenSSL thread.. --> - Make libpq honor any OpenSSL + Make libpq honor any OpenSSL thread callbacks (Jan Urbanski) @@ -6406,20 +6406,20 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-06-29 [d661532] Heikki..: Also trigger restartpoints based on max_wal_siz.. 
--> - Replace configuration parameter checkpoint_segments + Replace configuration parameter checkpoint_segments with and (Heikki Linnakangas) - This change allows the allocation of a large number of WAL + This change allows the allocation of a large number of WAL files without keeping them after they are no longer needed. - Therefore the default for max_wal_size has been set - to 1GB, much larger than the old default - for checkpoint_segments. + Therefore the default for max_wal_size has been set + to 1GB, much larger than the old default + for checkpoint_segments. Also note that standby servers perform restartpoints to try to limit - their WAL space consumption to max_wal_size; previously - they did not pay any attention to checkpoint_segments. + their WAL space consumption to max_wal_size; previously + they did not pay any attention to checkpoint_segments. @@ -6428,18 +6428,18 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-06-18 [df8b7bc] Tom Lane: Improve our mechanism for controlling the Linux.. --> - Control the Linux OOM killer via new environment + Control the Linux OOM killer via new environment variables PG_OOM_ADJUST_FILE + linkend="linux-memory-overcommit">PG_OOM_ADJUST_FILE and PG_OOM_ADJUST_VALUE + linkend="linux-memory-overcommit">PG_OOM_ADJUST_VALUE (Gurjeet Singh) - The previous OOM control infrastructure involved - compile-time options LINUX_OOM_SCORE_ADJ and - LINUX_OOM_ADJ, which are no longer supported. + The previous OOM control infrastructure involved + compile-time options LINUX_OOM_SCORE_ADJ and + LINUX_OOM_ADJ, which are no longer supported. The new behavior is available in all builds. @@ -6457,8 +6457,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Time stamp information can be accessed using functions pg_xact_commit_timestamp() - and pg_last_committed_xact(). + linkend="functions-commit-timestamp">pg_xact_commit_timestamp() + and pg_last_committed_xact(). @@ -6468,7 +6468,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow to be set - by ALTER ROLE SET (Peter Eisentraut, Kyotaro Horiguchi) + by ALTER ROLE SET (Peter Eisentraut, Kyotaro Horiguchi) @@ -6477,7 +6477,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-03 [a75fb9b] Alvaro..: Have autovacuum workers listen to SIGHUP, too --> - Allow autovacuum workers + Allow autovacuum workers to respond to configuration parameter changes during a run (Michael Paquier) @@ -6496,7 +6496,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This means that assertions can no longer be turned off if they were enabled at compile time, allowing for more efficient code optimization. This change also removes the postgres option. @@ -6517,7 +6517,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add system view pg_file_settings + linkend="view-pg-file-settings">pg_file_settings to show the contents of the server's configuration files (Sawada Masahiko) @@ -6528,8 +6528,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-14 [a486e35] Peter ..: Add pg_settings.pending_restart column --> - Add pending_restart to the system view pg_settings to + Add pending_restart to the system view pg_settings to indicate a change has been made but will not take effect until a database restart (Peter Eisentraut) @@ -6540,14 +6540,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2014-09-02 [bd3b7a9] Fujii ..: Support ALTER SYSTEM RESET command. --> - Allow ALTER SYSTEM - values to be reset with ALTER SYSTEM RESET (Vik + Allow ALTER SYSTEM + values to be reset with ALTER SYSTEM RESET (Vik Fearing) This command removes the specified setting - from postgresql.auto.conf. + from postgresql.auto.conf. @@ -6568,7 +6568,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Create mechanisms for tracking - the progress of replication, + the progress of replication, including methods for identifying the origin of individual changes during logical replication (Andres Freund) @@ -6600,14 +6600,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-15 [51c11a7] Andres..: Remove pause_at_recovery_target recovery.conf s.. --> - Add recovery.conf + Add recovery.conf parameter recovery_target_action + linkend="recovery-target-action">recovery_target_action to control post-recovery activity (Petr Jelínek) - This replaces the old parameter pause_at_recovery_target. + This replaces the old parameter pause_at_recovery_target. @@ -6617,8 +6617,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add new value - always to allow standbys to always archive received - WAL files (Fujii Masao) + always to allow standbys to always archive received + WAL files (Fujii Masao) @@ -6629,7 +6629,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Add configuration parameter to - control WAL read retry after failure + control WAL read retry after failure (Alexey Vasiliev, Michael Paquier) @@ -6643,7 +6643,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-11 [57aa5b2] Fujii ..: Add GUC to enable compression of full page imag.. --> - Allow compression of full-page images stored in WAL + Allow compression of full-page images stored in WAL (Rahila Syed, Michael Paquier) @@ -6660,7 +6660,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-08 [de76884] Heikki..: At promotion, archive last segment from old tim.. --> - Archive WAL files with suffix .partial + Archive WAL files with suffix .partial during standby promotion (Heikki Linnakangas) @@ -6677,9 +6677,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. By default, replication commands, e.g. IDENTIFY_SYSTEM, + linkend="protocol-replication">IDENTIFY_SYSTEM, are not logged, even when is set - to all. + to all. @@ -6689,12 +6689,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Report the processes holding replication slots in pg_replication_slots + linkend="view-pg-replication-slots">pg_replication_slots (Craig Ringer) - The new output column is active_pid. + The new output column is active_pid. @@ -6703,9 +6703,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-25 [b3fc672] Heikki..: Allow using connection URI in primary_conninfo. --> - Allow recovery.conf's primary_conninfo setting to - use connection URIs, e.g. postgres:// + Allow recovery.conf's primary_conninfo setting to + use connection URIs, e.g. postgres:// (Alexander Shulgin) @@ -6725,16 +6725,16 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-08 [2c8f483] Andres..: Represent columns requiring insert and update p.. 
--> - Allow INSERTs + Allow INSERTs that would generate constraint conflicts to be turned into - UPDATEs or ignored (Peter Geoghegan, Heikki + UPDATEs or ignored (Peter Geoghegan, Heikki Linnakangas, Andres Freund) - The syntax is INSERT ... ON CONFLICT DO NOTHING/UPDATE. + The syntax is INSERT ... ON CONFLICT DO NOTHING/UPDATE. This is the Postgres implementation of the popular - UPSERT command. + UPSERT command. @@ -6743,10 +6743,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-16 [f3d3118] Andres..: Support GROUPING SETS, CUBE and ROLLUP. --> - Add GROUP BY analysis features GROUPING SETS, - CUBE and - ROLLUP + Add GROUP BY analysis features GROUPING SETS, + CUBE and + ROLLUP (Andrew Gierth, Atri Sharma) @@ -6757,13 +6757,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow setting multiple target columns in - an UPDATE from the result of + an UPDATE from the result of a single sub-SELECT (Tom Lane) This is accomplished using the syntax UPDATE tab SET - (col1, col2, ...) = (SELECT ...). + (col1, col2, ...) = (SELECT ...). @@ -6772,13 +6772,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-10-07 [df630b0] Alvaro..: Implement SKIP LOCKED for row-level locks --> - Add SELECT option - SKIP LOCKED to skip locked rows (Thomas Munro) + Add SELECT option + SKIP LOCKED to skip locked rows (Thomas Munro) This does not throw an error for locked rows like - NOWAIT does. + NOWAIT does. @@ -6787,8 +6787,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-15 [f6d208d] Simon ..: TABLESAMPLE, SQL Standard and extensible --> - Add SELECT option - TABLESAMPLE to return a subset of a table (Petr + Add SELECT option + TABLESAMPLE to return a subset of a table (Petr Jelínek) @@ -6796,7 +6796,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature supports the SQL-standard table sampling methods. In addition, there are provisions for user-defined - table sampling methods. + table sampling methods. @@ -6825,13 +6825,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add more details about sort ordering in EXPLAIN output (Marius Timmer, + linkend="SQL-EXPLAIN">EXPLAIN output (Marius Timmer, Lukas Kreft, Arne Scheffer) - Details include COLLATE, DESC, - USING, and NULLS FIRST/LAST. + Details include COLLATE, DESC, + USING, and NULLS FIRST/LAST. @@ -6840,7 +6840,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-18 [35192f0] Alvaro..: Have VACUUM log number of skipped pages due to .. --> - Make VACUUM log the + Make VACUUM log the number of pages skipped due to pins (Jim Nasby) @@ -6850,8 +6850,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-02-20 [d42358e] Alvaro..: Have TRUNCATE update pgstat tuple counters --> - Make TRUNCATE properly - update the pg_stat* tuple counters (Alexander Shulgin) + Make TRUNCATE properly + update the pg_stat* tuple counters (Alexander Shulgin) @@ -6867,8 +6867,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-09 [fe263d1] Simon ..: REINDEX SCHEMA --> - Allow REINDEX to reindex an entire schema using the - SCHEMA option (Sawada Masahiko) + Allow REINDEX to reindex an entire schema using the + SCHEMA option (Sawada Masahiko) @@ -6877,7 +6877,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2015-05-15 [ecd222e] Fujii ..: Support VERBOSE option in REINDEX command. --> - Add VERBOSE option to REINDEX (Sawada + Add VERBOSE option to REINDEX (Sawada Masahiko) @@ -6887,8 +6887,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-09 [ae4e688] Simon ..: Silence REINDEX --> - Prevent REINDEX DATABASE and SCHEMA - from outputting object names, unless VERBOSE is used + Prevent REINDEX DATABASE and SCHEMA + from outputting object names, unless VERBOSE is used (Simon Riggs) @@ -6898,7 +6898,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-09 [17d436d] Fujii ..: Remove obsolete FORCE option from REINDEX. --> - Remove obsolete FORCE option from REINDEX + Remove obsolete FORCE option from REINDEX (Fujii Masao) @@ -6918,7 +6918,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-19 [491c029] Stephe..: Row-Level Security Policies (RLS) --> - Add row-level security control + Add row-level security control (Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean Rasheed, Stephen Frost) @@ -6926,11 +6926,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature allows row-by-row control over which users can add, modify, or even see rows in a table. This is controlled by new - commands CREATE/ALTER/DROP POLICY and CREATE/ALTER/DROP POLICY and ALTER TABLE ... ENABLE/DISABLE - ROW SECURITY. + ROW SECURITY. @@ -6942,7 +6942,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Allow changing of the WAL logging status of a table after creation with ALTER TABLE ... SET LOGGED / - UNLOGGED (Fabrízio de Royes Mello) + UNLOGGED (Fabrízio de Royes Mello) @@ -6953,12 +6953,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-13 [e39b6f9] Andrew..: Add CINE option for CREATE TABLE AS and CREATE .. --> - Add IF NOT EXISTS clause to CREATE TABLE AS, - CREATE INDEX, - CREATE SEQUENCE, + Add IF NOT EXISTS clause to CREATE TABLE AS, + CREATE INDEX, + CREATE SEQUENCE, and CREATE - MATERIALIZED VIEW (Fabrízio de Royes Mello) + MATERIALIZED VIEW (Fabrízio de Royes Mello) @@ -6967,9 +6967,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-24 [1d8198b] Bruce ..: Add support for ALTER TABLE IF EXISTS ... RENAM.. --> - Add support for IF EXISTS to IF EXISTS to ALTER TABLE ... RENAME - CONSTRAINT (Bruce Momjian) + CONSTRAINT (Bruce Momjian) @@ -6978,8 +6978,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-09 [31eae60] Alvaro..: Allow CURRENT/SESSION_USER to be used in certai.. --> - Allow some DDL commands to accept CURRENT_USER - or SESSION_USER, meaning the current user or session + Allow some DDL commands to accept CURRENT_USER + or SESSION_USER, meaning the current user or session user, in place of a specific user name (Kyotaro Horiguchi, Álvaro Herrera) @@ -6988,7 +6988,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature is now supported in , , , , - and ALTER object OWNER TO commands. + and ALTER object OWNER TO commands. @@ -6998,7 +6998,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Support comments on domain - constraints (Álvaro Herrera) + constraints (Álvaro Herrera) @@ -7018,13 +7018,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow LOCK TABLE ... 
ROW EXCLUSIVE - MODE for those with INSERT privileges on the + MODE for those with INSERT privileges on the target table (Stephen Frost) - Previously this command required UPDATE, DELETE, - or TRUNCATE privileges. + Previously this command required UPDATE, DELETE, + or TRUNCATE privileges. @@ -7033,7 +7033,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-23 [e5f455f] Tom Lane: Apply table and domain CHECK constraints in nam. --> - Apply table and domain CHECK constraints in order by name + Apply table and domain CHECK constraints in order by name (Tom Lane) @@ -7049,16 +7049,16 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow CREATE/ALTER DATABASE - to manipulate datistemplate and - datallowconn (Vik Fearing) + linkend="SQL-CREATEDATABASE">CREATE/ALTER DATABASE + to manipulate datistemplate and + datallowconn (Vik Fearing) This allows these per-database settings to be changed without manually modifying the pg_database + linkend="catalog-pg-database">pg_database system catalog. @@ -7090,7 +7090,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-17 [fc2ac1f] Tom Lane: Allow CHECK constraints to be placed on foreign.. --> - Allow CHECK constraints to be placed on foreign tables + Allow CHECK constraints to be placed on foreign tables (Shigeru Hanada, Etsuro Fujita) @@ -7099,7 +7099,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. and are not enforced locally. However, they are assumed to hold for purposes of query optimization, such as constraint - exclusion. + exclusion. @@ -7115,7 +7115,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. To let this work naturally, foreign tables are now allowed to have check constraints marked as not valid, and to set storage - and OID characteristics, even though these operations are + and OID characteristics, even though these operations are effectively no-ops for a foreign table. @@ -7145,14 +7145,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-11 [b488c58] Alvaro..: Allow on-the-fly capture of DDL event details --> - Whenever a ddl_command_end event trigger is installed, - capture details of DDL activity for it to inspect + Whenever a ddl_command_end event trigger is installed, + capture details of DDL activity for it to inspect (Álvaro Herrera) This information is available through a set-returning function pg_event_trigger_ddl_commands(), + linkend="pg-event-trigger-ddl-command-end-functions">pg_event_trigger_ddl_commands(), or by inspection of C data structures if that function doesn't provide enough detail. @@ -7164,7 +7164,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow event triggers on table rewrites caused by ALTER TABLE (Dimitri + linkend="SQL-ALTERTABLE">ALTER TABLE (Dimitri Fontaine) @@ -7175,10 +7175,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add event trigger support for database-level COMMENT, SECURITY LABEL, - and GRANT/REVOKE (Álvaro Herrera) + linkend="SQL-COMMENT">COMMENT, SECURITY LABEL, + and GRANT/REVOKE (Álvaro Herrera) @@ -7189,7 +7189,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
--> Add columns to the output of pg_event_trigger_dropped_objects + linkend="pg-event-trigger-sql-drop-functions">pg_event_trigger_dropped_objects (Álvaro Herrera) @@ -7214,12 +7214,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-09 [57b1085] Peter ..: Allow empty content in xml type --> - Allow the xml data type + Allow the xml data type to accept empty or all-whitespace content values (Peter Eisentraut) - This is required by the SQL/XML + This is required by the SQL/XML specification. @@ -7229,8 +7229,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-10-21 [6f04368] Peter ..: Allow input format xxxx-xxxx-xxxx for macaddr .. --> - Allow macaddr input - using the format xxxx-xxxx-xxxx (Herwin Weststrate) + Allow macaddr input + using the format xxxx-xxxx-xxxx (Herwin Weststrate) @@ -7240,15 +7240,15 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Disallow non-SQL-standard syntax for interval with + linkend="datatype-interval-input">interval with both precision and field specifications (Bruce Momjian) Per the standard, such type specifications should be written as, - for example, INTERVAL MINUTE TO SECOND(2). - PostgreSQL formerly allowed this to be written as - INTERVAL(2) MINUTE TO SECOND, but it must now be + for example, INTERVAL MINUTE TO SECOND(2). + PostgreSQL formerly allowed this to be written as + INTERVAL(2) MINUTE TO SECOND, but it must now be written in the standard way. @@ -7259,8 +7259,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add selectivity estimators for inet/cidr operators and improve + linkend="datatype-inet">inet/cidr operators and improve estimators for text search functions (Emre Hasegeli, Tom Lane) @@ -7272,9 +7272,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add data - types regrole - and regnamespace - to simplify entering and pretty-printing the OID of a role + types regrole + and regnamespace + to simplify entering and pretty-printing the OID of a role or namespace (Kyotaro Horiguchi) @@ -7282,7 +7282,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <link linkend="datatype-json"><acronym>JSON</></link> + <link linkend="datatype-json"><acronym>JSON</acronym></link> @@ -7292,10 +7292,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-31 [37def42] Andrew..: Rename jsonb_replace to jsonb_set and allow it .. --> - Add jsonb functions jsonb_set() + Add jsonb functions jsonb_set() and jsonb_pretty() + linkend="functions-json-processing-table">jsonb_pretty() (Dmitry Dolgov, Andrew Dunstan, Petr Jelínek) @@ -7305,23 +7305,23 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-12 [7e354ab] Andrew..: Add several generator functions for jsonb that .. --> - Add jsonb generator functions to_jsonb(), + Add jsonb generator functions to_jsonb(), jsonb_object(), + linkend="functions-json-creation-table">jsonb_object(), jsonb_build_object(), + linkend="functions-json-creation-table">jsonb_build_object(), jsonb_build_array(), + linkend="functions-json-creation-table">jsonb_build_array(), jsonb_agg(), + linkend="functions-aggregate-table">jsonb_agg(), and jsonb_object_agg() + linkend="functions-aggregate-table">jsonb_object_agg() (Andrew Dunstan) - Equivalent functions already existed for type json. + Equivalent functions already existed for type json. 
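     For illustration only (the keys and values below are invented), the new
     jsonb generators behave like their existing json counterparts; a few
     representative calls:

       SELECT to_jsonb('2015-01-01'::date);                  -- "2015-01-01"
       SELECT jsonb_build_object('id', 1, 'name', 'abc');    -- {"id": 1, "name": "abc"}
       SELECT jsonb_agg(g) FROM generate_series(1, 3) AS g;  -- [1, 2, 3]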
@@ -7331,8 +7331,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Reduce casting requirements to/from json and jsonb (Tom Lane) + linkend="datatype-json">json and jsonb (Tom Lane) @@ -7341,9 +7341,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-06-11 [908e234] Andrew..: Rename jsonb - text[] operator to #- to avoid a.. --> - Allow text, text array, and integer - values to be subtracted - from jsonb documents (Dmitry Dolgov, Andrew Dunstan) + Allow text, text array, and integer + values to be subtracted + from jsonb documents (Dmitry Dolgov, Andrew Dunstan) @@ -7352,8 +7352,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-12 [c694701] Andrew..: Additional functions and operators for jsonb --> - Add jsonb || operator + Add jsonb || operator (Dmitry Dolgov, Andrew Dunstan) @@ -7364,9 +7364,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add json_strip_nulls() + linkend="functions-json-processing-table">json_strip_nulls() and jsonb_strip_nulls() + linkend="functions-json-processing-table">jsonb_strip_nulls() functions to remove JSON null values from documents (Andrew Dunstan) @@ -7388,8 +7388,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-11 [1871c89] Fujii ..: Add generate_series(numeric, numeric). --> - Add generate_series() - for numeric values (Plato Malugin) + Add generate_series() + for numeric values (Plato Malugin) @@ -7399,8 +7399,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow array_agg() and - ARRAY() to take arrays as inputs (Ali Akbar, Tom Lane) + linkend="functions-aggregate-table">array_agg() and + ARRAY() to take arrays as inputs (Ali Akbar, Tom Lane) @@ -7411,9 +7411,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add functions array_position() + linkend="array-functions-table">array_position() and array_positions() + linkend="array-functions-table">array_positions() to return subscripts of array values (Pavel Stehule) @@ -7423,8 +7423,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-15 [4520ba6] Heikki..: Add point <-> polygon distance operator. --> - Add a point-to-polygon distance operator - <-> + Add a point-to-polygon distance operator + <-> (Alexander Korotkov) @@ -7435,8 +7435,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow multibyte characters as escapes in SIMILAR TO - and SUBSTRING + linkend="functions-similarto-regexp">SIMILAR TO + and SUBSTRING (Jeff Davis) @@ -7451,7 +7451,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add a width_bucket() + linkend="functions-math-func-table">width_bucket() variant that supports any sortable data type and non-uniform bucket widths (Petr Jelínek) @@ -7462,8 +7462,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-06-28 [cb2acb1] Heikki..: Add missing_ok option to the SQL functions for.. --> - Add an optional missing_ok argument to pg_read_file() + Add an optional missing_ok argument to pg_read_file() and related functions (Michael Paquier, Heikki Linnakangas) @@ -7473,14 +7473,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-10 [865f14a] Robert..: Allow named parameters to be specified using =>.. 
--> - Allow => + Allow => to specify named parameters in function calls (Pavel Stehule) - Previously only := could be used. This requires removing - the possibility for => to be a user-defined operator. - Creation of user-defined => operators has been issuing + Previously only := could be used. This requires removing + the possibility for => to be a user-defined operator. + Creation of user-defined => operators has been issuing warnings since PostgreSQL 9.0. @@ -7490,7 +7490,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-25 [06bf0dd] Tom Lane: Upgrade src/port/rint.c to be POSIX-compliant. --> - Add POSIX-compliant rounding for platforms that use + Add POSIX-compliant rounding for platforms that use PostgreSQL-supplied rounding functions (Pedro Gimeno Fortea) @@ -7509,11 +7509,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add function pg_get_object_address() - to return OIDs that uniquely + linkend="functions-info-object-table">pg_get_object_address() + to return OIDs that uniquely identify an object, and function pg_identify_object_as_address() - to return object information based on OIDs (Álvaro + linkend="functions-info-object-table">pg_identify_object_as_address() + to return object information based on OIDs (Álvaro Herrera) @@ -7524,11 +7524,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Loosen security checks for viewing queries in pg_stat_activity, + linkend="pg-stat-activity-view">pg_stat_activity, executing pg_cancel_backend(), + linkend="functions-admin-signal-table">pg_cancel_backend(), and executing pg_terminate_backend() + linkend="functions-admin-signal-table">pg_terminate_backend() (Stephen Frost) @@ -7544,7 +7544,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add pg_stat_get_snapshot_timestamp() + linkend="monitoring-stats-funcs-table">pg_stat_get_snapshot_timestamp() to output the time stamp of the statistics snapshot (Matt Kelly) @@ -7560,7 +7560,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add mxid_age() + linkend="vacuum-for-multixact-wraparound">mxid_age() to compute multi-xid age (Bruce Momjian) @@ -7578,9 +7578,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-08-28 [6c40f83] Tom Lane: Add min and max aggregates for inet/cidr data t.. --> - Add min()/max() aggregates - for inet/cidr data types (Haribabu + Add min()/max() aggregates + for inet/cidr data types (Haribabu Kommi) @@ -7613,12 +7613,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Improve support for composite types in PL/Python (Ed Behn, Ronan + linkend="plpython">PL/Python (Ed Behn, Ronan Dunklau) - This allows PL/Python functions to return arrays + This allows PL/Python functions to return arrays of composite types. @@ -7629,7 +7629,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Reduce lossiness of PL/Python floating-point value + linkend="plpython">PL/Python floating-point value conversions (Marko Kreen) @@ -7639,19 +7639,19 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-26 [cac7658] Peter ..: Add transforms feature --> - Allow specification of conversion routines between SQL + Allow specification of conversion routines between SQL data types and data types of procedural languages (Peter Eisentraut) This change adds new commands CREATE/DROP TRANSFORM. 
+ linkend="SQL-CREATETRANSFORM">CREATE/DROP TRANSFORM. This also adds optional transformations between the hstore and ltree types to/from PL/Perl and PL/Python. + linkend="hstore">hstore and ltree types to/from PL/Perl and PL/Python. @@ -7670,7 +7670,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-02-16 [9e3ad1a] Tom Lane: Use fast path in plpgsql's RETURN/RETURN NEXT i.. --> - Improve PL/pgSQL array + Improve PL/pgSQL array performance (Tom Lane) @@ -7680,8 +7680,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-25 [a4847fc] Tom Lane: Add an ASSERT statement in plpgsql. --> - Add an ASSERT - statement in PL/pgSQL (Pavel Stehule) + Add an ASSERT + statement in PL/pgSQL (Pavel Stehule) @@ -7690,7 +7690,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-25 [bb1b8f6] Tom Lane: De-reserve most statement-introducing keywords .. --> - Allow more PL/pgSQL + Allow more PL/pgSQL keywords to be used as identifiers (Tom Lane) @@ -7715,11 +7715,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Move pg_archivecleanup, - pg_test_fsync, - pg_test_timing, - and pg_xlogdump - from contrib to src/bin (Peter Eisentraut) + linkend="pgarchivecleanup">pg_archivecleanup, + pg_test_fsync, + pg_test_timing, + and pg_xlogdump + from contrib to src/bin (Peter Eisentraut) @@ -7733,7 +7733,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-23 [61081e7] Heikki..: Add pg_rewind, for re-synchronizing a master se.. --> - Add pg_rewind, + Add pg_rewind, which allows re-synchronizing a master server after failback (Heikki Linnakangas) @@ -7745,13 +7745,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog to manage physical replication slots (Michael Paquier) - This is controlled via new and + options. @@ -7761,13 +7761,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow pg_receivexlog - to synchronously flush WAL to storage using new - option (Furuya Osamu, Fujii Masao) - Without this, WAL files are fsync'ed only on close. + Without this, WAL files are fsync'ed only on close. @@ -7776,8 +7776,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-01-23 [a179232] Alvaro..: vacuumdb: enable parallel mode --> - Allow vacuumdb to - vacuum in parallel using new option (Dilip Kumar) @@ -7786,7 +7786,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-11-12 [5094da9] Alvaro..: vacuumdb: don't prompt for passwords over and .. --> - In vacuumdb, do not + In vacuumdb, do not prompt for the same password repeatedly when multiple connections are necessary (Haribabu Kommi, Michael Paquier) @@ -7797,8 +7797,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-15 [458a077] Fujii ..: Support ––verbose option in reindexdb. --> - Add @@ -7808,10 +7808,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-12 [72d422a] Andrew..: Map basebackup tablespaces using a tablespace_.. 
--> - Make pg_basebackup - use a tablespace mapping file when using tar format, + Make pg_basebackup + use a tablespace mapping file when using tar format, to support symbolic links and file paths of 100+ characters in length - on MS Windows (Amit Kapila) + on MS Windows (Amit Kapila) @@ -7821,8 +7821,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-19 [bdd5726] Andres..: Add the capability to display summary statistic.. --> - Add pg_xlogdump option - to display summary statistics (Abhijit Menon-Sen) @@ -7838,7 +7838,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-31 [9d9991c] Bruce ..: psql: add asciidoc output format --> - Allow psql to produce AsciiDoc output (Szymon Guz) + Allow psql to produce AsciiDoc output (Szymon Guz) @@ -7847,14 +7847,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-10 [5b214c5] Fujii ..: Add new ECHO mode 'errors' that displays only .. --> - Add an errors mode that displays only failed commands - to psql's ECHO variable + Add an errors mode that displays only failed commands + to psql's ECHO variable (Pavel Stehule) - This behavior can also be selected with psql's - option. @@ -7864,12 +7864,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Provide separate column, header, and border linestyle control - in psql's unicode linestyle (Pavel Stehule) + in psql's unicode linestyle (Pavel Stehule) Single or double lines are supported; the default is - single. + single. @@ -7878,8 +7878,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-02 [51bb795] Andres..: Add psql PROMPT variable showing which line of .. --> - Add new option %l in psql's PROMPT variables + Add new option %l in psql's PROMPT variables to display the current multiline statement line number (Sawada Masahiko) @@ -7890,8 +7890,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-28 [7655f4c] Andrew..: Add a pager_min_lines setting to psql --> - Add \pset option pager_min_lines + Add \pset option pager_min_lines to control pager invocation (Andrew Dunstan) @@ -7901,7 +7901,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-21 [4077fb4] Andrew..: Fix an error in psql that overcounted output l.. --> - Improve psql line counting used when deciding + Improve psql line counting used when deciding to invoke the pager (Andrew Dunstan) @@ -7912,8 +7912,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-12-08 [e90371d] Tom Lane: Make failure to open psql log-file fatal. --> - psql now fails if the file specified by - an or switch cannot be written (Tom Lane, Daniel Vérité) @@ -7927,7 +7927,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-12 [bd40951] Andres..: Minimal psql tab completion support for SET se.. --> - Add psql tab completion when setting the + Add psql tab completion when setting the variable (Jeff Janes) @@ -7941,7 +7941,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-06-23 [631e7f6] Heikki..: Improve tab-completion of DROP and ALTER ENABLE.. --> - Improve psql's tab completion for triggers and rules + Improve psql's tab completion for triggers and rules (Andreas Karlsson) @@ -7958,17 +7958,17 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2014-09-10 [07c8651] Andres..: Add new psql help topics, accessible to both.. --> - Add psql \? help sections - variables and options (Pavel Stehule) + Add psql \? help sections + variables and options (Pavel Stehule) - \? variables shows psql's special - variables and \? options shows the command-line options. - \? commands shows the meta-commands, which is the + \? variables shows psql's special + variables and \? options shows the command-line options. + \? commands shows the meta-commands, which is the traditional output and remains the default. These help displays can also be obtained with the command-line - option --help=section. + option --help=section. @@ -7977,7 +7977,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-14 [ee80f04] Alvaro..: psql: Show tablespace size in \db+ --> - Show tablespace size in psql's \db+ + Show tablespace size in psql's \db+ (Fabrízio de Royes Mello) @@ -7987,7 +7987,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-09 [a6f3c1f] Magnus..: Show owner of types in psql \dT+ --> - Show data type owners in psql's \dT+ + Show data type owners in psql's \dT+ (Magnus Hagander) @@ -7997,13 +7997,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-04 [f6f654f] Fujii ..: Allow \watch to display query execution time if.. --> - Allow psql's \watch to output - \timing information (Fujii Masao) + Allow psql's \watch to output + \timing information (Fujii Masao) - Also prevent @@ -8012,8 +8012,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-22 [eca2b9b] Andrew..: Rework echo_hidden for \sf and \ef from commit .. --> - Make psql's \sf and \ef - commands honor ECHO_HIDDEN (Andrew Dunstan) + Make psql's \sf and \ef + commands honor ECHO_HIDDEN (Andrew Dunstan) @@ -8022,8 +8022,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-08-12 [e15c4ab] Fujii ..: Add tab-completion for \unset and valid setting.. --> - Improve psql tab completion for \set, - \unset, and :variable names (Pavel + Improve psql tab completion for \set, + \unset, and :variable names (Pavel Stehule) @@ -8034,7 +8034,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow tab completion of role names - in psql \c commands (Ian Barwick) + in psql \c commands (Ian Barwick) @@ -8054,15 +8054,15 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-17 [be1cc8f] Simon ..: Add pg_dump ––snapshot option --> - Allow pg_dump to share a snapshot taken by another - session using (Simon Riggs, Michael Paquier) The remote snapshot must have been exported by - pg_export_snapshot() or logical replication slot + pg_export_snapshot() or logical replication slot creation. This can be used to share a consistent snapshot - across multiple pg_dump processes. + across multiple pg_dump processes. @@ -8087,13 +8087,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-07 [7700597] Tom Lane: In pg_dump, show server and pg_dump versions w.. --> - Make pg_dump always print the server and - pg_dump versions (Jing Wang) + Make pg_dump always print the server and + pg_dump versions (Jing Wang) Previously, version information was only printed in - mode. @@ -8102,9 +8102,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-06-04 [232cd63] Fujii ..: Remove -i/-ignore-version option from pg_dump.. 
--> - Remove the long-ignored @@ -8122,7 +8122,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-08-25 [ebe30ad] Bruce ..: pg_ctl, pg_upgrade: allow multiple -o/-O opti.. --> - Support multiple pg_ctl options, concatenating their values (Bruce Momjian) @@ -8132,13 +8132,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-17 [c0e4520] Magnus..: Add option to pg_ctl to choose event source for.. --> - Allow control of pg_ctl's event source logging - on MS Windows (MauMau) + Allow control of pg_ctl's event source logging + on MS Windows (MauMau) - This only controls pg_ctl, not the server, which - has separate settings in postgresql.conf. + This only controls pg_ctl, not the server, which + has separate settings in postgresql.conf. @@ -8148,14 +8148,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> If the server's listen address is set to a wildcard value - (0.0.0.0 in IPv4 or :: in IPv6), connect via + (0.0.0.0 in IPv4 or :: in IPv6), connect via the loopback address rather than trying to use the wildcard address literally (Kondo Yuta) This fix primarily affects Windows, since on other platforms - pg_ctl will prefer to use a Unix-domain socket. + pg_ctl will prefer to use a Unix-domain socket. @@ -8173,13 +8173,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-14 [9fa8b0e] Peter ..: Move pg_upgrade from contrib/ to src/bin/ --> - Move pg_upgrade from contrib to - src/bin (Peter Eisentraut) + Move pg_upgrade from contrib to + src/bin (Peter Eisentraut) In connection with this change, the functionality previously - provided by the pg_upgrade_support module has been + provided by the pg_upgrade_support module has been moved into the core server. @@ -8189,8 +8189,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-08-25 [ebe30ad] Bruce ..: pg_ctl, pg_upgrade: allow multiple -o/-O optio.. --> - Support multiple pg_upgrade - / options, concatenating their values (Bruce Momjian) @@ -8201,7 +8201,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Improve database collation comparisons in - pg_upgrade (Heikki Linnakangas) + pg_upgrade (Heikki Linnakangas) @@ -8228,7 +8228,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-13 [81134af] Peter ..: Move pgbench from contrib/ to src/bin/ --> - Move pgbench from contrib to src/bin + Move pgbench from contrib to src/bin (Peter Eisentraut) @@ -8239,7 +8239,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Fix calculation of TPS number excluding connections - establishing (Tatsuo Ishii, Fabien Coelho) + establishing (Tatsuo Ishii, Fabien Coelho) @@ -8261,7 +8261,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - This is controlled by a new option. @@ -8271,7 +8271,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow pgbench to generate Gaussian/exponential distributions - using \setrandom (Kondo Mitsumasa, Fabien Coelho) + using \setrandom (Kondo Mitsumasa, Fabien Coelho) @@ -8280,9 +8280,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2015-03-02 [878fdcb] Robert..: pgbench: Add a real expression syntax to \set --> - Allow pgbench's \set command to handle + Allow pgbench's \set command to handle arithmetic expressions containing more than one operator, and add - % (modulo) to the set of operators it supports + % (modulo) to the set of operators it supports (Robert Haas, Fabien Coelho) @@ -8303,7 +8303,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-20 [2c03216] Heikki..: Revamp the WAL record format. --> - Simplify WAL record format + Simplify WAL record format (Heikki Linnakangas) @@ -8328,7 +8328,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-25 [b64d92f] Andres..: Add a basic atomic ops API abstracting away pla.. --> - Add atomic memory operations API (Andres Freund) + Add atomic memory operations API (Andres Freund) @@ -8366,13 +8366,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Foreign tables can now take part in INSERT ... ON CONFLICT - DO NOTHING queries (Peter Geoghegan, Heikki Linnakangas, + DO NOTHING queries (Peter Geoghegan, Heikki Linnakangas, Andres Freund) Foreign data wrappers must be modified to handle this. - INSERT ... ON CONFLICT DO UPDATE is not supported on + INSERT ... ON CONFLICT DO UPDATE is not supported on foreign tables. @@ -8382,7 +8382,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-18 [4a14f13] Tom Lane: Improve hash_create's API for selecting simple-.. --> - Improve hash_create()'s API for selecting + Improve hash_create()'s API for selecting simple-binary-key hash functions (Teodor Sigaev, Tom Lane) @@ -8403,8 +8403,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-06-28 [a6d488c] Andres..: Remove Alpha and Tru64 support. --> - Remove Alpha (CPU) and Tru64 (OS) ports (Andres Freund) + Remove Alpha (CPU) and Tru64 (OS) ports (Andres Freund) @@ -8414,11 +8414,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Remove swap-byte-based spinlock implementation for - ARMv5 and earlier CPUs (Robert Haas) + ARMv5 and earlier CPUs (Robert Haas) - ARMv5's weak memory ordering made this locking + ARMv5's weak memory ordering made this locking implementation unsafe. Spinlock support is still possible on newer gcc implementations with atomics support. @@ -8444,10 +8444,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Change index operator class for columns pg_seclabel.provider + linkend="catalog-pg-seclabel">pg_seclabel.provider and pg_shseclabel.provider - to be text_pattern_ops (Tom Lane) + linkend="catalog-pg-shseclabel">pg_shseclabel.provider + to be text_pattern_ops (Tom Lane) @@ -8480,8 +8480,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow higher-precision time stamp resolution on Windows 8, Windows - Server 2012, and later Windows systems (Craig Ringer) + class="osname">Windows 8, Windows + Server 2012, and later Windows systems (Craig Ringer) @@ -8490,8 +8490,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-18 [f9dead5] Alvaro..: Install shared libraries to bin/ in Windows un.. --> - Install shared libraries to bin in MS Windows (Peter Eisentraut, Michael Paquier) + Install shared libraries to bin in MS Windows (Peter Eisentraut, Michael Paquier) @@ -8500,8 +8500,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2015-04-16 [22d0053] Alvaro..: MSVC: install src/test/modules together with c.. --> - Install src/test/modules together with - contrib on MSVC builds (Michael + Install src/test/modules together with + contrib on MSVC builds (Michael Paquier) @@ -8511,9 +8511,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-12 [8d9a0e8] Magnus..: Support ––with-extra-version equivalent functi.. --> - Allow configure's - @@ -8522,7 +8522,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-14 [91f03ba] Noah M..: MSVC: Recognize PGFILEDESC in contrib and conv.. --> - Pass PGFILEDESC into MSVC contrib builds + Pass PGFILEDESC into MSVC contrib builds (Michael Paquier) @@ -8532,8 +8532,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-14 [c4a448e] Noah M..: MSVC: Apply icons to all binaries having them .. --> - Add icons to all MSVC-built binaries and version - information to all MS Windows + Add icons to all MSVC-built binaries and version + information to all MS Windows binaries (Noah Misch) @@ -8548,12 +8548,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add optional-argument support to the internal - getopt_long() implementation (Michael Paquier, + getopt_long() implementation (Michael Paquier, Andres Freund) - This is used by the MSVC build. + This is used by the MSVC build. @@ -8575,7 +8575,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Add statistics for minimum, maximum, mean, and standard deviation times to pg_stat_statements + linkend="pgstatstatements-columns">pg_stat_statements (Mitsumasa Kondo, Andrew Dunstan) @@ -8585,8 +8585,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-10-01 [32984d8] Heikki..: Add functions for dealing with PGP armor heade.. --> - Add pgcrypto function - pgp_armor_headers() to extract PGP + Add pgcrypto function + pgp_armor_headers() to extract PGP armor headers (Marko Tiikkaja, Heikki Linnakangas) @@ -8597,7 +8597,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow empty replacement strings in unaccent (Mohammad Alhashash) + linkend="unaccent">unaccent (Mohammad Alhashash) @@ -8612,7 +8612,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow multicharacter source strings in unaccent (Tom Lane) + linkend="unaccent">unaccent (Tom Lane) @@ -8628,9 +8628,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-15 [149f6f1] Simon ..: TABLESAMPLE system_time(limit) --> - Add contrib modules tsm_system_rows and - tsm_system_time + Add contrib modules tsm_system_rows and + tsm_system_time to allow additional table sampling methods (Petr Jelínek) @@ -8640,9 +8640,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-21 [3a82bc6] Heikki..: Add pageinspect functions for inspecting GIN in.. --> - Add GIN + Add GIN index inspection functions to pageinspect (Heikki + linkend="pageinspect">pageinspect (Heikki Linnakangas, Peter Geoghegan, Michael Paquier) @@ -8653,7 +8653,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add information about buffer pins to pg_buffercache display + linkend="pgbuffercache">pg_buffercache display (Andres Freund) @@ -8663,9 +8663,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2015-05-13 [5850b20] Andres..: Add pgstattuple_approx() to the pgstattuple ext.. --> - Allow pgstattuple + Allow pgstattuple to report approximate answers with less overhead using - pgstattuple_approx() (Abhijit Menon-Sen) + pgstattuple_approx() (Abhijit Menon-Sen) @@ -8675,15 +8675,15 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-01 [df761e3] Alvaro..: Move security_label test --> - Move dummy_seclabel, test_shm_mq, - test_parser, and worker_spi - from contrib to src/test/modules + Move dummy_seclabel, test_shm_mq, + test_parser, and worker_spi + from contrib to src/test/modules (Álvaro Herrera) These modules are only meant for server testing, so they do not need - to be built or installed when packaging PostgreSQL. + to be built or installed when packaging PostgreSQL. diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index 09b6b90254..a89b1b5879 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -46,20 +46,20 @@ Branch: REL9_2_STABLE [98e6784aa] 2017-08-15 19:33:04 -0400 --> Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -98,7 +98,7 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. @@ -114,14 +114,14 @@ Branch: REL9_2_STABLE [8ae41ceae] 2017-08-14 15:43:20 -0400 --> Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -156,7 +156,7 @@ Branch: REL9_2_STABLE [4e704aac1] 2017-08-09 17:03:10 -0400 - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. 
Previously, @@ -189,7 +189,7 @@ Branch: REL9_4_STABLE [59dde9fed] 2017-08-19 13:39:38 -0400 Branch: REL9_3_STABLE [ece4bd901] 2017-08-19 13:39:38 -0400 --> - Fix crash in pg_restore when using parallel mode and + Fix crash in pg_restore when using parallel mode and using a list file to select a subset of items to restore (Fabrízio de Royes Mello) @@ -206,13 +206,13 @@ Branch: REL9_3_STABLE [f8bc6b2f6] 2017-08-16 13:30:09 +0200 Branch: REL9_2_STABLE [60b135c82] 2017-08-16 13:30:20 +0200 --> - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -225,7 +225,7 @@ Branch: REL_10_STABLE [a6b174f55] 2017-08-16 13:27:21 +0200 Branch: REL9_6_STABLE [954490fec] 2017-08-16 13:28:10 +0200 --> - Change ecpg's parser to recognize backslash + Change ecpg's parser to recognize backslash continuation of C preprocessor command lines (Michael Meskes) @@ -253,12 +253,12 @@ Branch: REL9_2_STABLE [f7e4783dd] 2017-08-17 13:15:46 -0400 This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. + assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. @@ -270,7 +270,7 @@ Branch: REL9_6_STABLE [fc2aafe4a] 2017-08-09 12:06:08 -0400 Branch: REL9_5_STABLE [a784d5f21] 2017-08-09 12:06:14 -0400 --> - Fix make check to behave correctly when invoked via a + Fix make check to behave correctly when invoked via a non-GNU make program (Thomas Munro) @@ -329,7 +329,7 @@ Branch: REL9_2_STABLE [e255e97a2] 2017-08-07 07:09:32 -0700 --> Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -337,11 +337,11 @@ Branch: REL9_2_STABLE [e255e97a2] 2017-08-07 07:09:32 -0700 The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -356,15 +356,15 @@ Branch: REL9_2_STABLE [e255e97a2] 2017-08-07 07:09:32 -0700 Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) 
- In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -395,15 +395,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. - In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -417,7 +417,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -440,16 +440,16 @@ Branch: REL9_2_STABLE [06651648a] 2017-08-07 17:04:17 +0300 - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. - However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -464,13 +464,13 @@ Branch: REL9_5_STABLE [873741c68] 2017-08-07 10:19:21 -0400 Branch: REL9_4_STABLE [f1cda6d6c] 2017-08-07 10:19:22 -0400 --> - Make lo_put() check for UPDATE privilege on + Make lo_put() check for UPDATE privilege on the target large object (Tom Lane, Michael Paquier) - lo_put() should surely require the same permissions - as lowrite(), but the check was missing, allowing any + lo_put() should surely require the same permissions + as lowrite(), but the check was missing, allowing any user to change the data in a large object. (CVE-2017-7548) @@ -485,12 +485,12 @@ Branch: REL9_5_STABLE [fd376afc9] 2017-06-15 12:30:02 -0400 --> Correct the documentation about the process for upgrading standby - servers with pg_upgrade (Bruce Momjian) + servers with pg_upgrade (Bruce Momjian) The previous documentation instructed users to start/stop the primary - server after running pg_upgrade but before syncing + server after running pg_upgrade but before syncing the standby servers. This sequence is unsafe. 
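The lo_put() permission fix noted a little earlier in this group is easy to verify by hand. A minimal sketch, assuming a scratch database; the large object OID and the app_reader role are hypothetical:

-- Hypothetical OID and role; illustrates the permission parity described above:
-- writing via lo_put() now requires UPDATE privilege on the large object,
-- just like lowrite().
SELECT lo_create(42);
GRANT SELECT ON LARGE OBJECT 42 TO app_reader;   -- read-only access
-- Executed as app_reader, this is now rejected for lack of UPDATE privilege:
--   SELECT lo_put(42, 0, '\x6869'::bytea);
GRANT UPDATE ON LARGE OBJECT 42 TO app_reader;   -- required for lo_put()/lowrite()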
@@ -697,7 +697,7 @@ Branch: REL9_2_STABLE [81bf7b5b1] 2017-06-21 14:13:58 -0700 --> Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) @@ -711,7 +711,7 @@ Branch: REL9_5_STABLE [446914f6b] 2017-06-30 12:00:03 -0400 Branch: REL9_4_STABLE [5aa8db014] 2017-06-30 12:00:03 -0400 --> - Fix walsender to exit promptly when client requests + Fix walsender to exit promptly when client requests shutdown (Tom Lane) @@ -731,7 +731,7 @@ Branch: REL9_3_STABLE [45d067d50] 2017-06-05 19:18:16 -0700 Branch: REL9_2_STABLE [133b1920c] 2017-06-05 19:18:16 -0700 --> - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) @@ -761,7 +761,7 @@ Branch: REL9_3_STABLE [cb59949f6] 2017-06-26 17:31:56 -0400 Branch: REL9_2_STABLE [e96adaacd] 2017-06-26 17:31:56 -0400 --> - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) @@ -880,7 +880,7 @@ Branch: REL9_3_STABLE [aea1a3f0e] 2017-07-12 18:00:04 -0400 Branch: REL9_2_STABLE [75670ec37] 2017-07-12 18:00:04 -0400 --> - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -896,7 +896,7 @@ Branch: REL9_4_STABLE [dc777f9db] 2017-06-27 17:51:11 -0400 Branch: REL9_3_STABLE [66dee28b4] 2017-06-27 17:51:11 -0400 --> - Allow window functions to be used in sub-SELECTs that + Allow window functions to be used in sub-SELECTs that are within the arguments of an aggregate function (Tom Lane) @@ -908,7 +908,7 @@ Branch: master [7086be6e3] 2017-07-24 15:57:24 -0400 Branch: REL9_6_STABLE [971faefc2] 2017-07-24 16:24:42 -0400 --> - Ensure that a view's CHECK OPTIONS clause is enforced + Ensure that a view's CHECK OPTIONS clause is enforced properly when the underlying table is a foreign table (Etsuro Fujita) @@ -930,12 +930,12 @@ Branch: REL9_2_STABLE [da9165686] 2017-05-26 15:16:59 -0400 --> Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. @@ -948,7 +948,7 @@ Branch: REL9_6_STABLE [b35cce914] 2017-05-15 11:33:44 -0400 Branch: REL9_5_STABLE [53a1aa9f9] 2017-05-15 11:33:45 -0400 --> - Fix dangling pointer in ALTER TABLE when there is a + Fix dangling pointer in ALTER TABLE when there is a comment on a constraint belonging to the table (David Rowley) @@ -969,8 +969,8 @@ Branch: REL9_3_STABLE [b7d1bc820] 2017-08-03 21:29:36 -0400 Branch: REL9_2_STABLE [22eb38caa] 2017-08-03 21:42:46 -0400 --> - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... 
SET does (Peter Eisentraut) @@ -981,18 +981,18 @@ Branch: master [86705aa8c] 2017-08-03 13:24:48 -0400 Branch: REL9_6_STABLE [1f220c390] 2017-08-03 13:25:32 -0400 --> - Allow a foreign table's CHECK constraints to be - initially NOT VALID (Amit Langote) + Allow a foreign table's CHECK constraints to be + initially NOT VALID (Amit Langote) - CREATE TABLE silently drops NOT VALID - specifiers for CHECK constraints, reasoning that the + CREATE TABLE silently drops NOT VALID + specifiers for CHECK constraints, reasoning that the table must be empty so the constraint can be validated immediately. - But this is wrong for CREATE FOREIGN TABLE, where there's + But this is wrong for CREATE FOREIGN TABLE, where there's no reason to suppose that the underlying table is empty, and even if it is it's no business of ours to decide that the constraint can be - treated as valid going forward. Skip this optimization for + treated as valid going forward. Skip this optimization for foreign tables. @@ -1009,14 +1009,14 @@ Branch: REL9_2_STABLE [ac93a78b0] 2017-06-16 11:46:26 +0300 --> Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. @@ -1028,7 +1028,7 @@ Branch: master [34aebcf42] 2017-06-02 19:11:15 -0700 Branch: REL9_6_STABLE [8a7cd781e] 2017-06-02 19:11:23 -0700 --> - Allow parallelism in the query plan when COPY copies from + Allow parallelism in the query plan when COPY copies from a query's result (Andres Freund) @@ -1044,8 +1044,8 @@ Branch: REL9_3_STABLE [11854dee0] 2017-07-12 22:04:08 +0300 Branch: REL9_2_STABLE [40ba61b44] 2017-07-12 22:04:15 +0300 --> - Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) @@ -1061,7 +1061,7 @@ Branch: REL9_2_STABLE [798d2321e] 2017-05-21 13:05:17 -0400 --> Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) @@ -1077,7 +1077,7 @@ Branch: REL9_2_STABLE [a047270d5] 2017-05-24 15:28:35 -0400 --> Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -1103,13 +1103,13 @@ Branch: REL9_3_STABLE [0d8f015e7] 2017-07-31 12:38:35 -0400 Branch: REL9_2_STABLE [456c7dff2] 2017-07-31 12:38:35 -0400 --> - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. 
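The COPY-from-query planner change listed earlier in this group (parallelism when COPY copies from a query's result) can be exercised with a single statement. A sketch, with a hypothetical table name:

-- Hypothetical table; the query feeding COPY may now run under a parallel plan.
COPY (SELECT status, count(*) FROM big_orders GROUP BY status)
    TO STDOUT WITH (FORMAT csv);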
@@ -1124,7 +1124,7 @@ Branch: REL9_4_STABLE [1fe1fc449] 2017-06-07 14:04:49 +0300 Branch: REL9_3_STABLE [f2fa0c651] 2017-06-07 14:04:44 +0300 --> - In libpq, reset GSS/SASL and SSPI authentication + In libpq, reset GSS/SASL and SSPI authentication state properly after a failed connection attempt (Michael Paquier) @@ -1146,9 +1146,9 @@ Branch: REL9_3_STABLE [6bc710f6d] 2017-05-17 12:24:19 -0400 Branch: REL9_2_STABLE [07477130e] 2017-05-17 12:24:19 -0400 --> - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -1167,8 +1167,8 @@ Branch: REL9_4_STABLE [b93217653] 2017-08-03 17:36:43 -0400 Branch: REL9_3_STABLE [035bb8222] 2017-08-03 17:36:23 -0400 --> - Fix pg_dump and pg_restore to - emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) + Fix pg_dump and pg_restore to + emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) @@ -1190,8 +1190,8 @@ Branch: REL9_5_STABLE [12f1e523a] 2017-08-03 14:55:17 -0400 Branch: REL9_4_STABLE [69ad12b58] 2017-08-03 14:55:17 -0400 --> - Improve pg_dump/pg_restore's - reporting of error conditions originating in zlib + Improve pg_dump/pg_restore's + reporting of error conditions originating in zlib (Vladimir Kunschikov, Álvaro Herrera) @@ -1206,7 +1206,7 @@ Branch: REL9_4_STABLE [502ead3d6] 2017-07-22 20:20:10 -0400 Branch: REL9_3_STABLE [68a22bc69] 2017-07-22 20:20:10 -0400 --> - Fix pg_dump with the option to drop event triggers as expected (Tom Lane) @@ -1224,8 +1224,8 @@ Branch: master [4500edc7e] 2017-06-28 10:33:57 -0400 Branch: REL9_6_STABLE [a2de017b3] 2017-06-28 10:34:01 -0400 --> - Fix pg_dump with the @@ -1240,7 +1240,7 @@ Branch: REL9_3_STABLE [a561254e4] 2017-05-26 12:51:05 -0400 Branch: REL9_2_STABLE [f62e1eff5] 2017-05-26 12:51:06 -0400 --> - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) @@ -1256,7 +1256,7 @@ Branch: REL9_3_STABLE [2943c04f7] 2017-06-19 11:03:16 -0400 Branch: REL9_2_STABLE [c10cbf77a] 2017-06-19 11:03:21 -0400 --> - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -1276,14 +1276,14 @@ Branch: REL9_3_STABLE [b6d640047] 2017-07-24 15:16:31 -0400 Branch: REL9_2_STABLE [d9874fde8] 2017-07-24 15:16:31 -0400 --> - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
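The renamed-view-column case just described can be reproduced in a few statements. A sketch with hypothetical object names; pg_get_viewdef() deparses the same ON SELECT rule that pg_dump relies on:

-- Hypothetical objects; after renaming a view output column, the deparsed
-- ON SELECT rule must emit the new column name.
CREATE TABLE src (id int, note text);
CREATE VIEW v AS SELECT id, note FROM src;
ALTER TABLE v RENAME COLUMN note TO label;   -- ALTER TABLE column rename also applies to views
SELECT pg_get_viewdef('v'::regclass, true);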
@@ -1299,7 +1299,7 @@ Branch: REL9_3_STABLE [e947838ae] 2017-07-20 11:29:36 -0400 --> Fix dumping of outer joins with empty constraints, such as the result - of a NATURAL LEFT JOIN with no common columns (Tom Lane) + of a NATURAL LEFT JOIN with no common columns (Tom Lane) @@ -1314,7 +1314,7 @@ Branch: REL9_3_STABLE [0ecc407d9] 2017-07-13 19:24:44 -0400 Branch: REL9_2_STABLE [bccfb1776] 2017-07-13 19:24:44 -0400 --> - Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -1331,7 +1331,7 @@ Branch: REL9_3_STABLE [f3633689f] 2017-07-14 16:03:23 +0300 Branch: REL9_2_STABLE [4b994a96c] 2017-07-14 16:03:27 +0300 --> - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -1349,12 +1349,12 @@ Branch: REL9_6_STABLE [73fbf3d3d] 2017-07-21 22:04:55 -0400 Branch: REL9_5_STABLE [ed367be64] 2017-07-21 22:05:07 -0400 --> - Fix pg_rewind to correctly handle files exceeding 2GB + Fix pg_rewind to correctly handle files exceeding 2GB (Kuntal Ghosh, Michael Paquier) - Ordinarily such files won't appear in PostgreSQL data + Ordinarily such files won't appear in PostgreSQL data directories, but they could be present in some cases. @@ -1370,8 +1370,8 @@ Branch: REL9_3_STABLE [5c890645d] 2017-06-20 13:20:02 -0400 Branch: REL9_2_STABLE [65beccae5] 2017-06-20 13:20:02 -0400 --> - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -1389,7 +1389,7 @@ Branch: REL9_6_STABLE [d3ca4b4b4] 2017-06-05 16:10:07 -0700 Branch: REL9_5_STABLE [25653c171] 2017-06-05 16:10:07 -0700 --> - Fix pg_xlogdump's computation of WAL record length + Fix pg_xlogdump's computation of WAL record length (Andres Freund) @@ -1409,9 +1409,9 @@ Branch: REL9_4_STABLE [a648fc70a] 2017-07-21 14:20:43 -0400 Branch: REL9_3_STABLE [6d9de660d] 2017-07-21 14:20:43 -0400 --> - In postgres_fdw, re-establish connections to remote - servers after ALTER SERVER or ALTER USER - MAPPING commands (Kyotaro Horiguchi) + In postgres_fdw, re-establish connections to remote + servers after ALTER SERVER or ALTER USER + MAPPING commands (Kyotaro Horiguchi) @@ -1430,7 +1430,7 @@ Branch: REL9_4_STABLE [c02c450cf] 2017-06-07 15:40:35 -0400 Branch: REL9_3_STABLE [fc267a0c3] 2017-06-07 15:41:05 -0400 --> - In postgres_fdw, allow cancellation of remote + In postgres_fdw, allow cancellation of remote transaction control commands (Robert Haas, Rafia Sabih) @@ -1449,7 +1449,7 @@ Branch: REL9_5_STABLE [6f2fe2468] 2017-05-11 14:51:38 -0400 Branch: REL9_4_STABLE [5c633f76b] 2017-05-11 14:51:46 -0400 --> - Increase MAX_SYSCACHE_CALLBACKS to provide more room for + Increase MAX_SYSCACHE_CALLBACKS to provide more room for extensions (Tom Lane) @@ -1465,7 +1465,7 @@ Branch: REL9_3_STABLE [cee7238de] 2017-06-01 13:32:56 -0400 Branch: REL9_2_STABLE [a378b9bc2] 2017-06-01 13:32:56 -0400 --> - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -1492,8 +1492,8 @@ Branch: REL9_3_STABLE [da30fa603] 2017-06-05 20:40:47 -0400 Branch: REL9_2_STABLE [f964a7c5a] 2017-06-05 20:41:01 -0400 --> - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) @@ -1508,13 
+1508,13 @@ Branch: REL9_3_STABLE [2c7d2114b] 2017-05-12 10:24:16 -0400 Branch: REL9_2_STABLE [614f83c12] 2017-05-12 10:24:36 -0400 --> - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. @@ -1530,7 +1530,7 @@ Branch: REL9_2_STABLE [4885e5c88] 2017-07-23 23:53:55 -0700 --> In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) @@ -1551,8 +1551,8 @@ Branch: REL9_5_STABLE [7eb4124da] 2017-07-16 11:27:07 -0400 Branch: REL9_4_STABLE [9c3f502b4] 2017-07-16 11:27:15 -0400 --> - In MSVC builds, honor PROVE_FLAGS settings - on vcregress.pl's command line (Andrew Dunstan) + In MSVC builds, honor PROVE_FLAGS settings + on vcregress.pl's command line (Andrew Dunstan) @@ -1589,7 +1589,7 @@ Branch: REL9_4_STABLE [9c3f502b4] 2017-07-16 11:27:15 -0400 Also, if you are using third-party replication tools that depend - on logical decoding, see the fourth changelog entry below. + on logical decoding, see the fourth changelog entry below. @@ -1615,18 +1615,18 @@ Branch: REL9_2_STABLE [99cbb0bd9] 2017-05-08 07:24:28 -0700 --> Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. @@ -1665,7 +1665,7 @@ Branch: REL9_3_STABLE [703da1795] 2017-05-08 11:19:08 -0400 Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. To fix, @@ -1687,17 +1687,17 @@ Branch: REL9_4_STABLE [ed36c1fe1] 2017-05-08 07:24:27 -0700 Branch: REL9_3_STABLE [3eab81127] 2017-05-08 07:24:28 -0700 --> - Restore libpq's recognition of - the PGREQUIRESSL environment variable (Daniel Gustafsson) + Restore libpq's recognition of + the PGREQUIRESSL environment variable (Daniel Gustafsson) Processing of this environment variable was unintentionally dropped - in PostgreSQL 9.3, but its documentation remained. + in PostgreSQL 9.3, but its documentation remained. This creates a security hazard, since users might be relying on the environment variable to force SSL-encrypted connections, but that would no longer be guaranteed. Restore handling of the variable, - but give it lower priority than PGSSLMODE, to avoid + but give it lower priority than PGSSLMODE, to avoid breaking configurations that work correctly with post-9.3 code. 
(CVE-2017-7485) @@ -1748,7 +1748,7 @@ Branch: REL9_3_STABLE [6bd7816e7] 2017-03-14 12:08:14 -0400 Branch: REL9_2_STABLE [b2ae1d6c4] 2017-03-14 12:10:36 -0400 --> - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -1770,7 +1770,7 @@ Branch: REL9_3_STABLE [856580873] 2017-04-23 13:10:57 -0400 Branch: REL9_2_STABLE [952e33b05] 2017-04-23 13:10:58 -0400 --> - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -1778,7 +1778,7 @@ Branch: REL9_2_STABLE [952e33b05] 2017-04-23 13:10:58 -0400 In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. @@ -1792,7 +1792,7 @@ Branch: REL9_5_STABLE [feb659cce] 2017-02-22 08:29:44 +0900 Branch: REL9_4_STABLE [a3eb715a3] 2017-02-22 08:29:57 +0900 --> - Avoid possible crash in walsender due to failure + Avoid possible crash in walsender due to failure to initialize a string buffer (Stas Kelvich, Fujii Masao) @@ -1840,7 +1840,7 @@ Branch: REL9_5_STABLE [dba1f310a] 2017-04-24 12:16:58 -0400 Branch: REL9_4_STABLE [436b560b8] 2017-04-24 12:16:58 -0400 --> - Fix postmaster's handling of fork() failure for a + Fix postmaster's handling of fork() failure for a background worker process (Tom Lane) @@ -1858,7 +1858,7 @@ Branch: master [89deca582] 2017-04-07 12:18:38 -0400 Branch: REL9_6_STABLE [c0a493e17] 2017-04-07 12:18:38 -0400 --> - Fix possible no relation entry for relid 0 error when + Fix possible no relation entry for relid 0 error when planning nested set operations (Tom Lane) @@ -1886,7 +1886,7 @@ Branch: REL9_6_STABLE [6c73b390b] 2017-04-17 15:29:00 -0400 Branch: REL9_5_STABLE [6f0f98bb0] 2017-04-17 15:29:00 -0400 --> - Avoid applying physical targetlist optimization to custom + Avoid applying physical targetlist optimization to custom scans (Dmitry Ivanov, Tom Lane) @@ -1905,13 +1905,13 @@ Branch: REL9_6_STABLE [92b15224b] 2017-05-06 21:46:41 -0400 Branch: REL9_5_STABLE [d617c7629] 2017-05-06 21:46:56 -0400 --> - Use the correct sub-expression when applying a FOR ALL + Use the correct sub-expression when applying a FOR ALL row-level-security policy (Stephen Frost) - In some cases the WITH CHECK restriction would be applied - when the USING restriction is more appropriate. + In some cases the WITH CHECK restriction would be applied + when the USING restriction is more appropriate. @@ -1934,7 +1934,7 @@ Branch: REL9_2_STABLE [c9d6c564f] 2017-05-02 18:05:54 -0400 Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. @@ -1950,12 +1950,12 @@ Branch: REL9_2_STABLE [27a8c8033] 2017-02-12 16:05:23 -0500 --> Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. 
That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -1972,13 +1972,13 @@ Branch: REL9_3_STABLE [954744f7a] 2017-04-28 14:53:56 -0400 Branch: REL9_2_STABLE [f60f0c8fe] 2017-04-28 14:55:42 -0400 --> - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. @@ -1991,7 +1991,7 @@ Branch: REL9_6_STABLE [943140d57] 2017-03-06 16:50:47 -0500 Branch: REL9_5_STABLE [420d9ec0a] 2017-03-06 16:50:47 -0500 --> - Avoid dangling pointer in COPY ... TO when row-level + Avoid dangling pointer in COPY ... TO when row-level security is active for the source table (Tom Lane) @@ -2009,8 +2009,8 @@ Branch: REL9_6_STABLE [68f7b91e5] 2017-03-04 16:09:33 -0500 Branch: REL9_5_STABLE [807df31d1] 2017-03-04 16:09:33 -0500 --> - Avoid accessing an already-closed relcache entry in CLUSTER - and VACUUM FULL (Tom Lane) + Avoid accessing an already-closed relcache entry in CLUSTER + and VACUUM FULL (Tom Lane) @@ -2032,14 +2032,14 @@ Branch: master [64ae420b2] 2017-03-17 14:35:54 +0000 Branch: REL9_6_STABLE [733488dc6] 2017-03-17 14:46:15 +0000 --> - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. @@ -2067,13 +2067,13 @@ Branch: master [d5286aa90] 2017-03-21 16:23:10 +0300 Branch: REL9_6_STABLE [a4d07d2e9] 2017-03-21 16:24:10 +0300 --> - Fix incorrect support for certain box operators in SP-GiST + Fix incorrect support for certain box operators in SP-GiST (Nikita Glukhov) - SP-GiST index scans using the operators &< - &> &<| and |&> + SP-GiST index scans using the operators &< + &> &<| and |&> would yield incorrect answers. @@ -2087,12 +2087,12 @@ Branch: REL9_5_STABLE [d68a2b20a] 2017-04-05 23:51:28 -0400 Branch: REL9_4_STABLE [8851bcf88] 2017-04-05 23:51:28 -0400 --> - Fix integer-overflow problems in interval comparison (Kyotaro + Fix integer-overflow problems in interval comparison (Kyotaro Horiguchi, Tom Lane) - The comparison operators for type interval could yield wrong + The comparison operators for type interval could yield wrong answers for intervals larger than about 296000 years. Indexes on columns containing such large values should be reindexed, since they may be corrupt. @@ -2110,13 +2110,13 @@ Branch: REL9_3_STABLE [6e86b448f] 2017-05-04 21:31:12 -0400 Branch: REL9_2_STABLE [a48d47908] 2017-05-04 22:39:23 -0400 --> - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. 
@@ -2134,8 +2134,8 @@ Branch: REL9_5_STABLE [cf73c6bfc] 2017-02-09 15:49:57 -0500 Branch: REL9_4_STABLE [86ef376bb] 2017-02-09 15:49:58 -0500 --> - Fix roundoff problems in float8_timestamptz() - and make_interval() (Tom Lane) + Fix roundoff problems in float8_timestamptz() + and make_interval() (Tom Lane) @@ -2155,7 +2155,7 @@ Branch: REL9_6_STABLE [1ec36a9eb] 2017-04-16 20:49:40 -0400 Branch: REL9_5_STABLE [b6e6ae1dc] 2017-04-16 20:50:31 -0400 --> - Fix pg_get_object_address() to handle members of operator + Fix pg_get_object_address() to handle members of operator families correctly (Álvaro Herrera) @@ -2167,12 +2167,12 @@ Branch: master [78874531b] 2017-03-24 13:53:40 +0300 Branch: REL9_6_STABLE [8de6278d3] 2017-03-24 13:55:02 +0300 --> - Fix cancelling of pg_stop_backup() when attempting to stop + Fix cancelling of pg_stop_backup() when attempting to stop a non-exclusive backup (Michael Paquier, David Steele) - If pg_stop_backup() was cancelled while waiting for a + If pg_stop_backup() was cancelled while waiting for a non-exclusive backup to end, related state was left inconsistent; a new exclusive backup could not be started, and there were other minor problems. @@ -2196,7 +2196,7 @@ Branch: REL9_3_STABLE [07987304d] 2017-05-07 11:35:05 -0400 Branch: REL9_2_STABLE [9061680f0] 2017-05-07 11:35:11 -0400 --> - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) @@ -2226,13 +2226,13 @@ Branch: REL9_3_STABLE [3f613c6a4] 2017-02-21 17:51:28 -0500 Branch: REL9_2_STABLE [775227590] 2017-02-21 17:51:28 -0500 --> - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. @@ -2273,8 +2273,8 @@ Branch: REL9_3_STABLE [04207ef76] 2017-03-13 20:52:05 +0100 Branch: REL9_2_STABLE [d8c207437] 2017-03-13 20:52:16 +0100 --> - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) @@ -2290,7 +2290,7 @@ Branch: REL9_2_STABLE [731afc91f] 2017-03-10 10:52:01 +0100 --> Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) @@ -2300,8 +2300,8 @@ Author: Teodor Sigaev Branch: REL9_6_STABLE [2ed391f95] 2017-03-24 19:23:13 +0300 --> - Fix pgbench to handle the combination - of and options correctly (Fabien Coelho) @@ -2313,8 +2313,8 @@ Branch: master [ef2662394] 2017-03-07 11:36:42 -0500 Branch: REL9_6_STABLE [0e2c85d13] 2017-03-07 11:36:35 -0500 --> - Fix pgbench to honor the long-form option - spelling , as per its documentation (Tom Lane) @@ -2325,15 +2325,15 @@ Branch: master [330b84d8c] 2017-03-06 23:29:02 -0500 Branch: REL9_6_STABLE [e961341cc] 2017-03-06 23:29:08 -0500 --> - Fix pg_dump/pg_restore to correctly - handle privileges for the public schema when - using option (Stephen Frost) Other schemas start out with no privileges granted, - but public does not; this requires special-case treatment - when it is dropped and restored due to the option. 
@@ -2348,7 +2348,7 @@ Branch: REL9_3_STABLE [783acfd4d] 2017-03-06 19:33:59 -0500 Branch: REL9_2_STABLE [0ab75448e] 2017-03-06 19:33:59 -0500 --> - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -2368,12 +2368,12 @@ Branch: master [39370e6a0] 2017-02-17 15:06:28 -0500 Branch: REL9_6_STABLE [4e8b2fd33] 2017-02-17 15:06:34 -0500 --> - Fix typo in pg_dump's query for initial privileges + Fix typo in pg_dump's query for initial privileges of a procedural language (Peter Eisentraut) - This resulted in pg_dump always believing that the + This resulted in pg_dump always believing that the language had no initial privileges. Since that's true for most procedural languages, ill effects from this bug are probably rare. @@ -2390,13 +2390,13 @@ Branch: REL9_3_STABLE [0c0a95c2f] 2017-03-10 14:15:09 -0500 Branch: REL9_2_STABLE [e6d2ba419] 2017-03-10 14:15:09 -0500 --> - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. @@ -2411,8 +2411,8 @@ Branch: REL9_3_STABLE [7f831f09b] 2017-03-06 17:04:29 -0500 Branch: REL9_2_STABLE [e864cd25b] 2017-03-06 17:04:55 -0500 --> - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -2433,13 +2433,13 @@ Branch: REL9_2_STABLE [0276da5eb] 2017-03-12 19:36:28 -0400 --> Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). @@ -2454,7 +2454,7 @@ Branch: REL9_3_STABLE [f6cfc14e5] 2017-03-11 13:33:22 -0800 Branch: REL9_2_STABLE [c4613c3f4] 2017-03-11 13:33:30 -0800 --> - In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) @@ -2479,7 +2479,7 @@ Branch: REL9_4_STABLE [b179684c7] 2017-04-13 17:18:35 -0400 Branch: REL9_3_STABLE [5be58cc89] 2017-04-13 17:18:35 -0400 --> - Fix contrib/pg_trgm's extraction of trigrams from regular + Fix contrib/pg_trgm's extraction of trigrams from regular expressions (Tom Lane) @@ -2497,7 +2497,7 @@ Branch: master [332bec1e6] 2017-04-24 22:50:07 -0400 Branch: REL9_6_STABLE [86e640a69] 2017-04-26 09:14:21 -0400 --> - In contrib/postgres_fdw, allow join conditions that + In contrib/postgres_fdw, allow join conditions that contain shippable extension-provided functions to be pushed to the remote server (David Rowley, Ashutosh Bapat) @@ -2555,7 +2555,7 @@ Branch: REL9_3_STABLE [dc93cafca] 2017-05-01 11:54:02 -0400 Branch: REL9_2_STABLE [c96ccc40e] 2017-05-01 11:54:08 -0400 --> - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -2569,9 +2569,9 @@ Branch: REL9_2_STABLE [c96ccc40e] 2017-05-01 11:54:08 -0400 or no currency among the local population. 
They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -2593,15 +2593,15 @@ Branch: REL9_2_STABLE [82e7d3dfd] 2017-05-07 11:57:41 -0400 The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -2663,15 +2663,15 @@ Branch: REL9_2_STABLE [bcd7b47c2] 2017-02-06 13:20:25 -0500 --> Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. @@ -2695,7 +2695,7 @@ Branch: REL9_4_STABLE [3e844a34b] 2016-11-15 15:55:36 -0500 Backends failed to account for this snapshot when advertising their oldest xmin, potentially allowing concurrent vacuuming operations to remove data that was still needed. This led to transient failures - along the lines of cache lookup failed for relation 1255. + along the lines of cache lookup failed for relation 1255. @@ -2711,7 +2711,7 @@ Branch: REL9_5_STABLE [ed8e8b814] 2017-01-09 18:19:29 -0300 - The WAL record emitted for a BRIN revmap page when moving an + The WAL record emitted for a BRIN revmap page when moving an index tuple to a different page was incorrect. Replay would make the related portion of the index useless, forcing it to be recomputed. @@ -2728,13 +2728,13 @@ Branch: REL9_3_STABLE [8e403f215] 2016-12-08 14:16:47 -0500 Branch: REL9_2_STABLE [a00ac6299] 2016-12-08 14:19:25 -0500 --> - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. 
@@ -2816,7 +2816,7 @@ Branch: master [93eb619cd] 2016-12-17 02:22:15 +0900 Branch: REL9_6_STABLE [6c75fb6b3] 2016-12-17 02:25:47 +0900 --> - Disallow setting the num_sync field to zero in + Disallow setting the num_sync field to zero in (Fujii Masao) @@ -2867,7 +2867,7 @@ Branch: REL9_6_STABLE [20064c0ec] 2017-01-29 23:05:09 -0500 --> Fix tracking of initial privileges for extension member objects so - that it works correctly with ALTER EXTENSION ... ADD/DROP + that it works correctly with ALTER EXTENSION ... ADD/DROP (Stephen Frost) @@ -2875,7 +2875,7 @@ Branch: REL9_6_STABLE [20064c0ec] 2017-01-29 23:05:09 -0500 An object's current privileges at the time it is added to the extension will now be considered its default privileges; only later changes in its privileges will be dumped by - subsequent pg_dump runs. + subsequent pg_dump runs. @@ -2890,7 +2890,7 @@ Branch: REL9_3_STABLE [8f67a6c22] 2016-11-23 13:45:56 -0500 Branch: REL9_2_STABLE [05975ab0a] 2016-11-23 13:45:56 -0500 --> - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -2912,7 +2912,7 @@ Branch: REL9_4_STABLE [3a9a8c408] 2016-10-26 17:05:06 -0400 Fix incorrect updating of trigger function properties when changing a foreign-key constraint's deferrability properties with ALTER - TABLE ... ALTER CONSTRAINT (Tom Lane) + TABLE ... ALTER CONSTRAINT (Tom Lane) @@ -2937,8 +2937,8 @@ Branch: REL9_2_STABLE [6a363a4c2] 2016-11-25 13:44:48 -0500 - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. @@ -2950,15 +2950,15 @@ Branch: REL9_6_STABLE [4e563a1f6] 2017-01-09 19:26:58 -0300 Branch: REL9_5_STABLE [4d4ab6ccd] 2017-01-09 19:26:58 -0300 --> - Fix ALTER TABLE ... SET DATA TYPE ... USING when child + Fix ALTER TABLE ... SET DATA TYPE ... USING when child table has different column ordering than the parent (Álvaro Herrera) - Failure to adjust the column numbering in the USING + Failure to adjust the column numbering in the USING expression led to errors, - typically attribute N has wrong type. + typically attribute N has wrong type. @@ -2974,7 +2974,7 @@ Branch: REL9_2_STABLE [6c4cf2be8] 2017-01-04 18:00:12 -0500 --> Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... INHERIT (Amit Langote) @@ -2992,8 +2992,8 @@ Branch: master [1ead0208b] 2016-12-22 16:23:38 -0500 Branch: REL9_6_STABLE [68330c8b4] 2016-12-22 16:23:34 -0500 --> - Ensure that CREATE TABLE ... LIKE ... WITH OIDS creates - a table with OIDs, whether or not the LIKE-referenced + Ensure that CREATE TABLE ... LIKE ... 
WITH OIDS creates + a table with OIDs, whether or not the LIKE-referenced table(s) have OIDs (Tom Lane) @@ -3007,7 +3007,7 @@ Branch: REL9_5_STABLE [78a98b767] 2016-12-21 17:02:47 +0000 Branch: REL9_4_STABLE [cad24980e] 2016-12-21 17:03:54 +0000 --> - Fix CREATE OR REPLACE VIEW to update the view query + Fix CREATE OR REPLACE VIEW to update the view query before attempting to apply the new view options (Dean Rasheed) @@ -3028,7 +3028,7 @@ Branch: REL9_3_STABLE [0e3aadb68] 2016-12-22 17:09:00 -0500 --> Report correct object identity during ALTER TEXT SEARCH - CONFIGURATION (Artur Zakirov) + CONFIGURATION (Artur Zakirov) @@ -3046,8 +3046,8 @@ Branch: REL9_5_STABLE [7816d1356] 2016-11-24 15:39:55 -0300 --> Fix commit timestamp mechanism to not fail when queried about - the special XIDs FrozenTransactionId - and BootstrapTransactionId (Craig Ringer) + the special XIDs FrozenTransactionId + and BootstrapTransactionId (Craig Ringer) @@ -3068,8 +3068,8 @@ Branch: REL9_5_STABLE [6e00ba1e1] 2016-11-10 15:00:58 -0500 The symptom was spurious ON CONFLICT is not supported on table - ... used as a catalog table errors when the target - of INSERT ... ON CONFLICT is a view with cascade option. + ... used as a catalog table errors when the target + of INSERT ... ON CONFLICT is a view with cascade option. @@ -3081,8 +3081,8 @@ Branch: REL9_6_STABLE [da05d0ebc] 2016-12-04 15:02:46 -0500 Branch: REL9_5_STABLE [25c06a1ed] 2016-12-04 15:02:48 -0500 --> - Fix incorrect target lists can have at most N - entries complaint when using ON CONFLICT with + Fix incorrect target lists can have at most N + entries complaint when using ON CONFLICT with wide tables (Tom Lane) @@ -3094,8 +3094,8 @@ Branch: master [da8f3ebf3] 2016-11-02 14:32:13 -0400 Branch: REL9_6_STABLE [f4d865f22] 2016-11-02 14:32:13 -0400 --> - Fix spurious query provides a value for a dropped column - errors during INSERT or UPDATE on a table + Fix spurious query provides a value for a dropped column + errors during INSERT or UPDATE on a table with a dropped column (Tom Lane) @@ -3110,13 +3110,13 @@ Branch: REL9_4_STABLE [44c8b4fcd] 2016-11-20 14:26:19 -0500 Branch: REL9_3_STABLE [71db302ec] 2016-11-20 14:26:19 -0500 --> - Prevent multicolumn expansion of foo.* in - an UPDATE source expression (Tom Lane) + Prevent multicolumn expansion of foo.* in + an UPDATE source expression (Tom Lane) This led to UPDATE target count mismatch --- internal - error. Now the syntax is understood as a whole-row variable, + error. Now the syntax is understood as a whole-row variable, as it would be in other contexts. @@ -3133,12 +3133,12 @@ Branch: REL9_2_STABLE [082d1fb9e] 2016-12-09 12:01:14 -0500 --> Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -3162,8 +3162,8 @@ Branch: REL9_2_STABLE [6e2c21ec5] 2016-12-21 17:39:33 -0500 Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). 
@@ -3174,7 +3174,7 @@ Branch: master [db80acfc9] 2016-12-20 09:20:17 +0200 Branch: REL9_6_STABLE [ce92fc4e2] 2016-12-20 09:20:30 +0200 --> - Fix execution of DISTINCT and ordered aggregates when + Fix execution of DISTINCT and ordered aggregates when multiple such aggregates are able to share the same transition state (Heikki Linnakangas) @@ -3189,7 +3189,7 @@ Branch: master [260443847] 2016-12-19 13:49:50 -0500 Branch: REL9_6_STABLE [3f07eff10] 2016-12-19 13:49:45 -0500 --> - Fix implementation of phrase search operators in tsquery + Fix implementation of phrase search operators in tsquery (Tom Lane) @@ -3218,7 +3218,7 @@ Branch: REL9_2_STABLE [fe6120f9b] 2017-01-26 12:17:47 -0500 --> Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -3238,7 +3238,7 @@ Branch: REL9_3_STABLE [79e1a9efa] 2016-12-11 13:09:57 -0500 Branch: REL9_2_STABLE [f4ccee408] 2016-12-11 13:09:57 -0500 --> - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) @@ -3254,7 +3254,7 @@ Branch: REL9_3_STABLE [407d513df] 2016-10-30 17:35:43 -0400 Branch: REL9_2_STABLE [606e16a7f] 2016-10-30 17:35:43 -0400 --> - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) @@ -3269,7 +3269,7 @@ Branch: REL9_3_STABLE [77a22f898] 2016-10-30 15:24:40 -0400 Branch: REL9_2_STABLE [b0f8a273e] 2016-10-30 15:24:40 -0400 --> - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) @@ -3283,7 +3283,7 @@ Branch: REL9_5_STABLE [7151e72d7] 2016-10-30 12:27:41 -0400 --> Improve speed of user-defined aggregates that - use array_append() as transition function (Tom Lane) + use array_append() as transition function (Tom Lane) @@ -3298,7 +3298,7 @@ Branch: REL9_3_STABLE [ee9cb284a] 2017-01-05 11:33:51 -0500 Branch: REL9_2_STABLE [e0d59c6ef] 2017-01-05 11:33:51 -0500 --> - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) @@ -3310,8 +3310,8 @@ Branch: REL9_6_STABLE [79c89f1f4] 2016-12-09 12:42:17 -0300 Branch: REL9_5_STABLE [581b09c72] 2016-12-09 12:42:17 -0300 --> - Fix possible crash in array_position() - or array_positions() when processing arrays of records + Fix possible crash in array_position() + or array_positions() when processing arrays of records (Junseok Yang) @@ -3327,7 +3327,7 @@ Branch: REL9_3_STABLE [e71fe8470] 2016-12-16 12:53:22 +0200 Branch: REL9_2_STABLE [c8f8ed5c2] 2016-12-16 12:53:27 +0200 --> - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -3348,8 +3348,8 @@ Branch: REL9_3_STABLE [f64b11fa0] 2017-01-17 17:32:20 +0900 Branch: REL9_2_STABLE [c73157ca0] 2017-01-17 17:32:45 +0900 --> - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -3368,7 +3368,7 @@ Branch: REL9_5_STABLE [74e67bbad] 2017-01-18 15:21:52 -0500 --> Disable transform that attempted to remove no-op AT TIME - ZONE conversions (Tom Lane) + ZONE conversions (Tom Lane) @@ -3388,15 +3388,15 @@ Branch: REL9_3_STABLE [583599839] 2016-12-27 15:43:54 -0500 Branch: REL9_2_STABLE [beae7d5f0] 2016-12-27 15:43:55 -0500 --> - Avoid discarding 
interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. @@ -3432,7 +3432,7 @@ Branch: master [4212cb732] 2016-12-06 11:11:54 -0500 Branch: REL9_6_STABLE [ebe5dc9e0] 2016-12-06 11:43:12 -0500 --> - Allow statements prepared with PREPARE to be given + Allow statements prepared with PREPARE to be given parallel plans (Amit Kapila, Tobias Bussmann) @@ -3501,7 +3501,7 @@ Branch: REL9_6_STABLE [7defc3b97] 2016-11-10 11:31:56 -0500 --> Fix the plan generated for sorted partial aggregation with a constant - GROUP BY clause (Tom Lane) + GROUP BY clause (Tom Lane) @@ -3512,8 +3512,8 @@ Branch: master [1f542a2ea] 2016-12-13 13:20:37 -0500 Branch: REL9_6_STABLE [997a2994e] 2016-12-13 13:20:16 -0500 --> - Fix could not find plan for CTE planner error when dealing - with a UNION ALL containing CTE references (Tom Lane) + Fix could not find plan for CTE planner error when dealing + with a UNION ALL containing CTE references (Tom Lane) @@ -3530,7 +3530,7 @@ Branch: REL9_6_STABLE [b971a98ce] 2017-02-02 19:11:27 -0500 The typical consequence of this mistake was a plan should not - reference subplan's variable error. + reference subplan's variable error. @@ -3561,7 +3561,7 @@ Branch: master [bec96c82f] 2017-01-19 12:06:21 -0500 Branch: REL9_6_STABLE [fd081cabf] 2017-01-19 12:06:27 -0500 --> - Fix pg_dump to emit the data of a sequence that is + Fix pg_dump to emit the data of a sequence that is marked as an extension configuration table (Michael Paquier) @@ -3573,14 +3573,14 @@ Branch: master [e2090d9d2] 2017-01-31 16:24:11 -0500 Branch: REL9_6_STABLE [eb5e9d90d] 2017-01-31 16:24:14 -0500 --> - Fix mishandling of ALTER DEFAULT PRIVILEGES ... REVOKE - in pg_dump (Stephen Frost) + Fix mishandling of ALTER DEFAULT PRIVILEGES ... REVOKE + in pg_dump (Stephen Frost) - pg_dump missed issuing the - required REVOKE commands in cases where ALTER - DEFAULT PRIVILEGES had been used to reduce privileges to less than + pg_dump missed issuing the + required REVOKE commands in cases where ALTER + DEFAULT PRIVILEGES had been used to reduce privileges to less than they would normally be. @@ -3602,7 +3602,7 @@ Branch: REL9_3_STABLE [fc03f7dd1] 2016-12-21 13:47:28 -0500 Branch: REL9_2_STABLE [59a389891] 2016-12-21 13:47:32 -0500 --> - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) @@ -3616,15 +3616,15 @@ Branch: REL9_5_STABLE [a7864037d] 2016-11-17 14:59:23 -0500 Branch: REL9_4_STABLE [e69b532be] 2016-11-17 14:59:26 -0500 --> - Fix pg_restore with to behave more sanely if an archive contains - unrecognized DROP commands (Tom Lane) + unrecognized DROP commands (Tom Lane) This doesn't fix any live bug, but it may improve the behavior in - future if pg_restore is used with an archive - generated by a later pg_dump version. + future if pg_restore is used with an archive + generated by a later pg_dump version. 
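To make the interval-cast entry above concrete: with the fix, a cast to a field-restricted interval type is no longer thrown away as a no-op, so the disallowed low-order fields really are cleared. A rough sketch; the commented result is the expected post-fix behavior, not a verified output:

    -- The YEAR restriction should now clear the leftover months
    -- instead of the cast being silently discarded:
    SELECT CAST(INTERVAL '14 months' AS INTERVAL YEAR);
    -- expected: '1 year' (formerly '1 year 2 mons' came back unchanged)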
@@ -3637,7 +3637,7 @@ Branch: REL9_5_STABLE [bc53d7130] 2016-12-19 10:16:02 +0100 Branch: REL9_4_STABLE [f6508827a] 2016-12-19 10:16:12 +0100 --> - Fix pg_basebackup's rate limiting in the presence of + Fix pg_basebackup's rate limiting in the presence of slow I/O (Antonin Houska) @@ -3656,8 +3656,8 @@ Branch: REL9_5_STABLE [6d779e05a] 2016-11-07 15:03:56 +0100 Branch: REL9_4_STABLE [5556420d4] 2016-11-07 15:04:23 +0100 --> - Fix pg_basebackup's handling of - symlinked pg_stat_tmp and pg_replslot + Fix pg_basebackup's handling of + symlinked pg_stat_tmp and pg_replslot subdirectories (Magnus Hagander, Michael Paquier) @@ -3673,7 +3673,7 @@ Branch: REL9_3_STABLE [92929a3e3] 2016-10-27 12:00:05 -0400 Branch: REL9_2_STABLE [629575fa2] 2016-10-27 12:14:07 -0400 --> - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -3685,10 +3685,10 @@ Branch: master [dbdfd114f] 2016-11-25 18:36:10 -0500 Branch: REL9_6_STABLE [255bcd27f] 2016-11-25 18:36:10 -0500 --> - Improve initdb to insert the correct + Improve initdb to insert the correct platform-specific default values for - the xxx_flush_after parameters - into postgresql.conf (Fabien Coelho, Tom Lane) + the xxx_flush_after parameters + into postgresql.conf (Fabien Coelho, Tom Lane) @@ -3706,7 +3706,7 @@ Branch: REL9_5_STABLE [c472f2a33] 2016-12-22 15:01:39 -0500 --> Fix possible mishandling of expanded arrays in domain check - constraints and CASE execution (Tom Lane) + constraints and CASE execution (Tom Lane) @@ -3762,14 +3762,14 @@ Branch: REL9_3_STABLE [9c0b04f18] 2016-11-06 14:43:14 -0500 Branch: REL9_2_STABLE [92b7b1058] 2016-11-06 14:43:14 -0500 --> - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. @@ -3785,7 +3785,7 @@ Branch: REL9_3_STABLE [46b6f3fff] 2016-11-15 16:17:19 -0500 Branch: REL9_2_STABLE [13aa9af37] 2016-11-15 16:17:19 -0500 --> - Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -3806,7 +3806,7 @@ Branch: REL9_3_STABLE [1df8b3fe8] 2016-12-22 08:32:25 +0100 Branch: REL9_2_STABLE [501c91074] 2016-12-22 08:34:07 +0100 --> - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) @@ -3819,11 +3819,11 @@ Branch: REL9_6_STABLE [6a8c67f50] 2016-12-25 16:04:47 -0500 --> Fix incorrect error reporting for duplicate data - in psql's \crosstabview (Tom Lane) + in psql's \crosstabview (Tom Lane) - psql sometimes quoted the wrong row and/or column + psql sometimes quoted the wrong row and/or column values when complaining about multiple entries for the same crosstab cell. 
@@ -3840,8 +3840,8 @@ Branch: REL9_3_STABLE [2022d594d] 2016-12-23 21:01:48 -0500 Branch: REL9_2_STABLE [26b55d669] 2016-12-23 21:01:51 -0500 --> - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) @@ -3852,8 +3852,8 @@ Branch: master [404e66758] 2016-11-28 11:51:30 -0500 Branch: REL9_6_STABLE [28735cc72] 2016-11-28 11:51:35 -0500 --> - Fix psql's tab completion for ALTER TABLE t - ALTER c DROP ... (Kyotaro Horiguchi) + Fix psql's tab completion for ALTER TABLE t + ALTER c DROP ... (Kyotaro Horiguchi) @@ -3868,9 +3868,9 @@ Branch: REL9_3_STABLE [82eb5c514] 2016-12-07 12:19:56 -0500 Branch: REL9_2_STABLE [1ec5cc025] 2016-12-07 12:19:57 -0500 --> - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -3890,8 +3890,8 @@ Branch: REL9_3_STABLE [9b8507bfa] 2016-12-22 09:47:25 -0800 Branch: REL9_2_STABLE [44de099f8] 2016-12-22 09:46:46 -0800 --> - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) @@ -3906,14 +3906,14 @@ Branch: REL9_4_STABLE [cb687e0ac] 2016-12-22 09:19:08 -0800 Branch: REL9_3_STABLE [bd46cce21] 2016-12-22 09:18:50 -0800 --> - Teach contrib/dblink to ignore irrelevant server options - when it uses a contrib/postgres_fdw foreign server as + Teach contrib/dblink to ignore irrelevant server options + when it uses a contrib/postgres_fdw foreign server as the source of connection options (Corey Huinker) Previously, if the foreign server object had options that were not - also libpq connection options, an error occurred. + also libpq connection options, an error occurred. @@ -3927,7 +3927,7 @@ Branch: REL9_6_STABLE [2a8783e44] 2016-11-02 00:09:28 -0400 Branch: REL9_5_STABLE [af636d7b5] 2016-11-02 00:09:28 -0400 --> - Fix portability problems in contrib/pageinspect's + Fix portability problems in contrib/pageinspect's functions for GIN indexes (Peter Eisentraut, Tom Lane) @@ -4016,7 +4016,7 @@ Branch: REL9_3_STABLE [2b133be04] 2017-01-30 11:41:02 -0500 Branch: REL9_2_STABLE [ef878cc2c] 2017-01-30 11:41:09 -0500 --> - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -4083,7 +4083,7 @@ Branch: REL9_3_STABLE [1c02ee314] 2016-10-19 15:00:34 +0300 crash recovery, or to be written incorrectly on a standby server. Bogus entries in a free space map could lead to attempts to access pages that have been truncated away from the relation itself, typically - producing errors like could not read block XXX: + producing errors like could not read block XXX: read only 0 of 8192 bytes. Checksum failures in the visibility map are also possible, if checksumming is enabled. @@ -4091,7 +4091,7 @@ Branch: REL9_3_STABLE [1c02ee314] 2016-10-19 15:00:34 +0300 Procedures for determining whether there is a problem and repairing it if so are discussed at - . + . 
@@ -4102,7 +4102,7 @@ Branch: master [5afcd2aa7] 2016-09-30 20:40:55 -0400 Branch: REL9_6_STABLE [b6d906073] 2016-09-30 20:39:06 -0400 --> - Fix possible data corruption when pg_upgrade rewrites + Fix possible data corruption when pg_upgrade rewrites a relation visibility map into 9.6 format (Tom Lane) @@ -4112,20 +4112,20 @@ Branch: REL9_6_STABLE [b6d906073] 2016-09-30 20:39:06 -0400 Windows, the old map was read using text mode, leading to incorrect results if the map happened to contain consecutive bytes that matched a carriage return/line feed sequence. The latter error would almost - always lead to a pg_upgrade failure due to the map + always lead to a pg_upgrade failure due to the map file appearing to be the wrong length. If you are using a big-endian machine (many non-Intel architectures - are big-endian) and have used pg_upgrade to upgrade + are big-endian) and have used pg_upgrade to upgrade from a pre-9.6 release, you should assume that all visibility maps are incorrect and need to be regenerated. It is sufficient to truncate each relation's visibility map - with contrib/pg_visibility's - pg_truncate_visibility_map() function. + with contrib/pg_visibility's + pg_truncate_visibility_map() function. For more information see - . + . @@ -4138,7 +4138,7 @@ Branch: REL9_5_STABLE [65d85b8f9] 2016-10-23 18:36:13 -0400 --> Don't throw serialization errors for self-conflicting insertions - in INSERT ... ON CONFLICT (Thomas Munro, Peter Geoghegan) + in INSERT ... ON CONFLICT (Thomas Munro, Peter Geoghegan) @@ -4150,7 +4150,7 @@ Branch: REL9_6_STABLE [a5f0bd77a] 2016-10-17 12:13:35 +0300 --> Fix use-after-free hazard in execution of aggregate functions - using DISTINCT (Peter Geoghegan) + using DISTINCT (Peter Geoghegan) @@ -4185,7 +4185,7 @@ Branch: REL9_6_STABLE [190765a05] 2016-10-03 16:23:02 -0400 Branch: REL9_5_STABLE [647a86e37] 2016-10-03 16:23:12 -0400 --> - Fix COPY with a column name list from a table that has + Fix COPY with a column name list from a table that has row-level security enabled (Adam Brightwell) @@ -4201,14 +4201,14 @@ Branch: REL9_3_STABLE [edb514306] 2016-10-20 17:18:09 -0400 Branch: REL9_2_STABLE [f17c26dbd] 2016-10-20 17:18:14 -0400 --> - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. @@ -4220,7 +4220,7 @@ Branch: REL9_6_STABLE [03f2bf70a] 2016-10-13 19:46:06 -0400 Branch: REL9_5_STABLE [3cd504254] 2016-10-13 19:45:58 -0400 --> - Fix statistics update for TRUNCATE in a prepared + Fix statistics update for TRUNCATE in a prepared transaction (Stas Kelvich) @@ -4242,16 +4242,16 @@ Branch: REL9_3_STABLE [f0bf0f233] 2016-10-13 17:05:15 -0400 Branch: REL9_2_STABLE [6f2db29ec] 2016-10-13 17:05:15 -0400 --> - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. 
+ INHERIT child constraint with an inherited constraint. @@ -4264,8 +4264,8 @@ Branch: REL9_5_STABLE [f50fa46cc] 2016-10-03 16:40:27 -0400 --> Show a sensible value - in pg_settings.unit - for min_wal_size and max_wal_size (Tom Lane) + in pg_settings.unit + for min_wal_size and max_wal_size (Tom Lane) @@ -4276,7 +4276,7 @@ Branch: master [9c4cc9e2c] 2016-10-13 00:25:48 -0400 Branch: REL9_6_STABLE [0e9e64c07] 2016-10-13 00:25:28 -0400 --> - Fix replacement of array elements in jsonb_set() + Fix replacement of array elements in jsonb_set() (Tom Lane) @@ -4364,7 +4364,7 @@ Branch: REL9_4_STABLE [6d3cbbf59] 2016-10-13 15:07:11 -0400 - This avoids possible failures during munmap() on systems + This avoids possible failures during munmap() on systems with atypical default huge page sizes. Except in crash-recovery cases, there were no ill effects other than a log message. @@ -4390,7 +4390,7 @@ Branch: REL9_1_STABLE [e84e4761f] 2016-10-07 12:53:51 +0300 --> Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -4411,12 +4411,12 @@ Branch: REL9_2_STABLE [7397f62e7] 2016-10-10 10:35:58 -0400 Branch: REL9_1_STABLE [fb6825fe5] 2016-10-10 10:35:58 -0400 --> - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. + during PQreset(), but there might be related cases. @@ -4428,7 +4428,7 @@ Branch: REL9_6_STABLE [bac56dbe0] 2016-10-03 10:07:39 -0400 Branch: REL9_5_STABLE [0f259bd17] 2016-10-03 10:07:39 -0400 --> - In pg_upgrade, check library loadability in name order + In pg_upgrade, check library loadability in name order (Tom Lane) @@ -4446,13 +4446,13 @@ Branch: master [e8bdee277] 2016-10-02 14:31:28 -0400 Branch: REL9_6_STABLE [f40334b85] 2016-10-02 14:31:28 -0400 --> - Fix pg_upgrade to work correctly for extensions + Fix pg_upgrade to work correctly for extensions containing index access methods (Tom Lane) To allow this, the server has been extended to support ALTER - EXTENSION ADD/DROP ACCESS METHOD. That functionality should have + EXTENSION ADD/DROP ACCESS METHOD. That functionality should have been included in the original patch to support dynamic creation of access methods, but it was overlooked. 
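A brief sketch of the command mentioned above, with purely hypothetical extension and access method names (my_ext, my_am); it simply records an existing access method as a member of an extension so that dump and upgrade tooling can track it:

    -- Attach an access method to an extension:
    ALTER EXTENSION my_ext ADD ACCESS METHOD my_am;
    -- ...and detach it again:
    ALTER EXTENSION my_ext DROP ACCESS METHOD my_am;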
@@ -4465,7 +4465,7 @@ Branch: master [f002ed2b8] 2016-09-30 20:40:56 -0400 Branch: REL9_6_STABLE [53fbeed40] 2016-09-30 20:40:27 -0400 --> - Improve error reporting in pg_upgrade's file + Improve error reporting in pg_upgrade's file copying/linking/rewriting steps (Tom Lane, Álvaro Herrera) @@ -4477,7 +4477,7 @@ Branch: master [4806f26f9] 2016-10-07 09:51:18 -0400 Branch: REL9_6_STABLE [1749332ec] 2016-10-07 09:51:28 -0400 --> - Fix pg_dump to work against pre-7.4 servers + Fix pg_dump to work against pre-7.4 servers (Amit Langote, Tom Lane) @@ -4490,8 +4490,8 @@ Branch: REL9_6_STABLE [2933ed036] 2016-10-07 14:35:41 +0300 Branch: REL9_5_STABLE [010a1b561] 2016-10-07 14:35:45 +0300 --> - Disallow specifying both @@ -4504,12 +4504,12 @@ Branch: REL9_6_STABLE [aab809664] 2016-10-06 13:34:38 +0300 Branch: REL9_5_STABLE [69da71254] 2016-10-06 13:34:32 +0300 --> - Make pg_rewind turn off synchronous_commit + Make pg_rewind turn off synchronous_commit in its session on the source server (Michael Banck, Michael Paquier) - This allows pg_rewind to work even when the source + This allows pg_rewind to work even when the source server is using synchronous replication that is not working for some reason. @@ -4525,8 +4525,8 @@ Branch: REL9_4_STABLE [da3f71a08] 2016-09-30 11:22:49 +0200 Branch: REL9_3_STABLE [4bff35cca] 2016-09-30 11:23:25 +0200 --> - In pg_xlogdump, retry opening new WAL segments when - using option (Magnus Hagander) @@ -4542,7 +4542,7 @@ Branch: master [9a109452d] 2016-10-01 16:32:54 -0400 Branch: REL9_6_STABLE [f4e787c82] 2016-10-01 16:32:55 -0400 --> - Fix contrib/pg_visibility to report the correct TID for + Fix contrib/pg_visibility to report the correct TID for a corrupt tuple that has been the subject of a rolled-back update (Tom Lane) @@ -4556,7 +4556,7 @@ Branch: REL9_6_STABLE [68fb75e10] 2016-10-01 13:35:20 -0400 --> Fix makefile dependencies so that parallel make - of PL/Python by itself will succeed reliably + of PL/Python by itself will succeed reliably (Pavel Raiskup) @@ -4594,7 +4594,7 @@ Branch: REL9_2_STABLE [a03339aef] 2016-10-19 17:57:01 -0400 Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 --> - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -4607,15 +4607,15 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. - In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. 
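A small illustration of the AMT change described above, assuming the stock 'Default' abbreviation set is selected via the timezone_abbreviations setting (the date literal is arbitrary):

    SET timezone_abbreviations = 'Default';
    -- 'AMT' is now read as Amazon Time (UTC-4) rather than Armenia Time (UTC+4):
    SELECT timestamptz '2016-11-01 12:00 AMT';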
@@ -4637,7 +4637,7 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 Overview - Major enhancements in PostgreSQL 9.6 include: + Major enhancements in PostgreSQL 9.6 include: @@ -4671,15 +4671,15 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 - postgres_fdw now supports remote joins, sorts, - UPDATEs, and DELETEs + postgres_fdw now supports remote joins, sorts, + UPDATEs, and DELETEs Substantial performance improvements, especially in the area of - scalability on multi-CPU-socket servers + scalability on multi-CPU-socket servers @@ -4714,7 +4714,7 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 --> Improve the pg_stat_activity + linkend="pg-stat-activity-view">pg_stat_activity view's information about what a process is waiting for (Amit Kapila, Ildus Kurbangaliev) @@ -4722,10 +4722,10 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 Historically a process has only been shown as waiting if it was waiting for a heavyweight lock. Now waits for lightweight locks - and buffer pins are also shown in pg_stat_activity. + and buffer pins are also shown in pg_stat_activity. Also, the type of lock being waited for is now visible. - These changes replace the waiting column with - wait_event_type and wait_event. + These changes replace the waiting column with + wait_event_type and wait_event. @@ -4735,14 +4735,14 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 --> In to_char(), + linkend="functions-formatting-table">to_char(), do not count a minus sign (when needed) as part of the field width for time-related fields (Bruce Momjian) - For example, to_char('-4 years'::interval, 'YY') - now returns -04, rather than -4. + For example, to_char('-4 years'::interval, 'YY') + now returns -04, rather than -4. @@ -4752,18 +4752,18 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 --> Make extract() behave + linkend="functions-datetime-table">extract() behave more reasonably with infinite inputs (Vitaly Burovoy) - Historically the extract() function just returned + Historically the extract() function just returned zero given an infinite timestamp, regardless of the given field name. Make it return infinity or -infinity as appropriate when the requested field is one that is monotonically increasing (e.g, - year, epoch), or NULL when - it is not (e.g., day, hour). Also, + year, epoch), or NULL when + it is not (e.g., day, hour). Also, throw the expected error for bad field names. @@ -4774,9 +4774,9 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 This commit is also listed under libpq and psql --> - Remove PL/pgSQL's feature that suppressed the - innermost line of CONTEXT for messages emitted by - RAISE commands (Pavel Stehule) + Remove PL/pgSQL's feature that suppressed the + innermost line of CONTEXT for messages emitted by + RAISE commands (Pavel Stehule) @@ -4791,13 +4791,13 @@ This commit is also listed under libpq and psql --> Fix the default text search parser to allow leading digits - in email and host tokens (Artur Zakirov) + in email and host tokens (Artur Zakirov) In most cases this will result in few changes in the parsing of text. But if you have data where such addresses occur frequently, - it may be worth rebuilding dependent tsvector columns + it may be worth rebuilding dependent tsvector columns and indexes so that addresses of this form will be found properly by text searches. 
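One quick way to observe the parser change above is ts_debug(), which shows how the default configuration tokenizes a document; the address below is made up:

    -- With the updated default parser, an address with leading digits
    -- should come back as a single 'email' token rather than being split:
    SELECT alias, token FROM ts_debug('42robots@example.com');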
@@ -4809,8 +4809,8 @@ This commit is also listed under libpq and psql 2016-03-16 [9a206d063] Improve script generating unaccent rules --> - Extend contrib/unaccent's - standard unaccent.rules file to handle all diacritics + Extend contrib/unaccent's + standard unaccent.rules file to handle all diacritics known to Unicode, and to expand ligatures correctly (Thomas Munro, Léonard Benedetti) @@ -4819,7 +4819,7 @@ This commit is also listed under libpq and psql The previous version neglected to convert some less-common letters with diacritic marks. Also, ligatures are now expanded into separate letters. Installations that use this rules file may wish - to rebuild tsvector columns and indexes that depend on the + to rebuild tsvector columns and indexes that depend on the result. @@ -4830,15 +4830,15 @@ This commit is also listed under libpq and psql --> Remove the long-deprecated - CREATEUSER/NOCREATEUSER options from - CREATE ROLE and allied commands (Tom Lane) + CREATEUSER/NOCREATEUSER options from + CREATE ROLE and allied commands (Tom Lane) - CREATEUSER actually meant SUPERUSER, + CREATEUSER actually meant SUPERUSER, for ancient backwards-compatibility reasons. This has been a constant source of confusion for people who (reasonably) expect - it to mean CREATEROLE. It has been deprecated for + it to mean CREATEROLE. It has been deprecated for ten years now, so fix the problem by removing it. @@ -4850,13 +4850,13 @@ This commit is also listed under libpq and psql 2016-05-08 [7df974ee0] Disallow superuser names starting with 'pg_' in initdb --> - Treat role names beginning with pg_ as reserved + Treat role names beginning with pg_ as reserved (Stephen Frost) User creation of such role names is now disallowed. This prevents - conflicts with built-in roles created by initdb. + conflicts with built-in roles created by initdb. @@ -4866,16 +4866,16 @@ This commit is also listed under libpq and psql --> Change a column name in the - information_schema.routines - view from result_cast_character_set_name - to result_cast_char_set_name (Clément + information_schema.routines + view from result_cast_character_set_name + to result_cast_char_set_name (Clément Prévost) The SQL:2011 standard specifies the longer name, but that appears to be a mistake, because adjacent column names use the shorter - style, as do other information_schema views. + style, as do other information_schema views. @@ -4884,7 +4884,7 @@ This commit is also listed under libpq and psql 2015-12-08 [d5563d7df] psql: Support multiple -c and -f options, and allow mixi --> - psql's option no longer implies + psql's option no longer implies (Pavel Stehule, Catalin Iacob) @@ -4893,7 +4893,7 @@ This commit is also listed under libpq and psql Write (or its abbreviation ) explicitly to obtain the old behavior. Scripts so modified will still work with old - versions of psql. + versions of psql. 
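To see the reserved-prefix rule above in action (the role name is only an example):

    -- Now rejected: role names beginning with pg_ are reserved for
    -- built-in roles created by initdb.
    CREATE ROLE pg_reporting;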
@@ -4902,7 +4902,7 @@ This commit is also listed under libpq and psql 2015-07-02 [5671aaca8] Improve pg_restore's -t switch to match all types of rel --> - Improve pg_restore's option to + Improve pg_restore's option to match all types of relations, not only plain tables (Craig Ringer) @@ -4912,17 +4912,17 @@ This commit is also listed under libpq and psql 2016-02-12 [59a884e98] Change delimiter used for display of NextXID --> - Change the display format used for NextXID in - pg_controldata and related places (Joe Conway, + Change the display format used for NextXID in + pg_controldata and related places (Joe Conway, Bruce Momjian) Display epoch-and-transaction-ID values in the format - number:number. + number:number. The previous format - number/number was - confusingly similar to that used for LSNs. + number/number was + confusingly similar to that used for LSNs. @@ -4940,8 +4940,8 @@ and many others in the same vein Many of the standard extensions have been updated to allow their functions to be executed within parallel query worker processes. These changes will not take effect in - databases pg_upgrade'd from prior versions unless - you apply ALTER EXTENSION UPDATE to each such extension + databases pg_upgrade'd from prior versions unless + you apply ALTER EXTENSION UPDATE to each such extension (in each database of a cluster). @@ -5002,7 +5002,7 @@ and many others in the same vein - With 9.6, PostgreSQL introduces initial support + With 9.6, PostgreSQL introduces initial support for parallel execution of large queries. Only strictly read-only queries where the driving table is accessed via a sequential scan can be parallelized. Hash joins and nested loops can be performed @@ -5048,7 +5048,7 @@ and many others in the same vein 2015-09-02 [30bb26b5e] Allow usage of huge maintenance_work_mem for GIN build. --> - Allow GIN index builds to + Allow GIN index builds to make effective use of settings larger than 1 GB (Robert Abraham, Teodor Sigaev) @@ -5076,7 +5076,7 @@ and many others in the same vein --> Add gin_clean_pending_list() + linkend="functions-admin-index">gin_clean_pending_list() function to allow manual invocation of pending-list cleanup for a GIN index (Jeff Janes) @@ -5094,7 +5094,7 @@ and many others in the same vein --> Improve handling of dead index tuples in GiST indexes (Anastasia Lubennikova) + linkend="GiST">GiST indexes (Anastasia Lubennikova) @@ -5111,7 +5111,7 @@ and many others in the same vein --> Add an SP-GiST operator class for - type box (Alexander Lebedev) + type box (Alexander Lebedev) @@ -5137,7 +5137,7 @@ and many others in the same vein - The new approach makes better use of the CPU cache + The new approach makes better use of the CPU cache for typical cache sizes and data volumes. Where necessary, the behavior can be adjusted via the new configuration parameter replacement_sort_tuples. @@ -5162,17 +5162,17 @@ and many others in the same vein 2016-02-17 [f1f5ec1ef] Reuse abbreviated keys in ordered [set] aggregates. --> - Speed up sorting of uuid, bytea, and - char(n) fields by using abbreviated keys + Speed up sorting of uuid, bytea, and + char(n) fields by using abbreviated keys (Peter Geoghegan) Support for abbreviated keys has also been added to the non-default operator classes text_pattern_ops, - varchar_pattern_ops, and - bpchar_pattern_ops. Processing of ordered-set + linkend="indexes-opclass">text_pattern_ops, + varchar_pattern_ops, and + bpchar_pattern_ops. Processing of ordered-set aggregates can also now exploit abbreviated keys. 
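A minimal sketch of the manual pending-list cleanup mentioned above, assuming a hypothetical GIN index named docs_fts_idx:

    -- Flush the index's pending list into the main GIN structure;
    -- the result is the number of pending-list pages cleaned up.
    SELECT gin_clean_pending_list('docs_fts_idx'::regclass);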
@@ -5182,8 +5182,8 @@ and many others in the same vein 2015-12-16 [b648b7034] Speed up CREATE INDEX CONCURRENTLY's TID sort. --> - Speed up CREATE INDEX CONCURRENTLY by treating - TIDs as 64-bit integers during sorting (Peter + Speed up CREATE INDEX CONCURRENTLY by treating + TIDs as 64-bit integers during sorting (Peter Geoghegan) @@ -5203,7 +5203,7 @@ and many others in the same vein 2015-09-03 [4aec49899] Assorted code review for recent ProcArrayLock patch. --> - Reduce contention for the ProcArrayLock (Amit Kapila, + Reduce contention for the ProcArrayLock (Amit Kapila, Robert Haas) @@ -5234,7 +5234,7 @@ and many others in the same vein --> Use atomic operations, rather than a spinlock, to protect an - LWLock's wait queue (Andres Freund) + LWLock's wait queue (Andres Freund) @@ -5244,7 +5244,7 @@ and many others in the same vein --> Partition the shared hash table freelist to reduce contention on - multi-CPU-socket servers (Aleksander Alekseev) + multi-CPU-socket servers (Aleksander Alekseev) @@ -5280,14 +5280,14 @@ and many others in the same vein 2016-04-04 [391159e03] Partially revert commit 3d3bf62f30200500637b24fdb7b992a9 --> - Improve ANALYZE's estimates for columns with many nulls + Improve ANALYZE's estimates for columns with many nulls (Tomas Vondra, Alex Shulgin) - Previously ANALYZE tended to underestimate the number - of non-NULL distinct values in a column with many - NULLs, and was also inaccurate in computing the + Previously ANALYZE tended to underestimate the number + of non-NULL distinct values in a column with many + NULLs, and was also inaccurate in computing the most-common values. @@ -5314,13 +5314,13 @@ and many others in the same vein - If a table t has a foreign key restriction, say - (a,b) REFERENCES r (x,y), then a WHERE - condition such as t.a = r.x AND t.b = r.y cannot - select more than one r row per t row. - The planner formerly considered these AND conditions + If a table t has a foreign key restriction, say + (a,b) REFERENCES r (x,y), then a WHERE + condition such as t.a = r.x AND t.b = r.y cannot + select more than one r row per t row. + The planner formerly considered these AND conditions to be independent and would often drastically misestimate - selectivity as a result. Now it compares the WHERE + selectivity as a result. Now it compares the WHERE conditions to applicable foreign key constraints and produces better estimates. @@ -5331,7 +5331,7 @@ and many others in the same vein - <command>VACUUM</> + <command>VACUUM</command> @@ -5361,7 +5361,7 @@ and many others in the same vein If necessary, vacuum can be forced to process all-frozen - pages using the new DISABLE_PAGE_SKIPPING option. + pages using the new DISABLE_PAGE_SKIPPING option. Normally this should never be needed, but it might help in recovering from visibility-map corruption. @@ -5372,7 +5372,7 @@ and many others in the same vein 2015-12-30 [e84290823] Avoid useless truncation attempts during VACUUM. --> - Avoid useless heap-truncation attempts during VACUUM + Avoid useless heap-truncation attempts during VACUUM (Jeff Janes, Tom Lane) @@ -5401,19 +5401,19 @@ and many others in the same vein 2016-08-07 [9ee1cf04a] Fix TOAST access failure in RETURNING queries. --> - Allow old MVCC snapshots to be invalidated after a + Allow old MVCC snapshots to be invalidated after a configurable timeout (Kevin Grittner) Normally, deleted tuples cannot be physically removed by - vacuuming until the last transaction that could see + vacuuming until the last transaction that could see them is gone. 
A transaction that stays open for a long time can thus cause considerable table bloat because space cannot be recycled. This feature allows setting a time-based limit, via the new configuration parameter , on how long an - MVCC snapshot is guaranteed to be valid. After that, + MVCC snapshot is guaranteed to be valid. After that, dead tuples are candidates for removal. A transaction using an outdated snapshot will get an error if it attempts to read a page that potentially could have contained such data. @@ -5425,12 +5425,12 @@ and many others in the same vein 2016-02-11 [d4c3a156c] Remove GROUP BY columns that are functionally dependent --> - Ignore GROUP BY columns that are + Ignore GROUP BY columns that are functionally dependent on other columns (David Rowley) - If a GROUP BY clause includes all columns of a + If a GROUP BY clause includes all columns of a non-deferred primary key, as well as other columns of the same table, those other columns are redundant and can be dropped from the grouping. This saves computation in many common cases. @@ -5443,17 +5443,17 @@ and many others in the same vein --> Allow use of an index-only - scan on a partial index when the index's WHERE + scan on a partial index when the index's WHERE clause references columns that are not indexed (Tomas Vondra, Kyotaro Horiguchi) For example, an index defined by CREATE INDEX tidx_partial - ON t(b) WHERE a > 0 can now be used for an index-only scan by - a query that specifies WHERE a > 0 and does not - otherwise use a. Previously this was disallowed - because a is not listed as an index column. + ON t(b) WHERE a > 0 can now be used for an index-only scan by + a query that specifies WHERE a > 0 and does not + otherwise use a. Previously this was disallowed + because a is not listed as an index column. @@ -5493,7 +5493,7 @@ and many others in the same vein - PostgreSQL writes data to the kernel's disk cache, + PostgreSQL writes data to the kernel's disk cache, from where it will be flushed to physical storage in due time. Many operating systems are not smart about managing this and allow large amounts of dirty data to accumulate before deciding to flush @@ -5504,11 +5504,11 @@ and many others in the same vein - On Linux, sync_file_range() is used for this purpose, + On Linux, sync_file_range() is used for this purpose, and the feature is on by default on Linux because that function has few downsides. This flushing capability is also available on other - platforms if they have msync() - or posix_fadvise(), but those interfaces have some + platforms if they have msync() + or posix_fadvise(), but those interfaces have some undesirable side-effects so the feature is disabled by default on non-Linux platforms. @@ -5533,7 +5533,7 @@ and many others in the same vein - For example, SELECT AVG(x), VARIANCE(x) FROM tab can use + For example, SELECT AVG(x), VARIANCE(x) FROM tab can use a single per-row computation for both aggregates. 
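To illustrate the GROUP BY simplification above with a hypothetical table whose primary key makes the extra grouping column redundant:

    CREATE TABLE emp (
        id     int PRIMARY KEY,
        name   text,
        salary numeric
    );
    -- "name" is functionally dependent on the primary key "id",
    -- so the planner can drop it from the grouping step:
    SELECT id, name, sum(salary) FROM emp GROUP BY id, name;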
@@ -5544,7 +5544,7 @@ and many others in the same vein --> Speed up visibility tests for recently-created tuples by checking - the current transaction's snapshot, not pg_clog, to + the current transaction's snapshot, not pg_clog, to decide if the source transaction should be considered committed (Jeff Janes, Tom Lane) @@ -5570,9 +5570,9 @@ and many others in the same vein - Two-phase commit information is now written only to WAL - during PREPARE TRANSACTION, and will be read back from - WAL during COMMIT PREPARED if that happens + Two-phase commit information is now written only to WAL + during PREPARE TRANSACTION, and will be read back from + WAL during COMMIT PREPARED if that happens soon thereafter. A separate state file is created only if the pending transaction does not get committed or aborted by the time of the next checkpoint. @@ -5603,8 +5603,8 @@ and many others in the same vein 2016-02-06 [aa2387e2f] Improve speed of timestamp/time/date output functions. --> - Improve speed of the output functions for timestamp, - time, and date data types (David Rowley, + Improve speed of the output functions for timestamp, + time, and date data types (David Rowley, Andres Freund) @@ -5615,7 +5615,7 @@ and many others in the same vein --> Avoid some unnecessary cancellations of hot-standby queries - during replay of actions that take AccessExclusive + during replay of actions that take AccessExclusive locks (Jeff Janes) @@ -5649,8 +5649,8 @@ and many others in the same vein 2015-07-05 [6c82d8d1f] Further reduce overhead for passing plpgsql variables to --> - Speed up expression evaluation in PL/pgSQL by - keeping ParamListInfo entries for simple variables + Speed up expression evaluation in PL/pgSQL by + keeping ParamListInfo entries for simple variables valid at all times (Tom Lane) @@ -5660,7 +5660,7 @@ and many others in the same vein 2015-07-06 [4f33621f3] Don't set SO_SNDBUF on recent Windows versions that have --> - Avoid reducing the SO_SNDBUF setting below its default + Avoid reducing the SO_SNDBUF setting below its default on recent Windows versions (Chen Huajun) @@ -5696,8 +5696,8 @@ and many others in the same vein --> Add pg_stat_progress_vacuum - system view to provide progress reporting for VACUUM + linkend="pg-stat-progress-vacuum-view">pg_stat_progress_vacuum + system view to provide progress reporting for VACUUM operations (Amit Langote, Robert Haas, Vinayak Pokale, Rahila Syed) @@ -5708,11 +5708,11 @@ and many others in the same vein --> Add pg_control_system(), - pg_control_checkpoint(), - pg_control_recovery(), and - pg_control_init() functions to expose fields of - pg_control to SQL (Joe Conway, Michael + linkend="functions-controldata">pg_control_system(), + pg_control_checkpoint(), + pg_control_recovery(), and + pg_control_init() functions to expose fields of + pg_control to SQL (Joe Conway, Michael Paquier) @@ -5722,15 +5722,15 @@ and many others in the same vein 2016-02-17 [a5c43b886] Add new system view, pg_config --> - Add pg_config + Add pg_config system view (Joe Conway) This view exposes the same information available from - the pg_config command-line utility, + the pg_config command-line utility, namely assorted compile-time configuration information for - PostgreSQL. + PostgreSQL. @@ -5739,8 +5739,8 @@ and many others in the same vein 2015-08-10 [3f811c2d6] Add confirmed_flush column to pg_replication_slots. 
--> - Add a confirmed_flush_lsn column to the pg_replication_slots + Add a confirmed_flush_lsn column to the pg_replication_slots system view (Marko Tiikkaja) @@ -5753,9 +5753,9 @@ and many others in the same vein --> Add pg_stat_wal_receiver + linkend="pg-stat-wal-receiver-view">pg_stat_wal_receiver system view to provide information about the state of a hot-standby - server's WAL receiver process (Michael Paquier) + server's WAL receiver process (Michael Paquier) @@ -5765,7 +5765,7 @@ and many others in the same vein --> Add pg_blocking_pids() + linkend="functions-info-session-table">pg_blocking_pids() function to reliably identify which sessions block which others (Tom Lane) @@ -5774,7 +5774,7 @@ and many others in the same vein This function returns an array of the process IDs of any sessions that are blocking the session with the given process ID. Historically users have obtained such information using a self-join - on the pg_locks view. However, it is unreasonably + on the pg_locks view. However, it is unreasonably tedious to do it that way with any modicum of correctness, and the addition of parallel queries has made the old approach entirely impractical, since locks might be held or awaited by child worker @@ -5788,7 +5788,7 @@ and many others in the same vein --> Add function pg_current_xlog_flush_location() + linkend="functions-admin-backup-table">pg_current_xlog_flush_location() to expose the current transaction log flush location (Tomas Vondra) @@ -5799,8 +5799,8 @@ and many others in the same vein --> Add function pg_notification_queue_usage() - to report how full the NOTIFY queue is (Brendan Jurd) + linkend="functions-info-session-table">pg_notification_queue_usage() + to report how full the NOTIFY queue is (Brendan Jurd) @@ -5816,7 +5816,7 @@ and many others in the same vein The memory usage dump that is output to the postmaster log during an out-of-memory failure now summarizes statistics when there are a large number of memory contexts, rather than possibly generating - a very large report. There is also a grand total + a very large report. There is also a grand total summary line now. @@ -5826,7 +5826,7 @@ and many others in the same vein - <acronym>Authentication</> + <acronym>Authentication</acronym> @@ -5835,15 +5835,15 @@ and many others in the same vein 2016-04-08 [34c33a1f0] Add BSD authentication method. --> - Add a BSD authentication + Add a BSD authentication method to allow use of - the BSD Authentication service for - PostgreSQL client authentication (Marisa Emerson) + the BSD Authentication service for + PostgreSQL client authentication (Marisa Emerson) BSD Authentication is currently only available on OpenBSD. + class="osname">OpenBSD. @@ -5852,9 +5852,9 @@ and many others in the same vein 2016-04-08 [2f1d2b7a7] Set PAM_RHOST item for PAM authentication --> - When using PAM + When using PAM authentication, provide the client IP address or host name - to PAM modules via the PAM_RHOST item + to PAM modules via the PAM_RHOST item (Grzegorz Sampolski) @@ -5870,7 +5870,7 @@ and many others in the same vein All ordinarily-reachable password authentication failure cases - should now provide specific DETAIL fields in the log. + should now provide specific DETAIL fields in the log. 
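A hedged example of the new blocking-PID function noted above, standing in for the old self-join on pg_locks:

    -- List blocked sessions together with the PIDs of the sessions
    -- that are blocking them:
    SELECT pid, pg_blocking_pids(pid) AS blocked_by
      FROM pg_stat_activity
     WHERE cardinality(pg_blocking_pids(pid)) > 0;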
@@ -5879,7 +5879,7 @@ and many others in the same vein 2015-09-06 [643beffe8] Support RADIUS passwords up to 128 characters --> - Support RADIUS passwords + Support RADIUS passwords up to 128 characters long (Marko Tiikkaja) @@ -5889,11 +5889,11 @@ and many others in the same vein 2016-04-08 [35e2e357c] Add authentication parameters compat_realm and upn_usena --> - Add new SSPI + Add new SSPI authentication parameters - compat_realm and upn_username to control - whether NetBIOS or Kerberos - realm names and user names are used during SSPI + compat_realm and upn_username to control + whether NetBIOS or Kerberos + realm names and user names are used during SSPI authentication (Christian Ullrich) @@ -5939,7 +5939,7 @@ and many others in the same vein 2015-09-08 [1aba62ec6] Allow per-tablespace effective_io_concurrency --> - Allow effective_io_concurrency to be set per-tablespace + Allow effective_io_concurrency to be set per-tablespace to support cases where different tablespaces have different I/O characteristics (Julien Rouhaud) @@ -5951,7 +5951,7 @@ and many others in the same vein 2015-09-07 [b1e1862a1] Coordinate log_line_prefix options 'm' and 'n' to share --> - Add option %n to + Add option %n to print the current time in Unix epoch form, with milliseconds (Tomas Vondra, Jeff Davis) @@ -5966,7 +5966,7 @@ and many others in the same vein Add and configuration parameters to provide more control over the message format when logging to - syslog (Peter Eisentraut) + syslog (Peter Eisentraut) @@ -5975,16 +5975,16 @@ and many others in the same vein 2016-03-18 [b555ed810] Merge wal_level "archive" and "hot_standby" into new nam --> - Merge the archive and hot_standby values + Merge the archive and hot_standby values of the configuration parameter - into a single new value replica (Peter Eisentraut) + into a single new value replica (Peter Eisentraut) Making a distinction between these settings is no longer useful, and merging them is a step towards a planned future simplification of replication setup. The old names are still accepted but are - converted to replica internally. + converted to replica internally. @@ -5993,15 +5993,15 @@ and many others in the same vein 2016-02-02 [7d17e683f] Add support for systemd service notifications --> - Add configure option - This allows the use of systemd service units of - type notify, which greatly simplifies the management - of PostgreSQL under systemd. + This allows the use of systemd service units of + type notify, which greatly simplifies the management + of PostgreSQL under systemd. @@ -6010,17 +6010,17 @@ and many others in the same vein 2016-03-19 [9a83564c5] Allow SSL server key file to have group read access if o --> - Allow the server's SSL key file to have group read - access if it is owned by root (Christoph Berg) + Allow the server's SSL key file to have group read + access if it is owned by root (Christoph Berg) Formerly, we insisted the key file be owned by the - user running the PostgreSQL server, but + user running the PostgreSQL server, but that is inconvenient on some systems (such as Debian) that are configured to manage + class="osname">Debian) that are configured to manage certificates centrally. Therefore, allow the case where the key - file is owned by root and has group read access. + file is owned by root and has group read access. It is up to the operating system administrator to ensure that the group does not include any untrusted users. 
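For the per-tablespace setting noted above, a sketch using a hypothetical tablespace name:

    -- Give a tablespace on fast SSD storage a higher prefetch target than
    -- the server-wide effective_io_concurrency default:
    ALTER TABLESPACE fast_ssd SET (effective_io_concurrency = 128);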
@@ -6085,8 +6085,8 @@ XXX this is pending backpatch, may need to remove 2016-04-26 [c6ff84b06] Emit invalidations to standby for transactions without x --> - Ensure that invalidation messages are recorded in WAL - even when issued by a transaction that has no XID + Ensure that invalidation messages are recorded in WAL + even when issued by a transaction that has no XID assigned (Andres Freund) @@ -6102,7 +6102,7 @@ XXX this is pending backpatch, may need to remove 2016-04-28 [e2c79e14d] Prevent multiple cleanup process for pending list in GIN --> - Prevent multiple processes from trying to clean a GIN + Prevent multiple processes from trying to clean a GIN index's pending list concurrently (Teodor Sigaev, Jeff Janes) @@ -6147,13 +6147,13 @@ XXX this is pending backpatch, may need to remove 2016-03-29 [314cbfc5d] Add new replication mode synchronous_commit = 'remote_ap --> - Add new setting remote_apply for configuration + Add new setting remote_apply for configuration parameter (Thomas Munro) In this mode, the master waits for the transaction to be - applied on the standby server, not just written + applied on the standby server, not just written to disk. That means that you can count on a transaction started on the standby to see all commits previously acknowledged by the master. @@ -6168,14 +6168,14 @@ XXX this is pending backpatch, may need to remove Add a feature to the replication protocol, and a corresponding option to pg_create_physical_replication_slot(), - to allow reserving WAL immediately when creating a + linkend="functions-replication-table">pg_create_physical_replication_slot(), + to allow reserving WAL immediately when creating a replication slot (Gurjeet Singh, Michael Paquier) This allows the creation of a replication slot to guarantee - that all the WAL needed for a base backup will be + that all the WAL needed for a base backup will be available. @@ -6186,13 +6186,13 @@ XXX this is pending backpatch, may need to remove --> Add a option to - pg_basebackup + pg_basebackup (Peter Eisentraut) - This lets pg_basebackup use a replication - slot defined for WAL streaming. After the base + This lets pg_basebackup use a replication + slot defined for WAL streaming. After the base backup completes, selecting the same slot for regular streaming replication allows seamless startup of the new standby server. @@ -6205,8 +6205,8 @@ XXX this is pending backpatch, may need to remove --> Extend pg_start_backup() - and pg_stop_backup() to support non-exclusive backups + linkend="functions-admin-backup-table">pg_start_backup() + and pg_stop_backup() to support non-exclusive backups (Magnus Hagander) @@ -6226,14 +6226,14 @@ XXX this is pending backpatch, may need to remove --> Allow functions that return sets of tuples to return simple - NULLs (Andrew Gierth, Tom Lane) + NULLs (Andrew Gierth, Tom Lane) - In the context of SELECT FROM function(...), a function + In the context of SELECT FROM function(...), a function that returned a set of composite values was previously not allowed - to return a plain NULL value as part of the set. - Now that is allowed and interpreted as a row of NULLs. + to return a plain NULL value as part of the set. + Now that is allowed and interpreted as a row of NULLs. This avoids corner-case errors with, for example, unnesting an array of composite values. 
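A rough sketch of the non-exclusive backup calls mentioned above (the label is arbitrary, and the actual file-copy step is left out); note that the session issuing pg_start_backup() has to stay connected until pg_stop_backup() is called:

    -- Third argument false requests a non-exclusive backup:
    SELECT pg_start_backup('nightly_base_backup', false, false);
    -- ... copy the data directory with an external tool ...
    SELECT * FROM pg_stop_backup(false);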
@@ -6245,14 +6245,14 @@ XXX this is pending backpatch, may need to remove --> Fully support array subscripts and field selections in the - target column list of an INSERT with multiple - VALUES rows (Tom Lane) + target column list of an INSERT with multiple + VALUES rows (Tom Lane) Previously, such cases failed if the same target column was mentioned more than once, e.g., INSERT INTO tab (x[1], - x[2]) VALUES (...). + x[2]) VALUES (...). @@ -6262,16 +6262,16 @@ XXX this is pending backpatch, may need to remove 2016-03-25 [d543170f2] Don't split up SRFs when choosing to postpone SELECT out --> - When appropriate, postpone evaluation of SELECT - output expressions until after an ORDER BY sort + When appropriate, postpone evaluation of SELECT + output expressions until after an ORDER BY sort (Konstantin Knizhnik) This change ensures that volatile or expensive functions in the output list are executed in the order suggested by ORDER - BY, and that they are not evaluated more times than required - when there is a LIMIT clause. Previously, these + BY, and that they are not evaluated more times than required + when there is a LIMIT clause. Previously, these properties held if the ordering was performed by an index scan or pre-merge-join sort, but not if it was performed by a top-level sort. @@ -6289,9 +6289,9 @@ XXX this is pending backpatch, may need to remove - This change allows command tags, e.g. SELECT, to + This change allows command tags, e.g. SELECT, to correctly report tuple counts larger than 4 billion. This also - applies to PL/pgSQL's GET DIAGNOSTICS ... ROW_COUNT + applies to PL/pgSQL's GET DIAGNOSTICS ... ROW_COUNT command. @@ -6302,17 +6302,17 @@ XXX this is pending backpatch, may need to remove --> Avoid doing encoding conversions by converting through the - MULE_INTERNAL encoding (Tom Lane) + MULE_INTERNAL encoding (Tom Lane) Previously, many conversions for Cyrillic and Central European single-byte encodings were done by converting to a - related MULE_INTERNAL coding scheme and then to the + related MULE_INTERNAL coding scheme and then to the destination encoding. Aside from being inefficient, this meant that when the conversion encountered an untranslatable character, the error message would confusingly complain about failure to - convert to or from MULE_INTERNAL, rather than the + convert to or from MULE_INTERNAL, rather than the user-visible encoding. @@ -6331,7 +6331,7 @@ XXX this is pending backpatch, may need to remove Previously, the foreign join pushdown infrastructure left the question of security entirely up to individual foreign data - wrappers, but that made it too easy for an FDW to + wrappers, but that made it too easy for an FDW to inadvertently create subtle security holes. So, make it the core code's job to determine which role ID will access each table, and do not attempt join pushdown unless the role is the same for @@ -6353,13 +6353,13 @@ XXX this is pending backpatch, may need to remove 2015-11-27 [92e38182d] COPY (INSERT/UPDATE/DELETE .. RETURNING ..) --> - Allow COPY to copy the output of an - INSERT/UPDATE/DELETE - ... RETURNING query (Marko Tiikkaja) + Allow COPY to copy the output of an + INSERT/UPDATE/DELETE + ... RETURNING query (Marko Tiikkaja) - Previously, an intermediate CTE had to be written to + Previously, an intermediate CTE had to be written to get this result. @@ -6369,16 +6369,16 @@ XXX this is pending backpatch, may need to remove 2016-04-05 [f2fcad27d] Support ALTER THING .. 
DEPENDS ON EXTENSION --> - Introduce ALTER object DEPENDS ON + Introduce ALTER object DEPENDS ON EXTENSION (Abhijit Menon-Sen) This command allows a database object to be marked as depending on an extension, so that it will be dropped automatically if - the extension is dropped (without needing CASCADE). + the extension is dropped (without needing CASCADE). However, the object is not part of the extension, and thus will - be dumped separately by pg_dump. + be dumped separately by pg_dump. @@ -6387,7 +6387,7 @@ XXX this is pending backpatch, may need to remove 2015-11-19 [bc4996e61] Make ALTER .. SET SCHEMA do nothing, instead of throwing --> - Make ALTER object SET SCHEMA do nothing + Make ALTER object SET SCHEMA do nothing when the object is already in the requested schema, rather than throwing an error as it historically has for most object types (Marti Raudsepp) @@ -6411,8 +6411,8 @@ XXX this is pending backpatch, may need to remove 2015-07-29 [2cd40adb8] Add IF NOT EXISTS processing to ALTER TABLE ADD COLUMN --> - Add an @@ -6422,7 +6422,7 @@ XXX this is pending backpatch, may need to remove 2016-03-10 [fcb4bfddb] Reduce lock level for altering fillfactor --> - Reduce the lock strength needed by ALTER TABLE + Reduce the lock strength needed by ALTER TABLE when setting fillfactor and autovacuum-related relation options (Fabrízio de Royes Mello, Simon Riggs) @@ -6434,7 +6434,7 @@ XXX this is pending backpatch, may need to remove --> Introduce CREATE - ACCESS METHOD to allow extensions to create index access + ACCESS METHOD to allow extensions to create index access methods (Alexander Korotkov, Petr Jelínek) @@ -6444,7 +6444,7 @@ XXX this is pending backpatch, may need to remove 2015-10-03 [b67aaf21e] Add CASCADE support for CREATE EXTENSION. --> - Add a CASCADE option to CREATE + Add a CASCADE option to CREATE EXTENSION to automatically create any extensions the requested one depends on (Petr Jelínek) @@ -6455,7 +6455,7 @@ XXX this is pending backpatch, may need to remove 2015-10-05 [b943f502b] Have CREATE TABLE LIKE add OID column if any LIKEd table --> - Make CREATE TABLE ... LIKE include an OID + Make CREATE TABLE ... LIKE include an OID column if any source table has one (Bruce Momjian) @@ -6465,14 +6465,14 @@ XXX this is pending backpatch, may need to remove 2015-12-16 [f27a6b15e] Mark CHECK constraints declared NOT VALID valid if creat --> - If a CHECK constraint is declared NOT VALID + If a CHECK constraint is declared NOT VALID in a table creation command, automatically mark it as valid (Amit Langote, Amul Sul) This is safe because the table has no existing rows. This matches - the longstanding behavior of FOREIGN KEY constraints. + the longstanding behavior of FOREIGN KEY constraints. @@ -6481,16 +6481,16 @@ XXX this is pending backpatch, may need to remove 2016-03-25 [c94959d41] Fix DROP OPERATOR to reset oprcom/oprnegate links to the --> - Fix DROP OPERATOR to clear - pg_operator.oprcom and - pg_operator.oprnegate links to + Fix DROP OPERATOR to clear + pg_operator.oprcom and + pg_operator.oprnegate links to the dropped operator (Roma Sokolov) Formerly such links were left as-is, which could pose a problem in the somewhat unlikely event that the dropped operator's - OID was reused for another operator. + OID was reused for another operator. @@ -6499,13 +6499,13 @@ XXX this is pending backpatch, may need to remove 2016-07-11 [4d042999f] Print a given subplan only once in EXPLAIN. 
--> - Do not show the same subplan twice in EXPLAIN output + Do not show the same subplan twice in EXPLAIN output (Tom Lane) In certain cases, typically involving SubPlan nodes in index - conditions, EXPLAIN would print data for the same + conditions, EXPLAIN would print data for the same subplan twice. @@ -6516,7 +6516,7 @@ XXX this is pending backpatch, may need to remove --> Disallow creation of indexes on system columns, except for - OID columns (David Rowley) + OID columns (David Rowley) @@ -6550,8 +6550,8 @@ XXX this is pending backpatch, may need to remove checks that would throw an error if they were called by a non-superuser. This forced the use of superuser roles for some relatively pedestrian tasks. The hard-wired error checks - are now gone in favor of making initdb revoke the - default public EXECUTE privilege on these functions. + are now gone in favor of making initdb revoke the + default public EXECUTE privilege on these functions. This allows installations to choose to grant usage of such functions to trusted roles that do not need all superuser privileges. @@ -6569,7 +6569,7 @@ XXX this is pending backpatch, may need to remove - Currently the only such role is pg_signal_backend, + Currently the only such role is pg_signal_backend, but more are expected to be added in future. @@ -6591,19 +6591,19 @@ XXX this is pending backpatch, may need to remove 2016-06-27 [6734a1cac] Change predecence of phrase operator. --> - Improve full-text search to support + Improve full-text search to support searching for phrases, that is, lexemes appearing adjacent to each other in a specific order, or with a specified distance between them (Teodor Sigaev, Oleg Bartunov, Dmitry Ivanov) - A phrase-search query can be specified in tsquery - input using the new operators <-> and - <N>. The former means + A phrase-search query can be specified in tsquery + input using the new operators <-> and + <N>. The former means that the lexemes before and after it must appear adjacent to each other in that order. The latter means they must be exactly - N lexemes apart. + N lexemes apart. @@ -6613,7 +6613,7 @@ XXX this is pending backpatch, may need to remove --> Allow omitting one or both boundaries in an array slice specifier, - e.g. array_col[3:] (Yury Zhuravlev) + e.g. array_col[3:] (Yury Zhuravlev) @@ -6634,19 +6634,19 @@ XXX this is pending backpatch, may need to remove This change prevents unexpected out-of-range errors for - timestamp with time zone values very close to the - implementation limits. Previously, the same value might - be accepted or not depending on the timezone setting, + timestamp with time zone values very close to the + implementation limits. Previously, the same value might + be accepted or not depending on the timezone setting, meaning that a dump and reload could fail on a value that had been accepted when presented. Now the limits are enforced according - to the equivalent UTC time, not local time, so as to - be independent of timezone. + to the equivalent UTC time, not local time, so as to + be independent of timezone. - Also, PostgreSQL is now more careful to detect + Also, PostgreSQL is now more careful to detect overflow in operations that compute new date or timestamp values, - such as date + integer. + such as date + integer. 
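A quick sketch of the array-slice syntax with omitted boundaries described a little earlier:

    -- omit the upper bound, the lower bound, or both
    SELECT (ARRAY[1,2,3,4,5])[3:];   -- {3,4,5}
    SELECT (ARRAY[1,2,3,4,5])[:2];   -- {1,2}
    SELECT (ARRAY[1,2,3,4,5])[:];    -- {1,2,3,4,5}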
@@ -6655,14 +6655,14 @@ XXX this is pending backpatch, may need to remove 2016-03-30 [50861cd68] Improve portability of I/O behavior for the geometric ty --> - For geometric data types, make sure infinity and - NaN component values are treated consistently during + For geometric data types, make sure infinity and + NaN component values are treated consistently during input and output (Tom Lane) Such values will now always print the same as they would in - a simple float8 column, and be accepted the same way + a simple float8 column, and be accepted the same way on input. Previously the behavior was platform-dependent. @@ -6675,8 +6675,8 @@ XXX this is pending backpatch, may need to remove --> Upgrade - the ispell - dictionary type to handle modern Hunspell files and + the ispell + dictionary type to handle modern Hunspell files and support more languages (Artur Zakirov) @@ -6687,7 +6687,7 @@ XXX this is pending backpatch, may need to remove --> Implement look-behind constraints - in regular expressions + in regular expressions (Tom Lane) @@ -6706,12 +6706,12 @@ XXX this is pending backpatch, may need to remove --> In regular expressions, if an apparent three-digit octal escape - \nnn would exceed 377 (255 decimal), + \nnn would exceed 377 (255 decimal), assume it is a two-digit octal escape instead (Tom Lane) - This makes the behavior match current Tcl releases. + This makes the behavior match current Tcl releases. @@ -6720,8 +6720,8 @@ XXX this is pending backpatch, may need to remove 2015-11-07 [c5e86ea93] Add "xid <> xid" and "xid <> int4" operators. --> - Add transaction ID operators xid <> - xid and xid <> int4, + Add transaction ID operators xid <> + xid and xid <> int4, for consistency with the corresponding equality operators (Michael Paquier) @@ -6742,9 +6742,9 @@ XXX this is pending backpatch, may need to remove --> Add jsonb_insert() - function to insert a new element into a jsonb array, - or a not-previously-existing key into a jsonb object + linkend="functions-json-processing-table">jsonb_insert() + function to insert a new element into a jsonb array, + or a not-previously-existing key into a jsonb object (Dmitry Dolgov) @@ -6755,9 +6755,9 @@ XXX this is pending backpatch, may need to remove 2016-05-05 [18a02ad2a] Fix corner-case loss of precision in numeric pow() calcu --> - Improve the accuracy of the ln(), log(), - exp(), and pow() functions for type - numeric (Dean Rasheed) + Improve the accuracy of the ln(), log(), + exp(), and pow() functions for type + numeric (Dean Rasheed) @@ -6767,8 +6767,8 @@ XXX this is pending backpatch, may need to remove --> Add a scale(numeric) - function to extract the display scale of a numeric value + linkend="functions-math-func-table">scale(numeric) + function to extract the display scale of a numeric value (Marko Tiikkaja) @@ -6783,8 +6783,8 @@ XXX this is pending backpatch, may need to remove For example, sind() - measures its argument in degrees, whereas sin() + linkend="functions-math-trig-table">sind() + measures its argument in degrees, whereas sin() measures in radians. These functions go to some lengths to deliver exact results for values where an exact result can be expected, for instance sind(30) = 0.5. 
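For instance, the degree-based trigonometric functions mentioned just above give exact answers at the usual special angles:

    SELECT sind(30);   -- exactly 0.5
    SELECT cosd(60);   -- exactly 0.5
    SELECT tand(45);   -- exactly 1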
@@ -6796,15 +6796,15 @@ XXX this is pending backpatch, may need to remove 2016-01-22 [fd5200c3d] Improve cross-platform consistency of Inf/NaN handling i --> - Ensure that trigonometric functions handle infinity - and NaN inputs per the POSIX standard + Ensure that trigonometric functions handle infinity + and NaN inputs per the POSIX standard (Dean Rasheed) - The POSIX standard says that these functions should - return NaN for NaN input, and should throw - an error for out-of-range inputs including infinity. + The POSIX standard says that these functions should + return NaN for NaN input, and should throw + an error for out-of-range inputs including infinity. Previously our behavior varied across platforms. @@ -6815,9 +6815,9 @@ XXX this is pending backpatch, may need to remove --> Make to_timestamp(float8) - convert float infinity to - timestamp infinity (Vitaly Burovoy) + linkend="functions-datetime-table">to_timestamp(float8) + convert float infinity to + timestamp infinity (Vitaly Burovoy) @@ -6831,15 +6831,15 @@ XXX this is pending backpatch, may need to remove 2016-05-05 [0b9a23443] Rename tsvector delete() to ts_delete(), and filter() to --> - Add new functions for tsvector data (Stas Kelvich) + Add new functions for tsvector data (Stas Kelvich) The new functions are ts_delete(), - ts_filter(), unnest(), - tsvector_to_array(), array_to_tsvector(), - and a variant of setweight() that sets the weight + linkend="textsearch-functions-table">ts_delete(), + ts_filter(), unnest(), + tsvector_to_array(), array_to_tsvector(), + and a variant of setweight() that sets the weight only for specified lexeme(s). @@ -6849,11 +6849,11 @@ XXX this is pending backpatch, may need to remove 2015-09-17 [9acb9007d] Fix oversight in tsearch type check --> - Allow ts_stat() - and tsvector_update_trigger() + Allow ts_stat() + and tsvector_update_trigger() to operate on values that are of types binary-compatible with the expected argument type, not just exactly that type; for example - allow citext where text is expected (Teodor + allow citext where text is expected (Teodor Sigaev) @@ -6864,14 +6864,14 @@ XXX this is pending backpatch, may need to remove --> Add variadic functions num_nulls() - and num_nonnulls() that count the number of their + linkend="functions-comparison-func-table">num_nulls() + and num_nonnulls() that count the number of their arguments that are null or non-null (Marko Tiikkaja) - An example usage is CHECK(num_nonnulls(a,b,c) = 1) - which asserts that exactly one of a,b,c is not NULL. + An example usage is CHECK(num_nonnulls(a,b,c) = 1) + which asserts that exactly one of a,b,c is not NULL. These functions can also be used to count the number of null or nonnull elements in an array. @@ -6883,8 +6883,8 @@ XXX this is pending backpatch, may need to remove --> Add function parse_ident() - to split a qualified, possibly quoted SQL identifier + linkend="functions-string-other">parse_ident() + to split a qualified, possibly quoted SQL identifier into its parts (Pavel Stehule) @@ -6895,15 +6895,15 @@ XXX this is pending backpatch, may need to remove --> In to_number(), - interpret a V format code as dividing by 10 to the - power of the number of digits following V (Bruce + linkend="functions-formatting-table">to_number(), + interpret a V format code as dividing by 10 to the + power of the number of digits following V (Bruce Momjian) This makes it operate in an inverse fashion to - to_char(). + to_char(). 
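A short sketch of the new to_number() treatment of the V format code described just above; with two digits after V the input is divided by 10^2:

    SELECT to_number('12345', '999V99');   -- 123.45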
@@ -6913,8 +6913,8 @@ XXX this is pending backpatch, may need to remove --> Make the to_reg*() - functions accept type text not cstring + linkend="functions-info-catalog-table">to_reg*() + functions accept type text not cstring (Petr Korobeinikov) @@ -6930,16 +6930,16 @@ XXX this is pending backpatch, may need to remove --> Add pg_size_bytes() + linkend="functions-admin-dbsize">pg_size_bytes() function to convert human-readable size strings to numbers (Pavel Stehule, Vitaly Burovoy, Dean Rasheed) This function converts strings like those produced by - pg_size_pretty() into bytes. An example + pg_size_pretty() into bytes. An example usage is SELECT oid::regclass FROM pg_class WHERE - pg_total_relation_size(oid) > pg_size_bytes('10 GB'). + pg_total_relation_size(oid) > pg_size_bytes('10 GB'). @@ -6949,7 +6949,7 @@ XXX this is pending backpatch, may need to remove --> In pg_size_pretty(), + linkend="functions-admin-dbsize">pg_size_pretty(), format negative numbers similarly to positive ones (Adrian Vondendriesch) @@ -6965,14 +6965,14 @@ XXX this is pending backpatch, may need to remove 2015-07-02 [10fb48d66] Add an optional missing_ok argument to SQL function curr --> - Add an optional missing_ok argument to the current_setting() + Add an optional missing_ok argument to the current_setting() function (David Christensen) This allows avoiding an error for an unrecognized parameter - name, instead returning a NULL. + name, instead returning a NULL. @@ -6984,16 +6984,16 @@ XXX this is pending backpatch, may need to remove --> Change various catalog-inspection functions to return - NULL for invalid input (Michael Paquier) + NULL for invalid input (Michael Paquier) pg_get_viewdef() - now returns NULL if given an invalid view OID, - and several similar functions likewise return NULL for + linkend="functions-info-catalog-table">pg_get_viewdef() + now returns NULL if given an invalid view OID, + and several similar functions likewise return NULL for bad input. Previously, such cases usually led to cache - lookup failed errors, which are not meant to occur in + lookup failed errors, which are not meant to occur in user-facing cases. @@ -7004,13 +7004,13 @@ XXX this is pending backpatch, may need to remove --> Fix pg_replication_origin_xact_reset() + linkend="pg-replication-origin-xact-reset">pg_replication_origin_xact_reset() to not have any arguments (Fujii Masao) The documentation said that it has no arguments, and the C code did - not expect any arguments, but the entry in pg_proc + not expect any arguments, but the entry in pg_proc mistakenly specified two arguments. 
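For example, the pg_size_bytes() function mentioned earlier in this group converts human-readable sizes back into bytes, the inverse of pg_size_pretty():

    SELECT pg_size_bytes('10 GB');                 -- 10737418240
    SELECT pg_size_bytes('512 MB') * 2;            -- 1073741824
    SELECT pg_size_pretty(10737418240::bigint);    -- 10 GB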
@@ -7030,7 +7030,7 @@ XXX this is pending backpatch, may need to remove --> In PL/pgSQL, detect mismatched - CONTINUE and EXIT statements while + CONTINUE and EXIT statements while compiling a function, rather than at execution time (Jim Nasby) @@ -7043,7 +7043,7 @@ XXX this is pending backpatch, may need to remove 2016-07-02 [3a4a33ad4] PL/Python: Report argument parsing errors using exceptio --> - Extend PL/Python's error-reporting and + Extend PL/Python's error-reporting and message-reporting functions to allow specifying additional message fields besides the primary error message (Pavel Stehule) @@ -7055,7 +7055,7 @@ XXX this is pending backpatch, may need to remove --> Allow PL/Python functions to call themselves recursively - via SPI, and fix the behavior when multiple + via SPI, and fix the behavior when multiple set-returning PL/Python functions are called within one query (Alexey Grishchenko, Tom Lane) @@ -7077,14 +7077,14 @@ XXX this is pending backpatch, may need to remove 2016-03-02 [e2609323e] Make PL/Tcl require Tcl 8.4 or later. --> - Modernize PL/Tcl to use Tcl's object - APIs instead of simple strings (Jim Nasby, Karl + Modernize PL/Tcl to use Tcl's object + APIs instead of simple strings (Jim Nasby, Karl Lehenbauer) This can improve performance substantially in some cases. - Note that PL/Tcl now requires Tcl 8.4 or later. + Note that PL/Tcl now requires Tcl 8.4 or later. @@ -7094,8 +7094,8 @@ XXX this is pending backpatch, may need to remove 2016-03-25 [cd37bb785] Improve PL/Tcl errorCode facility by providing decoded n --> - In PL/Tcl, make database-reported errors return - additional information in Tcl's errorCode global + In PL/Tcl, make database-reported errors return + additional information in Tcl's errorCode global variable (Jim Nasby, Tom Lane) @@ -7110,15 +7110,15 @@ XXX this is pending backpatch, may need to remove 2016-03-02 [c8c7c93de] Fix PL/Tcl's encoding conversion logic. --> - Fix PL/Tcl to perform encoding conversion between - the database encoding and UTF-8, which is what Tcl + Fix PL/Tcl to perform encoding conversion between + the database encoding and UTF-8, which is what Tcl expects (Tom Lane) Previously, strings were passed through without conversion, - leading to misbehavior with non-ASCII characters when - the database encoding was not UTF-8. + leading to misbehavior with non-ASCII characters when + the database encoding was not UTF-8. @@ -7137,7 +7137,7 @@ XXX this is pending backpatch, may need to remove --> Add a nonlocalized version of - the severity field in + the severity field in error and notice messages (Tom Lane) @@ -7154,17 +7154,17 @@ XXX this is pending backpatch, may need to remove This commit is also listed under psql and PL/pgSQL --> - Introduce a feature in libpq whereby the - CONTEXT field of messages can be suppressed, either + Introduce a feature in libpq whereby the + CONTEXT field of messages can be suppressed, either always or only for non-error messages (Pavel Stehule) The default behavior of PQerrorMessage() - is now to print CONTEXT + linkend="libpq-pqerrormessage">PQerrorMessage() + is now to print CONTEXT only for errors. The new function PQsetErrorContextVisibility() + linkend="libpq-pqseterrorcontextvisibility">PQsetErrorContextVisibility() can be used to adjust this. 
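To illustrate the PL/pgSQL compile-time check noted at the start of this group, a deliberately broken function (a sketch; with the new check, and the default check_function_bodies setting, the error is reported when the function is created rather than the first time it runs):

    CREATE FUNCTION mismatched_exit() RETURNS void
    LANGUAGE plpgsql AS $$
    BEGIN
      <<outer_block>>
      LOOP
        EXIT no_such_label;  -- label matches no enclosing block or loop
      END LOOP;
    END
    $$;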
@@ -7174,14 +7174,14 @@ This commit is also listed under psql and PL/pgSQL 2016-04-03 [e3161b231] Add libpq support for recreating an error message with d --> - Add support in libpq for regenerating an error + Add support in libpq for regenerating an error message with a different verbosity level (Alex Shulgin) This is done with the new function PQresultVerboseErrorMessage(). - This supports psql's new \errverbose + linkend="libpq-pqresultverboseerrormessage">PQresultVerboseErrorMessage(). + This supports psql's new \errverbose feature, and may be useful for other clients as well. @@ -7191,13 +7191,13 @@ This commit is also listed under psql and PL/pgSQL 2015-11-27 [40cb21f70] Improve PQhost() to return useful data for default Unix- --> - Improve libpq's PQhost() function to return + Improve libpq's PQhost() function to return useful data for default Unix-socket connections (Tom Lane) - Previously it would return NULL if no explicit host + Previously it would return NULL if no explicit host specification had been given; now it returns the default socket directory path. @@ -7208,7 +7208,7 @@ This commit is also listed under psql and PL/pgSQL 2016-02-16 [fc1ae7d2e] Change ecpg lexer to accept comments with line breaks in --> - Fix ecpg's lexer to handle line breaks within + Fix ecpg's lexer to handle line breaks within comments starting on preprocessor directive lines (Michael Meskes) @@ -7227,9 +7227,9 @@ This commit is also listed under psql and PL/pgSQL 2015-09-14 [d02426029] Check existency of table/schema for -t/-n option (pg_dum --> - Add a @@ -7249,7 +7249,7 @@ This commit is also listed under psql and PL/pgSQL 2016-05-06 [e1b120a8c] Only issue LOCK TABLE commands when necessary --> - In pg_dump, dump locally-made changes of privilege + In pg_dump, dump locally-made changes of privilege assignments for system objects (Stephen Frost) @@ -7257,7 +7257,7 @@ This commit is also listed under psql and PL/pgSQL While it has always been possible for a superuser to change the privilege assignments for built-in or extension-created objects, such changes were formerly lost in a dump and reload. - Now, pg_dump recognizes and dumps such changes. + Now, pg_dump recognizes and dumps such changes. (This works only when dumping from a 9.6 or later server, however.) @@ -7267,7 +7267,7 @@ This commit is also listed under psql and PL/pgSQL 2016-09-08 [31eb14504] Allow pg_dump to dump non-extension members of an extens --> - Allow pg_dump to dump non-extension-owned objects + Allow pg_dump to dump non-extension-owned objects that are within an extension-owned schema (Martín Marqués) @@ -7283,7 +7283,7 @@ This commit is also listed under psql and PL/pgSQL 2016-04-06 [3b3fcc4ee] pg_dump: Add table qualifications to some tags --> - In pg_dump output, include the table name in object + In pg_dump output, include the table name in object tags for object types that are only uniquely named per-table (for example, triggers) (Peter Eisentraut) @@ -7308,7 +7308,7 @@ this commit is also listed in the compatibility section The specified operations are carried out in the order in which the - options are given, and then psql terminates. + options are given, and then psql terminates. 
@@ -7317,7 +7317,7 @@ this commit is also listed in the compatibility section 2016-04-08 [c09b18f21] Support \crosstabview in psql --> - Add a \crosstabview command that prints the results of + Add a \crosstabview command that prints the results of a query in a cross-tabulated display (Daniel Vérité) @@ -7333,13 +7333,13 @@ this commit is also listed in the compatibility section 2016-04-03 [3cc38ca7d] Add psql \errverbose command to see last server error at --> - Add an \errverbose command that shows the last server + Add an \errverbose command that shows the last server error at full verbosity (Alex Shulgin) This is useful after getting an unexpected error — you - no longer need to adjust the VERBOSITY variable and + no longer need to adjust the VERBOSITY variable and recreate the failure in order to see error fields that are not shown by default. @@ -7351,13 +7351,13 @@ this commit is also listed in the compatibility section 2016-05-06 [9b66aa006] Fix psql's \ev and \sv commands so that they handle view --> - Add \ev and \sv commands for editing and + Add \ev and \sv commands for editing and showing view definitions (Petr Korobeinikov) - These are parallel to the existing \ef and - \sf commands for functions. + These are parallel to the existing \ef and + \sf commands for functions. @@ -7366,7 +7366,7 @@ this commit is also listed in the compatibility section 2016-04-04 [2bbe9112a] Add a \gexec command to psql for evaluation of computed --> - Add a \gexec command that executes a query and + Add a \gexec command that executes a query and re-submits the result(s) as new queries (Corey Huinker) @@ -7376,9 +7376,9 @@ this commit is also listed in the compatibility section 2015-10-05 [2145a7660] psql: allow \pset C in setting the title, matches \C --> - Allow \pset C string + Allow \pset C string to set the table title, for consistency with \C - string (Bruce Momjian) + string (Bruce Momjian) @@ -7387,7 +7387,7 @@ this commit is also listed in the compatibility section 2016-03-11 [69ab7b9d6] psql: Don't automatically use expanded format when there --> - In \pset expanded auto mode, do not use expanded + In \pset expanded auto mode, do not use expanded format for query results with only one column (Andreas Karlsson, Robert Haas) @@ -7399,16 +7399,16 @@ this commit is also listed in the compatibility section 2016-06-15 [9901d8ac2] Use strftime("%c") to format timestamps in psql's \watch --> - Improve the headers output by the \watch command + Improve the headers output by the \watch command (Michael Paquier, Tom Lane) - Include the \pset title string if one has + Include the \pset title string if one has been set, and shorten the prefabricated part of the - header to be timestamp (every - Ns). Also, the timestamp format now - obeys psql's locale environment. + header to be timestamp (every + Ns). Also, the timestamp format now + obeys psql's locale environment. 
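As a small psql-session sketch of the \gexec command described in this group (this simply vacuums every table in the public schema; adjust the generated command to taste):

    SELECT format('VACUUM ANALYZE %I.%I', schemaname, tablename)
      FROM pg_tables
     WHERE schemaname = 'public' \gexec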
@@ -7456,7 +7456,7 @@ this commit is also listed in the compatibility section 2015-07-07 [275f05c99] Add psql PROMPT variable showing the pid of the connecte --> - Add a PROMPT option %p to insert the + Add a PROMPT option %p to insert the process ID of the connected backend (Julien Rouhaud) @@ -7467,13 +7467,13 @@ this commit is also listed in the compatibility section This commit is also listed under libpq and PL/pgSQL --> - Introduce a feature whereby the CONTEXT field of + Introduce a feature whereby the CONTEXT field of messages can be suppressed, either always or only for non-error messages (Pavel Stehule) - Printing CONTEXT only for errors is now the default + Printing CONTEXT only for errors is now the default behavior. This can be changed by setting the special variable SHOW_CONTEXT. @@ -7484,7 +7484,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-07-11 [a670c24c3] Improve output of psql's \df+ command. --> - Make \df+ show function access privileges and + Make \df+ show function access privileges and parallel-safety attributes (Michael Paquier) @@ -7503,7 +7503,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-20 [68ab8e8ba] SQL commands in pgbench scripts are now ended by semicol --> - SQL commands in pgbench scripts are now ended by + SQL commands in pgbench scripts are now ended by semicolons, not newlines (Kyotaro Horiguchi, Tom Lane) @@ -7512,7 +7512,7 @@ This commit is also listed under libpq and PL/pgSQL Existing custom scripts will need to be modified to add a semicolon at the end of each line that does not have one already. (Doing so does not break the script for use with older versions - of pgbench.) + of pgbench.) @@ -7525,7 +7525,7 @@ This commit is also listed under libpq and PL/pgSQL --> Support floating-point arithmetic, as well as some built-in functions, in + linkend="pgbench-builtin-functions">built-in functions, in expressions in backslash commands (Fabien Coelho) @@ -7535,18 +7535,18 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-29 [ad9566470] pgbench: Remove \setrandom. --> - Replace \setrandom with built-in functions (Fabien + Replace \setrandom with built-in functions (Fabien Coelho) The new built-in functions include random(), - random_exponential(), and - random_gaussian(), which perform the same work as - \setrandom, but are easier to use since they can be + linkend="pgbench-functions">random(), + random_exponential(), and + random_gaussian(), which perform the same work as + \setrandom, but are easier to use since they can be embedded in larger expressions. Since these additions have made - \setrandom obsolete, remove it. + \setrandom obsolete, remove it. @@ -7561,8 +7561,8 @@ This commit is also listed under libpq and PL/pgSQL - This is done with the new switch, which works + similarly to for custom scripts. @@ -7577,7 +7577,7 @@ This commit is also listed under libpq and PL/pgSQL - When multiple scripts are specified, each pgbench + When multiple scripts are specified, each pgbench transaction randomly chooses one to execute. Formerly this was always done with uniform probability, but now different selection probabilities can be specified for different scripts. 
@@ -7604,7 +7604,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-09-16 [1def9063c] pgbench progress with timestamp --> - Add a option to report progress with Unix epoch timestamps, instead of time since the run started (Fabien Coelho) @@ -7615,8 +7615,8 @@ This commit is also listed under libpq and PL/pgSQL 2015-07-03 [ba3deeefb] Lift the limitation that # of clients must be a multiple --> - Allow the number of client connections () to not + be an exact multiple of the number of threads () (Fabien Coelho) @@ -7626,13 +7626,13 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-09 [accf7616f] pgbench: When -T is used, don't wait for transactions be --> - When the option is used, stop promptly at the end of the specified time (Fabien Coelho) Previously, specifying a low transaction rate could cause - pgbench to wait significantly longer than + pgbench to wait significantly longer than specified. @@ -7653,15 +7653,15 @@ This commit is also listed under libpq and PL/pgSQL 2015-12-17 [66d947b9d] Adjust behavior of single-user -j mode for better initdb --> - Improve error reporting during initdb's + Improve error reporting during initdb's post-bootstrap phase (Tom Lane) Previously, an error here led to reporting the entire input - file as the failing query; now just the current + file as the failing query; now just the current query is reported. To get the desired behavior, queries in - initdb's input files must be separated by blank + initdb's input files must be separated by blank lines. @@ -7672,7 +7672,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-08-30 [d9720e437] Fix initdb misbehavior when user mis-enters superuser pa --> - Speed up initdb by using just one + Speed up initdb by using just one standalone-backend session for all the post-bootstrap steps (Tom Lane) @@ -7683,7 +7683,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-12-01 [e50cda784] Use pg_rewind when target timeline was switched --> - Improve pg_rewind + Improve pg_rewind so that it can work when the target timeline changes (Alexander Korotkov) @@ -7709,7 +7709,7 @@ This commit is also listed under libpq and PL/pgSQL --> Remove obsolete - heap_formtuple/heap_modifytuple/heap_deformtuple + heap_formtuple/heap_modifytuple/heap_deformtuple functions (Peter Geoghegan) @@ -7719,16 +7719,16 @@ This commit is also listed under libpq and PL/pgSQL 2016-08-27 [b9fe6cbc8] Add macros to make AllocSetContextCreate() calls simpler --> - Add macros to make AllocSetContextCreate() calls simpler + Add macros to make AllocSetContextCreate() calls simpler and safer (Tom Lane) Writing out the individual sizing parameters for a memory context is now deprecated in favor of using one of the new - macros ALLOCSET_DEFAULT_SIZES, - ALLOCSET_SMALL_SIZES, - or ALLOCSET_START_SMALL_SIZES. + macros ALLOCSET_DEFAULT_SIZES, + ALLOCSET_SMALL_SIZES, + or ALLOCSET_START_SMALL_SIZES. Existing code continues to work, however. 
@@ -7738,7 +7738,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-08-05 [de6fd1c89] Rely on inline functions even if that causes warnings in --> - Unconditionally use static inline functions in header + Unconditionally use static inline functions in header files (Andres Freund) @@ -7759,7 +7759,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-05-06 [6bd356c33] Add TAP tests for pg_dump --> - Improve TAP testing infrastructure (Michael + Improve TAP testing infrastructure (Michael Paquier, Craig Ringer, Álvaro Herrera, Stephen Frost) @@ -7774,7 +7774,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-09-11 [aa65de042] When trace_lwlocks is used, identify individual lwlocks --> - Make trace_lwlocks identify individual locks by name + Make trace_lwlocks identify individual locks by name (Robert Haas) @@ -7786,7 +7786,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-01-05 [4f18010af] Convert psql's tab completion for backslash commands to --> - Improve psql's tab-completion code infrastructure + Improve psql's tab-completion code infrastructure (Thomas Munro, Michael Paquier) @@ -7801,7 +7801,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-01-05 [efa318bcf] Make pg_shseclabel available in early backend startup --> - Nail the pg_shseclabel system catalog into cache, + Nail the pg_shseclabel system catalog into cache, so that it is available for access during connection authentication (Adam Brightwell) @@ -7820,21 +7820,21 @@ This commit is also listed under libpq and PL/pgSQL --> Restructure index access - method API to hide most of it at - the C level (Alexander Korotkov, Andrew Gierth) + method API to hide most of it at + the C level (Alexander Korotkov, Andrew Gierth) - This change modernizes the index AM API to look more + This change modernizes the index AM API to look more like the designs we have adopted for foreign data wrappers and - tablesample handlers. This simplifies the C code + tablesample handlers. This simplifies the C code and makes it much more practical to define index access methods in installable extensions. A consequence is that most of the columns - of the pg_am system catalog have disappeared. + of the pg_am system catalog have disappeared. New inspection functions have been added to allow SQL queries to determine index AM properties that used to be discoverable - from pg_am. + from pg_am. @@ -7844,14 +7844,14 @@ This commit is also listed under libpq and PL/pgSQL --> Add pg_init_privs + linkend="catalog-pg-init-privs">pg_init_privs system catalog to hold original privileges - of initdb-created and extension-created objects + of initdb-created and extension-created objects (Stephen Frost) - This infrastructure allows pg_dump to dump changes + This infrastructure allows pg_dump to dump changes that an installation may have made in privileges attached to system objects. Formerly, such changes would be lost in a dump and reload, but now they are preserved. @@ -7863,14 +7863,14 @@ This commit is also listed under libpq and PL/pgSQL 2016-02-04 [c1772ad92] Change the way that LWLocks for extensions are allocated --> - Change the way that extensions allocate custom LWLocks + Change the way that extensions allocate custom LWLocks (Amit Kapila, Robert Haas) - The RequestAddinLWLocks() function is removed, - and replaced by RequestNamedLWLockTranche(). - This allows better identification of custom LWLocks, + The RequestAddinLWLocks() function is removed, + and replaced by RequestNamedLWLockTranche(). 
+ This allows better identification of custom LWLocks, and is less error-prone. @@ -7894,7 +7894,7 @@ This commit is also listed under libpq and PL/pgSQL - This change allows FDWs or custom scan providers + This change allows FDWs or custom scan providers to store data in a plan tree in a more convenient format than was previously possible. @@ -7911,7 +7911,7 @@ This commit is also listed under libpq and PL/pgSQL --> Make the planner deal with post-scan/join query steps by generating - and comparing Paths, replacing a lot of ad-hoc logic + and comparing Paths, replacing a lot of ad-hoc logic (Tom Lane) @@ -7961,7 +7961,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-24 [c1156411a] Move psql's psqlscan.l into src/fe_utils. --> - Separate out psql's flex lexer to + Separate out psql's flex lexer to make it usable by other client programs (Tom Lane, Kyotaro Horiguchi) @@ -7970,12 +7970,12 @@ This commit is also listed under libpq and PL/pgSQL This eliminates code duplication for programs that need to be able to parse SQL commands well enough to identify command boundaries. Doing that in full generality is more painful than one could - wish, and up to now only psql has really gotten + wish, and up to now only psql has really gotten it right among our supported client programs. - A new source-code subdirectory src/fe_utils/ has + A new source-code subdirectory src/fe_utils/ has been created to hold this and other code that is shared across our client programs. Formerly such sharing was accomplished by symbolic linking or copying source files at build time, which @@ -7988,7 +7988,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-21 [98a64d0bd] Introduce WaitEventSet API. --> - Introduce WaitEventSet API to allow + Introduce WaitEventSet API to allow efficient waiting for event sets that usually do not change from one wait to the next (Andres Freund, Amit Kapila) @@ -7999,16 +7999,16 @@ This commit is also listed under libpq and PL/pgSQL 2016-04-01 [65578341a] Add Generic WAL interface --> - Add a generic interface for writing WAL records + Add a generic interface for writing WAL records (Alexander Korotkov, Petr Jelínek, Markus Nullmeier) - This change allows extensions to write WAL records for + This change allows extensions to write WAL records for changes to pages using a standard layout. The problem of needing to - replay WAL without access to the extension is solved by + replay WAL without access to the extension is solved by having generic replay code. This allows extensions to implement, - for example, index access methods and have WAL + for example, index access methods and have WAL support for them. @@ -8018,13 +8018,13 @@ This commit is also listed under libpq and PL/pgSQL 2016-04-06 [3fe3511d0] Generic Messages for Logical Decoding --> - Support generic WAL messages for logical decoding + Support generic WAL messages for logical decoding (Petr Jelínek, Andres Freund) This feature allows extensions to insert data into the - WAL stream that can be read by logical-decoding + WAL stream that can be read by logical-decoding plugins, but is not connected to physical data restoration. 
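The logical-decoding message feature just described is also exposed at the SQL level; a minimal sketch, assuming the pg_logical_emit_message() function added along with this feature (a decoding plugin such as test_decoding is needed to see the messages on the consuming side):

    -- false = non-transactional: written to WAL and decoded immediately
    SELECT pg_logical_emit_message(false, 'my_prefix', 'hello from the WAL stream');
    -- true = transactional: decoded only if the emitting transaction commits
    SELECT pg_logical_emit_message(true, 'my_prefix', 'sent at commit');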
@@ -8036,12 +8036,12 @@ This commit is also listed under libpq and PL/pgSQL --> Allow SP-GiST operator classes to store an arbitrary - traversal value while descending the index (Alexander + traversal value while descending the index (Alexander Lebedev, Teodor Sigaev) - This is somewhat like the reconstructed value, but it + This is somewhat like the reconstructed value, but it could be any arbitrary chunk of data, not necessarily of the same data type as the indexed column. @@ -8052,12 +8052,12 @@ This commit is also listed under libpq and PL/pgSQL 2016-04-04 [66229ac00] Introduce a LOG_SERVER_ONLY ereport level, which is neve --> - Introduce a LOG_SERVER_ONLY message level for - ereport() (David Steele) + Introduce a LOG_SERVER_ONLY message level for + ereport() (David Steele) - This level acts like LOG except that the message is + This level acts like LOG except that the message is never sent to the client. It is meant for use in auditing and similar applications. @@ -8068,14 +8068,14 @@ This commit is also listed under libpq and PL/pgSQL 2016-07-01 [548af97fc] Provide and use a makefile target to build all generated --> - Provide a Makefile target to build all generated + Provide a Makefile target to build all generated headers (Michael Paquier, Tom Lane) - submake-generated-headers can now be invoked to ensure + submake-generated-headers can now be invoked to ensure that generated backend header files are up-to-date. This is - useful in subdirectories that might be built standalone. + useful in subdirectories that might be built standalone. @@ -8104,8 +8104,8 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-13 [7a8d87483] Rename auto_explain.sample_ratio to sample_rate --> - Add configuration parameter auto_explain.sample_rate to - allow contrib/auto_explain + Add configuration parameter auto_explain.sample_rate to + allow contrib/auto_explain to capture just a configurable fraction of all queries (Craig Ringer, Julien Rouhaud) @@ -8121,7 +8121,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-04-01 [9ee014fc8] Bloom index contrib module --> - Add contrib/bloom module that + Add contrib/bloom module that implements an index access method based on Bloom filtering (Teodor Sigaev, Alexander Korotkov) @@ -8139,7 +8139,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-12-28 [81ee726d8] Code and docs review for cube kNN support. --> - In contrib/cube, introduce + In contrib/cube, introduce distance operators for cubes, and support kNN-style searches in GiST indexes on cube columns (Stas Kelvich) @@ -8150,19 +8150,19 @@ This commit is also listed under libpq and PL/pgSQL 2016-02-03 [41d2c081c] Make hstore_to_jsonb_loose match hstore_to_json_loose on --> - Make contrib/hstore's hstore_to_jsonb_loose() - and hstore_to_json_loose() functions agree on what + Make contrib/hstore's hstore_to_jsonb_loose() + and hstore_to_json_loose() functions agree on what is a number (Tom Lane) - Previously, hstore_to_jsonb_loose() would convert - numeric-looking strings to JSON numbers, rather than - strings, even if they did not exactly match the JSON + Previously, hstore_to_jsonb_loose() would convert + numeric-looking strings to JSON numbers, rather than + strings, even if they did not exactly match the JSON syntax specification for numbers. This was inconsistent with - hstore_to_json_loose(), so tighten the test to match - the JSON syntax. + hstore_to_json_loose(), so tighten the test to match + the JSON syntax. 
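A minimal sketch of the contrib/bloom module mentioned earlier in this group (table and column names are hypothetical):

    CREATE EXTENSION bloom;
    CREATE TABLE bloom_demo (i1 int, i2 int, i3 int, i4 int);
    -- one bloom index serves equality searches on any subset of the indexed columns
    CREATE INDEX bloom_demo_idx ON bloom_demo USING bloom (i1, i2, i3, i4);
    -- equality conditions can use the index; matches are rechecked since the index is lossy
    SELECT * FROM bloom_demo WHERE i2 = 42 AND i4 = 7;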
@@ -8172,7 +8172,7 @@ This commit is also listed under libpq and PL/pgSQL --> Add selectivity estimation functions for - contrib/intarray operators + contrib/intarray operators to improve plans for queries using those operators (Yury Zhuravlev, Alexander Korotkov) @@ -8184,10 +8184,10 @@ This commit is also listed under libpq and PL/pgSQL --> Make contrib/pageinspect's - heap_page_items() function show the raw data in each - tuple, and add new functions tuple_data_split() and - heap_page_item_attrs() for inspection of individual + linkend="pageinspect">contrib/pageinspect's + heap_page_items() function show the raw data in each + tuple, and add new functions tuple_data_split() and + heap_page_item_attrs() for inspection of individual tuple fields (Nikolay Shaplov) @@ -8197,9 +8197,9 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-09 [188f359d3] pgcrypto: support changing S2K iteration count --> - Add an optional S2K iteration count parameter to - contrib/pgcrypto's - pgp_sym_encrypt() function (Jeff Janes) + Add an optional S2K iteration count parameter to + contrib/pgcrypto's + pgp_sym_encrypt() function (Jeff Janes) @@ -8208,8 +8208,8 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-16 [f576b17cd] Add word_similarity to pg_trgm contrib module. --> - Add support for word similarity to - contrib/pg_trgm + Add support for word similarity to + contrib/pg_trgm (Alexander Korotkov, Artur Zakirov) @@ -8226,14 +8226,14 @@ This commit is also listed under libpq and PL/pgSQL --> Add configuration parameter - pg_trgm.similarity_threshold for - contrib/pg_trgm's similarity threshold (Artur Zakirov) + pg_trgm.similarity_threshold for + contrib/pg_trgm's similarity threshold (Artur Zakirov) This threshold has always been configurable, but formerly it was - controlled by special-purpose functions set_limit() - and show_limit(). Those are now deprecated. + controlled by special-purpose functions set_limit() + and show_limit(). Those are now deprecated. 
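For example, the new pg_trgm configuration parameter just described replaces the old set_limit()/show_limit() calls:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    -- formerly: SELECT set_limit(0.4);
    SET pg_trgm.similarity_threshold = 0.4;
    SHOW pg_trgm.similarity_threshold;
    -- the % operator compares similarity('word', 'words') against the threshold
    SELECT 'word' % 'words' AS similar_enough;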
@@ -8242,7 +8242,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-07-20 [97f301464] This supports the triconsistent function for pg_trgm GIN --> - Improve contrib/pg_trgm's GIN operator class to + Improve contrib/pg_trgm's GIN operator class to speed up index searches in which both common and rare keys appear (Jeff Janes) @@ -8254,7 +8254,7 @@ This commit is also listed under libpq and PL/pgSQL --> Improve performance of similarity searches in - contrib/pg_trgm GIN indexes (Christophe Fornaroli) + contrib/pg_trgm GIN indexes (Christophe Fornaroli) @@ -8265,7 +8265,7 @@ This commit is also listed under libpq and PL/pgSQL --> Add contrib/pg_visibility module + linkend="pgvisibility">contrib/pg_visibility module to allow examining table visibility maps (Robert Haas) @@ -8275,9 +8275,9 @@ This commit is also listed under libpq and PL/pgSQL 2015-09-07 [49124613f] contrib/sslinfo: add ssl_extension_info SRF --> - Add ssl_extension_info() - function to contrib/sslinfo, to print information - about SSL extensions present in the X509 + Add ssl_extension_info() + function to contrib/sslinfo, to print information + about SSL extensions present in the X509 certificate used for the current connection (Dmitry Voronin) @@ -8285,7 +8285,7 @@ This commit is also listed under libpq and PL/pgSQL - <link linkend="postgres-fdw"><filename>postgres_fdw</></> + <link linkend="postgres-fdw"><filename>postgres_fdw</filename></link> @@ -8332,12 +8332,12 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-18 [0bf3ae88a] Directly modify foreign tables. --> - When feasible, perform UPDATE or DELETE + When feasible, perform UPDATE or DELETE entirely on the remote server (Etsuro Fujita) - Formerly, remote updates involved sending a SELECT FOR UPDATE + Formerly, remote updates involved sending a SELECT FOR UPDATE command and then updating or deleting the selected rows one-by-one. While that is still necessary if the operation requires any local processing, it can now be done remotely if all elements of the @@ -8355,7 +8355,7 @@ This commit is also listed under libpq and PL/pgSQL - Formerly, postgres_fdw always fetched 100 rows at + Formerly, postgres_fdw always fetched 100 rows at a time from remote queries; now that behavior is configurable. diff --git a/doc/src/sgml/release-old.sgml b/doc/src/sgml/release-old.sgml index 24a7233378..e95e5cac24 100644 --- a/doc/src/sgml/release-old.sgml +++ b/doc/src/sgml/release-old.sgml @@ -15,7 +15,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 7.3.X series. Users are encouraged to update to a newer release branch soon. @@ -39,7 +39,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -50,60 +50,60 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. 
The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 7.3.20 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. @@ -144,27 +144,27 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) @@ -206,22 +206,22 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. + See CREATE FUNCTION for more information. 
- Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -322,13 +322,13 @@ - to_number() and to_char(numeric) - are now STABLE, not IMMUTABLE, for - new initdb installs (Tom) + to_number() and to_char(numeric) + are now STABLE, not IMMUTABLE, for + new initdb installs (Tom) - This is because lc_numeric can potentially + This is because lc_numeric can potentially change the output of these functions. @@ -339,7 +339,7 @@ - This improves psql \d performance also. + This improves psql \d performance also. @@ -376,7 +376,7 @@ Fix corner cases in pattern matching for - psql's \d commands + psql's \d commands Fix index-corrupting bugs in /contrib/ltree (Teodor) Back-port 7.4 spinlock code to improve performance and support @@ -419,9 +419,9 @@ into SQL commands, you should examine them as soon as possible to ensure that they are using recommended escaping techniques. In most cases, applications should be using subroutines provided by - libraries or drivers (such as libpq's - PQescapeStringConn()) to perform string escaping, - rather than relying on ad hoc code to do it. + libraries or drivers (such as libpq's + PQescapeStringConn()) to perform string escaping, + rather than relying on ad hoc code to do it. @@ -431,46 +431,46 @@ Change the server to reject invalidly-encoded multibyte characters in all cases (Tatsuo, Tom) -While PostgreSQL has been moving in this direction for +While PostgreSQL has been moving in this direction for some time, the checks are now applied uniformly to all encodings and all textual input, and are now always errors not merely warnings. This change defends against SQL-injection attacks of the type described in CVE-2006-2313. -Reject unsafe uses of \' in string literals +Reject unsafe uses of \' in string literals As a server-side defense against SQL-injection attacks of the type -described in CVE-2006-2314, the server now only accepts '' and not -\' as a representation of ASCII single quote in SQL string -literals. By default, \' is rejected only when -client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, +described in CVE-2006-2314, the server now only accepts '' and not +\' as a representation of ASCII single quote in SQL string +literals. By default, \' is rejected only when +client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, GB18030, or UHC), which is the scenario in which SQL injection is possible. -A new configuration parameter backslash_quote is available to +A new configuration parameter backslash_quote is available to adjust this behavior when needed. Note that full security against CVE-2006-2314 might require client-side changes; the purpose of -backslash_quote is in part to make it obvious that insecure +backslash_quote is in part to make it obvious that insecure clients are insecure. -Modify libpq's string-escaping routines to be +Modify libpq's string-escaping routines to be aware of encoding considerations -This fixes libpq-using applications for the security +This fixes libpq-using applications for the security issues described in CVE-2006-2313 and CVE-2006-2314. 
-Applications that use multiple PostgreSQL connections -concurrently should migrate to PQescapeStringConn() and -PQescapeByteaConn() to ensure that escaping is done correctly +Applications that use multiple PostgreSQL connections +concurrently should migrate to PQescapeStringConn() and +PQescapeByteaConn() to ensure that escaping is done correctly for the settings in use in each database connection. Applications that -do string escaping by hand should be modified to rely on library +do string escaping by hand should be modified to rely on library routines instead. Fix some incorrect encoding conversion functions -win1251_to_iso, alt_to_iso, -euc_tw_to_big5, euc_tw_to_mic, -mic_to_euc_tw were all broken to varying +win1251_to_iso, alt_to_iso, +euc_tw_to_big5, euc_tw_to_mic, +mic_to_euc_tw were all broken to varying extents. -Clean up stray remaining uses of \' in strings +Clean up stray remaining uses of \' in strings (Bruce, Jan) Fix server to use custom DH SSL parameters correctly (Michael @@ -510,7 +510,7 @@ Fuhr) Fix potential crash in SET -SESSION AUTHORIZATION (CVE-2006-0553) +SESSION AUTHORIZATION (CVE-2006-0553) An unprivileged user could crash the server process, resulting in momentary denial of service to other users, if the server has been compiled with Asserts enabled (which is not the default). @@ -525,14 +525,14 @@ created in 7.3.11 release. Fix race condition that could lead to file already -exists errors during pg_clog file creation +exists errors during pg_clog file creation (Tom) Fix to allow restoring dumps that have cross-schema references to custom operators (Tom) -Portability fix for testing presence of finite -and isinf during configure (Tom) +Portability fix for testing presence of finite +and isinf during configure (Tom) @@ -558,9 +558,9 @@ and isinf during configure (Tom) A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.10, see . - Also, you might need to REINDEX indexes on textual + Also, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or - plperl issues described below. + plperl issues described below. @@ -571,28 +571,28 @@ and isinf during configure (Tom) Fix character string comparison for locales that consider different character combinations as equal, such as Hungarian (Tom) -This might require REINDEX to fix existing indexes on +This might require REINDEX to fix existing indexes on textual columns. Set locale environment variables during postmaster startup -to ensure that plperl won't change the locale later -This fixes a problem that occurred if the postmaster was +to ensure that plperl won't change the locale later +This fixes a problem that occurred if the postmaster was started with environment variables specifying a different locale than what -initdb had been told. Under these conditions, any use of -plperl was likely to lead to corrupt indexes. You might need -REINDEX to fix existing indexes on +initdb had been told. Under these conditions, any use of +plperl was likely to lead to corrupt indexes. You might need +REINDEX to fix existing indexes on textual columns if this has happened to you. 
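A typical repair is simply the following; the index and table names are placeholders:

REINDEX INDEX customers_name_idx;   -- rebuild one affected index
REINDEX TABLE customers;            -- or rebuild every index on the table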
Fix longstanding bug in strpos() and regular expression handling in certain rarely used Asian multi-byte character sets (Tatsuo) -Fix bug in /contrib/pgcrypto gen_salt, +Fix bug in /contrib/pgcrypto gen_salt, which caused it not to use all available salt space for MD5 and XDES algorithms (Marko Kreen, Solar Designer) Salts for Blowfish and standard DES are unaffected. -Fix /contrib/dblink to throw an error, +Fix /contrib/dblink to throw an error, rather than crashing, when the number of columns specified is different from what's actually returned by the query (Joe) @@ -634,13 +634,13 @@ for the wrong page, leading to an Assert failure or data corruption. -/contrib/ltree fixes (Teodor) +/contrib/ltree fixes (Teodor) Fix longstanding planning error for outer joins This bug sometimes caused a bogus error RIGHT JOIN is -only supported with merge-joinable join conditions. +only supported with merge-joinable join conditions. -Prevent core dump in pg_autovacuum when a +Prevent core dump in pg_autovacuum when a table has been dropped @@ -674,25 +674,25 @@ table has been dropped Changes -Fix error that allowed VACUUM to remove -ctid chains too soon, and add more checking in code that follows -ctid links +Fix error that allowed VACUUM to remove +ctid chains too soon, and add more checking in code that follows +ctid links This fixes a long-standing problem that could cause crashes in very rare circumstances. -Fix CHAR() to properly pad spaces to the specified +Fix CHAR() to properly pad spaces to the specified length when using a multiple-byte character set (Yoshiyuki Asaba) -In prior releases, the padding of CHAR() was incorrect +In prior releases, the padding of CHAR() was incorrect because it only padded to the specified number of bytes without considering how many characters were stored. Fix missing rows in queries like UPDATE a=... WHERE -a... with GiST index on column a +a... with GiST index on column a Improve checking for partially-written WAL pages Improve robustness of signal handling when SSL is enabled Various memory leakage fixes Various portability improvements -Fix PL/pgSQL to handle var := var correctly when +Fix PL/pgSQL to handle var := var correctly when the variable is of pass-by-reference type @@ -754,17 +754,17 @@ COMMIT; - The above procedure must be carried out in each database - of an installation, including template1, and ideally - including template0 as well. If you do not fix the + The above procedure must be carried out in each database + of an installation, including template1, and ideally + including template0 as well. If you do not fix the template databases then any subsequently created databases will contain - the same error. template1 can be fixed in the same way - as any other database, but fixing template0 requires + the same error. template1 can be fixed in the same way + as any other database, but fixing template0 requires additional steps. First, from any database issue: UPDATE pg_database SET datallowconn = true WHERE datname = 'template0'; - Next connect to template0 and perform the above repair + Next connect to template0 and perform the above repair procedure. Finally, do: -- re-freeze template0: @@ -792,34 +792,34 @@ VACUUM freshly-inserted data, although the scenario seems of very low probability. There are no known cases of it having caused more than an Assert failure. 
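The repair sequence quoted above finishes, in outline, by re-freezing template0 and closing it off again:

-- re-freeze template0:
VACUUM FREEZE;
-- and disallow connections to it once more:
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';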
-Fix comparisons of TIME WITH TIME ZONE values +Fix comparisons of TIME WITH TIME ZONE values The comparison code was wrong in the case where the ---enable-integer-datetimes configuration switch had been used. -NOTE: if you have an index on a TIME WITH TIME ZONE column, -it will need to be REINDEXed after installing this update, because +--enable-integer-datetimes configuration switch had been used. +NOTE: if you have an index on a TIME WITH TIME ZONE column, +it will need to be REINDEXed after installing this update, because the fix corrects the sort order of column values. -Fix EXTRACT(EPOCH) for -TIME WITH TIME ZONE values +Fix EXTRACT(EPOCH) for +TIME WITH TIME ZONE values Fix mis-display of negative fractional seconds in -INTERVAL values +INTERVAL values This error only occurred when the ---enable-integer-datetimes configuration switch had been used. +--enable-integer-datetimes configuration switch had been used. Additional buffer overrun checks in plpgsql (Neil) -Fix pg_dump to dump trigger names containing % +Fix pg_dump to dump trigger names containing % correctly (Neil) -Prevent to_char(interval) from dumping core for +Prevent to_char(interval) from dumping core for month-related formats -Fix contrib/pgcrypto for newer OpenSSL builds +Fix contrib/pgcrypto for newer OpenSSL builds (Marko Kreen) Still more 64-bit fixes for -contrib/intagg +contrib/intagg Prevent incorrect optimization of functions returning -RECORD +RECORD @@ -850,11 +850,11 @@ month-related formats Changes -Disallow LOAD to non-superusers +Disallow LOAD to non-superusers On platforms that will automatically execute initialization functions of a shared library (this includes at least Windows and ELF-based Unixen), -LOAD can be used to make the server execute arbitrary code. +LOAD can be used to make the server execute arbitrary code. Thanks to NGS Software for reporting this. Check that creator of an aggregate function has the right to execute the specified transition functions @@ -909,7 +909,7 @@ datestyles Repair possible failure to update hint bits on disk Under rare circumstances this oversight could lead to -could not access transaction status failures, which qualifies +could not access transaction status failures, which qualifies it as a potential-data-loss bug. 
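As a concrete instance of the EXTRACT(EPOCH) item earlier in this hunk:

SELECT EXTRACT(EPOCH FROM TIME WITH TIME ZONE '13:00:00+00');   -- 46800 seconds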
Ensure that hashed outer join does not miss tuples @@ -1264,13 +1264,13 @@ operations on bytea columns (Joe) Restore creation of OID column in CREATE TABLE AS / SELECT INTO -Fix pg_dump core dump when dumping views having comments +Fix pg_dump core dump when dumping views having comments Dump DEFERRABLE/INITIALLY DEFERRED constraints properly Fix UPDATE when child table's column numbering differs from parent Increase default value of max_fsm_relations Fix problem when fetching backwards in a cursor for a single-row query Make backward fetch work properly with cursor on SELECT DISTINCT query -Fix problems with loading pg_dump files containing contrib/lo usage +Fix problems with loading pg_dump files containing contrib/lo usage Fix problem with all-numeric user names Fix possible memory leak and core dump during disconnect in libpgtcl Make plpython's spi_execute command handle nulls properly (Andrew Bosma) @@ -1328,7 +1328,7 @@ operations on bytea columns (Joe) Fix a core dump of COPY TO when client/server encodings don't match (Tom) -Allow pg_dump to work with pre-7.2 servers (Philip) +Allow pg_dump to work with pre-7.2 servers (Philip) contrib/adddepend fixes (Tom) Fix problem with deletion of per-user/per-database config settings (Tom) contrib/vacuumlo fix (Tom) @@ -1418,7 +1418,7 @@ operations on bytea columns (Joe) PostgreSQL now records object dependencies, which allows improvements in many areas. DROP statements now take either - CASCADE or RESTRICT to control whether + CASCADE or RESTRICT to control whether dependent objects are also dropped. @@ -1458,7 +1458,7 @@ operations on bytea columns (Joe) A large number of interfaces have been moved to http://gborg.postgresql.org + url="http://gborg.postgresql.org">http://gborg.postgresql.org where they can be developed and released independently. @@ -1469,9 +1469,9 @@ operations on bytea columns (Joe) By default, functions can now take up to 32 parameters, and - identifiers can be up to 63 bytes long. Also, OPAQUE - is now deprecated: there are specific pseudo-datatypes - to represent each of the former meanings of OPAQUE + identifiers can be up to 63 bytes long. Also, OPAQUE + is now deprecated: there are specific pseudo-datatypes + to represent each of the former meanings of OPAQUE in function argument and result types. @@ -1484,12 +1484,12 @@ operations on bytea columns (Joe) Migration to Version 7.3 - A dump/restore using pg_dump is required for those + A dump/restore using pg_dump is required for those wishing to migrate data from any previous release. If your application examines the system catalogs, additional changes will be required due to the introduction of schemas in 7.3; for more information, see: . + url="http://developer.postgresql.org/~momjian/upgrade_tips_7.3">. @@ -1538,7 +1538,7 @@ operations on bytea columns (Joe) serial columns are no longer automatically - UNIQUE; thus, an index will not automatically be + UNIQUE; thus, an index will not automatically be created. 
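With the new behavior the constraint must be requested explicitly, for example (table and column names are illustrative):

CREATE TABLE items (
    id   serial UNIQUE,   -- spell out UNIQUE (or PRIMARY KEY) if you relied
                          -- on the old implicit index
    name text
);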
@@ -1724,7 +1724,7 @@ operations on bytea columns (Joe) Have COPY TO output embedded carriage returns and newlines as \r and \n (Tom) Allow DELIMITER in COPY FROM to be 8-bit clean (Tatsuo) -Make pg_dump use ALTER TABLE ADD PRIMARY KEY, for performance (Neil) +Make pg_dump use ALTER TABLE ADD PRIMARY KEY, for performance (Neil) Disable brackets in multistatement rules (Bruce) Disable VACUUM from being called inside a function (Bruce) Allow dropdb and other scripts to use identifiers with spaces (Bruce) @@ -1736,7 +1736,7 @@ operations on bytea columns (Joe) Add 'SET LOCAL var = value' to set configuration variables for a single transaction (Tom) Allow ANALYZE to run in a transaction (Bruce) Improve COPY syntax using new WITH clauses, keep backward compatibility (Bruce) -Fix pg_dump to consistently output tags in non-ASCII dumps (Bruce) +Fix pg_dump to consistently output tags in non-ASCII dumps (Bruce) Make foreign key constraints clearer in dump file (Rod) Add COMMENT ON CONSTRAINT (Rod) Allow COPY TO/FROM to specify column names (Brent Verner) @@ -1745,9 +1745,9 @@ operations on bytea columns (Joe) Generate failure on short COPY lines rather than pad NULLs (Neil) Fix CLUSTER to preserve all table attributes (Alvaro Herrera) New pg_settings table to view/modify GUC settings (Joe) -Add smart quoting, portability improvements to pg_dump output (Peter) +Add smart quoting, portability improvements to pg_dump output (Peter) Dump serial columns out as SERIAL (Tom) -Enable large file support, >2G for pg_dump (Peter, Philip Warner, Bruce) +Enable large file support, >2G for pg_dump (Peter, Philip Warner, Bruce) Disallow TRUNCATE on tables that are involved in referential constraints (Rod) Have TRUNCATE also auto-truncate the toast table of the relation (Tom) Add clusterdb utility that will auto-cluster an entire database based on previous CLUSTER operations (Alvaro Herrera) @@ -2020,15 +2020,15 @@ VACUUM freshly-inserted data, although the scenario seems of very low probability. There are no known cases of it having caused more than an Assert failure. -Fix EXTRACT(EPOCH) for -TIME WITH TIME ZONE values +Fix EXTRACT(EPOCH) for +TIME WITH TIME ZONE values Additional buffer overrun checks in plpgsql (Neil) Fix pg_dump to dump index names and trigger names containing -% correctly (Neil) -Prevent to_char(interval) from dumping core for +% correctly (Neil) +Prevent to_char(interval) from dumping core for month-related formats -Fix contrib/pgcrypto for newer OpenSSL builds +Fix contrib/pgcrypto for newer OpenSSL builds (Marko Kreen) @@ -2060,11 +2060,11 @@ month-related formats Changes -Disallow LOAD to non-superusers +Disallow LOAD to non-superusers On platforms that will automatically execute initialization functions of a shared library (this includes at least Windows and ELF-based Unixen), -LOAD can be used to make the server execute arbitrary code. +LOAD can be used to make the server execute arbitrary code. Thanks to NGS Software for reporting this. Add needed STRICT marking to some contrib functions (Kris Jurka) @@ -2111,7 +2111,7 @@ datestyles Repair possible failure to update hint bits on disk Under rare circumstances this oversight could lead to -could not access transaction status failures, which qualifies +could not access transaction status failures, which qualifies it as a potential-data-loss bug. Ensure that hashed outer join does not miss tuples @@ -2247,7 +2247,7 @@ since PostgreSQL 7.1. 
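The SET LOCAL facility noted above scopes a setting to a single transaction, for example (the parameter chosen here is arbitrary):

BEGIN;
SET LOCAL statement_timeout = '5s';   -- reverts automatically at COMMIT or ROLLBACK
SELECT count(*) FROM pg_class;
COMMIT;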
Handle pre-1970 date values in newer versions of glibc (Tom) Fix possible hang during server shutdown Prevent spinlock hangs on SMP PPC machines (Tomoyuki Niijima) -Fix pg_dump to properly dump FULL JOIN USING (Tom) +Fix pg_dump to properly dump FULL JOIN USING (Tom) @@ -2281,7 +2281,7 @@ since PostgreSQL 7.1. Allow EXECUTE of "CREATE TABLE AS ... SELECT" in PL/pgSQL (Tom) Fix for compressed transaction log id wraparound (Tom) Fix PQescapeBytea/PQunescapeBytea so that they handle bytes > 0x7f (Tatsuo) -Fix for psql and pg_dump crashing when invoked with non-existent long options (Tatsuo) +Fix for psql and pg_dump crashing when invoked with non-existent long options (Tatsuo) Fix crash when invoking geometric operators (Tom) Allow OPEN cursor(args) (Tom) Fix for rtree_gist index build (Teodor) @@ -2354,7 +2354,7 @@ since PostgreSQL 7.1. Overview - This release improves PostgreSQL for use in + This release improves PostgreSQL for use in high-volume applications. @@ -2368,7 +2368,7 @@ since PostgreSQL 7.1. Vacuuming no longer locks tables, thus allowing normal user - access during the vacuum. A new VACUUM FULL + access during the vacuum. A new VACUUM FULL command does old-style vacuum by locking the table and shrinking the on-disk copy of the table. @@ -2400,7 +2400,7 @@ since PostgreSQL 7.1. The system now computes histogram column statistics during - ANALYZE, allowing much better optimizer choices. + ANALYZE, allowing much better optimizer choices. @@ -2472,15 +2472,15 @@ since PostgreSQL 7.1. - The pg_hba.conf and pg_ident.conf + The pg_hba.conf and pg_ident.conf configuration is now only reloaded after receiving a - SIGHUP signal, not with each connection. + SIGHUP signal, not with each connection. - The function octet_length() now returns the uncompressed data length. + The function octet_length() now returns the uncompressed data length. @@ -2693,7 +2693,7 @@ since PostgreSQL 7.1. Internationalization -National language support in psql, pg_dump, libpq, and server (Peter E) +National language support in psql, pg_dump, libpq, and server (Peter E) Message translations in Chinese (simplified, traditional), Czech, French, German, Hungarian, Russian, Swedish (Peter E, Serguei A. Mokhov, Karel Zak, Weiping He, Zhenbang Wei, Kovacs Zoltan) Make trim, ltrim, rtrim, btrim, lpad, rpad, translate multibyte aware (Tatsuo) Add LATIN5,6,7,8,9,10 support (Tatsuo) @@ -2705,7 +2705,7 @@ since PostgreSQL 7.1. - <application>PL/pgSQL</> + <application>PL/pgSQL</application> Now uses portals for SELECT loops, allowing huge result sets (Jan) CURSOR and REFCURSOR support (Jan) @@ -2745,7 +2745,7 @@ since PostgreSQL 7.1. - <application>psql</> + <application>psql</application> \d displays indexes in unique, primary groupings (Christopher Kings-Lynne) Allow trailing semicolons in backslash commands (Greg Sabino Mullane) @@ -2756,7 +2756,7 @@ since PostgreSQL 7.1. - <application>libpq</> + <application>libpq</application> New function PQescapeString() to escape quotes in command strings (Florian Weimer) New function PQescapeBytea() escapes binary strings for use as SQL string literals @@ -2818,7 +2818,7 @@ since PostgreSQL 7.1. - <application>ECPG</> + <application>ECPG</application> EXECUTE ... INTO implemented (Christof Petig) Multiple row descriptor support (e.g. CARDINALITY) (Christof Petig) @@ -2839,7 +2839,7 @@ since PostgreSQL 7.1. 
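In command form, the vacuuming change described above amounts to the following distinction; the table name is a placeholder:

VACUUM ANALYZE orders;   -- routine vacuum: no exclusive lock, runs alongside normal access
VACUUM FULL orders;      -- old-style behavior: locks the table and compacts the on-disk file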
Python fix fetchone() (Gerhard Haring) Use UTF, Unicode in Tcl where appropriate (Vsevolod Lobko, Reinhard Max) Add Tcl COPY TO/FROM (ljb) -Prevent output of default index op class in pg_dump (Tom) +Prevent output of default index op class in pg_dump (Tom) Fix libpgeasy memory leak (Bruce) @@ -3547,9 +3547,9 @@ ecpg changes (Michael) SQL92 join syntax is now supported, though only as - INNER JOIN for this release. JOIN, - NATURAL JOIN, JOIN/USING, - and JOIN/ON are available, as are + INNER JOIN for this release. JOIN, + NATURAL JOIN, JOIN/USING, + and JOIN/ON are available, as are column correlation names. @@ -3959,7 +3959,7 @@ New multibyte encodings This is basically a cleanup release for 6.5.2. We have added a new - PgAccess that was missing in 6.5.2, and installed an NT-specific fix. + PgAccess that was missing in 6.5.2, and installed an NT-specific fix. @@ -4209,7 +4209,7 @@ Add Win1250 (Czech) support (Pavel Behal) We continue to expand our port list, this time including - Windows NT/ix86 and NetBSD/arm32. + Windows NT/ix86 and NetBSD/arm32. @@ -4234,7 +4234,7 @@ Add Win1250 (Czech) support (Pavel Behal) New and updated material is present throughout the documentation. New FAQs have been - contributed for SGI and AIX platforms. + contributed for SGI and AIX platforms. The Tutorial has introductory information on SQL from Stefan Simkovics. For the User's Guide, there are @@ -4926,7 +4926,7 @@ Correctly handles function calls on the left side of BETWEEN and LIKE clauses. A dump/restore is NOT required for those running 6.3 or 6.3.1. A -make distclean, make, and make install is all that is required. +make distclean, make, and make install is all that is required. This last step should be performed while the postmaster is not running. You should re-link any custom applications that use PostgreSQL libraries. @@ -5003,7 +5003,7 @@ Improvements to the configuration autodetection for installation. A dump/restore is NOT required for those running 6.3. A -make distclean, make, and make install is all that is required. +make distclean, make, and make install is all that is required. This last step should be performed while the postmaster is not running. You should re-link any custom applications that use PostgreSQL libraries. @@ -5128,7 +5128,7 @@ Better identify tcl and tk libs and includes(Bruce) Third, char() fields will now allow faster access than varchar() or - text. Specifically, the text and varchar() have a penalty for access to + text. Specifically, the text and varchar() have a penalty for access to any columns after the first column of this type. char() used to also have this access penalty, but it no longer does. This might suggest that you redesign some of your tables, especially if you have short character @@ -5470,7 +5470,7 @@ to dump the 6.1 database. -Migration from version 1.<replaceable>x</> to version 6.2 +Migration from version 1.<replaceable>x</replaceable> to version 6.2 Those migrating from earlier 1.* releases should first upgrade to 1.09 @@ -5689,11 +5689,11 @@ optimizer which uses genetic - The random results in the random test should cause the + The random results in the random test should cause the random test to be failed, since the regression tests are evaluated using a simple diff. However, - random does not seem to produce random results on my test - machine (Linux/gcc/i686). + random does not seem to produce random results on my test + machine (Linux/gcc/i686). @@ -5990,16 +5990,16 @@ and a script to convert old ASCII files. 
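For reference, the join forms listed in that item are written as follows; table and column names are placeholders:

SELECT * FROM t1 INNER JOIN t2 ON t1.id = t2.id;
SELECT * FROM t1 JOIN t2 USING (id);
SELECT * FROM t1 NATURAL JOIN t2;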
The following notes are for the benefit of users who want to migrate -databases from Postgres95 1.01 and 1.02 to Postgres95 1.02.1. +databases from Postgres95 1.01 and 1.02 to Postgres95 1.02.1. -If you are starting afresh with Postgres95 1.02.1 and do not need +If you are starting afresh with Postgres95 1.02.1 and do not need to migrate old databases, you do not need to read any further. -In order to upgrade older Postgres95 version 1.01 or 1.02 databases to +In order to upgrade older Postgres95 version 1.01 or 1.02 databases to version 1.02.1, the following steps are required: @@ -6013,7 +6013,7 @@ Start up a new 1.02.1 postmaster Add the new built-in functions and operators of 1.02.1 to 1.01 or 1.02 databases. This is done by running the new 1.02.1 server against your own 1.01 or 1.02 database and applying the queries attached at - the end of the file. This can be done easily through psql. If your + the end of the file. This can be done easily through psql. If your 1.01 or 1.02 database is named testdb and you have cut the commands from the end of this file and saved them in addfunc.sql: @@ -6044,7 +6044,7 @@ sed 's/^\.$/\\./g' <in_file >out_file -If you are loading an older binary copy or non-stdout copy, there is no +If you are loading an older binary copy or non-stdout copy, there is no end-of-data character, and hence no conversion necessary. @@ -6135,15 +6135,15 @@ Contributors (apologies to any missed) The following notes are for the benefit of users who want to migrate -databases from Postgres95 1.0 to Postgres95 1.01. +databases from Postgres95 1.0 to Postgres95 1.01. -If you are starting afresh with Postgres95 1.01 and do not need +If you are starting afresh with Postgres95 1.01 and do not need to migrate old databases, you do not need to read any further. -In order to Postgres95 version 1.01 with databases created with -Postgres95 version 1.0, the following steps are required: +In order to use Postgres95 version 1.01 with databases created with +Postgres95 version 1.0, the following steps are required: diff --git a/doc/src/sgml/release.sgml b/doc/src/sgml/release.sgml index f1f4e91252..a815a48b8d 100644 --- a/doc/src/sgml/release.sgml +++ b/doc/src/sgml/release.sgml @@ -44,7 +44,7 @@ For new features, add links to the documentation sections. The release notes contain the significant changes in each - PostgreSQL release, with major features and migration + PostgreSQL release, with major features and migration issues listed at the top. The release notes do not contain changes that affect only a few users or changes that are internal and therefore not user-visible. For example, the optimizer is improved in almost every diff --git a/doc/src/sgml/rowtypes.sgml b/doc/src/sgml/rowtypes.sgml index 9d6768e006..bc2fc9b885 100644 --- a/doc/src/sgml/rowtypes.sgml +++ b/doc/src/sgml/rowtypes.sgml @@ -12,7 +12,7 @@ - A composite type represents the structure of a row or record; + A composite type represents the structure of a row or record; it is essentially just a list of field names and their data types. PostgreSQL allows composite types to be used in many of the same ways that simple types can be used. For example, a @@ -36,11 +36,11 @@ CREATE TYPE inventory_item AS ( price numeric ); - The syntax is comparable to CREATE TABLE, except that only + The syntax is comparable to CREATE TABLE, except that only field names and types can be specified; no constraints (such as NOT - NULL) can presently be included.
Note that the AS keyword is essential; without it, the system will think a different kind - of CREATE TYPE command is meant, and you will get odd syntax + of CREATE TYPE command is meant, and you will get odd syntax errors. @@ -78,12 +78,12 @@ CREATE TABLE inventory_item ( price numeric CHECK (price > 0) ); - then the same inventory_item composite type shown above would + then the same inventory_item composite type shown above would come into being as a byproduct, and could be used just as above. Note however an important restriction of the current implementation: since no constraints are associated with a composite type, the constraints shown in the table - definition do not apply to values of the composite type + definition do not apply to values of the composite type outside the table. (A partial workaround is to use domain types as members of composite types.) @@ -111,7 +111,7 @@ CREATE TABLE inventory_item ( '("fuzzy dice",42,1.99)' - which would be a valid value of the inventory_item type + which would be a valid value of the inventory_item type defined above. To make a field be NULL, write no characters at all in its position in the list. For example, this constant specifies a NULL third field: @@ -150,7 +150,7 @@ ROW('', 42, NULL) ('fuzzy dice', 42, 1.99) ('', 42, NULL) - The ROW expression syntax is discussed in more detail in ROW expression syntax is discussed in more detail in . @@ -163,15 +163,15 @@ ROW('', 42, NULL) name, much like selecting a field from a table name. In fact, it's so much like selecting from a table name that you often have to use parentheses to keep from confusing the parser. For example, you might try to select - some subfields from our on_hand example table with something + some subfields from our on_hand example table with something like: SELECT item.name FROM on_hand WHERE item.price > 9.99; - This will not work since the name item is taken to be a table - name, not a column name of on_hand, per SQL syntax rules. + This will not work since the name item is taken to be a table + name, not a column name of on_hand, per SQL syntax rules. You must write it like this: @@ -186,7 +186,7 @@ SELECT (on_hand.item).name FROM on_hand WHERE (on_hand.item).price > 9.99; Now the parenthesized object is correctly interpreted as a reference to - the item column, and then the subfield can be selected from it. + the item column, and then the subfield can be selected from it. @@ -202,7 +202,7 @@ SELECT (my_func(...)).field FROM ... - The special field name * means all fields, as + The special field name * means all fields, as further explained in . @@ -221,7 +221,7 @@ INSERT INTO mytab (complex_col) VALUES((1.1,2.2)); UPDATE mytab SET complex_col = ROW(1.1,2.2) WHERE ...; - The first example omits ROW, the second uses it; we + The first example omits ROW, the second uses it; we could have done it either way. @@ -234,12 +234,12 @@ UPDATE mytab SET complex_col.r = (complex_col).r + 1 WHERE ...; Notice here that we don't need to (and indeed cannot) put parentheses around the column name appearing just after - SET, but we do need parentheses when referencing the same + SET, but we do need parentheses when referencing the same column in the expression to the right of the equal sign. 
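The mytab statements above and below assume a layout along these lines; the field names r and i are taken from the quoted statements, while the field types are an assumption:

CREATE TYPE complex AS (r double precision, i double precision);
CREATE TABLE mytab (complex_col complex);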
- And we can specify subfields as targets for INSERT, too: + And we can specify subfields as targets for INSERT, too: INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2); @@ -260,10 +260,10 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2); - In PostgreSQL, a reference to a table name (or alias) + In PostgreSQL, a reference to a table name (or alias) in a query is effectively a reference to the composite value of the table's current row. For example, if we had a table - inventory_item as shown + inventory_item as shown above, we could write: SELECT c FROM inventory_item c; @@ -278,12 +278,12 @@ SELECT c FROM inventory_item c; Note however that simple names are matched to column names before table names, so this example works only because there is no column - named c in the query's tables. + named c in the query's tables. The ordinary qualified-column-name - syntax table_name.column_name + syntax table_name.column_name can be understood as applying field selection to the composite value of the table's current row. (For efficiency reasons, it's not actually implemented that way.) @@ -306,13 +306,13 @@ SELECT c.* FROM inventory_item c; SELECT c.name, c.supplier_id, c.price FROM inventory_item c; - PostgreSQL will apply this expansion behavior to + PostgreSQL will apply this expansion behavior to any composite-valued expression, although as shown above, you need to write parentheses - around the value that .* is applied to whenever it's not a - simple table name. For example, if myfunc() is a function - returning a composite type with columns a, - b, and c, then these two queries have the + around the value that .* is applied to whenever it's not a + simple table name. For example, if myfunc() is a function + returning a composite type with columns a, + b, and c, then these two queries have the same result: SELECT (myfunc(x)).* FROM some_table; @@ -322,33 +322,33 @@ SELECT (myfunc(x)).a, (myfunc(x)).b, (myfunc(x)).c FROM some_table; - PostgreSQL handles column expansion by + PostgreSQL handles column expansion by actually transforming the first form into the second. So, in this - example, myfunc() would get invoked three times per row + example, myfunc() would get invoked three times per row with either syntax. If it's an expensive function you may wish to avoid that, which you can do with a query like: SELECT (m).* FROM (SELECT myfunc(x) AS m FROM some_table OFFSET 0) ss; - The OFFSET 0 clause keeps the optimizer - from flattening the sub-select to arrive at the form with - multiple calls of myfunc(). + The OFFSET 0 clause keeps the optimizer + from flattening the sub-select to arrive at the form with + multiple calls of myfunc(). - The composite_value.* syntax results in + The composite_value.* syntax results in column expansion of this kind when it appears at the top level of - a SELECT output - list, a RETURNING - list in INSERT/UPDATE/DELETE, - a VALUES clause, or + a SELECT output + list, a RETURNING + list in INSERT/UPDATE/DELETE, + a VALUES clause, or a row constructor. In all other contexts (including when nested inside one of those - constructs), attaching .* to a composite value does not - change the value, since it means all columns and so the + constructs), attaching .* to a composite value does not + change the value, since it means all columns and so the same composite value is produced again. 
For example, - if somefunc() accepts a composite-valued argument, + if somefunc() accepts a composite-valued argument, these queries are the same: @@ -356,16 +356,16 @@ SELECT somefunc(c.*) FROM inventory_item c; SELECT somefunc(c) FROM inventory_item c; - In both cases, the current row of inventory_item is + In both cases, the current row of inventory_item is passed to the function as a single composite-valued argument. - Even though .* does nothing in such cases, using it is good + Even though .* does nothing in such cases, using it is good style, since it makes clear that a composite value is intended. In - particular, the parser will consider c in c.* to + particular, the parser will consider c in c.* to refer to a table name or alias, not to a column name, so that there is - no ambiguity; whereas without .*, it is not clear - whether c means a table name or a column name, and in fact + no ambiguity; whereas without .*, it is not clear + whether c means a table name or a column name, and in fact the column-name interpretation will be preferred if there is a column - named c. + named c. @@ -376,27 +376,27 @@ SELECT * FROM inventory_item c ORDER BY c; SELECT * FROM inventory_item c ORDER BY c.*; SELECT * FROM inventory_item c ORDER BY ROW(c.*); - All of these ORDER BY clauses specify the row's composite + All of these ORDER BY clauses specify the row's composite value, resulting in sorting the rows according to the rules described in . However, - if inventory_item contained a column - named c, the first case would be different from the + if inventory_item contained a column + named c, the first case would be different from the others, as it would mean to sort by that column only. Given the column names previously shown, these queries are also equivalent to those above: SELECT * FROM inventory_item c ORDER BY ROW(c.name, c.supplier_id, c.price); SELECT * FROM inventory_item c ORDER BY (c.name, c.supplier_id, c.price); - (The last case uses a row constructor with the key word ROW + (The last case uses a row constructor with the key word ROW omitted.) Another special syntactical behavior associated with composite values is - that we can use functional notation for extracting a field + that we can use functional notation for extracting a field of a composite value. The simple way to explain this is that - the notations field(table) - and table.field + the notations field(table) + and table.field are interchangeable. For example, these queries are equivalent: @@ -418,7 +418,7 @@ SELECT c.somefunc FROM inventory_item c; This equivalence between functional notation and field notation makes it possible to use functions on composite types to implement - computed fields. + computed fields. computed field @@ -427,7 +427,7 @@ SELECT c.somefunc FROM inventory_item c; computed An application using the last query above wouldn't need to be directly - aware that somefunc isn't a real column of the table. + aware that somefunc isn't a real column of the table. @@ -438,7 +438,7 @@ SELECT c.somefunc FROM inventory_item c; interpretation will be preferred, so that such a function could not be called without tricks. One way to force the function interpretation is to schema-qualify the function name, that is, write - schema.func(compositevalue). + schema.func(compositevalue). 
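For instance, a computed field could be provided by a function over the composite type; the function below is hypothetical and not part of rowtypes.sgml:

CREATE FUNCTION price_with_tax(item inventory_item) RETURNS numeric
    LANGUAGE sql AS $$ SELECT (item).price * 1.20 $$;

SELECT c.name, c.price_with_tax FROM inventory_item c;   -- same as price_with_tax(c)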
@@ -450,8 +450,8 @@ SELECT c.somefunc FROM inventory_item c; The external text representation of a composite value consists of items that are interpreted according to the I/O conversion rules for the individual field types, plus decoration that indicates the composite structure. - The decoration consists of parentheses (( and )) - around the whole value, plus commas (,) between adjacent + The decoration consists of parentheses (( and )) + around the whole value, plus commas (,) between adjacent items. Whitespace outside the parentheses is ignored, but within the parentheses it is considered part of the field value, and might or might not be significant depending on the input conversion rules for the field data type. @@ -466,7 +466,7 @@ SELECT c.somefunc FROM inventory_item c; As shown previously, when writing a composite value you can write double quotes around any individual field value. - You must do so if the field value would otherwise + You must do so if the field value would otherwise confuse the composite-value parser. In particular, fields containing parentheses, commas, double quotes, or backslashes must be double-quoted. To put a double quote or backslash in a quoted composite field value, @@ -481,7 +481,7 @@ SELECT c.somefunc FROM inventory_item c; A completely empty field value (no characters at all between the commas or parentheses) represents a NULL. To write a value that is an empty - string rather than NULL, write "". + string rather than NULL, write "". @@ -497,7 +497,7 @@ SELECT c.somefunc FROM inventory_item c; Remember that what you write in an SQL command will first be interpreted as a string literal, and then as a composite. This doubles the number of backslashes you need (assuming escape string syntax is used). - For example, to insert a text field + For example, to insert a text field containing a double quote and a backslash in a composite value, you'd need to write: @@ -505,11 +505,11 @@ INSERT ... VALUES (E'("\\"\\\\")'); The string-literal processor removes one level of backslashes, so that what arrives at the composite-value parser looks like - ("\"\\"). In turn, the string - fed to the text data type's input routine - becomes "\. (If we were working + ("\"\\"). In turn, the string + fed to the text data type's input routine + becomes "\. (If we were working with a data type whose input routine also treated backslashes specially, - bytea for example, we might need as many as eight backslashes + bytea for example, we might need as many as eight backslashes in the command to get one backslash into the stored composite field.) Dollar quoting (see ) can be used to avoid the need to double backslashes. @@ -518,10 +518,10 @@ INSERT ... VALUES (E'("\\"\\\\")'); - The ROW constructor syntax is usually easier to work with + The ROW constructor syntax is usually easier to work with than the composite-literal syntax when writing composite values in SQL commands. - In ROW, individual field values are written the same way + In ROW, individual field values are written the same way they would be written when not members of a composite. diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index 61c801a693..095bf6459c 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -99,7 +99,7 @@ the range table - range table + range table @@ -150,7 +150,7 @@ the target list - target list + target list @@ -168,9 +168,9 @@ DELETE commands don't need a normal target list because they don't produce any result. 
Instead, the rule system - adds a special CTID entry to the empty target list, + adds a special CTID entry to the empty target list, to allow the executor to find the row to be deleted. - (CTID is added when the result relation is an ordinary + (CTID is added when the result relation is an ordinary table. If it is a view, a whole-row variable is added instead, as described in .) @@ -178,7 +178,7 @@ For INSERT commands, the target list describes the new rows that should go into the result relation. It consists of the - expressions in the VALUES clause or the ones from the + expressions in the VALUES clause or the ones from the SELECT clause in INSERT ... SELECT. The first step of the rewrite process adds target list entries for any columns that were not assigned to by @@ -193,8 +193,8 @@ rule system, it contains just the expressions from the SET column = expression part of the command. The planner will handle missing columns by inserting expressions that copy the values - from the old row into the new one. Just as for DELETE, - the rule system adds a CTID or whole-row variable so that + from the old row into the new one. Just as for DELETE, + the rule system adds a CTID or whole-row variable so that the executor can identify the old row to be updated. @@ -218,7 +218,7 @@ this expression is a Boolean that tells whether the operation (INSERT, UPDATE, DELETE, or SELECT) for the - final result row should be executed or not. It corresponds to the WHERE clause + final result row should be executed or not. It corresponds to the WHERE clause of an SQL statement. @@ -230,18 +230,18 @@ - The query's join tree shows the structure of the FROM clause. + The query's join tree shows the structure of the FROM clause. For a simple query like SELECT ... FROM a, b, c, the join tree is just - a list of the FROM items, because we are allowed to join them in - any order. But when JOIN expressions, particularly outer joins, + a list of the FROM items, because we are allowed to join them in + any order. But when JOIN expressions, particularly outer joins, are used, we have to join in the order shown by the joins. - In that case, the join tree shows the structure of the JOIN expressions. The - restrictions associated with particular JOIN clauses (from ON or - USING expressions) are stored as qualification expressions attached + In that case, the join tree shows the structure of the JOIN expressions. The + restrictions associated with particular JOIN clauses (from ON or + USING expressions) are stored as qualification expressions attached to those join-tree nodes. It turns out to be convenient to store - the top-level WHERE expression as a qualification attached to the + the top-level WHERE expression as a qualification attached to the top-level join-tree item, too. So really the join tree represents - both the FROM and WHERE clauses of a SELECT. + both the FROM and WHERE clauses of a SELECT. @@ -252,7 +252,7 @@ - The other parts of the query tree like the ORDER BY + The other parts of the query tree like the ORDER BY clause aren't of interest here. 
The rule system substitutes some entries there while applying rules, but that doesn't have much to do with the fundamentals of the rule @@ -274,8 +274,8 @@ - view - implementation through rules + view + implementation through rules @@ -313,7 +313,7 @@ CREATE RULE "_RETURN" AS ON SELECT TO myview DO INSTEAD - Rules ON SELECT are applied to all queries as the last step, even + Rules ON SELECT are applied to all queries as the last step, even if the command given is an INSERT, UPDATE or DELETE. And they have different semantics from rules on the other command types in that they modify the @@ -322,10 +322,10 @@ CREATE RULE "_RETURN" AS ON SELECT TO myview DO INSTEAD - Currently, there can be only one action in an ON SELECT rule, and it must - be an unconditional SELECT action that is INSTEAD. This restriction was + Currently, there can be only one action in an ON SELECT rule, and it must + be an unconditional SELECT action that is INSTEAD. This restriction was required to make rules safe enough to open them for ordinary users, and - it restricts ON SELECT rules to act like views. + it restricts ON SELECT rules to act like views. @@ -423,12 +423,12 @@ CREATE VIEW shoe_ready AS The CREATE VIEW command for the shoelace view (which is the simplest one we - have) will create a relation shoelace and an entry in + have) will create a relation shoelace and an entry in pg_rewrite that tells that there is a - rewrite rule that must be applied whenever the relation shoelace + rewrite rule that must be applied whenever the relation shoelace is referenced in a query's range table. The rule has no rule - qualification (discussed later, with the non-SELECT rules, since - SELECT rules currently cannot have them) and it is INSTEAD. Note + qualification (discussed later, with the non-SELECT rules, since + SELECT rules currently cannot have them) and it is INSTEAD. Note that rule qualifications are not the same as query qualifications. The action of our rule has a query qualification. The action of the rule is one query tree that is a copy of the @@ -438,7 +438,7 @@ CREATE VIEW shoe_ready AS The two extra range - table entries for NEW and OLD that you can see in + table entries for NEW and OLD that you can see in the pg_rewrite entry aren't of interest for SELECT rules. @@ -533,7 +533,7 @@ SELECT shoelace.sl_name, shoelace.sl_avail, There is one difference however: the subquery's range table has two - extra entries shoelace old and shoelace new. These entries don't + extra entries shoelace old and shoelace new. These entries don't participate directly in the query, since they aren't referenced by the subquery's join tree or target list. The rewriter uses them to store the access privilege check information that was originally present @@ -548,8 +548,8 @@ SELECT shoelace.sl_name, shoelace.sl_avail, the remaining range-table entries in the top query (in this example there are no more), and it will recursively check the range-table entries in the added subquery to see if any of them reference views. (But it - won't expand old or new — otherwise we'd have infinite recursion!) - In this example, there are no rewrite rules for shoelace_data or unit, + won't expand old or new — otherwise we'd have infinite recursion!) + In this example, there are no rewrite rules for shoelace_data or unit, so rewriting is complete and the above is the final result given to the planner. 
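Spelled out with concrete (and invented) column types, the equivalence described at the start of this hunk is:

CREATE VIEW myview AS SELECT * FROM mytab;

-- is essentially the same as the pair below; this is only meant to show
-- what the view machinery sets up internally:
CREATE TABLE myview (a integer, b integer);
CREATE RULE "_RETURN" AS ON SELECT TO myview DO INSTEAD
    SELECT * FROM mytab;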
@@ -671,8 +671,8 @@ SELECT shoe_ready.shoename, shoe_ready.sh_avail, command other than a SELECT, the result relation points to the range-table entry where the result should go. Everything else is absolutely the same. So having two tables - t1 and t2 with columns a and - b, the query trees for the two statements: + t1 and t2 with columns a and + b, the query trees for the two statements: SELECT t2.b FROM t1, t2 WHERE t1.a = t2.a; @@ -685,27 +685,27 @@ UPDATE t1 SET b = t2.b FROM t2 WHERE t1.a = t2.a; - The range tables contain entries for the tables t1 and t2. + The range tables contain entries for the tables t1 and t2. The target lists contain one variable that points to column - b of the range table entry for table t2. + b of the range table entry for table t2. - The qualification expressions compare the columns a of both + The qualification expressions compare the columns a of both range-table entries for equality. - The join trees show a simple join between t1 and t2. + The join trees show a simple join between t1 and t2. @@ -714,7 +714,7 @@ UPDATE t1 SET b = t2.b FROM t2 WHERE t1.a = t2.a; The consequence is, that both query trees result in similar execution plans: They are both joins over the two tables. For the - UPDATE the missing columns from t1 are added to + UPDATE the missing columns from t1 are added to the target list by the planner and the final query tree will read as: @@ -736,7 +736,7 @@ SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a; one is a SELECT command and the other is an UPDATE is handled higher up in the executor, where it knows that this is an UPDATE, and it knows that - this result should go into table t1. But which of the rows + this result should go into table t1. But which of the rows that are there has to be replaced by the new row? @@ -744,12 +744,12 @@ SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a; To resolve this problem, another entry is added to the target list in UPDATE (and also in DELETE) statements: the current tuple ID - (CTID).CTID + (CTID).CTID This is a system column containing the file block number and position in the block for the row. Knowing - the table, the CTID can be used to retrieve the - original row of t1 to be updated. After adding the - CTID to the target list, the query actually looks like: + the table, the CTID can be used to retrieve the + original row of t1 to be updated. After adding the + CTID to the target list, the query actually looks like: SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; @@ -759,9 +759,9 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; the stage. Old table rows aren't overwritten, and this is why ROLLBACK is fast. In an UPDATE, the new result row is inserted into the table (after stripping the - CTID) and in the row header of the old row, which the - CTID pointed to, the cmax and - xmax entries are set to the current command counter + CTID) and in the row header of the old row, which the + CTID pointed to, the cmax and + xmax entries are set to the current command counter and current transaction ID. Thus the old row is hidden, and after the transaction commits the vacuum cleaner can eventually remove the dead row. @@ -780,7 +780,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; The above demonstrates how the rule system incorporates view definitions into the original query tree. 
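The t1/t2 walkthrough above assumes two like-shaped tables; a minimal setup (column types are an assumption) is:

CREATE TABLE t1 (a integer, b integer);
CREATE TABLE t2 (a integer, b integer);

SELECT t2.b FROM t1, t2 WHERE t1.a = t2.a;
UPDATE t1 SET b = t2.b FROM t2 WHERE t1.a = t2.a;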
In the second example, a simple SELECT from one view created a final - query tree that is a join of 4 tables (unit was used twice with + query tree that is a join of 4 tables (unit was used twice with different names). @@ -811,7 +811,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; DELETE? Doing the substitutions described above would give a query tree in which the result relation points at a subquery range-table entry, which will not - work. There are several ways in which PostgreSQL + work. There are several ways in which PostgreSQL can support the appearance of updating a view, however. @@ -821,20 +821,20 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; underlying base relation so that the INSERT, UPDATE, or DELETE is applied to the base relation in the appropriate way. Views that are - simple enough for this are called automatically - updatable. For detailed information on the kinds of view that can + simple enough for this are called automatically + updatable. For detailed information on the kinds of view that can be automatically updated, see . Alternatively, the operation may be handled by a user-provided - INSTEAD OF trigger on the view. + INSTEAD OF trigger on the view. Rewriting works slightly differently in this case. For INSERT, the rewriter does nothing at all with the view, leaving it as the result relation for the query. For UPDATE and DELETE, it's still necessary to expand the - view query to produce the old rows that the command will + view query to produce the old rows that the command will attempt to update or delete. So the view is expanded as normal, but another unexpanded range-table entry is added to the query to represent the view in its capacity as the result relation. @@ -843,21 +843,21 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; The problem that now arises is how to identify the rows to be updated in the view. Recall that when the result relation - is a table, a special CTID entry is added to the target + is a table, a special CTID entry is added to the target list to identify the physical locations of the rows to be updated. This does not work if the result relation is a view, because a view - does not have any CTID, since its rows do not have + does not have any CTID, since its rows do not have actual physical locations. Instead, for an UPDATE - or DELETE operation, a special wholerow + or DELETE operation, a special wholerow entry is added to the target list, which expands to include all columns from the view. The executor uses this value to supply the - old row to the INSTEAD OF trigger. It is + old row to the INSTEAD OF trigger. It is up to the trigger to work out what to update based on the old and new row values. - Another possibility is for the user to define INSTEAD + Another possibility is for the user to define INSTEAD rules that specify substitute actions for INSERT, UPDATE, and DELETE commands on a view. These rules will rewrite the command, typically into a command @@ -868,8 +868,8 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; Note that rules are evaluated first, rewriting the original query before it is planned and executed. Therefore, if a view has - INSTEAD OF triggers as well as rules on INSERT, - UPDATE, or DELETE, then the rules will be + INSTEAD OF triggers as well as rules on INSERT, + UPDATE, or DELETE, then the rules will be evaluated first, and depending on the result, the triggers may not be used at all. 
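For concreteness, an INSTEAD OF trigger of the kind discussed here might look as follows; all names are hypothetical and the view is assumed to expose columns a and b:

CREATE FUNCTION myview_upd() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    -- route the update through to the underlying table
    UPDATE mytab SET b = NEW.b WHERE a = OLD.a;
    RETURN NEW;
END;
$$;

CREATE TRIGGER myview_upd_trig
    INSTEAD OF UPDATE ON myview
    FOR EACH ROW EXECUTE PROCEDURE myview_upd();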
@@ -883,7 +883,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; - If there are no INSTEAD rules or INSTEAD OF + If there are no INSTEAD rules or INSTEAD OF triggers for the view, and the rewriter cannot automatically rewrite the query as an update on the underlying base relation, an error will be thrown because the executor cannot update a view as such. @@ -902,13 +902,13 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; - materialized view - implementation through rules + materialized view + implementation through rules - view - materialized + view + materialized @@ -1030,7 +1030,7 @@ SELECT count(*) FROM words WHERE word = 'caterpiler'; (1 row) - With EXPLAIN ANALYZE, we see: + With EXPLAIN ANALYZE, we see: Aggregate (cost=21763.99..21764.00 rows=1 width=0) (actual time=188.180..188.181 rows=1 loops=1) @@ -1104,7 +1104,7 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; -Rules on <command>INSERT</>, <command>UPDATE</>, and <command>DELETE</> +Rules on <command>INSERT</command>, <command>UPDATE</command>, and <command>DELETE</command> rule @@ -1122,8 +1122,8 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; - Rules that are defined on INSERT, UPDATE, - and DELETE are significantly different from the view rules + Rules that are defined on INSERT, UPDATE, + and DELETE are significantly different from the view rules described in the previous section. First, their CREATE RULE command allows more: @@ -1142,13 +1142,13 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; - They can be INSTEAD or ALSO (the default). + They can be INSTEAD or ALSO (the default). - The pseudorelations NEW and OLD become useful. + The pseudorelations NEW and OLD become useful. @@ -1167,7 +1167,7 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; In many cases, tasks that could be performed by rules - on INSERT/UPDATE/DELETE are better done + on INSERT/UPDATE/DELETE are better done with triggers. Triggers are notationally a bit more complicated, but their semantics are much simpler to understand. Rules tend to have surprising results when the original query contains volatile functions: volatile @@ -1177,9 +1177,9 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; Also, there are some cases that are not supported by these types of rules at - all, notably including WITH clauses in the original query and - multiple-assignment sub-SELECTs in the SET list - of UPDATE queries. This is because copying these constructs + all, notably including WITH clauses in the original query and + multiple-assignment sub-SELECTs in the SET list + of UPDATE queries. This is because copying these constructs into a rule query would result in multiple evaluations of the sub-query, contrary to the express intent of the query's author. @@ -1198,8 +1198,8 @@ CREATE [ OR REPLACE ] RULE name AS in mind. - In the following, update rules means rules that are defined - on INSERT, UPDATE, or DELETE. + In the following, update rules means rules that are defined + on INSERT, UPDATE, or DELETE. @@ -1208,16 +1208,16 @@ CREATE [ OR REPLACE ] RULE name AS object and event given in the CREATE RULE command. For update rules, the rule system creates a list of query trees. Initially the query-tree list is empty. - There can be zero (NOTHING key word), one, or multiple actions. + There can be zero (NOTHING key word), one, or multiple actions. To simplify, we will look at a rule with one action. 
This rule - can have a qualification or not and it can be INSTEAD or - ALSO (the default). + can have a qualification or not and it can be INSTEAD or + ALSO (the default). What is a rule qualification? It is a restriction that tells when the actions of the rule should be done and when not. This - qualification can only reference the pseudorelations NEW and/or OLD, + qualification can only reference the pseudorelations NEW and/or OLD, which basically represent the relation that was given as object (but with a special meaning). @@ -1228,8 +1228,8 @@ CREATE [ OR REPLACE ] RULE name AS - No qualification, with either ALSO or - INSTEAD + No qualification, with either ALSO or + INSTEAD the query tree from the rule action with the original query @@ -1239,7 +1239,7 @@ CREATE [ OR REPLACE ] RULE name AS - Qualification given and ALSO + Qualification given and ALSO the query tree from the rule action with the rule @@ -1250,7 +1250,7 @@ CREATE [ OR REPLACE ] RULE name AS - Qualification given and INSTEAD + Qualification given and INSTEAD the query tree from the rule action with the rule @@ -1262,17 +1262,17 @@ CREATE [ OR REPLACE ] RULE name AS - Finally, if the rule is ALSO, the unchanged original query tree is - added to the list. Since only qualified INSTEAD rules already add the + Finally, if the rule is ALSO, the unchanged original query tree is + added to the list. Since only qualified INSTEAD rules already add the original query tree, we end up with either one or two output query trees for a rule with one action. - For ON INSERT rules, the original query (if not suppressed by INSTEAD) + For ON INSERT rules, the original query (if not suppressed by INSTEAD) is done before any actions added by rules. This allows the actions to - see the inserted row(s). But for ON UPDATE and ON - DELETE rules, the original query is done after the actions added by rules. + see the inserted row(s). But for ON UPDATE and ON + DELETE rules, the original query is done after the actions added by rules. This ensures that the actions can see the to-be-updated or to-be-deleted rows; otherwise, the actions might do nothing because they find no rows matching their qualifications. @@ -1293,12 +1293,12 @@ CREATE [ OR REPLACE ] RULE name AS The query trees found in the actions of the pg_rewrite system catalog are only templates. Since they can reference the range-table entries for - NEW and OLD, some substitutions have to be made before they can be - used. For any reference to NEW, the target list of the original + NEW and OLD, some substitutions have to be made before they can be + used. For any reference to NEW, the target list of the original query is searched for a corresponding entry. If found, that - entry's expression replaces the reference. Otherwise, NEW means the - same as OLD (for an UPDATE) or is replaced by - a null value (for an INSERT). Any reference to OLD is + entry's expression replaces the reference. Otherwise, NEW means the + same as OLD (for an UPDATE) or is replaced by + a null value (for an INSERT). Any reference to OLD is replaced by a reference to the range-table entry that is the result relation. @@ -1313,7 +1313,7 @@ CREATE [ OR REPLACE ] RULE name AS A First Rule Step by Step - Say we want to trace changes to the sl_avail column in the + Say we want to trace changes to the sl_avail column in the shoelace_data relation. 
So we set up a log table and a rule that conditionally writes a log entry when an UPDATE is performed on @@ -1367,7 +1367,7 @@ UPDATE shoelace_data SET sl_avail = 6 WHERE shoelace_data.sl_name = 'sl7'; - There is a rule log_shoelace that is ON UPDATE with the rule + There is a rule log_shoelace that is ON UPDATE with the rule qualification expression: @@ -1384,15 +1384,15 @@ INSERT INTO shoelace_log VALUES ( (This looks a little strange since you cannot normally write - INSERT ... VALUES ... FROM. The FROM + INSERT ... VALUES ... FROM. The FROM clause here is just to indicate that there are range-table entries - in the query tree for new and old. + in the query tree for new and old. These are needed so that they can be referenced by variables in the INSERT command's query tree.) - The rule is a qualified ALSO rule, so the rule system + The rule is a qualified ALSO rule, so the rule system has to return two query trees: the modified rule action and the original query tree. In step 1, the range table of the original query is incorporated into the rule's action query tree. This results in: @@ -1406,7 +1406,7 @@ INSERT INTO shoelace_log VALUES ( In step 2, the rule qualification is added to it, so the result set - is restricted to rows where sl_avail changes: + is restricted to rows where sl_avail changes: INSERT INTO shoelace_log VALUES ( @@ -1417,10 +1417,10 @@ INSERT INTO shoelace_log VALUES ( WHERE new.sl_avail <> old.sl_avail; - (This looks even stranger, since INSERT ... VALUES doesn't have - a WHERE clause either, but the planner and executor will have no + (This looks even stranger, since INSERT ... VALUES doesn't have + a WHERE clause either, but the planner and executor will have no difficulty with it. They need to support this same functionality - anyway for INSERT ... SELECT.) + anyway for INSERT ... SELECT.) @@ -1440,7 +1440,7 @@ INSERT INTO shoelace_log VALUES ( - Step 4 replaces references to NEW by the target list entries from the + Step 4 replaces references to NEW by the target list entries from the original query tree or by the matching variable references from the result relation: @@ -1457,7 +1457,7 @@ INSERT INTO shoelace_log VALUES ( - Step 5 changes OLD references into result relation references: + Step 5 changes OLD references into result relation references: INSERT INTO shoelace_log VALUES ( @@ -1471,7 +1471,7 @@ INSERT INTO shoelace_log VALUES ( - That's it. Since the rule is ALSO, we also output the + That's it. Since the rule is ALSO, we also output the original query tree. In short, the output from the rule system is a list of two query trees that correspond to these statements: @@ -1502,8 +1502,8 @@ UPDATE shoelace_data SET sl_color = 'green' no log entry would get written. In that case, the original query tree does not contain a target list entry for - sl_avail, so NEW.sl_avail will get - replaced by shoelace_data.sl_avail. Thus, the extra + sl_avail, so NEW.sl_avail will get + replaced by shoelace_data.sl_avail. Thus, the extra command generated by the rule is: @@ -1527,8 +1527,8 @@ UPDATE shoelace_data SET sl_avail = 0 WHERE sl_color = 'black'; - four rows in fact get updated (sl1, sl2, sl3, and sl4). - But sl3 already has sl_avail = 0. In this case, the original + four rows in fact get updated (sl1, sl2, sl3, and sl4). + But sl3 already has sl_avail = 0. 
In this case, the original query trees qualification is different and that results in the extra query tree: @@ -1559,7 +1559,7 @@ SELECT shoelace_data.sl_name, 0, Cooperation with Views -viewupdating +viewupdating A simple way to protect view relations from the mentioned @@ -1579,7 +1579,7 @@ CREATE RULE shoe_del_protect AS ON DELETE TO shoe If someone now tries to do any of these operations on the view relation shoe, the rule system will apply these rules. Since the rules have - no actions and are INSTEAD, the resulting list of + no actions and are INSTEAD, the resulting list of query trees will be empty and the whole query will become nothing because there is nothing left to be optimized or executed after the rule system is done with it. @@ -1621,8 +1621,8 @@ CREATE RULE shoelace_del AS ON DELETE TO shoelace - If you want to support RETURNING queries on the view, - you need to make the rules include RETURNING clauses that + If you want to support RETURNING queries on the view, + you need to make the rules include RETURNING clauses that compute the view rows. This is usually pretty trivial for views on a single table, but it's a bit tedious for join views such as shoelace. An example for the insert case is: @@ -1643,9 +1643,9 @@ CREATE RULE shoelace_ins AS ON INSERT TO shoelace FROM unit u WHERE shoelace_data.sl_unit = u.un_name); - Note that this one rule supports both INSERT and - INSERT RETURNING queries on the view — the - RETURNING clause is simply ignored for INSERT. + Note that this one rule supports both INSERT and + INSERT RETURNING queries on the view — the + RETURNING clause is simply ignored for INSERT. @@ -1785,7 +1785,7 @@ UPDATE shoelace_data AND shoelace_data.sl_name = shoelace.sl_name; - Again it's an INSTEAD rule and the previous query tree is trashed. + Again it's an INSTEAD rule and the previous query tree is trashed. Note that this query still uses the view shoelace. But the rule system isn't finished with this step, so it continues and applies the _RETURN rule on it, and we get: @@ -2041,16 +2041,16 @@ GRANT SELECT ON phone_number TO assistant; Nobody except that user (and the database superusers) can access the - phone_data table. But because of the GRANT, + phone_data table. But because of the GRANT, the assistant can run a SELECT on the - phone_number view. The rule system will rewrite the - SELECT from phone_number into a - SELECT from phone_data. + phone_number view. The rule system will rewrite the + SELECT from phone_number into a + SELECT from phone_data. Since the user is the owner of - phone_number and therefore the owner of the rule, the - read access to phone_data is now checked against the user's + phone_number and therefore the owner of the rule, the + read access to phone_data is now checked against the user's privileges and the query is permitted. The check for accessing - phone_number is also performed, but this is done + phone_number is also performed, but this is done against the invoking user, so nobody but the user and the assistant can use it. @@ -2059,19 +2059,19 @@ GRANT SELECT ON phone_number TO assistant; The privileges are checked rule by rule. So the assistant is for now the only one who can see the public phone numbers. But the assistant can set up another view and grant access to that to the public. Then, anyone - can see the phone_number data through the assistant's view. + can see the phone_number data through the assistant's view. What the assistant cannot do is to create a view that directly - accesses phone_data. 
(Actually the assistant can, but it will not work since + accesses phone_data. (Actually the assistant can, but it will not work since every access will be denied during the permission checks.) And as soon as the user notices that the assistant opened - their phone_number view, the user can revoke the assistant's access. Immediately, any + their phone_number view, the user can revoke the assistant's access. Immediately, any access to the assistant's view would fail. One might think that this rule-by-rule checking is a security hole, but in fact it isn't. But if it did not work this way, the assistant - could set up a table with the same columns as phone_number and + could set up a table with the same columns as phone_number and copy the data to there once per day. Then it's the assistant's own data and the assistant can grant access to everyone they want. A GRANT command means, I trust you. @@ -2090,9 +2090,9 @@ CREATE VIEW phone_number AS SELECT person, phone FROM phone_data WHERE phone NOT LIKE '412%'; This view might seem secure, since the rule system will rewrite any - SELECT from phone_number into a - SELECT from phone_data and add the - qualification that only entries where phone does not begin + SELECT from phone_number into a + SELECT from phone_data and add the + qualification that only entries where phone does not begin with 412 are wanted. But if the user can create their own functions, it is not difficult to convince the planner to execute the user-defined function prior to the NOT LIKE expression. @@ -2107,7 +2107,7 @@ $$ LANGUAGE plpgsql COST 0.0000000000000000000001; SELECT * FROM phone_number WHERE tricky(person, phone); - Every person and phone number in the phone_data table will be + Every person and phone number in the phone_data table will be printed as a NOTICE, because the planner will choose to execute the inexpensive tricky function before the more expensive NOT LIKE. Even if the user is @@ -2119,17 +2119,17 @@ SELECT * FROM phone_number WHERE tricky(person, phone); Similar considerations apply to update rules. In the examples of the previous section, the owner of the tables in the example - database could grant the privileges SELECT, - INSERT, UPDATE, and DELETE on - the shoelace view to someone else, but only - SELECT on shoelace_log. The rule action to + database could grant the privileges SELECT, + INSERT, UPDATE, and DELETE on + the shoelace view to someone else, but only + SELECT on shoelace_log. The rule action to write log entries will still be executed successfully, and that other user could see the log entries. But they could not create fake entries, nor could they manipulate or remove existing ones. In this case, there is no possibility of subverting the rules by convincing the planner to alter the order of operations, because the only rule - which references shoelace_log is an unqualified - INSERT. This might not be true in more complex scenarios. + which references shoelace_log is an unqualified + INSERT. This might not be true in more complex scenarios. @@ -2189,7 +2189,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS The PostgreSQL server returns a command - status string, such as INSERT 149592 1, for each + status string, such as INSERT 149592 1, for each command it receives. This is simple enough when there are no rules involved, but what happens when the query is rewritten by rules? 
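For the privilege arrangement described a little earlier (full DML rights on the shoelace view, but only read access to shoelace_log), the corresponding commands might look like the sketch below. This is only an illustration; the role name webuser is a made-up placeholder, not something defined in this chapter.

    -- webuser is a hypothetical role used only for this illustration
    GRANT SELECT, INSERT, UPDATE, DELETE ON shoelace TO webuser;
    GRANT SELECT ON shoelace_log TO webuser;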
@@ -2200,10 +2200,10 @@ CREATE VIEW phone_number WITH (security_barrier) AS - If there is no unconditional INSTEAD rule for the query, then + If there is no unconditional INSTEAD rule for the query, then the originally given query will be executed, and its command status will be returned as usual. (But note that if there were - any conditional INSTEAD rules, the negation of their qualifications + any conditional INSTEAD rules, the negation of their qualifications will have been added to the original query. This might reduce the number of rows it processes, and if so the reported status will be affected.) @@ -2212,10 +2212,10 @@ CREATE VIEW phone_number WITH (security_barrier) AS - If there is any unconditional INSTEAD rule for the query, then + If there is any unconditional INSTEAD rule for the query, then the original query will not be executed at all. In this case, the server will return the command status for the last query - that was inserted by an INSTEAD rule (conditional or + that was inserted by an INSTEAD rule (conditional or unconditional) and is of the same command type (INSERT, UPDATE, or DELETE) as the original query. If no query @@ -2228,7 +2228,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS - The programmer can ensure that any desired INSTEAD rule is the one + The programmer can ensure that any desired INSTEAD rule is the one that sets the command status in the second case, by giving it the alphabetically last rule name among the active rules, so that it gets applied last. @@ -2253,7 +2253,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS implemented using the PostgreSQL rule system. One of the things that cannot be implemented by rules are some kinds of constraints, especially foreign keys. It is possible - to place a qualified rule that rewrites a command to NOTHING + to place a qualified rule that rewrites a command to NOTHING if the value of a column does not appear in another table. But then the data is silently thrown away and that's not a good idea. If checks for valid values are required, @@ -2264,7 +2264,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS In this chapter, we focused on using rules to update views. All of the update rule examples in this chapter can also be implemented - using INSTEAD OF triggers on the views. Writing such + using INSTEAD OF triggers on the views. Writing such triggers is often easier than writing rules, particularly if complex logic is required to perform the update. @@ -2298,8 +2298,8 @@ CREATE TABLE software ( Both tables have many thousands of rows and the indexes on - hostname are unique. The rule or trigger should - implement a constraint that deletes rows from software + hostname are unique. The rule or trigger should + implement a constraint that deletes rows from software that reference a deleted computer. The trigger would use this command: @@ -2307,8 +2307,8 @@ DELETE FROM software WHERE hostname = $1; Since the trigger is called for each individual row deleted from - computer, it can prepare and save the plan for this - command and pass the hostname value in the + computer, it can prepare and save the plan for this + command and pass the hostname value in the parameter. 
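The text above describes the trigger-based alternative only in terms of the DELETE FROM software WHERE hostname = $1 command it would issue; the trigger itself is not shown there. A minimal sketch, assuming the computer and software tables above, could look like this (the function and trigger names are invented for the example):

    -- Sketch only: delete dependent software rows when a computer row is deleted.
    -- The names computer_del_trig and computer_del_trigger are invented for this illustration.
    CREATE FUNCTION computer_del_trig() RETURNS trigger AS $$
    BEGIN
        DELETE FROM software WHERE hostname = OLD.hostname;
        RETURN OLD;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER computer_del_trigger
        BEFORE DELETE ON computer
        FOR EACH ROW EXECUTE PROCEDURE computer_del_trig();

Because the body is a single parameterized DELETE, PL/pgSQL can reuse its cached plan across the many per-row invocations, which corresponds to the prepared-plan behavior described above.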
The rule would be written as: @@ -2324,7 +2324,7 @@ CREATE RULE computer_del AS ON DELETE TO computer DELETE FROM computer WHERE hostname = 'mypc.local.net'; - the table computer is scanned by index (fast), and the + the table computer is scanned by index (fast), and the command issued by the trigger would also use an index scan (also fast). The extra command from the rule would be: @@ -2348,8 +2348,8 @@ Nestloop With the next delete we want to get rid of all the 2000 computers - where the hostname starts with - old. There are two possible commands to do that. One + where the hostname starts with + old. There are two possible commands to do that. One is: @@ -2389,17 +2389,17 @@ Nestloop This shows, that the planner does not realize that the - qualification for hostname in - computer could also be used for an index scan on - software when there are multiple qualification - expressions combined with AND, which is what it does + qualification for hostname in + computer could also be used for an index scan on + software when there are multiple qualification + expressions combined with AND, which is what it does in the regular-expression version of the command. The trigger will get invoked once for each of the 2000 old computers that have to be deleted, and that will result in one index scan over - computer and 2000 index scans over - software. The rule implementation will do it with two + computer and 2000 index scans over + software. The rule implementation will do it with two commands that use indexes. And it depends on the overall size of - the table software whether the rule will still be faster in the + the table software whether the rule will still be faster in the sequential scan situation. 2000 command executions from the trigger over the SPI manager take some time, even if all the index blocks will soon be in the cache. @@ -2412,7 +2412,7 @@ DELETE FROM computer WHERE manufacturer = 'bim'; Again this could result in many rows to be deleted from - computer. So the trigger will again run many commands + computer. So the trigger will again run many commands through the executor. The command generated by the rule will be: @@ -2421,7 +2421,7 @@ DELETE FROM software WHERE computer.manufacturer = 'bim' The plan for that command will again be the nested loop over two - index scans, only using a different index on computer: + index scans, only using a different index on computer: Nestloop diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index 6c4c7f4a8e..c8bc684c0e 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -73,12 +73,12 @@ /usr/local/pgsql/data or /var/lib/pgsql/data are popular. To initialize a database cluster, use the command ,initdb which is + linkend="app-initdb">,initdb which is installed with PostgreSQL. The desired file system location of your database cluster is indicated by the option, for example: -$ initdb -D /usr/local/pgsql/data +$ initdb -D /usr/local/pgsql/data Note that you must execute this command while logged into the PostgreSQL user account, which is @@ -96,9 +96,9 @@ Alternatively, you can run initdb via the - programpg_ctl like so: + programpg_ctl like so: -$ pg_ctl -D /usr/local/pgsql/data initdb +$ pg_ctl -D /usr/local/pgsql/data initdb This may be more intuitive if you are using pg_ctl for starting and stopping the @@ -148,14 +148,14 @@ postgres$ initdb -D /usr/local/pgsql/data initdb's , or options to assign a password to the database superuser. 
- password - of the superuser + password + of the superuser - Also, specify - Non-C and non-POSIX locales rely on the + Non-C and non-POSIX locales rely on the operating system's collation library for character set ordering. This controls the ordering of keys stored in indexes. For this reason, a cluster cannot switch to an incompatible collation library version, @@ -201,14 +201,14 @@ postgres$ initdb -D /usr/local/pgsql/data Many installations create their database clusters on file systems - (volumes) other than the machine's root volume. If you + (volumes) other than the machine's root volume. If you choose to do this, it is not advisable to try to use the secondary volume's topmost directory (mount point) as the data directory. Best practice is to create a directory within the mount-point directory that is owned by the PostgreSQL user, and then create the data directory within that. This avoids permissions problems, particularly for operations such - as pg_upgrade, and it also ensures clean failures if + as pg_upgrade, and it also ensures clean failures if the secondary volume is taken offline. @@ -220,30 +220,30 @@ postgres$ initdb -D /usr/local/pgsql/data Network File Systems - NFSNetwork File Systems - Network Attached Storage (NAS)Network File Systems + NFSNetwork File Systems + Network Attached Storage (NAS)Network File Systems Many installations create their database clusters on network file - systems. Sometimes this is done via NFS, or by using a - Network Attached Storage (NAS) device that uses - NFS internally. PostgreSQL does nothing - special for NFS file systems, meaning it assumes - NFS behaves exactly like locally-connected drives. - If the client or server NFS implementation does not + systems. Sometimes this is done via NFS, or by using a + Network Attached Storage (NAS) device that uses + NFS internally. PostgreSQL does nothing + special for NFS file systems, meaning it assumes + NFS behaves exactly like locally-connected drives. + If the client or server NFS implementation does not provide standard file system semantics, this can cause reliability problems (see ). - Specifically, delayed (asynchronous) writes to the NFS + Specifically, delayed (asynchronous) writes to the NFS server can cause data corruption problems. If possible, mount the - NFS file system synchronously (without caching) to avoid - this hazard. Also, soft-mounting the NFS file system is + NFS file system synchronously (without caching) to avoid + this hazard. Also, soft-mounting the NFS file system is not recommended. - Storage Area Networks (SAN) typically use communication - protocols other than NFS, and may or may not be subject + Storage Area Networks (SAN) typically use communication + protocols other than NFS, and may or may not be subject to hazards of this sort. It's advisable to consult the vendor's documentation concerning data consistency guarantees. PostgreSQL cannot be more reliable than @@ -260,7 +260,7 @@ postgres$ initdb -D /usr/local/pgsql/data Before anyone can access the database, you must start the database server. The database server program is called - postgres.postgres + postgres.postgres The postgres program must know where to find the data it is supposed to use. This is done with the option. Thus, the simplest way to start the @@ -281,8 +281,8 @@ $ postgres -D /usr/local/pgsql/data $ postgres -D /usr/local/pgsql/data >logfile 2>&1 & - It is important to store the server's stdout and - stderr output somewhere, as shown above. 
It will help + It is important to store the server's stdout and + stderr output somewhere, as shown above. It will help for auditing purposes and to diagnose problems. (See for a more thorough discussion of log file handling.) @@ -312,13 +312,13 @@ pg_ctl start -l logfile Normally, you will want to start the database server when the computer boots. - booting - starting the server during + booting + starting the server during Autostart scripts are operating-system-specific. There are a few distributed with PostgreSQL in the - contrib/start-scripts directory. Installing one will require + contrib/start-scripts directory. Installing one will require root privileges. @@ -327,7 +327,7 @@ pg_ctl start -l logfile at boot time. Many systems have a file /etc/rc.local or /etc/rc.d/rc.local. Others use init.d or - rc.d directories. Whatever you do, the server must be + rc.d directories. Whatever you do, the server must be run by the PostgreSQL user account and not by root or any other user. Therefore you probably should form your commands using @@ -348,7 +348,7 @@ su postgres -c 'pg_ctl start -D /usr/local/pgsql/data -l serverlog' For FreeBSD, look at the file contrib/start-scripts/freebsd in the PostgreSQL source distribution. - FreeBSDstart script + FreeBSDstart script @@ -356,7 +356,7 @@ su postgres -c 'pg_ctl start -D /usr/local/pgsql/data -l serverlog' On OpenBSD, add the following lines to the file /etc/rc.local: - OpenBSDstart script + OpenBSDstart script if [ -x /usr/local/pgsql/bin/pg_ctl -a -x /usr/local/pgsql/bin/postgres ]; then su -l postgres -c '/usr/local/pgsql/bin/pg_ctl start -s -l /var/postgresql/log -D /usr/local/pgsql/data' @@ -369,7 +369,7 @@ fi On Linux systems either add - Linuxstart script + Linuxstart script /usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data @@ -421,7 +421,7 @@ WantedBy=multi-user.target FreeBSD or Linux start scripts, depending on preference. - NetBSDstart script + NetBSDstart script @@ -430,12 +430,12 @@ WantedBy=multi-user.target On Solaris, create a file called /etc/init.d/postgresql that contains the following line: - Solarisstart script + Solarisstart script su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data" - Then, create a symbolic link to it in /etc/rc3.d as - S99postgresql. + Then, create a symbolic link to it in /etc/rc3.d as + S99postgresql. @@ -509,7 +509,7 @@ DETAIL: Failed system call was semget(5440126, 17, 03600). does not mean you've run out of disk space. It means your kernel's limit on the number of System V semaphores is smaller than the number + class="osname">System V semaphores is smaller than the number PostgreSQL wants to create. As above, you might be able to work around the problem by starting the server with a reduced number of allowed connections @@ -518,15 +518,15 @@ DETAIL: Failed system call was semget(5440126, 17, 03600). - If you get an illegal system call error, it is likely that + If you get an illegal system call error, it is likely that shared memory or semaphores are not supported in your kernel at all. In that case your only option is to reconfigure the kernel to enable these features. - Details about configuring System V - IPC facilities are given in . + Details about configuring System V + IPC facilities are given in . 
@@ -586,10 +586,10 @@ psql: could not connect to server: No such file or directory Managing Kernel Resources - PostgreSQL can sometimes exhaust various operating system + PostgreSQL can sometimes exhaust various operating system resource limits, especially when multiple copies of the server are running on the same system, or in very large installations. This section explains - the kernel resources used by PostgreSQL and the steps you + the kernel resources used by PostgreSQL and the steps you can take to resolve problems related to kernel resource consumption. @@ -605,27 +605,27 @@ psql: could not connect to server: No such file or directory - PostgreSQL requires the operating system to provide - inter-process communication (IPC) features, specifically + PostgreSQL requires the operating system to provide + inter-process communication (IPC) features, specifically shared memory and semaphores. Unix-derived systems typically provide - System V IPC, - POSIX IPC, or both. - Windows has its own implementation of + System V IPC, + POSIX IPC, or both. + Windows has its own implementation of these features and is not discussed here. The complete lack of these facilities is usually manifested by an - Illegal system call error upon server + Illegal system call error upon server start. In that case there is no alternative but to reconfigure your - kernel. PostgreSQL won't work without them. + kernel. PostgreSQL won't work without them. This situation is rare, however, among modern operating systems. - Upon starting the server, PostgreSQL normally allocates + Upon starting the server, PostgreSQL normally allocates a very small amount of System V shared memory, as well as a much larger - amount of POSIX (mmap) shared memory. + amount of POSIX (mmap) shared memory. In addition a significant number of semaphores, which can be either System V or POSIX style, are created at server startup. Currently, POSIX semaphores are used on Linux and FreeBSD systems while other @@ -634,7 +634,7 @@ psql: could not connect to server: No such file or directory - Prior to PostgreSQL 9.3, only System V shared memory + Prior to PostgreSQL 9.3, only System V shared memory was used, so the amount of System V shared memory required to start the server was much larger. If you are running an older version of the server, please consult the documentation for your server version. @@ -642,9 +642,9 @@ psql: could not connect to server: No such file or directory - System V IPC features are typically constrained by + System V IPC features are typically constrained by system-wide allocation limits. - When PostgreSQL exceeds one of these limits, + When PostgreSQL exceeds one of these limits, the server will refuse to start and should leave an instructive error message describing the problem and what to do about it. 
(See also - <systemitem class="osname">System V</> <acronym>IPC</> Parameters + <systemitem class="osname">System V</systemitem> <acronym>IPC</acronym> Parameters - Name - Description - Values needed to run one PostgreSQL instance + Name + Description + Values needed to run one PostgreSQL instance - SHMMAX - Maximum size of shared memory segment (bytes) + SHMMAX + Maximum size of shared memory segment (bytes) at least 1kB, but the default is usually much higher - SHMMIN - Minimum size of shared memory segment (bytes) - 1 + SHMMIN + Minimum size of shared memory segment (bytes) + 1 - SHMALL - Total amount of shared memory available (bytes or pages) + SHMALL + Total amount of shared memory available (bytes or pages) same as SHMMAX if bytes, or ceil(SHMMAX/PAGE_SIZE) if pages, - plus room for other applications + plus room for other applications - SHMSEG - Maximum number of shared memory segments per process - only 1 segment is needed, but the default is much higher + SHMSEG + Maximum number of shared memory segments per process + only 1 segment is needed, but the default is much higher - SHMMNI - Maximum number of shared memory segments system-wide - like SHMSEG plus room for other applications + SHMMNI + Maximum number of shared memory segments system-wide + like SHMSEG plus room for other applications - SEMMNI - Maximum number of semaphore identifiers (i.e., sets) - at least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) plus room for other applications + SEMMNI + Maximum number of semaphore identifiers (i.e., sets) + at least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) plus room for other applications - SEMMNS - Maximum number of semaphores system-wide - ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) * 17 plus room for other applications + SEMMNS + Maximum number of semaphores system-wide + ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) * 17 plus room for other applications - SEMMSL - Maximum number of semaphores per set - at least 17 + SEMMSL + Maximum number of semaphores per set + at least 17 - SEMMAP - Number of entries in semaphore map - see text + SEMMAP + Number of entries in semaphore map + see text - SEMVMX - Maximum value of semaphore - at least 1000 (The default is often 32767; do not change unless necessary) + SEMVMX + Maximum value of semaphore + at least 1000 (The default is often 32767; do not change unless necessary) @@ -734,28 +734,28 @@ psql: could not connect to server: No such file or directory
- PostgreSQL requires a few bytes of System V shared memory + PostgreSQL requires a few bytes of System V shared memory (typically 48 bytes, on 64-bit platforms) for each copy of the server. On most modern operating systems, this amount can easily be allocated. However, if you are running many copies of the server, or if other applications are also using System V shared memory, it may be necessary to - increase SHMALL, which is the total amount of System V shared - memory system-wide. Note that SHMALL is measured in pages + increase SHMALL, which is the total amount of System V shared + memory system-wide. Note that SHMALL is measured in pages rather than bytes on many systems. Less likely to cause problems is the minimum size for shared - memory segments (SHMMIN), which should be at most - approximately 32 bytes for PostgreSQL (it is + memory segments (SHMMIN), which should be at most + approximately 32 bytes for PostgreSQL (it is usually just 1). The maximum number of segments system-wide - (SHMMNI) or per-process (SHMSEG) are unlikely + (SHMMNI) or per-process (SHMSEG) are unlikely to cause a problem unless your system has them set to zero. When using System V semaphores, - PostgreSQL uses one semaphore per allowed connection + PostgreSQL uses one semaphore per allowed connection (), allowed autovacuum worker process () and allowed background process (), in sets of 16. @@ -763,25 +763,25 @@ psql: could not connect to server: No such file or directory also contain a 17th semaphore which contains a magic number, to detect collision with semaphore sets used by other applications. The maximum number of semaphores in the system - is set by SEMMNS, which consequently must be at least - as high as max_connections plus - autovacuum_max_workers plus max_worker_processes, + is set by SEMMNS, which consequently must be at least + as high as max_connections plus + autovacuum_max_workers plus max_worker_processes, plus one extra for each 16 allowed connections plus workers (see the formula in ). The parameter SEMMNI + linkend="sysvipc-parameters">). The parameter SEMMNI determines the limit on the number of semaphore sets that can exist on the system at one time. Hence this parameter must be at - least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16). + least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16). Lowering the number of allowed connections is a temporary workaround for failures, which are usually confusingly worded No space - left on device, from the function semget. + left on device, from the function semget. In some cases it might also be necessary to increase - SEMMAP to be at least on the order of - SEMMNS. This parameter defines the size of the semaphore + SEMMAP to be at least on the order of + SEMMNS. This parameter defines the size of the semaphore resource map, in which each contiguous block of available semaphores needs an entry. When a semaphore set is freed it is either added to an existing entry that is adjacent to the freed block or it is @@ -792,9 +792,9 @@ psql: could not connect to server: No such file or directory - Various other settings related to semaphore undo, such as - SEMMNU and SEMUME, do not affect - PostgreSQL. + Various other settings related to semaphore undo, such as + SEMMNU and SEMUME, do not affect + PostgreSQL. 
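The SEMMNI and SEMMNS minimums above follow directly from the formula ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16). As an illustrative convenience (not something the documentation itself provides), a running server can compute its own requirements:

    -- Mirrors the SEMMNI/SEMMNS formula given above, using the server's own settings.
    SELECT ceil((current_setting('max_connections')::int
                 + current_setting('autovacuum_max_workers')::int
                 + current_setting('max_worker_processes')::int
                 + 5) / 16.0)      AS min_semmni,
           ceil((current_setting('max_connections')::int
                 + current_setting('autovacuum_max_workers')::int
                 + current_setting('max_worker_processes')::int
                 + 5) / 16.0) * 17 AS min_semmns;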
@@ -810,8 +810,8 @@ psql: could not connect to server: No such file or directory - AIX - AIXIPC configuration + AIX + AIXIPC configuration @@ -833,8 +833,8 @@ psql: could not connect to server: No such file or directory - FreeBSD - FreeBSDIPC configuration + FreeBSD + FreeBSDIPC configuration @@ -861,8 +861,8 @@ kern.ipc.semmnu=256 After modifying these values a reboot is required for the new settings to take effect. - (Note: FreeBSD does not use SEMMAP. Older versions - would accept but ignore a setting for kern.ipc.semmap; + (Note: FreeBSD does not use SEMMAP. Older versions + would accept but ignore a setting for kern.ipc.semmap; newer versions reject it altogether.) @@ -874,8 +874,8 @@ kern.ipc.semmnu=256 - If running in FreeBSD jails by enabling sysctl's - security.jail.sysvipc_allowed, postmasters + If running in FreeBSD jails by enabling sysctl's + security.jail.sysvipc_allowed, postmasters running in different jails should be run by different operating system users. This improves security because it prevents non-root users from interfering with shared memory or semaphores in different jails, @@ -886,19 +886,19 @@ kern.ipc.semmnu=256 - FreeBSD versions before 4.0 work like - OpenBSD (see below). + FreeBSD versions before 4.0 work like + OpenBSD (see below). - NetBSD - NetBSDIPC configuration + NetBSD + NetBSDIPC configuration - In NetBSD 5.0 and later, + In NetBSD 5.0 and later, IPC parameters can be adjusted using sysctl, for example: @@ -916,24 +916,24 @@ kern.ipc.semmnu=256 - NetBSD versions before 5.0 work like - OpenBSD (see below), except that - parameters should be set with the keyword options not - option. + NetBSD versions before 5.0 work like + OpenBSD (see below), except that + parameters should be set with the keyword options not + option. - OpenBSD - OpenBSDIPC configuration + OpenBSD + OpenBSDIPC configuration - The options SYSVSHM and SYSVSEM need + The options SYSVSHM and SYSVSEM need to be enabled when the kernel is compiled. (They are by default.) The maximum size of shared memory is determined by - the option SHMMAXPGS (in pages). The following + the option SHMMAXPGS (in pages). The following shows an example of how to set the various parameters: option SYSVSHM @@ -958,30 +958,30 @@ option SEMMAP=256 - HP-UX - HP-UXIPC configuration + HP-UX + HP-UXIPC configuration The default settings tend to suffice for normal installations. - On HP-UX 10, the factory default for - SEMMNS is 128, which might be too low for larger + On HP-UX 10, the factory default for + SEMMNS is 128, which might be too low for larger database sites. - IPC parameters can be set in the System - Administration Manager (SAM) under + IPC parameters can be set in the System + Administration Manager (SAM) under Kernel - ConfigurationConfigurable Parameters. Choose - Create A New Kernel when you're done. + ConfigurationConfigurable Parameters. Choose + Create A New Kernel when you're done. - Linux - LinuxIPC configuration + Linux + LinuxIPC configuration @@ -1023,13 +1023,13 @@ option SEMMAP=256 - macOS - macOSIPC configuration + macOS + macOSIPC configuration The recommended method for configuring shared memory in macOS - is to create a file named /etc/sysctl.conf, + is to create a file named /etc/sysctl.conf, containing variable assignments such as: kern.sysv.shmmax=4194304 @@ -1039,32 +1039,32 @@ kern.sysv.shmseg=8 kern.sysv.shmall=1024 Note that in some macOS versions, - all five shared-memory parameters must be set in - /etc/sysctl.conf, else the values will be ignored. 
+ all five shared-memory parameters must be set in + /etc/sysctl.conf, else the values will be ignored. Beware that recent releases of macOS ignore attempts to set - SHMMAX to a value that isn't an exact multiple of 4096. + SHMMAX to a value that isn't an exact multiple of 4096. - SHMALL is measured in 4 kB pages on this platform. + SHMALL is measured in 4 kB pages on this platform. In older macOS versions, you will need to reboot to have changes in the shared memory parameters take effect. As of 10.5 it is possible to - change all but SHMMNI on the fly, using - sysctl. But it's still best to set up your preferred - values via /etc/sysctl.conf, so that the values will be + change all but SHMMNI on the fly, using + sysctl. But it's still best to set up your preferred + values via /etc/sysctl.conf, so that the values will be kept across reboots. - The file /etc/sysctl.conf is only honored in macOS + The file /etc/sysctl.conf is only honored in macOS 10.3.9 and later. If you are running a previous 10.3.x release, - you must edit the file /etc/rc + you must edit the file /etc/rc and change the values in the following commands: sysctl -w kern.sysv.shmmax @@ -1074,27 +1074,27 @@ sysctl -w kern.sysv.shmseg sysctl -w kern.sysv.shmall Note that - /etc/rc is usually overwritten by macOS system updates, + /etc/rc is usually overwritten by macOS system updates, so you should expect to have to redo these edits after each update. In macOS 10.2 and earlier, instead edit these commands in the file - /System/Library/StartupItems/SystemTuning/SystemTuning. + /System/Library/StartupItems/SystemTuning/SystemTuning. - Solaris 2.6 to 2.9 (Solaris + Solaris 2.6 to 2.9 (Solaris 6 to Solaris 9) - SolarisIPC configuration + SolarisIPC configuration The relevant settings can be changed in - /etc/system, for example: + /etc/system, for example: set shmsys:shminfo_shmmax=0x2000000 set shmsys:shminfo_shmmin=1 @@ -1114,30 +1114,30 @@ set semsys:seminfo_semmsl=32 - Solaris 2.10 (Solaris + Solaris 2.10 (Solaris 10) and later - OpenSolaris + OpenSolaris In Solaris 10 and later, and OpenSolaris, the default shared memory and semaphore settings are good enough for most - PostgreSQL applications. Solaris now defaults - to a SHMMAX of one-quarter of system RAM. + PostgreSQL applications. Solaris now defaults + to a SHMMAX of one-quarter of system RAM. To further adjust this setting, use a project setting associated - with the postgres user. For example, run the - following as root: + with the postgres user. For example, run the + following as root: projadd -c "PostgreSQL DB User" -K "project.max-shm-memory=(privileged,8GB,deny)" -U postgres -G postgres user.postgres - This command adds the user.postgres project and - sets the shared memory maximum for the postgres + This command adds the user.postgres project and + sets the shared memory maximum for the postgres user to 8GB, and takes effect the next time that user logs - in, or when you restart PostgreSQL (not reload). - The above assumes that PostgreSQL is run by - the postgres user in the postgres + in, or when you restart PostgreSQL (not reload). + The above assumes that PostgreSQL is run by + the postgres user in the postgres group. No server reboot is required. @@ -1152,11 +1152,11 @@ project.max-msg-ids=(priv,4096,deny) - Additionally, if you are running PostgreSQL + Additionally, if you are running PostgreSQL inside a zone, you may need to raise the zone resource usage limits as well. 
See "Chapter2: Projects and Tasks" in the - System Administrator's Guide for more - information on projects and prctl. + System Administrator's Guide for more + information on projects and prctl. @@ -1259,7 +1259,7 @@ RemoveIPC=no limit can only be changed by the root user. The system call setrlimit is responsible for setting these parameters. The shell's built-in command ulimit - (Bourne shells) or limit (csh) is + (Bourne shells) or limit (csh) is used to control the resource limits from the command line. On BSD-derived systems the file /etc/login.conf controls the various resource limits set during login. See the @@ -1320,7 +1320,7 @@ default:\ processes to open large numbers of files; if more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the - system-wide limit, you can set PostgreSQL's PostgreSQL's configuration parameter to limit the consumption of open files. @@ -1380,36 +1380,36 @@ Out of Memory: Killed process 12345 (postgres). system running out of memory, you can avoid the problem by changing your configuration. In some cases, it may help to lower memory-related configuration parameters, particularly - shared_buffers - and work_mem. In + shared_buffers + and work_mem. In other cases, the problem may be caused by allowing too many connections to the database server itself. In many cases, it may be better to reduce - max_connections + max_connections and instead make use of external connection-pooling software.
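To check where the memory-related parameters mentioned above currently stand on a given server, a quick illustrative query against pg_settings is enough:

    -- Current values of the parameters discussed in the out-of-memory section.
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'max_connections');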
On Linux 2.6 and later, it is possible to modify the - kernel's behavior so that it will not overcommit memory. + kernel's behavior so that it will not overcommit memory. Although this setting will not prevent the OOM killer from being invoked + url="http://lwn.net/Articles/104179/">OOM killer from being invoked altogether, it will lower the chances significantly and will therefore lead to more robust system behavior. This is done by selecting strict overcommit mode via sysctl: sysctl -w vm.overcommit_memory=2 - or placing an equivalent entry in /etc/sysctl.conf. + or placing an equivalent entry in /etc/sysctl.conf. You might also wish to modify the related setting - vm.overcommit_ratio. For details see the kernel documentation + vm.overcommit_ratio. For details see the kernel documentation file . Another approach, which can be used with or without altering - vm.overcommit_memory, is to set the process-specific - OOM score adjustment value for the postmaster process to - -1000, thereby guaranteeing it will not be targeted by the OOM + vm.overcommit_memory, is to set the process-specific + OOM score adjustment value for the postmaster process to + -1000, thereby guaranteeing it will not be targeted by the OOM killer. The simplest way to do this is to execute echo -1000 > /proc/self/oom_score_adj @@ -1426,33 +1426,33 @@ export PG_OOM_ADJUST_VALUE=0 These settings will cause postmaster child processes to run with the normal OOM score adjustment of zero, so that the OOM killer can still target them at need. You could use some other value for - PG_OOM_ADJUST_VALUE if you want the child processes to run - with some other OOM score adjustment. (PG_OOM_ADJUST_VALUE + PG_OOM_ADJUST_VALUE if you want the child processes to run + with some other OOM score adjustment. (PG_OOM_ADJUST_VALUE can also be omitted, in which case it defaults to zero.) If you do not - set PG_OOM_ADJUST_FILE, the child processes will run with the + set PG_OOM_ADJUST_FILE, the child processes will run with the same OOM score adjustment as the postmaster, which is unwise since the whole point is to ensure that the postmaster has a preferential setting. - Older Linux kernels do not offer /proc/self/oom_score_adj, + Older Linux kernels do not offer /proc/self/oom_score_adj, but may have a previous version of the same functionality called - /proc/self/oom_adj. This works the same except the disable - value is -17 not -1000. + /proc/self/oom_adj. This works the same except the disable + value is -17 not -1000. Some vendors' Linux 2.4 kernels are reported to have early versions of the 2.6 overcommit sysctl parameter. However, setting - vm.overcommit_memory to 2 + vm.overcommit_memory to 2 on a 2.4 kernel that does not have the relevant code will make things worse, not better. It is recommended that you inspect the actual kernel source code (see the function - vm_enough_memory in the file mm/mmap.c) + vm_enough_memory in the file mm/mmap.c) to verify what is supported in your kernel before you try this in a 2.4 - installation. The presence of the overcommit-accounting - documentation file should not be taken as evidence that the + installation. The presence of the overcommit-accounting + documentation file should not be taken as evidence that the feature is there. If in any doubt, consult a kernel expert or your kernel vendor. 
@@ -1473,7 +1473,7 @@ export PG_OOM_ADJUST_VALUE=0 number of huge pages needed, start PostgreSQL without huge pages enabled and check the postmaster's VmPeak value, as well as the system's - huge page size, using the /proc file system. This might + huge page size, using the /proc file system. This might look like: $ head -1 $PGDATA/postmaster.pid @@ -1509,8 +1509,8 @@ $ grep Huge /proc/meminfo It may also be necessary to give the database server's operating system user permission to use huge pages by setting - vm.hugetlb_shm_group via sysctl, and/or - give permission to lock memory with ulimit -l. + vm.hugetlb_shm_group via sysctl, and/or + give permission to lock memory with ulimit -l. @@ -1518,8 +1518,8 @@ $ grep Huge /proc/meminfo PostgreSQL is to use them when possible and to fall back to normal pages when failing. To enforce the use of huge pages, you can set - to on in postgresql.conf. - Note that with this setting PostgreSQL will fail to + to on in postgresql.conf. + Note that with this setting PostgreSQL will fail to start if not enough huge pages are available. @@ -1537,7 +1537,7 @@ $ grep Huge /proc/meminfo Shutting Down the Server - shutdown + shutdown @@ -1547,7 +1547,7 @@ $ grep Huge /proc/meminfo - SIGTERMSIGTERM + SIGTERMSIGTERM This is the Smart Shutdown mode. @@ -1566,7 +1566,7 @@ $ grep Huge /proc/meminfo - SIGINTSIGINT + SIGINTSIGINT This is the Fast Shutdown mode. @@ -1581,7 +1581,7 @@ $ grep Huge /proc/meminfo - SIGQUITSIGQUIT + SIGQUITSIGQUIT This is the Immediate Shutdown mode. @@ -1602,9 +1602,9 @@ $ grep Huge /proc/meminfo The program provides a convenient interface for sending these signals to shut down the server. - Alternatively, you can send the signal directly using kill + Alternatively, you can send the signal directly using kill on non-Windows systems. - The PID of the postgres process can be + The PID of the postgres process can be found using the ps program, or from the file postmaster.pid in the data directory. For example, to do a fast shutdown: @@ -1628,15 +1628,15 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` To terminate an individual session while allowing other sessions to - continue, use pg_terminate_backend() (see pg_terminate_backend() (see ) or send a - SIGTERM signal to the child process associated with + SIGTERM signal to the child process associated with the session. - Upgrading a <productname>PostgreSQL</> Cluster + Upgrading a <productname>PostgreSQL</productname> Cluster upgrading @@ -1649,7 +1649,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` This section discusses how to upgrade your database data from one - PostgreSQL release to a newer one. + PostgreSQL release to a newer one. @@ -1676,7 +1676,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` - For major releases of PostgreSQL, the + For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. The traditional method for moving data to a new major version is to dump and reload the database, though this can be slow. A @@ -1698,7 +1698,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`PostgreSQL major upgrade, consider the + testing a PostgreSQL major upgrade, consider the following categories of possible changes: @@ -1728,7 +1728,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`Library API - Typically libraries like libpq only add new + Typically libraries like libpq only add new functionality, again unless mentioned in the release notes. 
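Picking up the note above about ending a single session with pg_terminate_backend(), a typical invocation looks something like the sketch below; the database name appdb is a placeholder invented for the example:

    -- appdb is a hypothetical database name used only for this illustration.
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE datname = 'appdb'
      AND pid <> pg_backend_pid();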
@@ -1757,13 +1757,13 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` - Upgrading Data via <application>pg_dumpall</> + Upgrading Data via <application>pg_dumpall</application> One upgrade method is to dump data from one major version of - PostgreSQL and reload it in another — to do - this, you must use a logical backup tool like - pg_dumpall; file system + PostgreSQL and reload it in another — to do + this, you must use a logical backup tool like + pg_dumpall; file system level backup methods will not work. (There are checks in place that prevent you from using a data directory with an incompatible version of PostgreSQL, so no great harm can be done by @@ -1771,18 +1771,18 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` - It is recommended that you use the pg_dump and - pg_dumpall programs from the newer + It is recommended that you use the pg_dump and + pg_dumpall programs from the newer version of - PostgreSQL, to take advantage of enhancements + PostgreSQL, to take advantage of enhancements that might have been made in these programs. Current releases of the dump programs can read data from any server version back to 7.0. These instructions assume that your existing installation is under the - /usr/local/pgsql directory, and that the data area is in - /usr/local/pgsql/data. Substitute your paths + /usr/local/pgsql directory, and that the data area is in + /usr/local/pgsql/data. Substitute your paths appropriately. @@ -1792,7 +1792,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`/usr/local/pgsql/data/pg_hba.conf + permissions in the file /usr/local/pgsql/data/pg_hba.conf (or equivalent) to disallow access from everyone except you. See for additional information on access control. @@ -1806,7 +1806,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` -pg_dumpall > outputfile +pg_dumpall > outputfile @@ -1830,11 +1830,11 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` Shut down the old server: -pg_ctl stop +pg_ctl stop - On systems that have PostgreSQL started at boot time, + On systems that have PostgreSQL started at boot time, there is probably a start-up file that will accomplish the same thing. For - example, on a Red Hat Linux system one + example, on a Red Hat Linux system one might find that this works: /etc/rc.d/init.d/postgresql stop @@ -1853,7 +1853,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` -mv /usr/local/pgsql /usr/local/pgsql.old +mv /usr/local/pgsql /usr/local/pgsql.old (Be sure to move the directory as a single unit so relative paths remain unchanged.) @@ -1873,15 +1873,15 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` -/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data +/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
- Restore your previous pg_hba.conf and any - postgresql.conf modifications. + Restore your previous pg_hba.conf and any + postgresql.conf modifications. @@ -1890,7 +1890,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` -/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data +/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
@@ -1899,9 +1899,9 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` Finally, restore your data from backup with: -/usr/local/pgsql/bin/psql -d postgres -f outputfile +/usr/local/pgsql/bin/psql -d postgres -f outputfile - using the new psql. + using the new psql.
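As a small sanity check after the restore (purely illustrative, not part of the documented procedure), it is easy to confirm from the new psql that the new server is the one answering and that the databases are back:

    SELECT version();
    SELECT datname FROM pg_database ORDER BY datname;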
@@ -1920,16 +1920,16 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - Upgrading Data via <application>pg_upgrade</> + Upgrading Data via <application>pg_upgrade</application> The module allows an installation to - be migrated in-place from one major PostgreSQL + be migrated in-place from one major PostgreSQL version to another. Upgrades can be performed in minutes, - particularly with @@ -1939,12 +1939,12 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 It is also possible to use certain replication methods, such as - Slony, to create a standby server with the updated version of - PostgreSQL. This is possible because Slony supports + Slony, to create a standby server with the updated version of + PostgreSQL. This is possible because Slony supports replication between different major versions of - PostgreSQL. The standby can be on the same computer or + PostgreSQL. The standby can be on the same computer or a different computer. Once it has synced up with the master server - (running the older version of PostgreSQL), you can + (running the older version of PostgreSQL), you can switch masters and make the standby the master and shut down the older database instance. Such a switch-over results in only several seconds of downtime for an upgrade. @@ -1966,28 +1966,28 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 server is down, it is possible for a local user to spoof the normal server by starting their own server. The spoof server could read passwords and queries sent by clients, but could not return any data - because the PGDATA directory would still be secure because + because the PGDATA directory would still be secure because of directory permissions. Spoofing is possible because any user can start a database server; a client cannot identify an invalid server unless it is specially configured. - One way to prevent spoofing of local + One way to prevent spoofing of local connections is to use a Unix domain socket directory () that has write permission only for a trusted local user. This prevents a malicious user from creating their own socket file in that directory. If you are concerned that - some applications might still reference /tmp for the + some applications might still reference /tmp for the socket file and hence be vulnerable to spoofing, during operating system - startup create a symbolic link /tmp/.s.PGSQL.5432 that points + startup create a symbolic link /tmp/.s.PGSQL.5432 that points to the relocated socket file. You also might need to modify your - /tmp cleanup script to prevent removal of the symbolic link. + /tmp cleanup script to prevent removal of the symbolic link. - Another option for local connections is for clients to use - requirepeer + Another option for local connections is for clients to use + requirepeer to specify the required owner of the server process connected to the socket. @@ -1996,11 +1996,11 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 To prevent spoofing on TCP connections, the best solution is to use SSL certificates and make sure that clients check the server's certificate. To do that, the server - must be configured to accept only hostssl connections (hostssl connections () and have SSL key and certificate files (). The TCP client must connect using - sslmode=verify-ca or - verify-full and have the appropriate root certificate + sslmode=verify-ca or + verify-full and have the appropriate root certificate file installed ().
@@ -2091,7 +2091,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - The MD5 authentication method double-encrypts the + The MD5 authentication method double-encrypts the password on the client before sending it to the server. It first MD5-encrypts it based on the user name, and then encrypts it based on a random salt sent by the server when the database @@ -2111,12 +2111,12 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 SSL connections encrypt all data sent across the network: the password, the queries, and the data returned. The - pg_hba.conf file allows administrators to specify - which hosts can use non-encrypted connections (host) + pg_hba.conf file allows administrators to specify + which hosts can use non-encrypted connections (host) and which require SSL-encrypted connections - (hostssl). Also, clients can specify that they - connect to servers only via SSL. Stunnel or - SSH can also be used to encrypt transmissions. + (hostssl). Also, clients can specify that they + connect to servers only via SSL. Stunnel or + SSH can also be used to encrypt transmissions. @@ -2131,7 +2131,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 on each side, but this provides stronger verification of identity than the mere use of passwords. It prevents a computer from pretending to be the server just long enough to read the password - sent by the client. It also helps prevent man in the middle + sent by the client. It also helps prevent man in the middle attacks where a computer between the client and server pretends to be the server and reads and passes all data between the client and server. @@ -2166,32 +2166,32 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - PostgreSQL has native support for using - SSL connections to encrypt client/server communications + PostgreSQL has native support for using + SSL connections to encrypt client/server communications for increased security. This requires that OpenSSL is installed on both client and - server systems and that support in PostgreSQL is + server systems and that support in PostgreSQL is enabled at build time (see ). - With SSL support compiled in, the - PostgreSQL server can be started with - SSL enabled by setting the parameter - to on in - postgresql.conf. The server will listen for both normal - and SSL connections on the same TCP port, and will negotiate - with any connecting client on whether to use SSL. By + With SSL support compiled in, the + PostgreSQL server can be started with + SSL enabled by setting the parameter + to on in + postgresql.conf. The server will listen for both normal + and SSL connections on the same TCP port, and will negotiate + with any connecting client on whether to use SSL. By default, this is at the client's option; see about how to set up the server to require - use of SSL for some or all connections. + use of SSL for some or all connections. PostgreSQL reads the system-wide OpenSSL configuration file. By default, this file is named openssl.cnf and is located in the - directory reported by openssl version -d. + directory reported by openssl version -d. This default can be overridden by setting environment variable OPENSSL_CONF to the name of the desired configuration file. @@ -2202,13 +2202,13 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 ciphers can be specified in the OpenSSL configuration file, you can specify ciphers specifically for use by the database server by modifying in - postgresql.conf. + postgresql.conf.
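Once the server is accepting SSL connections as described above, one way to see which sessions actually negotiated SSL is the pg_stat_ssl view available in recent releases; the join below is an illustrative sketch, not taken from this chapter:

    -- Which current sessions are encrypted, and with what protocol and cipher.
    SELECT s.pid, a.usename, a.client_addr, s.ssl, s.version, s.cipher
    FROM pg_stat_ssl s
    JOIN pg_stat_activity a ON a.pid = s.pid;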
It is possible to have authentication without encryption overhead by - using NULL-SHA or NULL-MD5 ciphers. However, + using NULL-SHA or NULL-MD5 ciphers. However, a man-in-the-middle could read and pass communications between client and server. Also, encryption overhead is minimal compared to the overhead of authentication. For these reasons NULL ciphers are not @@ -2217,9 +2217,9 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - To start in SSL mode, files containing the server certificate + To start in SSL mode, files containing the server certificate and private key must exist. By default, these files are expected to be - named server.crt and server.key, respectively, in + named server.crt and server.key, respectively, in the server's data directory, but other names and locations can be specified using the configuration parameters and . @@ -2248,11 +2248,11 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 In some cases, the server certificate might be signed by an - intermediate certificate authority, rather than one that is + intermediate certificate authority, rather than one that is directly trusted by clients. To use such a certificate, append the - certificate of the signing authority to the server.crt file, + certificate of the signing authority to the server.crt file, then its parent authority's certificate, and so on up to a certificate - authority, root or intermediate, that is trusted by + authority, root or intermediate, that is trusted by clients, i.e. signed by a certificate in the clients' root.crt files. @@ -2267,7 +2267,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 directory, set the parameter in postgresql.conf to root.crt, and add the authentication option clientcert=1 to the - appropriate hostssl line(s) in pg_hba.conf. + appropriate hostssl line(s) in pg_hba.conf. A certificate will then be requested from the client during SSL connection startup. (See for a description of how to set up certificates on the client.) The server will @@ -2276,21 +2276,21 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - If intermediate CAs appear in + If intermediate CAs appear in root.crt, the file must also contain certificate - chains to their root CAs. Certificate Revocation List + chains to their root CAs. Certificate Revocation List (CRL) entries are also checked if the parameter is set. (See + url="http://h71000.www7.hp.com/doc/83final/ba554_90007/ch04s02.html"> for diagrams showing SSL certificate usage.) The clientcert authentication option is available for - all authentication methods, but only in pg_hba.conf lines - specified as hostssl. When clientcert is + all authentication methods, but only in pg_hba.conf lines + specified as hostssl. When clientcert is not specified or is set to 0, the server will still verify any presented client certificates against its CA file, if one is configured — but it will not insist that a client certificate be presented. @@ -2306,11 +2306,11 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 If you are setting up client certificates, you may wish to use - the cert authentication method, so that the certificates + the cert authentication method, so that the certificates control user authentication as well as providing connection security. See for details. (It is not necessary to specify clientcert=1 explicitly when using - the cert authentication method.) + the cert authentication method.) 
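From the client's point of view, certificate authentication combined with server verification looks like this in libpq. Again a hedged sketch: the host, user, and file paths are placeholders; sslcert, sslkey, and sslrootcert are the standard libpq parameters, and libpq will also look for these files under ~/.postgresql/ if they are not spelled out.

<programlisting>
/*
 * Client side of certificate authentication.  All paths, the host name,
 * and the user name are placeholders.
 */
#include &lt;stdio.h&gt;
#include &lt;libpq-fe.h&gt;

int
main(void)
{
    PGconn *conn = PQconnectdb(
        "host=db.example.com dbname=mydb user=alice "
        "sslmode=verify-full "
        "sslrootcert=/home/alice/.postgresql/root.crt "
        "sslcert=/home/alice/.postgresql/postgresql.crt "
        "sslkey=/home/alice/.postgresql/postgresql.key");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    else
        printf("connected with certificate authentication\n");

    PQfinish(conn);
    return 0;
}
</programlisting>

With clientcert=1 (or the cert method) on the matching hostssl line, the server rejects the connection if the presented certificate does not verify against its configured CA file.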
@@ -2337,13 +2337,13 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - ($PGDATA/server.crt) + ($PGDATA/server.crt) server certificate sent to client to indicate server's identity - ($PGDATA/server.key) + ($PGDATA/server.key) server private key proves server certificate was sent by the owner; does not indicate certificate owner is trustworthy @@ -2368,7 +2368,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 The server reads these files at server start and whenever the server - configuration is reloaded. On Windows + configuration is reloaded. On Windows systems, they are also re-read whenever a new backend process is spawned for a new client connection. @@ -2377,7 +2377,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 If an error in these files is detected at server start, the server will refuse to start. But if an error is detected during a configuration reload, the files are ignored and the old SSL configuration continues to - be used. On Windows systems, if an error in + be used. On Windows systems, if an error in these files is detected at backend start, that backend will be unable to establish an SSL connection. In all these cases, the error condition is reported in the server log. @@ -2390,10 +2390,10 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 To create a quick self-signed certificate for the server, valid for 365 days, use the following OpenSSL command, - replacing yourdomain.com with the server's host name: + replacing yourdomain.com with the server's host name: openssl req -new -x509 -days 365 -nodes -text -out server.crt \ - -keyout server.key -subj "/CN=yourdomain.com" + -keyout server.key -subj "/CN=yourdomain.com" Then do: @@ -2402,15 +2402,15 @@ chmod og-rwx server.key because the server will reject the file if its permissions are more liberal than this. For more details on how to create your server private key and - certificate, refer to the OpenSSL documentation. + certificate, refer to the OpenSSL documentation. A self-signed certificate can be used for testing, but a certificate - signed by a certificate authority (CA) (either one of the - global CAs or a local one) should be used in production + signed by a certificate authority (CA) (either one of the + global CAs or a local one) should be used in production so that clients can verify the server's identity. If all the clients - are local to the organization, using a local CA is + are local to the organization, using a local CA is recommended. @@ -2511,8 +2511,8 @@ ssh -L 63333:db.foo.com:5432 joe@shell.foo.com - Registering <application>Event Log</> on <systemitem - class="osname">Windows</> + Registering <application>Event Log</application> on <systemitem + class="osname">Windows</systemitem> event log @@ -2520,11 +2520,11 @@ ssh -L 63333:db.foo.com:5432 joe@shell.foo.com - To register a Windows - event log library with the operating system, + To register a Windows + event log library with the operating system, issue this command: -regsvr32 pgsql_library_directory/pgevent.dll +regsvr32 pgsql_library_directory/pgevent.dll This creates registry entries used by the event viewer, under the default event source named PostgreSQL. 
@@ -2535,15 +2535,15 @@ ssh -L 63333:db.foo.com:5432 joe@shell.foo.com ), use the /n and /i options: -regsvr32 /n /i:event_source_name pgsql_library_directory/pgevent.dll +regsvr32 /n /i:event_source_name pgsql_library_directory/pgevent.dll - To unregister the event log library from + To unregister the event log library from the operating system, issue this command: -regsvr32 /u [/i:event_source_name] pgsql_library_directory/pgevent.dll +regsvr32 /u [/i:event_source_name] pgsql_library_directory/pgevent.dll diff --git a/doc/src/sgml/seg.sgml b/doc/src/sgml/seg.sgml index 5d1f546b53..c7e9b5f4af 100644 --- a/doc/src/sgml/seg.sgml +++ b/doc/src/sgml/seg.sgml @@ -8,9 +8,9 @@ - This module implements a data type seg for + This module implements a data type seg for representing line segments, or floating point intervals. - seg can represent uncertainty in the interval endpoints, + seg can represent uncertainty in the interval endpoints, making it especially useful for representing laboratory measurements. @@ -92,40 +92,40 @@ test=> select '6.25 .. 6.50'::seg as "pH"; - In , x, y, and - delta denote - floating-point numbers. x and y, but - not delta, can be preceded by a certainty indicator. + In , x, y, and + delta denote + floating-point numbers. x and y, but + not delta, can be preceded by a certainty indicator. - <type>seg</> External Representations + <type>seg</type> External Representations - x + x Single value (zero-length interval) - x .. y - Interval from x to y + x .. y + Interval from x to y - x (+-) delta - Interval from x - delta to - x + delta + x (+-) delta + Interval from x - delta to + x + delta - x .. - Open interval with lower bound x + x .. + Open interval with lower bound x - .. x - Open interval with upper bound x + .. x + Open interval with upper bound x @@ -133,7 +133,7 @@ test=> select '6.25 .. 6.50'::seg as "pH";
- Examples of Valid <type>seg</> Input + Examples of Valid <type>seg</type> Input @@ -146,8 +146,8 @@ test=> select '6.25 .. 6.50'::seg as "pH"; ~5.0 Creates a zero-length segment and records - ~ in the data. ~ is ignored - by seg operations, but + ~ in the data. ~ is ignored + by seg operations, but is preserved as a comment. @@ -169,7 +169,7 @@ test=> select '6.25 .. 6.50'::seg as "pH"; 5(+-)0.3 Creates an interval 4.7 .. 5.3. - Note that the (+-) notation isn't preserved. + Note that the (+-) notation isn't preserved. @@ -197,17 +197,17 @@ test=> select '6.25 .. 6.50'::seg as "pH";
- Because ... is widely used in data sources, it is allowed - as an alternative spelling of ... Unfortunately, this + Because ... is widely used in data sources, it is allowed + as an alternative spelling of ... Unfortunately, this creates a parsing ambiguity: it is not clear whether the upper bound - in 0...23 is meant to be 23 or 0.23. + in 0...23 is meant to be 23 or 0.23. This is resolved by requiring at least one digit before the decimal - point in all numbers in seg input. + point in all numbers in seg input. - As a sanity check, seg rejects intervals with the lower bound - greater than the upper, for example 5 .. 2. + As a sanity check, seg rejects intervals with the lower bound + greater than the upper, for example 5 .. 2. @@ -216,7 +216,7 @@ test=> select '6.25 .. 6.50'::seg as "pH"; Precision - seg values are stored internally as pairs of 32-bit floating point + seg values are stored internally as pairs of 32-bit floating point numbers. This means that numbers with more than 7 significant digits will be truncated. @@ -235,8 +235,8 @@ test=> select '6.25 .. 6.50'::seg as "pH"; Usage - The seg module includes a GiST index operator class for - seg values. + The seg module includes a GiST index operator class for + seg values. The operators supported by the GiST operator class are shown in . @@ -304,8 +304,8 @@ test=> select '6.25 .. 6.50'::seg as "pH"; - (Before PostgreSQL 8.2, the containment operators @> and <@ were - respectively called @ and ~. These names are still available, but are + (Before PostgreSQL 8.2, the containment operators @> and <@ were + respectively called @ and ~. These names are still available, but are deprecated and will eventually be retired. Notice that the old names are reversed from the convention formerly followed by the core geometric data types!) @@ -349,11 +349,11 @@ test=> select '6.25 .. 6.50'::seg as "pH"; Notes - For examples of usage, see the regression test sql/seg.sql. + For examples of usage, see the regression test sql/seg.sql. - The mechanism that converts (+-) to regular ranges + The mechanism that converts (+-) to regular ranges isn't completely accurate in determining the number of significant digits for the boundaries. For example, it adds an extra digit to the lower boundary if the resulting interval includes a power of ten: @@ -369,7 +369,7 @@ postgres=> select '10(+-)1'::seg as seg; The performance of an R-tree index can largely depend on the initial order of input values. It may be very helpful to sort the input table - on the seg column; see the script sort-segments.pl + on the seg column; see the script sort-segments.pl for an example. diff --git a/doc/src/sgml/sepgsql.sgml b/doc/src/sgml/sepgsql.sgml index 6a8d3765a2..c6c89a389d 100644 --- a/doc/src/sgml/sepgsql.sgml +++ b/doc/src/sgml/sepgsql.sgml @@ -8,8 +8,8 @@ - sepgsql is a loadable module that supports label-based - mandatory access control (MAC) based on SELinux security + sepgsql is a loadable module that supports label-based + mandatory access control (MAC) based on SELinux security policy. @@ -25,10 +25,10 @@ Overview - This module integrates with SELinux to provide an + This module integrates with SELinux to provide an additional layer of security checking above and beyond what is normally provided by PostgreSQL. From the perspective of - SELinux, this module allows + SELinux, this module allows PostgreSQL to function as a user-space object manager. Each table or function access initiated by a DML query will be checked against the system security policy. 
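The seven-digit limit mentioned under Precision is ordinary single-precision float behavior, which can be seen outside the database with a few lines of C; the numbers here are arbitrary.

<programlisting>
/*
 * Illustration of the 7-significant-digit limit: seg stores each boundary
 * as a 32-bit float, so extra digits are lost on input.  Shown here with a
 * plain C float for clarity.
 */
#include &lt;stdio.h&gt;

int
main(void)
{
    float   bound = 123.4567891f;   /* 10 significant digits supplied */

    printf("stored as: %.9g\n", (double) bound);    /* approximately 123.456787 */
    return 0;
}
</programlisting>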
This check is in addition to @@ -39,7 +39,7 @@ SELinux access control decisions are made using security labels, which are represented by strings such as - system_u:object_r:sepgsql_table_t:s0. Each access control + system_u:object_r:sepgsql_table_t:s0. Each access control decision involves two labels: the label of the subject attempting to perform the action, and the label of the object on which the operation is to be performed. Since these labels can be applied to any sort of object, @@ -60,17 +60,17 @@ Installation - sepgsql can only be used on Linux + sepgsql can only be used on Linux 2.6.28 or higher with SELinux enabled. It is not available on any other platform. You will also need - libselinux 2.1.10 or higher and - selinux-policy 3.9.13 or higher (although some + libselinux 2.1.10 or higher and + selinux-policy 3.9.13 or higher (although some distributions may backport the necessary rules into older policy versions). - The sestatus command allows you to check the status of + The sestatus command allows you to check the status of SELinux. A typical display is: $ sestatus @@ -81,20 +81,20 @@ Mode from config file: enforcing Policy version: 24 Policy from config file: targeted - If SELinux is disabled or not installed, you must set + If SELinux is disabled or not installed, you must set that product up first before installing this module. - To build this module, include the option --with-selinux in - your PostgreSQL configure command. Be sure that the - libselinux-devel RPM is installed at build time. + To build this module, include the option --with-selinux in + your PostgreSQL configure command. Be sure that the + libselinux-devel RPM is installed at build time. - To use this module, you must include sepgsql + To use this module, you must include sepgsql in the parameter in - postgresql.conf. The module will not function correctly + postgresql.conf. The module will not function correctly if loaded in any other manner. Once the module is loaded, you should execute sepgsql.sql in each database. This will install functions needed for security label management, and @@ -103,7 +103,7 @@ Policy from config file: targeted Here is an example showing how to initialize a fresh database cluster - with sepgsql functions and security labels installed. + with sepgsql functions and security labels installed. Adjust the paths shown as appropriate for your installation: @@ -124,7 +124,7 @@ $ for DBNAME in template0 template1 postgres; do Please note that you may see some or all of the following notifications depending on the particular versions you have of - libselinux and selinux-policy: + libselinux and selinux-policy: /etc/selinux/targeted/contexts/sepgsql_contexts: line 33 has invalid object type db_blobs /etc/selinux/targeted/contexts/sepgsql_contexts: line 36 has invalid object type db_language @@ -147,16 +147,16 @@ $ for DBNAME in template0 template1 postgres; do Due to the nature of SELinux, running the - regression tests for sepgsql requires several extra + regression tests for sepgsql requires several extra configuration steps, some of which must be done as root. The regression tests will not be run by an ordinary - make check or make installcheck command; you must + make check or make installcheck command; you must set up the configuration and then invoke the test script manually. - The tests must be run in the contrib/sepgsql directory + The tests must be run in the contrib/sepgsql directory of a configured PostgreSQL build tree. 
Although they require a build tree, the tests are designed to be executed against an installed server, - that is they are comparable to make installcheck not - make check. + that is they are comparable to make installcheck not + make check. @@ -168,17 +168,17 @@ $ for DBNAME in template0 template1 postgres; do Second, build and install the policy package for the regression test. - The sepgsql-regtest policy is a special purpose policy package + The sepgsql-regtest policy is a special purpose policy package which provides a set of rules to be allowed during the regression tests. It should be built from the policy source file - sepgsql-regtest.te, which is done using + sepgsql-regtest.te, which is done using make with a Makefile supplied by SELinux. You will need to locate the appropriate Makefile on your system; the path shown below is only an example. Once built, install this policy package using the - semodule command, which loads supplied policy packages + semodule command, which loads supplied policy packages into the kernel. If the package is correctly installed, - semodule -l should list sepgsql-regtest as an + semodule -l should list sepgsql-regtest as an available policy package: @@ -191,12 +191,12 @@ sepgsql-regtest 1.07 - Third, turn on sepgsql_regression_test_mode. - For security reasons, the rules in sepgsql-regtest + Third, turn on sepgsql_regression_test_mode. + For security reasons, the rules in sepgsql-regtest are not enabled by default; the sepgsql_regression_test_mode parameter enables the rules needed to launch the regression tests. - It can be turned on using the setsebool command: + It can be turned on using the setsebool command: @@ -206,7 +206,7 @@ sepgsql_regression_test_mode --> on - Fourth, verify your shell is operating in the unconfined_t + Fourth, verify your shell is operating in the unconfined_t domain: @@ -229,7 +229,7 @@ $ ./test_sepgsql This script will attempt to verify that you have done all the configuration steps correctly, and then it will run the regression tests for the - sepgsql module. + sepgsql module. @@ -242,7 +242,7 @@ $ sudo setsebool sepgsql_regression_test_mode off - You might prefer to remove the sepgsql-regtest policy + You might prefer to remove the sepgsql-regtest policy entirely: @@ -257,22 +257,22 @@ $ sudo semodule -r sepgsql-regtest - sepgsql.permissive (boolean) + sepgsql.permissive (boolean) - sepgsql.permissive configuration parameter + sepgsql.permissive configuration parameter - This parameter enables sepgsql to function + This parameter enables sepgsql to function in permissive mode, regardless of the system setting. The default is off. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - When this parameter is on, sepgsql functions + When this parameter is on, sepgsql functions in permissive mode, even if SELinux in general is working in enforcing mode. This parameter is primarily useful for testing purposes. @@ -281,9 +281,9 @@ $ sudo semodule -r sepgsql-regtest - sepgsql.debug_audit (boolean) + sepgsql.debug_audit (boolean) - sepgsql.debug_audit configuration parameter + sepgsql.debug_audit configuration parameter @@ -295,7 +295,7 @@ $ sudo semodule -r sepgsql-regtest - The security policy of SELinux also has rules to + The security policy of SELinux also has rules to control whether or not particular accesses are logged. By default, access violations are logged, but allowed accesses are not. 
@@ -315,13 +315,13 @@ $ sudo semodule -r sepgsql-regtest Controlled Object Classes - The security model of SELinux describes all the access + The security model of SELinux describes all the access control rules as relationships between a subject entity (typically, a client of the database) and an object entity (such as a database object), each of which is identified by a security label. If access to an unlabeled object is attempted, the object is treated as if it were assigned the label - unlabeled_t. + unlabeled_t. @@ -349,22 +349,22 @@ $ sudo semodule -r sepgsql-regtest DML Permissions - For tables, db_table:select, db_table:insert, - db_table:update or db_table:delete are + For tables, db_table:select, db_table:insert, + db_table:update or db_table:delete are checked for all the referenced target tables depending on the kind of - statement; in addition, db_table:select is also checked for + statement; in addition, db_table:select is also checked for all the tables that contain columns referenced in the - WHERE or RETURNING clause, as a data source - for UPDATE, and so on. + WHERE or RETURNING clause, as a data source + for UPDATE, and so on. Column-level permissions will also be checked for each referenced column. - db_column:select is checked on not only the columns being - read using SELECT, but those being referenced in other DML - statements; db_column:update or db_column:insert - will also be checked for columns being modified by UPDATE or - INSERT. + db_column:select is checked on not only the columns being + read using SELECT, but those being referenced in other DML + statements; db_column:update or db_column:insert + will also be checked for columns being modified by UPDATE or + INSERT. @@ -373,43 +373,43 @@ $ sudo semodule -r sepgsql-regtest UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100;
- Here, db_column:update will be checked for - t1.x, since it is being updated, - db_column:{select update} will be checked for - t1.y, since it is both updated and referenced, and - db_column:select will be checked for t1.z, since + Here, db_column:update will be checked for + t1.x, since it is being updated, + db_column:{select update} will be checked for + t1.y, since it is both updated and referenced, and + db_column:select will be checked for t1.z, since it is only referenced. - db_table:{select update} will also be checked + db_table:{select update} will also be checked at the table level. - For sequences, db_sequence:get_value is checked when we - reference a sequence object using SELECT; however, note that we + For sequences, db_sequence:get_value is checked when we + reference a sequence object using SELECT; however, note that we do not currently check permissions on execution of corresponding functions - such as lastval(). + such as lastval(). - For views, db_view:expand will be checked, then any other + For views, db_view:expand will be checked, then any other required permissions will be checked on the objects being expanded from the view, individually. - For functions, db_procedure:{execute} will be checked when + For functions, db_procedure:{execute} will be checked when user tries to execute a function as a part of query, or using fast-path invocation. If this function is a trusted procedure, it also checks - db_procedure:{entrypoint} permission to check whether it + db_procedure:{entrypoint} permission to check whether it can perform as entry point of trusted procedure. - In order to access any schema object, db_schema:search + In order to access any schema object, db_schema:search permission is required on the containing schema. When an object is referenced without schema qualification, schemas on which this permission is not present will not be searched (just as if the user did - not have USAGE privilege on the schema). If an explicit schema + not have USAGE privilege on the schema). If an explicit schema qualification is present, an error will occur if the user does not have the requisite permission on the named schema. @@ -425,22 +425,22 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; The default database privilege system allows database superusers to modify system catalogs using DML commands, and reference or modify toast tables. These operations are prohibited when - sepgsql is enabled. + sepgsql is enabled. DDL Permissions - SELinux defines several permissions to control common + SELinux defines several permissions to control common operations for each object type; such as creation, alter, drop and relabel of security label. In addition, several object types have special permissions to control their characteristic operations; such as addition or deletion of name entries within a particular schema. - Creating a new database object requires create permission. - SELinux will grant or deny this permission based on the + Creating a new database object requires create permission. + SELinux will grant or deny this permission based on the client's security label and the proposed security label for the new object. In some cases, additional privileges are required: @@ -449,12 +449,12 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; additionally requires - getattr permission for the source or template database. + getattr permission for the source or template database. 
- Creating a schema object additionally requires add_name + Creating a schema object additionally requires add_name permission on the parent schema. @@ -467,23 +467,23 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; - Creating a function marked as LEAKPROOF additionally - requires install permission. (This permission is also - checked when LEAKPROOF is set for an existing function.) + Creating a function marked as LEAKPROOF additionally + requires install permission. (This permission is also + checked when LEAKPROOF is set for an existing function.) - When DROP command is executed, drop will be + When DROP command is executed, drop will be checked on the object being removed. Permissions will be also checked for - objects dropped indirectly via CASCADE. Deletion of objects + objects dropped indirectly via CASCADE. Deletion of objects contained within a particular schema (tables, views, sequences and - procedures) additionally requires remove_name on the schema. + procedures) additionally requires remove_name on the schema. - When ALTER command is executed, setattr will be + When ALTER command is executed, setattr will be checked on the object being modified for each object types, except for subsidiary objects such as the indexes or triggers of a table, where permissions are instead checked on the parent object. In some cases, @@ -494,25 +494,25 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; Moving an object to a new schema additionally requires - remove_name permission on the old schema and - add_name permission on the new one. + remove_name permission on the old schema and + add_name permission on the new one. - Setting the LEAKPROOF attribute on a function requires - install permission. + Setting the LEAKPROOF attribute on a function requires + install permission. Using on an object additionally - requires relabelfrom permission for the object in - conjunction with its old security label and relabelto + requires relabelfrom permission for the object in + conjunction with its old security label and relabelto permission for the object in conjunction with its new security label. (In cases where multiple label providers are installed and the user tries to set a security label, but it is not managed by - SELinux, only setattr should be checked here. + SELinux, only setattr should be checked here. This is currently not done due to implementation restrictions.) @@ -524,7 +524,7 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; Trusted Procedures Trusted procedures are similar to security definer functions or setuid - commands. SELinux provides a feature to allow trusted + commands. SELinux provides a feature to allow trusted code to run using a security label different from that of the client, generally for the purpose of providing highly controlled access to sensitive data (e.g. rows might be omitted, or the precision of stored @@ -569,8 +569,8 @@ postgres=# SELECT cid, cname, show_credit(cid) FROM customer; - In this case, a regular user cannot reference customer.credit - directly, but a trusted procedure show_credit allows the user + In this case, a regular user cannot reference customer.credit + directly, but a trusted procedure show_credit allows the user to print the credit card numbers of customers with some of the digits masked out. 
@@ -582,8 +582,8 @@ postgres=# SELECT cid, cname, show_credit(cid) FROM customer; It is possible to use SELinux's dynamic domain transition feature to switch the security label of the client process, the client domain, to a new context, if that is allowed by the security policy. - The client domain needs the setcurrent permission and also - dyntransition from the old to the new domain. + The client domain needs the setcurrent permission and also + dyntransition from the old to the new domain. Dynamic domain transitions should be considered carefully, because they @@ -612,7 +612,7 @@ ERROR: SELinux: security policy violation In this example above we were allowed to switch from the larger MCS - range c1.c1023 to the smaller range c1.c4, but + range c1.c1023 to the smaller range c1.c4, but switching back was denied. @@ -726,7 +726,7 @@ ERROR: SELinux: security policy violation Row-level access control - PostgreSQL supports row-level access, but + PostgreSQL supports row-level access, but sepgsql does not. @@ -736,7 +736,7 @@ ERROR: SELinux: security policy violation Covert channels - sepgsql does not try to hide the existence of + sepgsql does not try to hide the existence of a certain object, even if the user is not allowed to reference it. For example, we can infer the existence of an invisible object as a result of primary key conflicts, foreign key violations, and so on, @@ -766,7 +766,7 @@ ERROR: SELinux: security policy violation This document provides a wide spectrum of knowledge to administer - SELinux on your systems. + SELinux on your systems. It focuses primarily on Red Hat operating systems, but is not limited to them. diff --git a/doc/src/sgml/sourcerepo.sgml b/doc/src/sgml/sourcerepo.sgml index dd9da5a7b0..b5618d7166 100644 --- a/doc/src/sgml/sourcerepo.sgml +++ b/doc/src/sgml/sourcerepo.sgml @@ -18,18 +18,18 @@ Note that building PostgreSQL from the source - repository requires reasonably up-to-date versions of bison, - flex, and Perl. These tools are not needed + repository requires reasonably up-to-date versions of bison, + flex, and Perl. These tools are not needed to build from a distribution tarball, because the files that these tools are used to build are included in the tarball. Other tool requirements are the same as shown in . - Getting The Source via <productname>Git</> + Getting The Source via <productname>Git</productname> - With Git you will make a copy of the entire code repository + With Git you will make a copy of the entire code repository on your local machine, so you will have access to all history and branches offline. This is the fastest and most flexible way to develop or test patches. @@ -40,9 +40,9 @@ - You will need an installed version of Git, which you can + You will need an installed version of Git, which you can get from . Many systems already - have a recent version of Git installed by default, or + have a recent version of Git installed by default, or available in their package distribution system. @@ -57,14 +57,14 @@ git clone git://git.postgresql.org/git/postgresql.git This will copy the full repository to your local machine, so it may take a while to complete, especially if you have a slow Internet connection. - The files will be placed in a new subdirectory postgresql of + The files will be placed in a new subdirectory postgresql of your current directory. The Git mirror can also be reached via the HTTP protocol, if for example a firewall is blocking access to the Git protocol. 
Just change the URL - prefix to https, as in: + prefix to https, as in: git clone https://git.postgresql.org/git/postgresql.git @@ -77,7 +77,7 @@ git clone https://git.postgresql.org/git/postgresql.git - Whenever you want to get the latest updates in the system, cd + Whenever you want to get the latest updates in the system, cd into the repository, and run: @@ -88,9 +88,9 @@ git fetch - Git can do a lot more things than just fetch the source. For - more information, consult the Git man pages, or see the - website at . + Git can do a lot more things than just fetch the source. For + more information, consult the Git man pages, or see the + website at . diff --git a/doc/src/sgml/sources.sgml b/doc/src/sgml/sources.sgml index 7777bf5199..4c777de16f 100644 --- a/doc/src/sgml/sources.sgml +++ b/doc/src/sgml/sources.sgml @@ -14,8 +14,8 @@ Layout rules (brace positioning, etc) follow BSD conventions. In - particular, curly braces for the controlled blocks of if, - while, switch, etc go on their own lines. + particular, curly braces for the controlled blocks of if, + while, switch, etc go on their own lines. @@ -26,7 +26,7 @@ - Do not use C++ style comments (// comments). Strict ANSI C + Do not use C++ style comments (// comments). Strict ANSI C compilers do not accept them. For the same reason, do not use C++ extensions such as declaring new variables mid-block. @@ -40,7 +40,7 @@ */ Note that comment blocks that begin in column 1 will be preserved as-is - by pgindent, but it will re-flow indented comment blocks + by pgindent, but it will re-flow indented comment blocks as though they were plain text. If you want to preserve the line breaks in an indented block, add dashes like this: @@ -55,10 +55,10 @@ While submitted patches do not absolutely have to follow these formatting rules, it's a good idea to do so. Your code will get run through - pgindent before the next release, so there's no point in + pgindent before the next release, so there's no point in making it look nice under some other set of formatting conventions. A good rule of thumb for patches is make the new code look like - the existing code around it. + the existing code around it. @@ -92,37 +92,37 @@ less -x4 Error, warning, and log messages generated within the server code - should be created using ereport, or its older cousin - elog. The use of this function is complex enough to + should be created using ereport, or its older cousin + elog. The use of this function is complex enough to require some explanation. There are two required elements for every message: a severity level - (ranging from DEBUG to PANIC) and a primary + (ranging from DEBUG to PANIC) and a primary message text. In addition there are optional elements, the most common of which is an error identifier code that follows the SQL spec's SQLSTATE conventions. - ereport itself is just a shell function, that exists + ereport itself is just a shell function, that exists mainly for the syntactic convenience of making message generation look like a function call in the C source code. The only parameter - accepted directly by ereport is the severity level. + accepted directly by ereport is the severity level. The primary message text and any optional message elements are - generated by calling auxiliary functions, such as errmsg, - within the ereport call. + generated by calling auxiliary functions, such as errmsg, + within the ereport call. 
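Returning briefly to the layout rules above, a small made-up function formatted in that style looks like this: braces for controlled blocks go on their own lines, declarations sit at the top of the block, and there are no //-style comments or mid-block declarations.

<programlisting>
/*
 * Layout illustration only; the function itself is invented.
 */
static int
count_positive(const int *values, int n)
{
    int     count = 0;
    int     i;

    for (i = 0; i &lt; n; i++)
    {
        if (values[i] &gt; 0)
            count++;
    }

    return count;
}
</programlisting>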
- A typical call to ereport might look like this: + A typical call to ereport might look like this: ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), errmsg("division by zero"))); - This specifies error severity level ERROR (a run-of-the-mill - error). The errcode call specifies the SQLSTATE error code - using a macro defined in src/include/utils/errcodes.h. The - errmsg call provides the primary message text. Notice the + This specifies error severity level ERROR (a run-of-the-mill + error). The errcode call specifies the SQLSTATE error code + using a macro defined in src/include/utils/errcodes.h. The + errmsg call provides the primary message text. Notice the extra set of parentheses surrounding the auxiliary function calls — these are annoying but syntactically necessary. @@ -139,72 +139,72 @@ ereport(ERROR, "You might need to add explicit typecasts."))); This illustrates the use of format codes to embed run-time values into - a message text. Also, an optional hint message is provided. + a message text. Also, an optional hint message is provided. - If the severity level is ERROR or higher, - ereport aborts the execution of the user-defined + If the severity level is ERROR or higher, + ereport aborts the execution of the user-defined function and does not return to the caller. If the severity level is - lower than ERROR, ereport returns normally. + lower than ERROR, ereport returns normally. - The available auxiliary routines for ereport are: + The available auxiliary routines for ereport are: errcode(sqlerrcode) specifies the SQLSTATE error identifier code for the condition. If this routine is not called, the error identifier defaults to - ERRCODE_INTERNAL_ERROR when the error severity level is - ERROR or higher, ERRCODE_WARNING when the - error level is WARNING, otherwise (for NOTICE - and below) ERRCODE_SUCCESSFUL_COMPLETION. + ERRCODE_INTERNAL_ERROR when the error severity level is + ERROR or higher, ERRCODE_WARNING when the + error level is WARNING, otherwise (for NOTICE + and below) ERRCODE_SUCCESSFUL_COMPLETION. While these defaults are often convenient, always think whether they - are appropriate before omitting the errcode() call. + are appropriate before omitting the errcode() call. errmsg(const char *msg, ...) specifies the primary error message text, and possibly run-time values to insert into it. Insertions - are specified by sprintf-style format codes. In addition to - the standard format codes accepted by sprintf, the format - code %m can be used to insert the error message returned - by strerror for the current value of errno. + are specified by sprintf-style format codes. In addition to + the standard format codes accepted by sprintf, the format + code %m can be used to insert the error message returned + by strerror for the current value of errno. - That is, the value that was current when the ereport call - was reached; changes of errno within the auxiliary reporting + That is, the value that was current when the ereport call + was reached; changes of errno within the auxiliary reporting routines will not affect it. That would not be true if you were to - write strerror(errno) explicitly in errmsg's + write strerror(errno) explicitly in errmsg's parameter list; accordingly, do not do so. - %m does not require any - corresponding entry in the parameter list for errmsg. - Note that the message string will be run through gettext + %m does not require any + corresponding entry in the parameter list for errmsg. 
+ Note that the message string will be run through gettext for possible localization before format codes are processed. errmsg_internal(const char *msg, ...) is the same as - errmsg, except that the message string will not be + errmsg, except that the message string will not be translated nor included in the internationalization message dictionary. - This should be used for cannot happen cases that are probably + This should be used for cannot happen cases that are probably not worth expending translation effort on. errmsg_plural(const char *fmt_singular, const char *fmt_plural, - unsigned long n, ...) is like errmsg, but with + unsigned long n, ...) is like errmsg, but with support for various plural forms of the message. - fmt_singular is the English singular format, - fmt_plural is the English plural format, - n is the integer value that determines which plural + fmt_singular is the English singular format, + fmt_plural is the English plural format, + n is the integer value that determines which plural form is needed, and the remaining arguments are formatted according to the selected format string. For more information see . @@ -213,16 +213,16 @@ ereport(ERROR, errdetail(const char *msg, ...) supplies an optional - detail message; this is to be used when there is additional + detail message; this is to be used when there is additional information that seems inappropriate to put in the primary message. The message string is processed in just the same way as for - errmsg. + errmsg. errdetail_internal(const char *msg, ...) is the same - as errdetail, except that the message string will not be + as errdetail, except that the message string will not be translated nor included in the internationalization message dictionary. This should be used for detail messages that are not worth expending translation effort on, for instance because they are too technical to be @@ -232,7 +232,7 @@ ereport(ERROR, errdetail_plural(const char *fmt_singular, const char *fmt_plural, - unsigned long n, ...) is like errdetail, but with + unsigned long n, ...) is like errdetail, but with support for various plural forms of the message. For more information see . @@ -240,10 +240,10 @@ ereport(ERROR, errdetail_log(const char *msg, ...) is the same as - errdetail except that this string goes only to the server - log, never to the client. If both errdetail (or one of + errdetail except that this string goes only to the server + log, never to the client. If both errdetail (or one of its equivalents above) and - errdetail_log are used then one string goes to the client + errdetail_log are used then one string goes to the client and the other to the log. This is useful for error details that are too security-sensitive or too bulky to include in the report sent to the client. @@ -253,7 +253,7 @@ ereport(ERROR, errdetail_log_plural(const char *fmt_singular, const char *fmt_plural, unsigned long n, ...) is like - errdetail_log, but with support for various plural forms of + errdetail_log, but with support for various plural forms of the message. For more information see . @@ -261,23 +261,23 @@ ereport(ERROR, errhint(const char *msg, ...) supplies an optional - hint message; this is to be used when offering suggestions + hint message; this is to be used when offering suggestions about how to fix the problem, as opposed to factual details about what went wrong. The message string is processed in just the same way as for - errmsg. + errmsg. errcontext(const char *msg, ...) 
is not normally called - directly from an ereport message site; rather it is used - in error_context_stack callback functions to provide + directly from an ereport message site; rather it is used + in error_context_stack callback functions to provide information about the context in which an error occurred, such as the current location in a PL function. The message string is processed in just the same way as for - errmsg. Unlike the other auxiliary functions, this can - be called more than once per ereport call; the successive + errmsg. Unlike the other auxiliary functions, this can + be called more than once per ereport call; the successive strings thus supplied are concatenated with separating newlines. @@ -309,9 +309,9 @@ ereport(ERROR, specifies a table constraint whose name, table name, and schema name should be included as auxiliary fields in the error report. Indexes should be considered to be constraints for this purpose, whether or - not they have an associated pg_constraint entry. Be + not they have an associated pg_constraint entry. Be careful to pass the underlying heap relation, not the index itself, as - rel. + rel. @@ -330,17 +330,17 @@ ereport(ERROR, - errcode_for_file_access() is a convenience function that + errcode_for_file_access() is a convenience function that selects an appropriate SQLSTATE error identifier for a failure in a file-access-related system call. It uses the saved - errno to determine which error code to generate. - Usually this should be used in combination with %m in the + errno to determine which error code to generate. + Usually this should be used in combination with %m in the primary error message text. - errcode_for_socket_access() is a convenience function that + errcode_for_socket_access() is a convenience function that selects an appropriate SQLSTATE error identifier for a failure in a socket-related system call. @@ -348,7 +348,7 @@ ereport(ERROR, errhidestmt(bool hide_stmt) can be called to specify - suppression of the STATEMENT: portion of a message in the + suppression of the STATEMENT: portion of a message in the postmaster log. Generally this is appropriate if the message text includes the current statement already. @@ -356,7 +356,7 @@ ereport(ERROR, errhidecontext(bool hide_ctx) can be called to - specify suppression of the CONTEXT: portion of a message in + specify suppression of the CONTEXT: portion of a message in the postmaster log. This should only be used for verbose debugging messages where the repeated inclusion of context would bloat the log volume too much. @@ -367,24 +367,24 @@ ereport(ERROR, - At most one of the functions errtable, - errtablecol, errtableconstraint, - errdatatype, or errdomainconstraint should - be used in an ereport call. These functions exist to + At most one of the functions errtable, + errtablecol, errtableconstraint, + errdatatype, or errdomainconstraint should + be used in an ereport call. These functions exist to allow applications to extract the name of a database object associated with the error condition without having to examine the potentially-localized error message text. These functions should be used in error reports for which it's likely that applications would wish to have automatic error handling. As of - PostgreSQL 9.3, complete coverage exists only for + PostgreSQL 9.3, complete coverage exists only for errors in SQLSTATE class 23 (integrity constraint violation), but this is likely to be expanded in future. - There is an older function elog that is still heavily used. 
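Before turning to elog, here is a made-up report that pulls together several of the helpers just described; filename, blkno, and relname stand for whatever variables are in scope at the call site.

<programlisting>
/*
 * Hypothetical report of a file-access failure: errcode_for_file_access()
 * picks an SQLSTATE from the saved errno, %m embeds strerror(errno), and
 * the supporting detail goes only to the server log.
 */
ereport(ERROR,
        (errcode_for_file_access(),
         errmsg("could not open file \"%s\": %m", filename),
         errdetail_log("While restoring block %u of relation \"%s\".",
                       blkno, relname),
         errhint("Check permissions on the data directory.")));
</programlisting>

Note the message style: the primary message is short and uncapitalized, while the detail and hint are complete sentences.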
- An elog call: + There is an older function elog that is still heavily used. + An elog call: elog(level, "format string", ...); @@ -394,11 +394,11 @@ ereport(level, (errmsg_internal("format string", ...))); Notice that the SQLSTATE error code is always defaulted, and the message string is not subject to translation. - Therefore, elog should be used only for internal errors and + Therefore, elog should be used only for internal errors and low-level debug logging. Any message that is likely to be of interest to - ordinary users should go through ereport. Nonetheless, - there are enough internal cannot happen error checks in the - system that elog is still widely used; it is preferred for + ordinary users should go through ereport. Nonetheless, + there are enough internal cannot happen error checks in the + system that elog is still widely used; it is preferred for those messages for its notational simplicity. @@ -414,7 +414,7 @@ ereport(level, (errmsg_internal("format string", ...))); This style guide is offered in the hope of maintaining a consistent, user-friendly style throughout all the messages generated by - PostgreSQL. + PostgreSQL. @@ -643,7 +643,7 @@ cannot open file "%s" - Rationale: Otherwise no one will know what foo.bar.baz + Rationale: Otherwise no one will know what foo.bar.baz refers to. @@ -866,7 +866,7 @@ BETTER: unrecognized node type: 42 C Standard - Code in PostgreSQL should only rely on language + Code in PostgreSQL should only rely on language features available in the C89 standard. That means a conforming C89 compiler has to be able to compile postgres, at least aside from a few platform dependent pieces. Features from later @@ -874,7 +874,7 @@ BETTER: unrecognized node type: 42 used, if a fallback is provided. - For example static inline and + For example static inline and _StaticAssert() are currently used, even though they are from newer revisions of the C standard. If not available we respectively fall back to defining the functions @@ -886,7 +886,7 @@ BETTER: unrecognized node type: 42 Function-Like Macros and Inline Functions - Both, macros with arguments and static inline + Both, macros with arguments and static inline functions, may be used. The latter are preferable if there are multiple-evaluation hazards when written as a macro, as e.g. the case with @@ -914,7 +914,7 @@ MemoryContextSwitchTo(MemoryContext context) } #endif /* FRONTEND */ - In this example CurrentMemoryContext, which is only + In this example CurrentMemoryContext, which is only available in the backend, is referenced and the function thus hidden with a #ifndef FRONTEND. This rule exists because some compilers emit references to symbols @@ -957,8 +957,8 @@ handle_sighup(SIGNAL_ARGS) errno = save_errno; } - errno is saved and restored because - SetLatch() might change it. If that were not done + errno is saved and restored because + SetLatch() might change it. If that were not done interrupted code that's currently inspecting errno might see the wrong value. diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml index cd4a8d07c4..3f2d31b4c0 100644 --- a/doc/src/sgml/spgist.sgml +++ b/doc/src/sgml/spgist.sgml @@ -57,7 +57,7 @@ Built-in Operator Classes - The core PostgreSQL distribution + The core PostgreSQL distribution includes the SP-GiST operator classes shown in . 
@@ -74,92 +74,92 @@ - kd_point_ops - point + kd_point_ops + point - << - <@ - <^ - >> - >^ - ~= + << + <@ + <^ + >> + >^ + ~= - quad_point_ops - point + quad_point_ops + point - << - <@ - <^ - >> - >^ - ~= + << + <@ + <^ + >> + >^ + ~= - range_ops + range_ops any range type - && - &< - &> - -|- - << - <@ - = - >> - @> + && + &< + &> + -|- + << + <@ + = + >> + @> - box_ops - box + box_ops + box - << - &< - && - &> - >> - ~= - @> - <@ - &<| - <<| + << + &< + && + &> + >> + ~= + @> + <@ + &<| + <<| |>> - |&> + |&> - text_ops - text + text_ops + text - < - <= - = - > - >= - ~<=~ - ~<~ - ~>=~ - ~>~ + < + <= + = + > + >= + ~<=~ + ~<~ + ~>=~ + ~>~ - inet_ops - inet, cidr + inet_ops + inet, cidr - && - >> - >>= - > - >= - <> - << - <<= - < - <= - = + && + >> + >>= + > + >= + <> + << + <<= + < + <= + = @@ -167,8 +167,8 @@ - Of the two operator classes for type point, - quad_point_ops is the default. kd_point_ops + Of the two operator classes for type point, + quad_point_ops is the default. kd_point_ops supports the same operators but uses a different index data structure which may offer better performance in some applications. @@ -199,15 +199,15 @@ Inner tuples are more complex, since they are branching points in the search tree. Each inner tuple contains a set of one or more - nodes, which represent groups of similar leaf values. + nodes, which represent groups of similar leaf values. A node contains a downlink that leads either to another, lower-level inner tuple, or to a short list of leaf tuples that all lie on the same index page. - Each node normally has a label that describes it; for example, + Each node normally has a label that describes it; for example, in a radix tree the node label could be the next character of the string value. (Alternatively, an operator class can omit the node labels, if it works with a fixed set of nodes for all inner tuples; see .) - Optionally, an inner tuple can have a prefix value + Optionally, an inner tuple can have a prefix value that describes all its members. In a radix tree this could be the common prefix of the represented strings. The prefix value is not necessarily really a prefix, but can be any data needed by the operator class; @@ -223,7 +223,7 @@ operator classes to manage level counting while descending the tree. There is also support for incrementally reconstructing the represented value when that is needed, and for passing down additional data (called - traverse values) during a tree descent. + traverse values) during a tree descent. @@ -241,12 +241,12 @@ There are five user-defined methods that an index operator class for SP-GiST must provide. All five follow the convention - of accepting two internal arguments, the first of which is a + of accepting two internal arguments, the first of which is a pointer to a C struct containing input values for the support method, while the second argument is a pointer to a C struct where output values - must be placed. Four of the methods just return void, since + must be placed. Four of the methods just return void, since all their results appear in the output struct; but - leaf_consistent additionally returns a boolean result. + leaf_consistent additionally returns a boolean result. The methods must not modify any fields of their input structs. In all cases, the output struct is initialized to zeroes before calling the user-defined method. @@ -258,20 +258,20 @@ - config + config Returns static information about the index implementation, including the data type OIDs of the prefix and node label data types. 
- The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_config(internal, internal) RETURNS void ... - The first argument is a pointer to a spgConfigIn + The first argument is a pointer to a spgConfigIn C struct, containing input data for the function. - The second argument is a pointer to a spgConfigOut + The second argument is a pointer to a spgConfigOut C struct, which the function must fill with result data. typedef struct spgConfigIn @@ -288,20 +288,20 @@ typedef struct spgConfigOut } spgConfigOut; - attType is passed in order to support polymorphic + attType is passed in order to support polymorphic index operator classes; for ordinary fixed-data-type operator classes, it will always have the same value and so can be ignored. For operator classes that do not use prefixes, - prefixType can be set to VOIDOID. + prefixType can be set to VOIDOID. Likewise, for operator classes that do not use node labels, - labelType can be set to VOIDOID. - canReturnData should be set true if the operator class + labelType can be set to VOIDOID. + canReturnData should be set true if the operator class is capable of reconstructing the originally-supplied index value. - longValuesOK should be set true only when the - attType is of variable length and the operator + longValuesOK should be set true only when the + attType is of variable length and the operator class is capable of segmenting long values by repeated suffixing (see ). @@ -309,20 +309,20 @@ typedef struct spgConfigOut - choose + choose Chooses a method for inserting a new value into an inner tuple. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_choose(internal, internal) RETURNS void ... - The first argument is a pointer to a spgChooseIn + The first argument is a pointer to a spgChooseIn C struct, containing input data for the function. - The second argument is a pointer to a spgChooseOut + The second argument is a pointer to a spgChooseOut C struct, which the function must fill with result data. typedef struct spgChooseIn @@ -380,25 +380,25 @@ typedef struct spgChooseOut } spgChooseOut; - datum is the original datum that was to be inserted + datum is the original datum that was to be inserted into the index. - leafDatum is initially the same as - datum, but can change at lower levels of the tree + leafDatum is initially the same as + datum, but can change at lower levels of the tree if the choose or picksplit methods change it. When the insertion search reaches a leaf page, - the current value of leafDatum is what will be stored + the current value of leafDatum is what will be stored in the newly created leaf tuple. - level is the current inner tuple's level, starting at + level is the current inner tuple's level, starting at zero for the root level. - allTheSame is true if the current inner tuple is + allTheSame is true if the current inner tuple is marked as containing multiple equivalent nodes (see ). - hasPrefix is true if the current inner tuple contains + hasPrefix is true if the current inner tuple contains a prefix; if so, - prefixDatum is its value. - nNodes is the number of child nodes contained in the + prefixDatum is its value. + nNodes is the number of child nodes contained in the inner tuple, and - nodeLabels is an array of their label values, or + nodeLabels is an array of their label values, or NULL if there are no labels. 
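As a concrete, hypothetical example of the first of these methods, a config function for an operator class over a fixed-length type could be written as below. The OIDs and the operator class itself are invented for illustration; only the calling convention (two internal arguments, as in the CREATE FUNCTION declaration shown above) and the spgConfigOut fields come from this documentation.

<programlisting>
/*
 * Hedged sketch of a config function for a hypothetical SP-GiST operator
 * class over a fixed-length type.  The type OIDs chosen are illustrative.
 */
#include "postgres.h"
#include "access/spgist.h"
#include "catalog/pg_type.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(my_config);

Datum
my_config(PG_FUNCTION_ARGS)
{
    spgConfigIn  *in = (spgConfigIn *) PG_GETARG_POINTER(0);
    spgConfigOut *out = (spgConfigOut *) PG_GETARG_POINTER(1);

    (void) in;                          /* would matter for a polymorphic opclass */
    out-&gt;prefixType = INT4OID;          /* prefixes are plain integers */
    out-&gt;labelType = VOIDOID;           /* this opclass does not label nodes */
    out-&gt;canReturnData = true;          /* original values can be reconstructed */
    out-&gt;longValuesOK = false;          /* fixed-length type, no suffixing */
    PG_RETURN_VOID();
}
</programlisting>

The choose, picksplit, and consistent functions follow the same pattern of reading the input struct from PG_GETARG_POINTER(0) and filling the output struct at PG_GETARG_POINTER(1).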
@@ -412,80 +412,80 @@ typedef struct spgChooseOut If the new value matches one of the existing child nodes, - set resultType to spgMatchNode. - Set nodeN to the index (from zero) of that node in + set resultType to spgMatchNode. + Set nodeN to the index (from zero) of that node in the node array. - Set levelAdd to the increment in - level caused by descending through that node, + Set levelAdd to the increment in + level caused by descending through that node, or leave it as zero if the operator class does not use levels. - Set restDatum to equal datum + Set restDatum to equal datum if the operator class does not modify datums from one level to the next, or otherwise set it to the modified value to be used as - leafDatum at the next level. + leafDatum at the next level. If a new child node must be added, - set resultType to spgAddNode. - Set nodeLabel to the label to be used for the new - node, and set nodeN to the index (from zero) at which + set resultType to spgAddNode. + Set nodeLabel to the label to be used for the new + node, and set nodeN to the index (from zero) at which to insert the node in the node array. After the node has been added, the choose function will be called again with the modified inner tuple; - that call should result in an spgMatchNode result. + that call should result in an spgMatchNode result. If the new value is inconsistent with the tuple prefix, - set resultType to spgSplitTuple. + set resultType to spgSplitTuple. This action moves all the existing nodes into a new lower-level inner tuple, and replaces the existing inner tuple with a tuple having a single downlink pointing to the new lower-level inner tuple. - Set prefixHasPrefix to indicate whether the new + Set prefixHasPrefix to indicate whether the new upper tuple should have a prefix, and if so set - prefixPrefixDatum to the prefix value. This new + prefixPrefixDatum to the prefix value. This new prefix value must be sufficiently less restrictive than the original to accept the new value to be indexed. - Set prefixNNodes to the number of nodes needed in the - new tuple, and set prefixNodeLabels to a palloc'd array + Set prefixNNodes to the number of nodes needed in the + new tuple, and set prefixNodeLabels to a palloc'd array holding their labels, or to NULL if node labels are not required. Note that the total size of the new upper tuple must be no more than the total size of the tuple it is replacing; this constrains the lengths of the new prefix and new labels. - Set childNodeN to the index (from zero) of the node + Set childNodeN to the index (from zero) of the node that will downlink to the new lower-level inner tuple. - Set postfixHasPrefix to indicate whether the new + Set postfixHasPrefix to indicate whether the new lower-level inner tuple should have a prefix, and if so set - postfixPrefixDatum to the prefix value. The + postfixPrefixDatum to the prefix value. The combination of these two prefixes and the downlink node's label (if any) must have the same meaning as the original prefix, because there is no opportunity to alter the node labels that are moved to the new lower-level tuple, nor to change any child index entries. After the node has been split, the choose function will be called again with the replacement inner tuple. - That call may return an spgAddNode result, if no suitable - node was created by the spgSplitTuple action. Eventually - choose must return spgMatchNode to + That call may return an spgAddNode result, if no suitable + node was created by the spgSplitTuple action. 
Eventually + choose must return spgMatchNode to allow the insertion to descend to the next level. - picksplit + picksplit Decides how to create a new inner tuple over a set of leaf tuples. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_picksplit(internal, internal) RETURNS void ... - The first argument is a pointer to a spgPickSplitIn + The first argument is a pointer to a spgPickSplitIn C struct, containing input data for the function. - The second argument is a pointer to a spgPickSplitOut + The second argument is a pointer to a spgPickSplitOut C struct, which the function must fill with result data. typedef struct spgPickSplitIn @@ -508,52 +508,52 @@ typedef struct spgPickSplitOut } spgPickSplitOut; - nTuples is the number of leaf tuples provided. - datums is an array of their datum values. - level is the current level that all the leaf tuples + nTuples is the number of leaf tuples provided. + datums is an array of their datum values. + level is the current level that all the leaf tuples share, which will become the level of the new inner tuple. - Set hasPrefix to indicate whether the new inner + Set hasPrefix to indicate whether the new inner tuple should have a prefix, and if so set - prefixDatum to the prefix value. - Set nNodes to indicate the number of nodes that + prefixDatum to the prefix value. + Set nNodes to indicate the number of nodes that the new inner tuple will contain, and - set nodeLabels to an array of their label values, + set nodeLabels to an array of their label values, or to NULL if node labels are not required. - Set mapTuplesToNodes to an array that gives the index + Set mapTuplesToNodes to an array that gives the index (from zero) of the node that each leaf tuple should be assigned to. - Set leafTupleDatums to an array of the values to + Set leafTupleDatums to an array of the values to be stored in the new leaf tuples (these will be the same as the - input datums if the operator class does not modify + input datums if the operator class does not modify datums from one level to the next). - Note that the picksplit function is + Note that the picksplit function is responsible for palloc'ing the - nodeLabels, mapTuplesToNodes and - leafTupleDatums arrays. + nodeLabels, mapTuplesToNodes and + leafTupleDatums arrays. If more than one leaf tuple is supplied, it is expected that the - picksplit function will classify them into more than + picksplit function will classify them into more than one node; otherwise it is not possible to split the leaf tuples across multiple pages, which is the ultimate purpose of this - operation. Therefore, if the picksplit function + operation. Therefore, if the picksplit function ends up placing all the leaf tuples in the same node, the core SP-GiST code will override that decision and generate an inner tuple in which the leaf tuples are assigned at random to several identically-labeled nodes. Such a tuple is marked - allTheSame to signify that this has happened. The - choose and inner_consistent functions + allTheSame to signify that this has happened. The + choose and inner_consistent functions must take suitable care with such inner tuples. See for more information. 
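
For the picksplit interface just described, a minimal sketch over a hypothetical operator class on int4 might split the supplied values by sign into two unlabeled nodes. The function name and splitting rule are invented for illustration and are not part of this patch.

<programlisting>
#include "postgres.h"
#include "fmgr.h"
#include "access/spgist.h"

PG_FUNCTION_INFO_V1(my_picksplit);

Datum
my_picksplit(PG_FUNCTION_ARGS)
{
    spgPickSplitIn  *in = (spgPickSplitIn *) PG_GETARG_POINTER(0);
    spgPickSplitOut *out = (spgPickSplitOut *) PG_GETARG_POINTER(1);
    int              i;

    out->hasPrefix = false;             /* this toy class uses no prefix */
    out->nNodes = 2;                    /* node 0: negatives, node 1: the rest */
    out->nodeLabels = NULL;             /* unlabeled nodes */
    out->mapTuplesToNodes = palloc(sizeof(int) * in->nTuples);
    out->leafTupleDatums = palloc(sizeof(Datum) * in->nTuples);

    for (i = 0; i < in->nTuples; i++)
    {
        out->mapTuplesToNodes[i] = (DatumGetInt32(in->datums[i]) < 0) ? 0 : 1;
        out->leafTupleDatums[i] = in->datums[i];    /* datums stored unmodified */
    }
    PG_RETURN_VOID();
}
</programlisting>

If every supplied value happened to land in the same node, the core code would fall back to the allTheSame handling described above.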
- picksplit can be applied to a single leaf tuple only - in the case that the config function set - longValuesOK to true and a larger-than-a-page input + picksplit can be applied to a single leaf tuple only + in the case that the config function set + longValuesOK to true and a larger-than-a-page input value has been supplied. In this case the point of the operation is to strip off a prefix and produce a new, shorter leaf datum value. The call will be repeated until a leaf datum short enough to fit on @@ -564,20 +564,20 @@ typedef struct spgPickSplitOut - inner_consistent + inner_consistent Returns set of nodes (branches) to follow during tree search. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_inner_consistent(internal, internal) RETURNS void ... - The first argument is a pointer to a spgInnerConsistentIn + The first argument is a pointer to a spgInnerConsistentIn C struct, containing input data for the function. - The second argument is a pointer to a spgInnerConsistentOut + The second argument is a pointer to a spgInnerConsistentOut C struct, which the function must fill with result data. @@ -610,90 +610,90 @@ typedef struct spgInnerConsistentOut } spgInnerConsistentOut; - The array scankeys, of length nkeys, + The array scankeys, of length nkeys, describes the index search condition(s). These conditions are combined with AND — only index entries that satisfy all of - them are interesting. (Note that nkeys = 0 implies + them are interesting. (Note that nkeys = 0 implies that all index entries satisfy the query.) Usually the consistent - function only cares about the sk_strategy and - sk_argument fields of each array entry, which + function only cares about the sk_strategy and + sk_argument fields of each array entry, which respectively give the indexable operator and comparison value. - In particular it is not necessary to check sk_flags to + In particular it is not necessary to check sk_flags to see if the comparison value is NULL, because the SP-GiST core code will filter out such conditions. - reconstructedValue is the value reconstructed for the - parent tuple; it is (Datum) 0 at the root level or if the - inner_consistent function did not provide a value at the + reconstructedValue is the value reconstructed for the + parent tuple; it is (Datum) 0 at the root level or if the + inner_consistent function did not provide a value at the parent level. - traversalValue is a pointer to any traverse data - passed down from the previous call of inner_consistent + traversalValue is a pointer to any traverse data + passed down from the previous call of inner_consistent on the parent index tuple, or NULL at the root level. - traversalMemoryContext is the memory context in which + traversalMemoryContext is the memory context in which to store output traverse values (see below). - level is the current inner tuple's level, starting at + level is the current inner tuple's level, starting at zero for the root level. - returnData is true if reconstructed data is + returnData is true if reconstructed data is required for this query; this will only be so if the - config function asserted canReturnData. - allTheSame is true if the current inner tuple is - marked all-the-same; in this case all the nodes have the + config function asserted canReturnData. 
+ allTheSame is true if the current inner tuple is + marked all-the-same; in this case all the nodes have the same label (if any) and so either all or none of them match the query (see ). - hasPrefix is true if the current inner tuple contains + hasPrefix is true if the current inner tuple contains a prefix; if so, - prefixDatum is its value. - nNodes is the number of child nodes contained in the + prefixDatum is its value. + nNodes is the number of child nodes contained in the inner tuple, and - nodeLabels is an array of their label values, or + nodeLabels is an array of their label values, or NULL if the nodes do not have labels. - nNodes must be set to the number of child nodes that + nNodes must be set to the number of child nodes that need to be visited by the search, and - nodeNumbers must be set to an array of their indexes. + nodeNumbers must be set to an array of their indexes. If the operator class keeps track of levels, set - levelAdds to an array of the level increments + levelAdds to an array of the level increments required when descending to each node to be visited. (Often these increments will be the same for all the nodes, but that's not necessarily so, so an array is used.) If value reconstruction is needed, set - reconstructedValues to an array of the values + reconstructedValues to an array of the values reconstructed for each child node to be visited; otherwise, leave - reconstructedValues as NULL. + reconstructedValues as NULL. If it is desired to pass down additional out-of-band information - (traverse values) to lower levels of the tree search, - set traversalValues to an array of the appropriate + (traverse values) to lower levels of the tree search, + set traversalValues to an array of the appropriate traverse values, one for each child node to be visited; otherwise, - leave traversalValues as NULL. - Note that the inner_consistent function is + leave traversalValues as NULL. + Note that the inner_consistent function is responsible for palloc'ing the - nodeNumbers, levelAdds, - reconstructedValues, and - traversalValues arrays in the current memory context. + nodeNumbers, levelAdds, + reconstructedValues, and + traversalValues arrays in the current memory context. However, any output traverse values pointed to by - the traversalValues array should be allocated - in traversalMemoryContext. + the traversalValues array should be allocated + in traversalMemoryContext. Each traverse value must be a single palloc'd chunk. - leaf_consistent + leaf_consistent Returns true if a leaf tuple satisfies a query. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_leaf_consistent(internal, internal) RETURNS bool ... - The first argument is a pointer to a spgLeafConsistentIn + The first argument is a pointer to a spgLeafConsistentIn C struct, containing input data for the function. - The second argument is a pointer to a spgLeafConsistentOut + The second argument is a pointer to a spgLeafConsistentOut C struct, which the function must fill with result data. typedef struct spgLeafConsistentIn @@ -716,40 +716,40 @@ typedef struct spgLeafConsistentOut } spgLeafConsistentOut; - The array scankeys, of length nkeys, + The array scankeys, of length nkeys, describes the index search condition(s). These conditions are combined with AND — only index entries that satisfy all of - them satisfy the query. (Note that nkeys = 0 implies + them satisfy the query. 
(Note that nkeys = 0 implies that all index entries satisfy the query.) Usually the consistent - function only cares about the sk_strategy and - sk_argument fields of each array entry, which + function only cares about the sk_strategy and + sk_argument fields of each array entry, which respectively give the indexable operator and comparison value. - In particular it is not necessary to check sk_flags to + In particular it is not necessary to check sk_flags to see if the comparison value is NULL, because the SP-GiST core code will filter out such conditions. - reconstructedValue is the value reconstructed for the - parent tuple; it is (Datum) 0 at the root level or if the - inner_consistent function did not provide a value at the + reconstructedValue is the value reconstructed for the + parent tuple; it is (Datum) 0 at the root level or if the + inner_consistent function did not provide a value at the parent level. - traversalValue is a pointer to any traverse data - passed down from the previous call of inner_consistent + traversalValue is a pointer to any traverse data + passed down from the previous call of inner_consistent on the parent index tuple, or NULL at the root level. - level is the current leaf tuple's level, starting at + level is the current leaf tuple's level, starting at zero for the root level. - returnData is true if reconstructed data is + returnData is true if reconstructed data is required for this query; this will only be so if the - config function asserted canReturnData. - leafDatum is the key value stored in the current + config function asserted canReturnData. + leafDatum is the key value stored in the current leaf tuple. - The function must return true if the leaf tuple matches the - query, or false if not. In the true case, - if returnData is true then - leafValue must be set to the value originally supplied + The function must return true if the leaf tuple matches the + query, or false if not. In the true case, + if returnData is true then + leafValue must be set to the value originally supplied to be indexed for this leaf tuple. Also, - recheck may be set to true if the match + recheck may be set to true if the match is uncertain and so the operator(s) must be re-applied to the actual heap tuple to verify the match. @@ -759,18 +759,18 @@ typedef struct spgLeafConsistentOut All the SP-GiST support methods are normally called in a short-lived - memory context; that is, CurrentMemoryContext will be reset + memory context; that is, CurrentMemoryContext will be reset after processing of each tuple. It is therefore not very important to - worry about pfree'ing everything you palloc. (The config + worry about pfree'ing everything you palloc. (The config method is an exception: it should try to avoid leaking memory. But - usually the config method need do nothing but assign + usually the config method need do nothing but assign constants into the passed parameter struct.) If the indexed column is of a collatable data type, the index collation will be passed to all the support methods, using the standard - PG_GET_COLLATION() mechanism. + PG_GET_COLLATION() mechanism. @@ -794,7 +794,7 @@ typedef struct spgLeafConsistentOut trees, in which each level of the tree includes a prefix that is short enough to fit on a page, and the final leaf level includes a suffix also short enough to fit on a page. The operator class should set - longValuesOK to TRUE only if it is prepared to arrange for + longValuesOK to TRUE only if it is prepared to arrange for this to happen. 
Otherwise, the SP-GiST core will reject any request to index a value that is too large to fit on an index page. @@ -814,8 +814,8 @@ typedef struct spgLeafConsistentOut links that chain such tuples together.) If the set of leaf tuples grows too large for a page, a split is performed and an intermediate inner tuple is inserted. For this to fix the problem, the new inner - tuple must divide the set of leaf values into more than one - node group. If the operator class's picksplit function + tuple must divide the set of leaf values into more than one + node group. If the operator class's picksplit function fails to do that, the SP-GiST core resorts to extraordinary measures described in . @@ -830,58 +830,58 @@ typedef struct spgLeafConsistentOut corresponding to the four quadrants around the inner tuple's centroid point. In such a case the code typically works with the nodes by number, and there is no need for explicit node labels. To suppress - node labels (and thereby save some space), the picksplit - function can return NULL for the nodeLabels array, - and likewise the choose function can return NULL for - the prefixNodeLabels array during - a spgSplitTuple action. - This will in turn result in nodeLabels being NULL during - subsequent calls to choose and inner_consistent. + node labels (and thereby save some space), the picksplit + function can return NULL for the nodeLabels array, + and likewise the choose function can return NULL for + the prefixNodeLabels array during + a spgSplitTuple action. + This will in turn result in nodeLabels being NULL during + subsequent calls to choose and inner_consistent. In principle, node labels could be used for some inner tuples and omitted for others in the same index. When working with an inner tuple having unlabeled nodes, it is an error - for choose to return spgAddNode, since the set + for choose to return spgAddNode, since the set of nodes is supposed to be fixed in such cases. - <quote>All-the-same</> Inner Tuples + <quote>All-the-same</quote> Inner Tuples The SP-GiST core can override the results of the - operator class's picksplit function when - picksplit fails to divide the supplied leaf values into + operator class's picksplit function when + picksplit fails to divide the supplied leaf values into at least two node categories. When this happens, the new inner tuple is created with multiple nodes that each have the same label (if any) - that picksplit gave to the one node it did use, and the + that picksplit gave to the one node it did use, and the leaf values are divided at random among these equivalent nodes. - The allTheSame flag is set on the inner tuple to warn the - choose and inner_consistent functions that the + The allTheSame flag is set on the inner tuple to warn the + choose and inner_consistent functions that the tuple does not have the node set that they might otherwise expect. - When dealing with an allTheSame tuple, a choose - result of spgMatchNode is interpreted to mean that the new + When dealing with an allTheSame tuple, a choose + result of spgMatchNode is interpreted to mean that the new value can be assigned to any of the equivalent nodes; the core code will - ignore the supplied nodeN value and descend into one + ignore the supplied nodeN value and descend into one of the nodes at random (so as to keep the tree balanced). 
It is an - error for choose to return spgAddNode, since + error for choose to return spgAddNode, since that would make the nodes not all equivalent; the - spgSplitTuple action must be used if the value to be inserted + spgSplitTuple action must be used if the value to be inserted doesn't match the existing nodes. - When dealing with an allTheSame tuple, the - inner_consistent function should return either all or none + When dealing with an allTheSame tuple, the + inner_consistent function should return either all or none of the nodes as targets for continuing the index search, since they are all equivalent. This may or may not require any special-case code, - depending on how much the inner_consistent function normally + depending on how much the inner_consistent function normally assumes about the meaning of the nodes. @@ -895,8 +895,8 @@ typedef struct spgLeafConsistentOut The PostgreSQL source distribution includes several examples of index operator classes for SP-GiST, as described in . Look - into src/backend/access/spgist/ - and src/backend/utils/adt/ to see the code. + into src/backend/access/spgist/ + and src/backend/utils/adt/ to see the code. diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml index 3594f9dce1..e2b44c5fa1 100644 --- a/doc/src/sgml/spi.sgml +++ b/doc/src/sgml/spi.sgml @@ -203,7 +203,7 @@ int SPI_execute(const char * command, bool rea SPI_execute executes the specified SQL command for count rows. If read_only - is true, the command must be read-only, and execution overhead + is true, the command must be read-only, and execution overhead is somewhat reduced. @@ -225,13 +225,13 @@ SPI_execute("SELECT * FROM foo", true, 5); SPI_execute("INSERT INTO foo SELECT * FROM bar", false, 5); - inserts all rows from bar, ignoring the + inserts all rows from bar, ignoring the count parameter. However, with SPI_execute("INSERT INTO foo SELECT * FROM bar RETURNING *", false, 5); at most 5 rows would be inserted, since execution would stop after the - fifth RETURNING result row is retrieved. + fifth RETURNING result row is retrieved. @@ -244,26 +244,26 @@ SPI_execute("INSERT INTO foo SELECT * FROM bar RETURNING *", false, 5); - When read_only is false, + When read_only is false, SPI_execute increments the command - counter and computes a new snapshot before executing each + counter and computes a new snapshot before executing each command in the string. The snapshot does not actually change if the - current transaction isolation level is SERIALIZABLE or REPEATABLE READ, but in - READ COMMITTED mode the snapshot update allows each command to + current transaction isolation level is SERIALIZABLE or REPEATABLE READ, but in + READ COMMITTED mode the snapshot update allows each command to see the results of newly committed transactions from other sessions. This is essential for consistent behavior when the commands are modifying the database. - When read_only is true, + When read_only is true, SPI_execute does not update either the snapshot - or the command counter, and it allows only plain SELECT + or the command counter, and it allows only plain SELECT commands to appear in the command string. The commands are executed using the snapshot previously established for the surrounding query. This execution mode is somewhat faster than the read/write mode due to eliminating per-command overhead. 
It also allows genuinely - stable functions to be built: since successive executions + stable functions to be built: since successive executions will all use the same snapshot, there will be no change in the results. @@ -284,11 +284,11 @@ SPI_execute("INSERT INTO foo SELECT * FROM bar RETURNING *", false, 5); then you can use the global pointer SPITupleTable *SPI_tuptable to access the result rows. Some utility commands (such as - EXPLAIN) also return row sets, and SPI_tuptable + EXPLAIN) also return row sets, and SPI_tuptable will contain the result in these cases too. Some utility commands - (COPY, CREATE TABLE AS) don't return a row set, so - SPI_tuptable is NULL, but they still return the number of - rows processed in SPI_processed. + (COPY, CREATE TABLE AS) don't return a row set, so + SPI_tuptable is NULL, but they still return the number of + rows processed in SPI_processed. @@ -304,17 +304,17 @@ typedef struct HeapTuple *vals; /* rows */ } SPITupleTable; - vals is an array of pointers to rows. (The number + vals is an array of pointers to rows. (The number of valid entries is given by SPI_processed.) - tupdesc is a row descriptor which you can pass to - SPI functions dealing with rows. tuptabcxt, - alloced, and free are internal + tupdesc is a row descriptor which you can pass to + SPI functions dealing with rows. tuptabcxt, + alloced, and free are internal fields not intended for use by SPI callers. SPI_finish frees all - SPITupleTables allocated during the current + SPITupleTables allocated during the current procedure. You can free a particular result table earlier, if you are done with it, by calling SPI_freetuptable. @@ -336,7 +336,7 @@ typedef struct bool read_only - true for read-only execution + true for read-only execution @@ -345,7 +345,7 @@ typedef struct maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -365,7 +365,7 @@ typedef struct if a SELECT (but not SELECT - INTO) was executed + INTO) was executed @@ -473,7 +473,7 @@ typedef struct SPI_ERROR_COPY - if COPY TO stdout or COPY FROM stdin + if COPY TO stdout or COPY FROM stdin was attempted @@ -484,13 +484,13 @@ typedef struct if a transaction manipulation command was attempted - (BEGIN, - COMMIT, - ROLLBACK, - SAVEPOINT, - PREPARE TRANSACTION, - COMMIT PREPARED, - ROLLBACK PREPARED, + (BEGIN, + COMMIT, + ROLLBACK, + SAVEPOINT, + PREPARE TRANSACTION, + COMMIT PREPARED, + ROLLBACK PREPARED, or any variant thereof) @@ -560,7 +560,7 @@ int SPI_exec(const char * command, long count< SPI_exec is the same as SPI_execute, with the latter's read_only parameter always taken as - false. + false. @@ -582,7 +582,7 @@ int SPI_exec(const char * command, long count< maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -628,7 +628,7 @@ int SPI_execute_with_args(const char *command, SPI_execute_with_args executes a command that might include references to externally supplied parameters. The command text - refers to a parameter as $n, and + refers to a parameter as $n, and the call specifies data types and values for each such symbol. read_only and count have the same interpretation as in SPI_execute. @@ -642,7 +642,7 @@ int SPI_execute_with_args(const char *command, - Similar results can be achieved with SPI_prepare followed by + Similar results can be achieved with SPI_prepare followed by SPI_execute_plan; however, when using this function the query plan is always customized to the specific parameter values provided. 
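
A usage sketch for SPI_execute_with_args follows, under the assumptions that SPI_connect has already succeeded in the calling code and that a table mytab(x int4) exists; both are illustrative assumptions, not part of this patch.

<programlisting>
#include "postgres.h"
#include "executor/spi.h"
#include "catalog/pg_type.h"

static void
insert_one_value(int32 val)
{
    Oid    argtypes[1] = {INT4OID};
    Datum  values[1];
    int    ret;

    values[0] = Int32GetDatum(val);
    ret = SPI_execute_with_args("INSERT INTO mytab(x) VALUES ($1)",
                                1, argtypes, values,
                                NULL,   /* no parameter is null */
                                false,  /* read_only = false: this writes */
                                0);     /* count = 0: no row limit */
    if (ret != SPI_OK_INSERT)
        elog(ERROR, "SPI_execute_with_args failed: %s",
             SPI_result_code_string(ret));
}
</programlisting>

Passing NULL for the nulls array is the common case when no parameter is null, as noted above.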
@@ -670,7 +670,7 @@ int SPI_execute_with_args(const char *command, int nargs - number of input parameters ($1, $2, etc.) + number of input parameters ($1, $2, etc.) @@ -707,12 +707,12 @@ int SPI_execute_with_args(const char *command, If nulls is NULL then SPI_execute_with_args assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -720,7 +720,7 @@ int SPI_execute_with_args(const char *command, bool read_only - true for read-only execution + true for read-only execution @@ -729,7 +729,7 @@ int SPI_execute_with_args(const char *command, maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -796,7 +796,7 @@ SPIPlanPtr SPI_prepare(const char * command, int A prepared command can be generalized by writing parameters - ($1, $2, etc.) in place of what would be + ($1, $2, etc.) in place of what would be constants in a normal command. The actual values of the parameters are then specified when SPI_execute_plan is called. This allows the prepared command to be used over a wider range of @@ -829,7 +829,7 @@ SPIPlanPtr SPI_prepare(const char * command, int int nargs - number of input parameters ($1, $2, etc.) + number of input parameters ($1, $2, etc.) @@ -851,14 +851,14 @@ SPIPlanPtr SPI_prepare(const char * command, int SPI_prepare returns a non-null pointer to an - SPIPlan, which is an opaque struct representing a prepared + SPIPlan, which is an opaque struct representing a prepared statement. On error, NULL will be returned, and SPI_result will be set to one of the same error codes used by SPI_execute, except that it is set to SPI_ERROR_ARGUMENT if command is NULL, or if - nargs is less than 0, or if nargs is - greater than 0 and argtypes is NULL. + nargs is less than 0, or if nargs is + greater than 0 and argtypes is NULL. @@ -875,21 +875,21 @@ SPIPlanPtr SPI_prepare(const char * command, int CURSOR_OPT_GENERIC_PLAN or - CURSOR_OPT_CUSTOM_PLAN flag to + passing the CURSOR_OPT_GENERIC_PLAN or + CURSOR_OPT_CUSTOM_PLAN flag to SPI_prepare_cursor, to force use of generic or custom plans respectively. Although the main point of a prepared statement is to avoid repeated parse - analysis and planning of the statement, PostgreSQL will + analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes since the previous use of the prepared statement. Also, if the value of changes from one use to the next, the statement will be re-parsed using the new - search_path. (This latter behavior is new as of + search_path. (This latter behavior is new as of PostgreSQL 9.3.) See for more information about the behavior of prepared statements. @@ -900,14 +900,14 @@ SPIPlanPtr SPI_prepare(const char * command, int - SPIPlanPtr is declared as a pointer to an opaque struct type in - spi.h. It is unwise to try to access its contents + SPIPlanPtr is declared as a pointer to an opaque struct type in + spi.h. 
It is unwise to try to access its contents directly, as that makes your code much more likely to break in future revisions of PostgreSQL. - The name SPIPlanPtr is somewhat historical, since the data + The name SPIPlanPtr is somewhat historical, since the data structure no longer necessarily contains an execution plan. @@ -941,9 +941,9 @@ SPIPlanPtr SPI_prepare_cursor(const char * command, int < SPI_prepare_cursor is identical to SPI_prepare, except that it also allows specification - of the planner's cursor options parameter. This is a bit mask + of the planner's cursor options parameter. This is a bit mask having the values shown in nodes/parsenodes.h - for the options field of DeclareCursorStmt. + for the options field of DeclareCursorStmt. SPI_prepare always takes the cursor options as zero. @@ -965,7 +965,7 @@ SPIPlanPtr SPI_prepare_cursor(const char * command, int < int nargs - number of input parameters ($1, $2, etc.) + number of input parameters ($1, $2, etc.) @@ -1004,7 +1004,7 @@ SPIPlanPtr SPI_prepare_cursor(const char * command, int < Notes - Useful bits to set in cursorOptions include + Useful bits to set in cursorOptions include CURSOR_OPT_SCROLL, CURSOR_OPT_NO_SCROLL, CURSOR_OPT_FAST_PLAN, @@ -1262,9 +1262,9 @@ bool SPI_is_cursor_plan(SPIPlanPtr plan) as an argument to SPI_cursor_open, or false if that is not the case. The criteria are that the plan represents one single command and that this - command returns tuples to the caller; for example, SELECT - is allowed unless it contains an INTO clause, and - UPDATE is allowed only if it contains a RETURNING + command returns tuples to the caller; for example, SELECT + is allowed unless it contains an INTO clause, and + UPDATE is allowed only if it contains a RETURNING clause. @@ -1368,12 +1368,12 @@ int SPI_execute_plan(SPIPlanPtr plan, Datum * If nulls is NULL then SPI_execute_plan assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -1381,7 +1381,7 @@ int SPI_execute_plan(SPIPlanPtr plan, Datum * bool read_only - true for read-only execution + true for read-only execution @@ -1390,7 +1390,7 @@ int SPI_execute_plan(SPIPlanPtr plan, Datum * maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -1467,10 +1467,10 @@ int SPI_execute_plan_with_paramlist(SPIPlanPtr plan, prepared by SPI_prepare. This function is equivalent to SPI_execute_plan except that information about the parameter values to be passed to the - query is presented differently. The ParamListInfo + query is presented differently. The ParamListInfo representation can be convenient for passing down values that are already available in that format. It also supports use of dynamic - parameter sets via hook functions specified in ParamListInfo. + parameter sets via hook functions specified in ParamListInfo. 
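
Putting SPI_prepare, SPI_saveplan, and SPI_execute_plan together, here is a sketch of the prepare-once, execute-many pattern. The query, table name, and function name are assumptions, and an active SPI connection is presumed.

<programlisting>
#include "postgres.h"
#include "executor/spi.h"
#include "catalog/pg_type.h"

static void
lookup_by_id(int32 id)
{
    static SPIPlanPtr plan = NULL;
    Oid    argtypes[1] = {INT4OID};
    Datum  values[1];
    int    ret;

    if (plan == NULL)
    {
        SPIPlanPtr p = SPI_prepare("SELECT * FROM mytab WHERE id = $1",
                                   1, argtypes);

        if (p == NULL)
            elog(ERROR, "SPI_prepare failed: %s",
                 SPI_result_code_string(SPI_result));
        plan = SPI_saveplan(p);     /* keep the plan beyond SPI_finish */
    }

    values[0] = Int32GetDatum(id);
    ret = SPI_execute_plan(plan, values,
                           NULL,    /* no parameter is null */
                           true,    /* read_only */
                           0);      /* no row limit */
    if (ret != SPI_OK_SELECT)
        elog(ERROR, "SPI_execute_plan failed: %s",
             SPI_result_code_string(ret));
    /* results are now available in SPI_processed and SPI_tuptable */
}
</programlisting>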
@@ -1499,7 +1499,7 @@ int SPI_execute_plan_with_paramlist(SPIPlanPtr plan, bool read_only - true for read-only execution + true for read-only execution @@ -1508,7 +1508,7 @@ int SPI_execute_plan_with_paramlist(SPIPlanPtr plan, maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -1558,7 +1558,7 @@ int SPI_execp(SPIPlanPtr plan, Datum * values< SPI_execp is the same as SPI_execute_plan, with the latter's read_only parameter always taken as - false. + false. @@ -1597,12 +1597,12 @@ int SPI_execp(SPIPlanPtr plan, Datum * values< If nulls is NULL then SPI_execp assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -1612,7 +1612,7 @@ int SPI_execp(SPIPlanPtr plan, Datum * values< maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -1729,12 +1729,12 @@ Portal SPI_cursor_open(const char * name, SPIPlanPtr nulls is NULL then SPI_cursor_open assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -1742,7 +1742,7 @@ Portal SPI_cursor_open(const char * name, SPIPlanPtr bool read_only - true for read-only execution + true for read-only execution @@ -1753,7 +1753,7 @@ Portal SPI_cursor_open(const char * name, SPIPlanPtr Pointer to portal containing the cursor. Note there is no error - return convention; any error will be reported via elog. + return convention; any error will be reported via elog. @@ -1836,7 +1836,7 @@ Portal SPI_cursor_open_with_args(const char *name, int nargs - number of input parameters ($1, $2, etc.) + number of input parameters ($1, $2, etc.) @@ -1873,12 +1873,12 @@ Portal SPI_cursor_open_with_args(const char *name, If nulls is NULL then SPI_cursor_open_with_args assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -1886,7 +1886,7 @@ Portal SPI_cursor_open_with_args(const char *name, bool read_only - true for read-only execution + true for read-only execution @@ -1906,7 +1906,7 @@ Portal SPI_cursor_open_with_args(const char *name, Pointer to portal containing the cursor. Note there is no error - return convention; any error will be reported via elog. + return convention; any error will be reported via elog. 
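
A sketch of opening a portal with SPI_cursor_open_with_args, assuming an active SPI connection and a hypothetical table mytab(id int4, payload text); the names are illustrative only.

<programlisting>
#include "postgres.h"
#include "executor/spi.h"
#include "catalog/pg_type.h"

static Portal
open_big_scan(int32 min_id)
{
    Oid    argtypes[1] = {INT4OID};
    Datum  values[1];

    values[0] = Int32GetDatum(min_id);

    /* NULL portal name lets the system choose one; 0 = default cursor options */
    return SPI_cursor_open_with_args(NULL,
                                     "SELECT id, payload FROM mytab WHERE id >= $1",
                                     1, argtypes, values,
                                     NULL,   /* no parameter is null */
                                     true,   /* read_only */
                                     0);     /* cursorOptions */
}
</programlisting>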
@@ -1944,10 +1944,10 @@ Portal SPI_cursor_open_with_paramlist(const char *name, SPI_prepare. This function is equivalent to SPI_cursor_open except that information about the parameter values to be passed to the - query is presented differently. The ParamListInfo + query is presented differently. The ParamListInfo representation can be convenient for passing down values that are already available in that format. It also supports use of dynamic - parameter sets via hook functions specified in ParamListInfo. + parameter sets via hook functions specified in ParamListInfo. @@ -1991,7 +1991,7 @@ Portal SPI_cursor_open_with_paramlist(const char *name, bool read_only - true for read-only execution + true for read-only execution @@ -2002,7 +2002,7 @@ Portal SPI_cursor_open_with_paramlist(const char *name, Pointer to portal containing the cursor. Note there is no error - return convention; any error will be reported via elog. + return convention; any error will be reported via elog. @@ -2090,7 +2090,7 @@ void SPI_cursor_fetch(Portal portal, bool forw SPI_cursor_fetch fetches some rows from a cursor. This is equivalent to a subset of the SQL command - FETCH (see SPI_scroll_cursor_fetch + FETCH (see SPI_scroll_cursor_fetch for more functionality). @@ -2175,7 +2175,7 @@ void SPI_cursor_move(Portal portal, bool forwa SPI_cursor_move skips over some number of rows in a cursor. This is equivalent to a subset of the SQL command - MOVE (see SPI_scroll_cursor_move + MOVE (see SPI_scroll_cursor_move for more functionality). @@ -2250,7 +2250,7 @@ void SPI_scroll_cursor_fetch(Portal portal, FetchDirectio SPI_scroll_cursor_fetch fetches some rows from a - cursor. This is equivalent to the SQL command FETCH. + cursor. This is equivalent to the SQL command FETCH. @@ -2350,7 +2350,7 @@ void SPI_scroll_cursor_move(Portal portal, FetchDirection SPI_scroll_cursor_move skips over some number of rows in a cursor. This is equivalent to the SQL command - MOVE. + MOVE. @@ -2400,7 +2400,7 @@ void SPI_scroll_cursor_move(Portal portal, FetchDirection SPI_processed is set as in SPI_execute if successful. - SPI_tuptable is set to NULL, since + SPI_tuptable is set to NULL, since no rows are returned by this function. @@ -2628,7 +2628,7 @@ SPIPlanPtr SPI_saveplan(SPIPlanPtr plan) The originally passed-in statement is not freed, so you might wish to do SPI_freeplan on it to avoid leaking memory - until SPI_finish. + until SPI_finish. @@ -2975,7 +2975,7 @@ int SPI_register_trigger_data(TriggerData *tdata) The functions described here provide an interface for extracting - information from result sets returned by SPI_execute and + information from result sets returned by SPI_execute and other SPI functions. @@ -3082,7 +3082,7 @@ int SPI_fnumber(TupleDesc rowdesc, const char * If colname refers to a system column (e.g., - oid) then the appropriate negative column number will + oid) then the appropriate negative column number will be returned. The caller should be careful to test the return value for exact equality to SPI_ERROR_NOATTRIBUTE to detect an error; testing the result for less than or equal to 0 is @@ -3617,7 +3617,7 @@ const char * SPI_result_code_string(int code); to keep track of individual objects to avoid memory leaks; instead only a relatively small number of contexts have to be managed. palloc and related functions allocate memory - from the current context. + from the current context. 
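
Several of the interfaces covered here (SPI_cursor_fetch, SPI_fnumber, SPI_getvalue, SPI_freetuptable, and SPI_cursor_close) combine naturally into a batched scan loop. This sketch continues the hypothetical portal opened in the previous example; the column name is an assumption.

<programlisting>
#include "postgres.h"
#include "executor/spi.h"

static void
scan_portal(Portal portal)
{
    for (;;)
    {
        uint64  i;
        int     col;

        SPI_cursor_fetch(portal, true /* forward */, 100 /* rows per batch */);
        if (SPI_processed == 0)
            break;

        /* SPI_fnumber returns SPI_ERROR_NOATTRIBUTE if the column is missing */
        col = SPI_fnumber(SPI_tuptable->tupdesc, "payload");
        for (i = 0; i < SPI_processed; i++)
        {
            char *txt = SPI_getvalue(SPI_tuptable->vals[i],
                                     SPI_tuptable->tupdesc, col);

            elog(NOTICE, "payload = %s", txt ? txt : "(null)");
        }
        SPI_freetuptable(SPI_tuptable);   /* don't accumulate batch results */
    }
    SPI_cursor_close(portal);
}
</programlisting>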
@@ -3943,7 +3943,7 @@ HeapTupleHeader SPI_returntuple(HeapTuple row, TupleDesc Note that this should be used for functions that are declared to return composite types. It is not used for triggers; use - SPI_copytuple for returning a modified row in a trigger. + SPI_copytuple for returning a modified row in a trigger. @@ -4087,12 +4087,12 @@ HeapTuple SPI_modifytuple(Relation rel, HeapTuple nulls is NULL then SPI_modifytuple assumes that no new values are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding new value is - non-null, or 'n' if the corresponding new value is + array should be ' ' if the corresponding new value is + non-null, or 'n' if the corresponding new value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: it - does not need a '\0' terminator. + does not need a '\0' terminator. @@ -4115,10 +4115,10 @@ HeapTuple SPI_modifytuple(Relation rel, HeapTuple SPI_ERROR_ARGUMENT - if rel is NULL, or if - row is NULL, or if ncols - is less than or equal to 0, or if colnum is - NULL, or if values is NULL. + if rel is NULL, or if + row is NULL, or if ncols + is less than or equal to 0, or if colnum is + NULL, or if values is NULL. @@ -4127,9 +4127,9 @@ HeapTuple SPI_modifytuple(Relation rel, HeapTuple SPI_ERROR_NOATTRIBUTE - if colnum contains an invalid column number (less + if colnum contains an invalid column number (less than or equal to 0 or greater than the number of columns in - row) + row) @@ -4211,7 +4211,7 @@ void SPI_freetuple(HeapTuple row) SPI_freetuptable - free a row set created by SPI_execute or a similar + free a row set created by SPI_execute or a similar function @@ -4227,7 +4227,7 @@ void SPI_freetuptable(SPITupleTable * tuptable) SPI_freetuptable frees a row set created by a prior SPI command execution function, such as - SPI_execute. Therefore, this function is often called + SPI_execute. Therefore, this function is often called with the global variable SPI_tuptable as argument. @@ -4236,14 +4236,14 @@ void SPI_freetuptable(SPITupleTable * tuptable) This function is useful if a SPI procedure needs to execute multiple commands and does not want to keep the results of earlier commands around until it ends. Note that any unfreed row sets will - be freed anyway at SPI_finish. + be freed anyway at SPI_finish. Also, if a subtransaction is started and then aborted within execution of a SPI procedure, SPI automatically frees any row sets created while the subtransaction was running. - Beginning in PostgreSQL 9.3, + Beginning in PostgreSQL 9.3, SPI_freetuptable contains guard logic to protect against duplicate deletion requests for the same row set. In previous releases, duplicate deletions would lead to crashes. @@ -4370,8 +4370,8 @@ INSERT INTO a SELECT * FROM a; All standard procedural languages set the SPI read-write mode depending on the volatility attribute of the function. Commands of - STABLE and IMMUTABLE functions are done in - read-only mode, while commands of VOLATILE functions are + STABLE and IMMUTABLE functions are done in + read-only mode, while commands of VOLATILE functions are done in read-write mode. While authors of C functions are able to violate this convention, it's unlikely to be a good idea to do so. 
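
To close out the SPI material, here is a self-contained sketch of a SQL-callable C function that runs its query read-only, in line with the volatility convention just described. The function and table names are invented for illustration and are not part of this patch.

<programlisting>
#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"

PG_MODULE_MAGIC;                    /* once per loadable module */

PG_FUNCTION_INFO_V1(mytab_rowcount);

/*
 * Hypothetical SQL-callable function: SELECT mytab_rowcount();
 * Intended to be declared STABLE, so read-only SPI execution is appropriate.
 */
Datum
mytab_rowcount(PG_FUNCTION_ARGS)
{
    int64  result;
    int    ret;

    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");

    ret = SPI_execute("SELECT 1 FROM mytab", true /* read_only */, 0);
    if (ret != SPI_OK_SELECT)
        elog(ERROR, "SPI_execute failed: %s", SPI_result_code_string(ret));

    result = (int64) SPI_processed;     /* number of rows returned */

    SPI_finish();                       /* frees SPI_tuptable and other SPI memory */
    PG_RETURN_INT64(result);
}
</programlisting>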
diff --git a/doc/src/sgml/sslinfo.sgml b/doc/src/sgml/sslinfo.sgml index 1fd323a0b6..308e3e03a4 100644 --- a/doc/src/sgml/sslinfo.sgml +++ b/doc/src/sgml/sslinfo.sgml @@ -8,15 +8,15 @@ - The sslinfo module provides information about the SSL + The sslinfo module provides information about the SSL certificate that the current client provided when connecting to - PostgreSQL. The module is useless (most functions + PostgreSQL. The module is useless (most functions will return NULL) if the current connection does not use SSL. This extension won't build at all unless the installation was - configured with --with-openssl. + configured with --with-openssl. @@ -126,7 +126,7 @@ - The result looks like /CN=Somebody /C=Some country/O=Some organization. + The result looks like /CN=Somebody /C=Some country/O=Some organization. @@ -142,7 +142,7 @@ Returns the full issuer name of the current client certificate, converting character data into the current database encoding. Encoding conversions - are handled the same as for ssl_client_dn. + are handled the same as for ssl_client_dn. The combination of the return value of this function with the @@ -195,7 +195,7 @@ role emailAddress - All of these fields are optional, except commonName. + All of these fields are optional, except commonName. It depends entirely on your CA's policy which of them would be included and which wouldn't. The meaning of these fields, however, is strictly defined by @@ -214,7 +214,7 @@ emailAddress - Same as ssl_client_dn_field, but for the certificate issuer + Same as ssl_client_dn_field, but for the certificate issuer rather than the certificate subject. diff --git a/doc/src/sgml/start.sgml b/doc/src/sgml/start.sgml index 1ce1a24e10..7a61b50579 100644 --- a/doc/src/sgml/start.sgml +++ b/doc/src/sgml/start.sgml @@ -162,7 +162,7 @@ createdb: command not found - then PostgreSQL was not installed properly. Either it was not + then PostgreSQL was not installed properly. Either it was not installed at all or your shell's search path was not set to include it. Try calling the command with an absolute path instead: @@ -191,17 +191,17 @@ createdb: could not connect to database postgres: could not connect to server: N createdb: could not connect to database postgres: FATAL: role "joe" does not exist where your own login name is mentioned. This will happen if the - administrator has not created a PostgreSQL user account - for you. (PostgreSQL user accounts are distinct from + administrator has not created a PostgreSQL user account + for you. (PostgreSQL user accounts are distinct from operating system user accounts.) If you are the administrator, see for help creating accounts. You will need to - become the operating system user under which PostgreSQL - was installed (usually postgres) to create the first user + become the operating system user under which PostgreSQL + was installed (usually postgres) to create the first user account. It could also be that you were assigned a - PostgreSQL user name that is different from your - operating system user name; in that case you need to use the @@ -288,7 +288,7 @@ createdb: database creation failed: ERROR: permission denied to create database Running the PostgreSQL interactive - terminal program, called psql, which allows you + terminal program, called psql, which allows you to interactively enter, edit, and execute SQL commands. 
@@ -298,7 +298,7 @@ createdb: database creation failed: ERROR: permission denied to create database Using an existing graphical frontend tool like pgAdmin or an office suite with - ODBC or JDBC support to create and manipulate a + ODBC or JDBC support to create and manipulate a database. These possibilities are not covered in this tutorial. diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml index aed2cf8bca..0f9bddf7ab 100644 --- a/doc/src/sgml/storage.sgml +++ b/doc/src/sgml/storage.sgml @@ -21,23 +21,23 @@ directories. Traditionally, the configuration and data files used by a database cluster are stored together within the cluster's data -directory, commonly referred to as PGDATA (after the name of the +directory, commonly referred to as PGDATA (after the name of the environment variable that can be used to define it). A common location for -PGDATA is /var/lib/pgsql/data. Multiple clusters, +PGDATA is /var/lib/pgsql/data. Multiple clusters, managed by different server instances, can exist on the same machine. -The PGDATA directory contains several subdirectories and control +The PGDATA directory contains several subdirectories and control files, as shown in . In addition to these required items, the cluster configuration files postgresql.conf, pg_hba.conf, and pg_ident.conf are traditionally stored in -PGDATA, although it is possible to place them elsewhere. +PGDATA, although it is possible to place them elsewhere. -Contents of <varname>PGDATA</> +Contents of <varname>PGDATA</varname> @@ -51,126 +51,126 @@ Item - PG_VERSION + PG_VERSION A file containing the major version number of PostgreSQL - base + base Subdirectory containing per-database subdirectories - current_logfiles + current_logfiles File recording the log file(s) currently written to by the logging collector - global + global Subdirectory containing cluster-wide tables, such as - pg_database + pg_database - pg_commit_ts + pg_commit_ts Subdirectory containing transaction commit timestamp data - pg_dynshmem + pg_dynshmem Subdirectory containing files used by the dynamic shared memory subsystem - pg_logical + pg_logical Subdirectory containing status data for logical decoding - pg_multixact + pg_multixact Subdirectory containing multitransaction status data (used for shared row locks) - pg_notify + pg_notify Subdirectory containing LISTEN/NOTIFY status data - pg_replslot + pg_replslot Subdirectory containing replication slot data - pg_serial + pg_serial Subdirectory containing information about committed serializable transactions - pg_snapshots + pg_snapshots Subdirectory containing exported snapshots - pg_stat + pg_stat Subdirectory containing permanent files for the statistics subsystem - pg_stat_tmp + pg_stat_tmp Subdirectory containing temporary files for the statistics subsystem - pg_subtrans + pg_subtrans Subdirectory containing subtransaction status data - pg_tblspc + pg_tblspc Subdirectory containing symbolic links to tablespaces - pg_twophase + pg_twophase Subdirectory containing state files for prepared transactions - pg_wal + pg_wal Subdirectory containing WAL (Write Ahead Log) files - pg_xact + pg_xact Subdirectory containing transaction commit status data - postgresql.auto.conf + postgresql.auto.conf A file used for storing configuration parameters that are set by ALTER SYSTEM - postmaster.opts + postmaster.opts A file recording the command-line options the server was last started with - postmaster.pid + postmaster.pid A lock file recording the current postmaster process ID (PID), cluster data 
directory path, postmaster start timestamp, port number, Unix-domain socket directory path (empty on Windows), - first valid listen_address (IP address or *, or empty if + first valid listen_address (IP address or *, or empty if not listening on TCP), and shared memory segment ID (this file is not present after server shutdown) @@ -182,25 +182,25 @@ last started with For each database in the cluster there is a subdirectory within -PGDATA/base, named after the database's OID in -pg_database. This subdirectory is the default location +PGDATA/base, named after the database's OID in +pg_database. This subdirectory is the default location for the database's files; in particular, its system catalogs are stored there. Each table and index is stored in a separate file. For ordinary relations, -these files are named after the table or index's filenode number, -which can be found in pg_class.relfilenode. But +these files are named after the table or index's filenode number, +which can be found in pg_class.relfilenode. But for temporary relations, the file name is of the form -tBBB_FFF, where BBB -is the backend ID of the backend which created the file, and FFF +tBBB_FFF, where BBB +is the backend ID of the backend which created the file, and FFF is the filenode number. In either case, in addition to the main file (a/k/a -main fork), each table and index has a free space map (see free space map (see ), which stores information about free space available in the relation. The free space map is stored in a file named with the filenode -number plus the suffix _fsm. Tables also have a -visibility map, stored in a fork with the suffix _vm, +number plus the suffix _fsm. Tables also have a +visibility map, stored in a fork with the suffix _vm, to track which pages are known to have no dead tuples. The visibility map is described further in . Unlogged tables and indexes have a third fork, known as the initialization fork, which is stored in a fork @@ -210,36 +210,36 @@ with the suffix _init (see ). Note that while a table's filenode often matches its OID, this is -not necessarily the case; some operations, like -TRUNCATE, REINDEX, CLUSTER and some forms -of ALTER TABLE, can change the filenode while preserving the OID. +not necessarily the case; some operations, like +TRUNCATE, REINDEX, CLUSTER and some forms +of ALTER TABLE, can change the filenode while preserving the OID. Avoid assuming that filenode and table OID are the same. -Also, for certain system catalogs including pg_class itself, -pg_class.relfilenode contains zero. The +Also, for certain system catalogs including pg_class itself, +pg_class.relfilenode contains zero. The actual filenode number of these catalogs is stored in a lower-level data -structure, and can be obtained using the pg_relation_filenode() +structure, and can be obtained using the pg_relation_filenode() function. When a table or index exceeds 1 GB, it is divided into gigabyte-sized -segments. The first segment's file name is the same as the +segments. The first segment's file name is the same as the filenode; subsequent segments are named filenode.1, filenode.2, etc. This arrangement avoids problems on platforms that have file size limitations. (Actually, 1 GB is just the default segment size. The segment size can be adjusted using the configuration option -when building PostgreSQL.) +when building PostgreSQL.) In principle, free space map and visibility map forks could require multiple segments as well, though this is unlikely to happen in practice. 
A table that has columns with potentially large entries will have an -associated TOAST table, which is used for out-of-line storage of +associated TOAST table, which is used for out-of-line storage of field values that are too large to keep in the table rows proper. -pg_class.reltoastrelid links from a table to -its TOAST table, if any. +pg_class.reltoastrelid links from a table to +its TOAST table, if any. See for more information. @@ -250,45 +250,45 @@ The contents of tables and indexes are discussed further in Tablespaces make the scenario more complicated. Each user-defined tablespace -has a symbolic link inside the PGDATA/pg_tblspc +has a symbolic link inside the PGDATA/pg_tblspc directory, which points to the physical tablespace directory (i.e., the -location specified in the tablespace's CREATE TABLESPACE command). +location specified in the tablespace's CREATE TABLESPACE command). This symbolic link is named after the tablespace's OID. Inside the physical tablespace directory there is -a subdirectory with a name that depends on the PostgreSQL -server version, such as PG_9.0_201008051. (The reason for using +a subdirectory with a name that depends on the PostgreSQL +server version, such as PG_9.0_201008051. (The reason for using this subdirectory is so that successive versions of the database can use -the same CREATE TABLESPACE location value without conflicts.) +the same CREATE TABLESPACE location value without conflicts.) Within the version-specific subdirectory, there is a subdirectory for each database that has elements in the tablespace, named after the database's OID. Tables and indexes are stored within that directory, using the filenode naming scheme. -The pg_default tablespace is not accessed through -pg_tblspc, but corresponds to -PGDATA/base. Similarly, the pg_global -tablespace is not accessed through pg_tblspc, but corresponds to -PGDATA/global. +The pg_default tablespace is not accessed through +pg_tblspc, but corresponds to +PGDATA/base. Similarly, the pg_global +tablespace is not accessed through pg_tblspc, but corresponds to +PGDATA/global. -The pg_relation_filepath() function shows the entire path -(relative to PGDATA) of any relation. It is often useful +The pg_relation_filepath() function shows the entire path +(relative to PGDATA) of any relation. It is often useful as a substitute for remembering many of the above rules. But keep in mind that this function just gives the name of the first segment of the main fork of the relation — you may need to append a segment number -and/or _fsm, _vm, or _init to find all +and/or _fsm, _vm, or _init to find all the files associated with the relation. Temporary files (for operations such as sorting more data than can fit in -memory) are created within PGDATA/base/pgsql_tmp, -or within a pgsql_tmp subdirectory of a tablespace directory -if a tablespace other than pg_default is specified for them. +memory) are created within PGDATA/base/pgsql_tmp, +or within a pgsql_tmp subdirectory of a tablespace directory +if a tablespace other than pg_default is specified for them. The name of a temporary file has the form -pgsql_tmpPPP.NNN, -where PPP is the PID of the owning backend and -NNN distinguishes different temporary files of that backend. +pgsql_tmpPPP.NNN, +where PPP is the PID of the owning backend and +NNN distinguishes different temporary files of that backend. 
@@ -300,10 +300,10 @@ where PPP is the PID of the owning backend and TOAST - sliced breadTOAST + sliced breadTOAST -This section provides an overview of TOAST (The +This section provides an overview of TOAST (The Oversized-Attribute Storage Technique). @@ -314,36 +314,36 @@ not possible to store very large field values directly. To overcome this limitation, large field values are compressed and/or broken up into multiple physical rows. This happens transparently to the user, with only small impact on most of the backend code. The technique is affectionately -known as TOAST (or the best thing since sliced bread). -The TOAST infrastructure is also used to improve handling of +known as TOAST (or the best thing since sliced bread). +The TOAST infrastructure is also used to improve handling of large data values in-memory. -Only certain data types support TOAST — there is no need to +Only certain data types support TOAST — there is no need to impose the overhead on data types that cannot produce large field values. -To support TOAST, a data type must have a variable-length -(varlena) representation, in which, ordinarily, the first +To support TOAST, a data type must have a variable-length +(varlena) representation, in which, ordinarily, the first four-byte word of any stored value contains the total length of the value in -bytes (including itself). TOAST does not constrain the rest +bytes (including itself). TOAST does not constrain the rest of the data type's representation. The special representations collectively -called TOASTed values work by modifying or +called TOASTed values work by modifying or reinterpreting this initial length word. Therefore, the C-level functions -supporting a TOAST-able data type must be careful about how they -handle potentially TOASTed input values: an input might not +supporting a TOAST-able data type must be careful about how they +handle potentially TOASTed input values: an input might not actually consist of a four-byte length word and contents until after it's -been detoasted. (This is normally done by invoking -PG_DETOAST_DATUM before doing anything with an input value, +been detoasted. (This is normally done by invoking +PG_DETOAST_DATUM before doing anything with an input value, but in some cases more efficient approaches are possible. See for more detail.) -TOAST usurps two bits of the varlena length word (the high-order +TOAST usurps two bits of the varlena length word (the high-order bits on big-endian machines, the low-order bits on little-endian machines), -thereby limiting the logical size of any value of a TOAST-able -data type to 1 GB (230 - 1 bytes). When both bits are zero, -the value is an ordinary un-TOASTed value of the data type, and +thereby limiting the logical size of any value of a TOAST-able +data type to 1 GB (230 - 1 bytes). When both bits are zero, +the value is an ordinary un-TOASTed value of the data type, and the remaining bits of the length word give the total datum size (including length word) in bytes. When the highest-order or lowest-order bit is set, the value has only a single-byte header instead of the normal four-byte @@ -357,7 +357,7 @@ additional space savings that is significant compared to short values. As a special case, if the remaining bits of a single-byte header are all zero (which would be impossible for a self-inclusive length), the value is a pointer to out-of-line data, with several possible alternatives as -described below. The type and size of such a TOAST pointer +described below. 
The type and size of such a TOAST pointer are determined by a code stored in the second byte of the datum. Lastly, when the highest-order or lowest-order bit is clear but the adjacent bit is set, the content of the datum has been compressed and must be @@ -365,19 +365,19 @@ decompressed before use. In this case the remaining bits of the four-byte length word give the total size of the compressed datum, not the original data. Note that compression is also possible for out-of-line data but the varlena header does not tell whether it has occurred — -the content of the TOAST pointer tells that, instead. +the content of the TOAST pointer tells that, instead. -As mentioned, there are multiple types of TOAST pointer datums. +As mentioned, there are multiple types of TOAST pointer datums. The oldest and most common type is a pointer to out-of-line data stored in -a TOAST table that is separate from, but -associated with, the table containing the TOAST pointer datum -itself. These on-disk pointer datums are created by the -TOAST management code (in access/heap/tuptoaster.c) +a TOAST table that is separate from, but +associated with, the table containing the TOAST pointer datum +itself. These on-disk pointer datums are created by the +TOAST management code (in access/heap/tuptoaster.c) when a tuple to be stored on disk is too large to be stored as-is. Further details appear in . -Alternatively, a TOAST pointer datum can contain a pointer to +Alternatively, a TOAST pointer datum can contain a pointer to out-of-line data that appears elsewhere in memory. Such datums are necessarily short-lived, and will never appear on-disk, but they are very useful for avoiding copying and redundant processing of large data values. @@ -388,57 +388,57 @@ Further details appear in . The compression technique used for either in-line or out-of-line compressed data is a fairly simple and very fast member of the LZ family of compression techniques. See -src/common/pg_lzcompress.c for the details. +src/common/pg_lzcompress.c for the details. Out-of-line, on-disk TOAST storage -If any of the columns of a table are TOAST-able, the table will -have an associated TOAST table, whose OID is stored in the table's -pg_class.reltoastrelid entry. On-disk -TOASTed values are kept in the TOAST table, as +If any of the columns of a table are TOAST-able, the table will +have an associated TOAST table, whose OID is stored in the table's +pg_class.reltoastrelid entry. On-disk +TOASTed values are kept in the TOAST table, as described in more detail below. Out-of-line values are divided (after compression if used) into chunks of at -most TOAST_MAX_CHUNK_SIZE bytes (by default this value is chosen +most TOAST_MAX_CHUNK_SIZE bytes (by default this value is chosen so that four chunk rows will fit on a page, making it about 2000 bytes). -Each chunk is stored as a separate row in the TOAST table +Each chunk is stored as a separate row in the TOAST table belonging to the owning table. Every -TOAST table has the columns chunk_id (an OID -identifying the particular TOASTed value), -chunk_seq (a sequence number for the chunk within its value), -and chunk_data (the actual data of the chunk). A unique index -on chunk_id and chunk_seq provides fast +TOAST table has the columns chunk_id (an OID +identifying the particular TOASTed value), +chunk_seq (a sequence number for the chunk within its value), +and chunk_data (the actual data of the chunk). A unique index +on chunk_id and chunk_seq provides fast retrieval of the values. 
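As an informal, hypothetical illustration of this layout (the table, its contents, and the pg_toast name shown in the comment are all invented), a table's TOAST table can be located through pg_class.reltoastrelid and its chunk rows examined directly:

CREATE TABLE toast_demo (id int, doc text);

-- A few megabytes of poorly compressible text, so the value is moved out of line.
INSERT INTO toast_demo
SELECT 1, string_agg(md5(random()::text), '')
FROM generate_series(1, 100000);

SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE oid = 'toast_demo'::regclass;

-- Substitute the name reported above (for example pg_toast.pg_toast_16390):
-- SELECT chunk_id, count(*) AS chunks, sum(length(chunk_data)) AS stored_bytes
-- FROM pg_toast.pg_toast_16390
-- GROUP BY chunk_id;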
A pointer datum representing an out-of-line on-disk -TOASTed value therefore needs to store the OID of the -TOAST table in which to look and the OID of the specific value -(its chunk_id). For convenience, pointer datums also store the +TOASTed value therefore needs to store the OID of the +TOAST table in which to look and the OID of the specific value +(its chunk_id). For convenience, pointer datums also store the logical datum size (original uncompressed data length) and physical stored size (different if compression was applied). Allowing for the varlena header bytes, -the total size of an on-disk TOAST pointer datum is therefore 18 +the total size of an on-disk TOAST pointer datum is therefore 18 bytes regardless of the actual size of the represented value. -The TOAST management code is triggered only +The TOAST management code is triggered only when a row value to be stored in a table is wider than -TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). -The TOAST code will compress and/or move +TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). +The TOAST code will compress and/or move field values out-of-line until the row value is shorter than -TOAST_TUPLE_TARGET bytes (also normally 2 kB) +TOAST_TUPLE_TARGET bytes (also normally 2 kB) or no more gains can be had. During an UPDATE operation, values of unchanged fields are normally preserved as-is; so an -UPDATE of a row with out-of-line values incurs no TOAST costs if +UPDATE of a row with out-of-line values incurs no TOAST costs if none of the out-of-line values change. -The TOAST management code recognizes four different strategies -for storing TOAST-able columns on disk: +The TOAST management code recognizes four different strategies +for storing TOAST-able columns on disk: @@ -447,13 +447,13 @@ for storing TOAST-able columns on disk: out-of-line storage; furthermore it disables use of single-byte headers for varlena types. This is the only possible strategy for - columns of non-TOAST-able data types. + columns of non-TOAST-able data types. EXTENDED allows both compression and out-of-line - storage. This is the default for most TOAST-able data types. + storage. This is the default for most TOAST-able data types. Compression will be attempted first, then out-of-line storage if the row is still too big. @@ -478,9 +478,9 @@ for storing TOAST-able columns on disk: -Each TOAST-able data type specifies a default strategy for columns +Each TOAST-able data type specifies a default strategy for columns of that data type, but the strategy for a given table column can be altered -with ALTER TABLE ... SET STORAGE. +with ALTER TABLE ... SET STORAGE. @@ -488,15 +488,15 @@ This scheme has a number of advantages compared to a more straightforward approach such as allowing row values to span pages. Assuming that queries are usually qualified by comparisons against relatively small key values, most of the work of the executor will be done using the main row entry. The big values -of TOASTed attributes will only be pulled out (if selected at all) +of TOASTed attributes will only be pulled out (if selected at all) at the time the result set is sent to the client. Thus, the main table is much smaller and more of its rows fit in the shared buffer cache than would be the case without any out-of-line storage. Sort sets shrink also, and sorts will more often be done entirely in memory. 
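The per-column strategy described above can also be changed after the fact; as a minimal sketch (the table name is invented), ALTER TABLE ... SET STORAGE updates pg_attribute.attstorage for the column:

CREATE TABLE storage_demo (id int, body text);

ALTER TABLE storage_demo ALTER COLUMN body SET STORAGE EXTERNAL;  -- out of line, but uncompressed

SELECT attname, attstorage   -- 'x' = EXTENDED, 'e' = EXTERNAL, 'm' = MAIN, 'p' = PLAIN
FROM pg_attribute
WHERE attrelid = 'storage_demo'::regclass AND attnum > 0;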
A little test showed that a table containing typical HTML pages and their URLs was stored in about half of the -raw data size including the TOAST table, and that the main table +raw data size including the TOAST table, and that the main table contained only about 10% of the entire data (the URLs and some small HTML -pages). There was no run time difference compared to an un-TOASTed +pages). There was no run time difference compared to an un-TOASTed comparison table, in which all the HTML pages were cut down to 7 kB to fit. @@ -506,16 +506,16 @@ comparison table, in which all the HTML pages were cut down to 7 kB to fit. Out-of-line, in-memory TOAST storage -TOAST pointers can point to data that is not on disk, but is +TOAST pointers can point to data that is not on disk, but is elsewhere in the memory of the current server process. Such pointers obviously cannot be long-lived, but they are nonetheless useful. There are currently two sub-cases: -pointers to indirect data and -pointers to expanded data. +pointers to indirect data and +pointers to expanded data. -Indirect TOAST pointers simply point at a non-indirect varlena +Indirect TOAST pointers simply point at a non-indirect varlena value stored somewhere in memory. This case was originally created merely as a proof of concept, but it is currently used during logical decoding to avoid possibly having to create physical tuples exceeding 1 GB (as pulling @@ -526,34 +526,34 @@ and there is no infrastructure to help with this. -Expanded TOAST pointers are useful for complex data types +Expanded TOAST pointers are useful for complex data types whose on-disk representation is not especially suited for computational purposes. As an example, the standard varlena representation of a -PostgreSQL array includes dimensionality information, a +PostgreSQL array includes dimensionality information, a nulls bitmap if there are any null elements, then the values of all the elements in order. When the element type itself is variable-length, the -only way to find the N'th element is to scan through all the +only way to find the N'th element is to scan through all the preceding elements. This representation is appropriate for on-disk storage because of its compactness, but for computations with the array it's much -nicer to have an expanded or deconstructed +nicer to have an expanded or deconstructed representation in which all the element starting locations have been -identified. The TOAST pointer mechanism supports this need by +identified. The TOAST pointer mechanism supports this need by allowing a pass-by-reference Datum to point to either a standard varlena -value (the on-disk representation) or a TOAST pointer that +value (the on-disk representation) or a TOAST pointer that points to an expanded representation somewhere in memory. The details of this expanded representation are up to the data type, though it must have a standard header and meet the other API requirements given -in src/include/utils/expandeddatum.h. C-level functions +in src/include/utils/expandeddatum.h. C-level functions working with the data type can choose to handle either representation. Functions that do not know about the expanded representation, but simply -apply PG_DETOAST_DATUM to their inputs, will automatically +apply PG_DETOAST_DATUM to their inputs, will automatically receive the traditional varlena representation; so support for an expanded representation can be introduced incrementally, one function at a time. 
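One place where expanded datums are visible in practice (mentioned here only as an aside, and applicable to recent releases) is PL/pgSQL, whose array variables are kept in expanded form, so repeated element assignments need not re-parse the flat varlena value each time. A minimal sketch with an invented function name:

CREATE FUNCTION fill_array(n int) RETURNS int[] AS $$
DECLARE
  result int[] := '{}';
BEGIN
  FOR i IN 1..n LOOP
    result[i] := i;   -- updates the expanded datum in place
  END LOOP;
  RETURN result;      -- converted back to an ordinary flat varlena if stored on disk
END;
$$ LANGUAGE plpgsql;

SELECT array_length(fill_array(10000), 1);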
-TOAST pointers to expanded values are further broken down -into read-write and read-only pointers. +TOAST pointers to expanded values are further broken down +into read-write and read-only pointers. The pointed-to representation is the same either way, but a function that receives a read-write pointer is allowed to modify the referenced value in-place, whereas one that receives a read-only pointer must not; it must @@ -563,11 +563,11 @@ unnecessary copying of expanded values during query execution. -For all types of in-memory TOAST pointer, the TOAST +For all types of in-memory TOAST pointer, the TOAST management code ensures that no such pointer datum can accidentally get -stored on disk. In-memory TOAST pointers are automatically +stored on disk. In-memory TOAST pointers are automatically expanded to normal in-line varlena values before storage — and then -possibly converted to on-disk TOAST pointers, if the containing +possibly converted to on-disk TOAST pointers, if the containing tuple would otherwise be too big. @@ -582,35 +582,35 @@ tuple would otherwise be too big. Free Space Map -FSMFree Space Map +FSMFree Space Map Each heap and index relation, except for hash indexes, has a Free Space Map (FSM) to keep track of available space in the relation. It's stored alongside the main relation data in a separate relation fork, named after the -filenode number of the relation, plus a _fsm suffix. For example, +filenode number of the relation, plus a _fsm suffix. For example, if the filenode of a relation is 12345, the FSM is stored in a file called -12345_fsm, in the same directory as the main relation file. +12345_fsm, in the same directory as the main relation file. -The Free Space Map is organized as a tree of FSM pages. The -bottom level FSM pages store the free space available on each +The Free Space Map is organized as a tree of FSM pages. The +bottom level FSM pages store the free space available on each heap (or index) page, using one byte to represent each such page. The upper levels aggregate information from the lower levels. -Within each FSM page is a binary tree, stored in an array with +Within each FSM page is a binary tree, stored in an array with one byte per node. Each leaf node represents a heap page, or a lower level -FSM page. In each non-leaf node, the higher of its children's +FSM page. In each non-leaf node, the higher of its children's values is stored. The maximum value in the leaf nodes is therefore stored at the root. -See src/backend/storage/freespace/README for more details on -how the FSM is structured, and how it's updated and searched. +See src/backend/storage/freespace/README for more details on +how the FSM is structured, and how it's updated and searched. The module can be used to examine the information stored in free space maps. @@ -624,7 +624,7 @@ can be used to examine the information stored in free space maps. Visibility Map -VMVisibility Map +VMVisibility Map Each heap relation has a Visibility Map @@ -632,9 +632,9 @@ Each heap relation has a Visibility Map visible to all active transactions; it also keeps track of which pages contain only frozen tuples. It's stored alongside the main relation data in a separate relation fork, named after the -filenode number of the relation, plus a _vm suffix. For example, +filenode number of the relation, plus a _vm suffix. For example, if the filenode of a relation is 12345, the VM is stored in a file called -12345_vm, in the same directory as the main relation file. 
+12345_vm, in the same directory as the main relation file. Note that indexes do not have VMs. @@ -644,7 +644,7 @@ indicates that the page is all-visible, or in other words that the page does not contain any tuples that need to be vacuumed. This information can also be used by index-only -scans to answer queries using only the index tuple. +scans to answer queries using only the index tuple. The second bit, if set, means that all tuples on the page have been frozen. That means that even an anti-wraparound vacuum need not revisit the page. @@ -695,7 +695,7 @@ This section provides an overview of the page format used within the item layout rules. -Sequences and TOAST tables are formatted just like a regular table. +Sequences and TOAST tables are formatted just like a regular table. @@ -708,11 +708,11 @@ an item is a row; in an index, an item is an index entry. -Every table and index is stored as an array of pages of a +Every table and index is stored as an array of pages of a fixed size (usually 8 kB, although a different page size can be selected when compiling the server). In a table, all the pages are logically equivalent, so a particular item (row) can be stored in any page. In -indexes, the first page is generally reserved as a metapage +indexes, the first page is generally reserved as a metapage holding control information, and there can be different types of pages within the index, depending on the index access method. @@ -773,7 +773,7 @@ data. Empty in ordinary tables. The first 24 bytes of each page consists of a page header - (PageHeaderData). Its format is detailed in PageHeaderData). Its format is detailed in . The first field tracks the most recent WAL entry related to this page. The second field contains the page checksum if are @@ -880,7 +880,7 @@ data. Empty in ordinary tables. New item identifiers are allocated as needed from the beginning of the unallocated space. The number of item identifiers present can be determined by looking at - pd_lower, which is increased to allocate a new identifier. + pd_lower, which is increased to allocate a new identifier. Because an item identifier is never moved until it is freed, its index can be used on a long-term basis to reference an item, even when the item itself is moved @@ -908,7 +908,7 @@ data. Empty in ordinary tables. b-tree indexes store links to the page's left and right siblings, as well as some other data relevant to the index structure. Ordinary tables do not use a special section at all (indicated by setting - pd_special to equal the page size). + pd_special to equal the page size). @@ -920,19 +920,19 @@ data. Empty in ordinary tables. detailed in . The actual user data (columns of the row) begins at the offset indicated by - t_hoff, which must always be a multiple of the MAXALIGN + t_hoff, which must always be a multiple of the MAXALIGN distance for the platform. The null bitmap is only present if the HEAP_HASNULL bit is set in t_infomask. If it is present it begins just after the fixed header and occupies enough bytes to have one bit per data column - (that is, t_natts bits altogether). In this list of bits, a + (that is, t_natts bits altogether). In this list of bits, a 1 bit indicates not-null, a 0 bit is a null. When the bitmap is not present, all columns are assumed not-null. The object ID is only present if the HEAP_HASOID bit is set in t_infomask. If present, it appears just - before the t_hoff boundary. 
Any padding needed to make - t_hoff a MAXALIGN multiple will appear between the null + before the t_hoff boundary. Any padding needed to make + t_hoff a MAXALIGN multiple will appear between the null bitmap and the object ID. (This in turn ensures that the object ID is suitably aligned.) @@ -1031,7 +1031,7 @@ data. Empty in ordinary tables. All variable-length data types share the common header structure struct varlena, which includes the total length of the stored value and some flag bits. Depending on the flags, the data can be either - inline or in a TOAST table; + inline or in a TOAST table; it might be compressed, too (see ). diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml index 06f0f0b8e0..e4012cc182 100644 --- a/doc/src/sgml/syntax.sgml +++ b/doc/src/sgml/syntax.sgml @@ -119,7 +119,7 @@ INSERT INTO MY_TABLE VALUES (3, 'hi there'); (_). Subsequent characters in an identifier or key word can be letters, underscores, digits (0-9), or dollar signs - ($). Note that dollar signs are not allowed in identifiers + ($). Note that dollar signs are not allowed in identifiers according to the letter of the SQL standard, so their use might render applications less portable. The SQL standard will not define a key word that contains @@ -240,7 +240,7 @@ U&"d!0061t!+000061" UESCAPE '!' The Unicode escape syntax works only when the server encoding is - UTF8. When other server encodings are used, only code + UTF8. When other server encodings are used, only code points in the ASCII range (up to \007F) can be specified. Both the 4-digit and the 6-digit form can be used to specify UTF-16 surrogate pairs to compose characters with code @@ -258,7 +258,7 @@ U&"d!0061t!+000061" UESCAPE '!' PostgreSQL, but "Foo" and "FOO" are different from these three and each other. (The folding of - unquoted names to lower case in PostgreSQL is + unquoted names to lower case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be folded to upper case. Thus, foo should be equivalent to "FOO" not @@ -305,8 +305,8 @@ U&"d!0061t!+000061" UESCAPE '!' a single-quote character within a string constant, write two adjacent single quotes, e.g., 'Dianne''s horse'. - Note that this is not the same as a double-quote - character ("). + Note that this is not the same as a double-quote + character ("). @@ -343,15 +343,15 @@ SELECT 'foo' 'bar'; - PostgreSQL also accepts escape + PostgreSQL also accepts escape string constants, which are an extension to the SQL standard. An escape string constant is specified by writing the letter E (upper or lower case) just before the opening single - quote, e.g., E'foo'. (When continuing an escape string - constant across lines, write E only before the first opening + quote, e.g., E'foo'. (When continuing an escape string + constant across lines, write E only before the first opening quote.) - Within an escape string, a backslash character (\) begins a - C-like backslash escape sequence, in which the combination + Within an escape string, a backslash character (\) begins a + C-like backslash escape sequence, in which the combination of backslash and following character(s) represent a special byte value, as shown in . @@ -361,7 +361,7 @@ SELECT 'foo' 'bar'; - Backslash Escape Sequence + Backslash Escape Sequence Interpretation @@ -419,9 +419,9 @@ SELECT 'foo' 'bar'; Any other character following a backslash is taken literally. Thus, to - include a backslash character, write two backslashes (\\). 
+ include a backslash character, write two backslashes (\\). Also, a single quote can be included in an escape string by writing - \', in addition to the normal way of ''. + \', in addition to the normal way of ''. @@ -437,35 +437,35 @@ SELECT 'foo' 'bar'; The Unicode escape syntax works fully only when the server - encoding is UTF8. When other server encodings are + encoding is UTF8. When other server encodings are used, only code points in the ASCII range (up - to \u007F) can be specified. Both the 4-digit and + to \u007F) can be specified. Both the 4-digit and the 8-digit form can be used to specify UTF-16 surrogate pairs to compose characters with code points larger than U+FFFF, although the availability of the 8-digit form technically makes this unnecessary. (When surrogate pairs are used when the server - encoding is UTF8, they are first combined into a + encoding is UTF8, they are first combined into a single code point that is then encoded in UTF-8.) If the configuration parameter - is off, + is off, then PostgreSQL recognizes backslash escapes in both regular and escape string constants. However, as of - PostgreSQL 9.1, the default is on, meaning + PostgreSQL 9.1, the default is on, meaning that backslash escapes are recognized only in escape string constants. This behavior is more standards-compliant, but might break applications which rely on the historical behavior, where backslash escapes were always recognized. As a workaround, you can set this parameter - to off, but it is better to migrate away from using backslash + to off, but it is better to migrate away from using backslash escapes. If you need to use a backslash escape to represent a special - character, write the string constant with an E. + character, write the string constant with an E. - In addition to standard_conforming_strings, the configuration + In addition to standard_conforming_strings, the configuration parameters and govern treatment of backslashes in string constants. @@ -525,13 +525,13 @@ U&'d!0061t!+000061' UESCAPE '!' The Unicode escape syntax works only when the server encoding is - UTF8. When other server encodings are used, only + UTF8. When other server encodings are used, only code points in the ASCII range (up to \007F) can be specified. Both the 4-digit and the 6-digit form can be used to specify UTF-16 surrogate pairs to compose characters with code points larger than U+FFFF, although the availability of the 6-digit form technically makes this unnecessary. (When surrogate - pairs are used when the server encoding is UTF8, they + pairs are used when the server encoding is UTF8, they are first combined into a single code point that is then encoded in UTF-8.) @@ -573,7 +573,7 @@ U&'d!0061t!+000061' UESCAPE '!' sign, an arbitrary sequence of characters that makes up the string content, a dollar sign, the same tag that began this dollar quote, and a dollar sign. For example, here are two - different ways to specify the string Dianne's horse + different ways to specify the string Dianne's horse using dollar quoting: $$Dianne's horse$$ @@ -598,11 +598,11 @@ BEGIN END; $function$ - Here, the sequence $q$[\t\r\n\v\\]$q$ represents a - dollar-quoted literal string [\t\r\n\v\\], which will + Here, the sequence $q$[\t\r\n\v\\]$q$ represents a + dollar-quoted literal string [\t\r\n\v\\], which will be recognized when the function body is executed by - PostgreSQL. But since the sequence does not match - the outer dollar quoting delimiter $function$, it is + PostgreSQL. 
But since the sequence does not match + the outer dollar quoting delimiter $function$, it is just some more characters within the constant so far as the outer string is concerned. @@ -707,13 +707,13 @@ $function$ bigint numeric A numeric constant that contains neither a decimal point nor an - exponent is initially presumed to be type integer if its - value fits in type integer (32 bits); otherwise it is - presumed to be type bigint if its - value fits in type bigint (64 bits); otherwise it is - taken to be type numeric. Constants that contain decimal + exponent is initially presumed to be type integer if its + value fits in type integer (32 bits); otherwise it is + presumed to be type bigint if its + value fits in type bigint (64 bits); otherwise it is + taken to be type numeric. Constants that contain decimal points and/or exponents are always initially presumed to be type - numeric. + numeric. @@ -724,7 +724,7 @@ $function$ force a numeric value to be interpreted as a specific data type by casting it.type cast For example, you can force a numeric value to be treated as type - real (float4) by writing: + real (float4) by writing: REAL '1.23' -- string style @@ -780,17 +780,17 @@ CAST ( 'string' AS type ) function-call syntaxes can also be used to specify run-time type conversions of arbitrary expressions, as discussed in . To avoid syntactic ambiguity, the - type 'string' + type 'string' syntax can only be used to specify the type of a simple literal constant. Another restriction on the - type 'string' + type 'string' syntax is that it does not work for array types; use :: or CAST() to specify the type of an array constant. - The CAST() syntax conforms to SQL. The - type 'string' + The CAST() syntax conforms to SQL. The + type 'string' syntax is a generalization of the standard: SQL specifies this syntax only for a few data types, but PostgreSQL allows it for all types. The syntax with @@ -827,7 +827,7 @@ CAST ( 'string' AS type ) - A multiple-character operator name cannot end in + or -, + A multiple-character operator name cannot end in + or -, unless the name also contains at least one of these characters: ~ ! @ # % ^ & | ` ? @@ -981,7 +981,7 @@ CAST ( 'string' AS type ) shows the precedence and - associativity of the operators in PostgreSQL. + associativity of the operators in PostgreSQL. Most operators have the same precedence and are left-associative. The precedence and associativity of the operators is hard-wired into the parser. @@ -1085,8 +1085,8 @@ SELECT (5 !) - 6; IS ISNULL NOTNULL - IS TRUE, IS FALSE, IS - NULL, IS DISTINCT FROM, etc + IS TRUE, IS FALSE, IS + NULL, IS DISTINCT FROM, etc @@ -1121,29 +1121,29 @@ SELECT (5 !) - 6; When a schema-qualified operator name is used in the - OPERATOR syntax, as for example in: + OPERATOR syntax, as for example in: SELECT 3 OPERATOR(pg_catalog.+) 4; - the OPERATOR construct is taken to have the default precedence + the OPERATOR construct is taken to have the default precedence shown in for - any other operator. This is true no matter - which specific operator appears inside OPERATOR(). + any other operator. This is true no matter + which specific operator appears inside OPERATOR(). - PostgreSQL versions before 9.5 used slightly different + PostgreSQL versions before 9.5 used slightly different operator precedence rules. 
In particular, <= >= and <> used to be treated as - generic operators; IS tests used to have higher priority; - and NOT BETWEEN and related constructs acted inconsistently, - being taken in some cases as having the precedence of NOT - rather than BETWEEN. These rules were changed for better + generic operators; IS tests used to have higher priority; + and NOT BETWEEN and related constructs acted inconsistently, + being taken in some cases as having the precedence of NOT + rather than BETWEEN. These rules were changed for better compliance with the SQL standard and to reduce confusion from inconsistent treatment of logically equivalent constructs. In most cases, these changes will result in no behavioral change, or perhaps - in no such operator failures which can be resolved by adding + in no such operator failures which can be resolved by adding parentheses. However there are corner cases in which a query might change behavior without any parsing error being reported. If you are concerned about whether these changes have silently broken something, @@ -1279,7 +1279,7 @@ SELECT 3 OPERATOR(pg_catalog.+) 4; Another value expression in parentheses (used to group subexpressions and override - precedenceparenthesis) + precedenceparenthesis) @@ -1376,7 +1376,7 @@ CREATE FUNCTION dept(text) RETURNS dept expression[subscript] - or multiple adjacent elements (an array slice) can be extracted + or multiple adjacent elements (an array slice) can be extracted by writing expression[lower_subscript:upper_subscript] @@ -1443,8 +1443,8 @@ $1.somecolumn The parentheses are required here to show that - compositecol is a column name not a table name, - or that mytable is a table name not a schema name + compositecol is a column name not a table name, + or that mytable is a table name not a schema name in the second case. @@ -1479,7 +1479,7 @@ $1.somecolumn key words AND, OR, and NOT, or is a qualified operator name in the form: -OPERATOR(schema.operatorname) +OPERATOR(schema.operatorname) Which particular operators exist and whether they are unary or binary depends on what operators have been @@ -1528,10 +1528,10 @@ sqrt(2) A function that takes a single argument of composite type can optionally be called using field-selection syntax, and conversely field selection can be written in functional style. That is, the - notations col(table) and table.col are + notations col(table) and table.col are interchangeable. This behavior is not SQL-standard but is provided - in PostgreSQL because it allows use of functions to - emulate computed fields. For more information see + in PostgreSQL because it allows use of functions to + emulate computed fields. For more information see . @@ -1592,7 +1592,7 @@ sqrt(2) The fourth form invokes the aggregate once for each input row; since no particular input value is specified, it is generally only useful for the count(*) aggregate function. - The last form is used with ordered-set aggregate + The last form is used with ordered-set aggregate functions, which are described below. @@ -1607,7 +1607,7 @@ sqrt(2) For example, count(*) yields the total number of input rows; count(f1) yields the number of input rows in which f1 is non-null, since - count ignores nulls; and + count ignores nulls; and count(distinct f1) yields the number of distinct non-null values of f1. @@ -1615,13 +1615,13 @@ sqrt(2) Ordinarily, the input rows are fed to the aggregate function in an unspecified order. 
In many cases this does not matter; for example, - min produces the same result no matter what order it + min produces the same result no matter what order it receives the inputs in. However, some aggregate functions - (such as array_agg and string_agg) produce + (such as array_agg and string_agg) produce results that depend on the ordering of the input rows. When using - such an aggregate, the optional order_by_clause can be - used to specify the desired ordering. The order_by_clause - has the same syntax as for a query-level ORDER BY clause, as + such an aggregate, the optional order_by_clause can be + used to specify the desired ordering. The order_by_clause + has the same syntax as for a query-level ORDER BY clause, as described in , except that its expressions are always just expressions and cannot be output-column names or numbers. For example: @@ -1632,7 +1632,7 @@ SELECT array_agg(a ORDER BY b DESC) FROM table; When dealing with multiple-argument aggregate functions, note that the - ORDER BY clause goes after all the aggregate arguments. + ORDER BY clause goes after all the aggregate arguments. For example, write this: SELECT string_agg(a, ',' ORDER BY a) FROM table; @@ -1642,58 +1642,58 @@ SELECT string_agg(a, ',' ORDER BY a) FROM table; SELECT string_agg(a ORDER BY a, ',') FROM table; -- incorrect The latter is syntactically valid, but it represents a call of a - single-argument aggregate function with two ORDER BY keys + single-argument aggregate function with two ORDER BY keys (the second one being rather useless since it's a constant). - If DISTINCT is specified in addition to an - order_by_clause, then all the ORDER BY + If DISTINCT is specified in addition to an + order_by_clause, then all the ORDER BY expressions must match regular arguments of the aggregate; that is, you cannot sort on an expression that is not included in the - DISTINCT list. + DISTINCT list. - The ability to specify both DISTINCT and ORDER BY - in an aggregate function is a PostgreSQL extension. + The ability to specify both DISTINCT and ORDER BY + in an aggregate function is a PostgreSQL extension. - Placing ORDER BY within the aggregate's regular argument + Placing ORDER BY within the aggregate's regular argument list, as described so far, is used when ordering the input rows for general-purpose and statistical aggregates, for which ordering is optional. There is a subclass of aggregate functions called ordered-set - aggregates for which an order_by_clause - is required, usually because the aggregate's computation is + aggregates for which an order_by_clause + is required, usually because the aggregate's computation is only sensible in terms of a specific ordering of its input rows. Typical examples of ordered-set aggregates include rank and percentile calculations. For an ordered-set aggregate, the order_by_clause is written - inside WITHIN GROUP (...), as shown in the final syntax + inside WITHIN GROUP (...), as shown in the final syntax alternative above. The expressions in the order_by_clause are evaluated once per input row just like regular aggregate arguments, sorted as per the order_by_clause's requirements, and fed to the aggregate function as input arguments. (This is unlike the case - for a non-WITHIN GROUP order_by_clause, + for a non-WITHIN GROUP order_by_clause, which is not treated as argument(s) to the aggregate function.) 
The - argument expressions preceding WITHIN GROUP, if any, are - called direct arguments to distinguish them from - the aggregated arguments listed in + argument expressions preceding WITHIN GROUP, if any, are + called direct arguments to distinguish them from + the aggregated arguments listed in the order_by_clause. Unlike regular aggregate arguments, direct arguments are evaluated only once per aggregate call, not once per input row. This means that they can contain variables only - if those variables are grouped by GROUP BY; this restriction + if those variables are grouped by GROUP BY; this restriction is the same as if the direct arguments were not inside an aggregate expression at all. Direct arguments are typically used for things like percentile fractions, which only make sense as a single value per aggregation calculation. The direct argument list can be empty; in this - case, write just () not (*). - (PostgreSQL will actually accept either spelling, but + case, write just () not (*). + (PostgreSQL will actually accept either spelling, but only the first way conforms to the SQL standard.) @@ -1712,8 +1712,8 @@ SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households; which obtains the 50th percentile, or median, value of - the income column from table households. - Here, 0.5 is a direct argument; it would make no sense + the income column from table households. + Here, 0.5 is a direct argument; it would make no sense for the percentile fraction to be a value varying across rows. @@ -1742,8 +1742,8 @@ FROM generate_series(1,10) AS s(i); An aggregate expression can only appear in the result list or - HAVING clause of a SELECT command. - It is forbidden in other clauses, such as WHERE, + HAVING clause of a SELECT command. + It is forbidden in other clauses, such as WHERE, because those clauses are logically evaluated before the results of aggregates are formed. @@ -1760,7 +1760,7 @@ FROM generate_series(1,10) AS s(i); as a whole is then an outer reference for the subquery it appears in, and acts as a constant over any one evaluation of that subquery. The restriction about - appearing only in the result list or HAVING clause + appearing only in the result list or HAVING clause applies with respect to the query level that the aggregate belongs to. @@ -1784,7 +1784,7 @@ FROM generate_series(1,10) AS s(i); to grouping of the selected rows into a single output row — each row remains separate in the query output. However the window function has access to all the rows that would be part of the current row's - group according to the grouping specification (PARTITION BY + group according to the grouping specification (PARTITION BY list) of the window function call. The syntax of a window function call is one of the following: @@ -1805,10 +1805,10 @@ FROM generate_series(1,10) AS s(i); and the optional frame_clause can be one of -{ RANGE | ROWS } frame_start -{ RANGE | ROWS } BETWEEN frame_start AND frame_end +{ RANGE | ROWS } frame_start +{ RANGE | ROWS } BETWEEN frame_start AND frame_end - where frame_start and frame_end can be + where frame_start and frame_end can be one of UNBOUNDED PRECEDING @@ -1831,59 +1831,59 @@ UNBOUNDED FOLLOWING be given within parentheses, using the same syntax as for defining a named window in the WINDOW clause; see the reference page for details. 
It's worth - pointing out that OVER wname is not exactly equivalent to - OVER (wname ...); the latter implies copying and modifying the + pointing out that OVER wname is not exactly equivalent to + OVER (wname ...); the latter implies copying and modifying the window definition, and will be rejected if the referenced window specification includes a frame clause. - The PARTITION BY clause groups the rows of the query into - partitions, which are processed separately by the window - function. PARTITION BY works similarly to a query-level - GROUP BY clause, except that its expressions are always just + The PARTITION BY clause groups the rows of the query into + partitions, which are processed separately by the window + function. PARTITION BY works similarly to a query-level + GROUP BY clause, except that its expressions are always just expressions and cannot be output-column names or numbers. - Without PARTITION BY, all rows produced by the query are + Without PARTITION BY, all rows produced by the query are treated as a single partition. - The ORDER BY clause determines the order in which the rows + The ORDER BY clause determines the order in which the rows of a partition are processed by the window function. It works similarly - to a query-level ORDER BY clause, but likewise cannot use - output-column names or numbers. Without ORDER BY, rows are + to a query-level ORDER BY clause, but likewise cannot use + output-column names or numbers. Without ORDER BY, rows are processed in an unspecified order. The frame_clause specifies - the set of rows constituting the window frame, which is a + the set of rows constituting the window frame, which is a subset of the current partition, for those window functions that act on the frame instead of the whole partition. The frame can be specified in - either RANGE or ROWS mode; in either case, it - runs from the frame_start to the - frame_end. If frame_end is omitted, - it defaults to CURRENT ROW. + either RANGE or ROWS mode; in either case, it + runs from the frame_start to the + frame_end. If frame_end is omitted, + it defaults to CURRENT ROW. - A frame_start of UNBOUNDED PRECEDING means + A frame_start of UNBOUNDED PRECEDING means that the frame starts with the first row of the partition, and similarly - a frame_end of UNBOUNDED FOLLOWING means + a frame_end of UNBOUNDED FOLLOWING means that the frame ends with the last row of the partition. - In RANGE mode, a frame_start of - CURRENT ROW means the frame starts with the current row's - first peer row (a row that ORDER BY considers - equivalent to the current row), while a frame_end of - CURRENT ROW means the frame ends with the last equivalent - ORDER BY peer. In ROWS mode, CURRENT ROW simply means + In RANGE mode, a frame_start of + CURRENT ROW means the frame starts with the current row's + first peer row (a row that ORDER BY considers + equivalent to the current row), while a frame_end of + CURRENT ROW means the frame ends with the last equivalent + ORDER BY peer. In ROWS mode, CURRENT ROW simply means the current row. - The value PRECEDING and - value FOLLOWING cases are currently only - allowed in ROWS mode. They indicate that the frame starts + The value PRECEDING and + value FOLLOWING cases are currently only + allowed in ROWS mode. They indicate that the frame starts or ends the specified number of rows before or after the current row. value must be an integer expression not containing any variables, aggregate functions, or window functions. 
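As a concrete illustration of a frame clause in ROWS mode (the table, its columns, and its contents are invented), the following computes a trailing three-row moving average:

CREATE TABLE measurements (ts timestamptz, reading numeric);
INSERT INTO measurements
SELECT now() + (i || ' minutes')::interval, i * 10
FROM generate_series(1, 6) AS s(i);

SELECT ts,
       reading,
       avg(reading) OVER (ORDER BY ts
                          ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg
FROM measurements;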
@@ -1892,22 +1892,22 @@ UNBOUNDED FOLLOWING - The default framing option is RANGE UNBOUNDED PRECEDING, + The default framing option is RANGE UNBOUNDED PRECEDING, which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND - CURRENT ROW. With ORDER BY, this sets the frame to be + CURRENT ROW. With ORDER BY, this sets the frame to be all rows from the partition start up through the current row's last - ORDER BY peer. Without ORDER BY, all rows of the partition are + ORDER BY peer. Without ORDER BY, all rows of the partition are included in the window frame, since all rows become peers of the current row. Restrictions are that - frame_start cannot be UNBOUNDED FOLLOWING, - frame_end cannot be UNBOUNDED PRECEDING, - and the frame_end choice cannot appear earlier in the - above list than the frame_start choice — for example - RANGE BETWEEN CURRENT ROW AND value + frame_start cannot be UNBOUNDED FOLLOWING, + frame_end cannot be UNBOUNDED PRECEDING, + and the frame_end choice cannot appear earlier in the + above list than the frame_start choice — for example + RANGE BETWEEN CURRENT ROW AND value PRECEDING is not allowed. @@ -1928,18 +1928,18 @@ UNBOUNDED FOLLOWING - The syntaxes using * are used for calling parameter-less + The syntaxes using * are used for calling parameter-less aggregate functions as window functions, for example - count(*) OVER (PARTITION BY x ORDER BY y). - The asterisk (*) is customarily not used for + count(*) OVER (PARTITION BY x ORDER BY y). + The asterisk (*) is customarily not used for window-specific functions. Window-specific functions do not - allow DISTINCT or ORDER BY to be used within the + allow DISTINCT or ORDER BY to be used within the function argument list. Window function calls are permitted only in the SELECT - list and the ORDER BY clause of the query. + list and the ORDER BY clause of the query. @@ -1974,7 +1974,7 @@ UNBOUNDED FOLLOWING CAST ( expression AS type ) expression::type - The CAST syntax conforms to SQL; the syntax with + The CAST syntax conforms to SQL; the syntax with :: is historical PostgreSQL usage. @@ -1996,7 +1996,7 @@ CAST ( expression AS type to the type that a value expression must produce (for example, when it is assigned to a table column); the system will automatically apply a type cast in such cases. However, automatic casting is only done for - casts that are marked OK to apply implicitly + casts that are marked OK to apply implicitly in the system catalogs. Other casts must be invoked with explicit casting syntax. This restriction is intended to prevent surprising conversions from being applied silently. @@ -2011,8 +2011,8 @@ CAST ( expression AS type However, this only works for types whose names are also valid as function names. For example, double precision cannot be used this way, but the equivalent float8 - can. Also, the names interval, time, and - timestamp can only be used in this fashion if they are + can. Also, the names interval, time, and + timestamp can only be used in this fashion if they are double-quoted, because of syntactic conflicts. Therefore, the use of the function-like cast syntax leads to inconsistencies and should probably be avoided. @@ -2025,7 +2025,7 @@ CAST ( expression AS type conversion, it will internally invoke a registered function to perform the conversion. 
By convention, these conversion functions have the same name as their output type, and thus the function-like - syntax is nothing more than a direct invocation of the underlying + syntax is nothing more than a direct invocation of the underlying conversion function. Obviously, this is not something that a portable application should rely on. For further details see . @@ -2061,7 +2061,7 @@ CAST ( expression AS type The two common uses of the COLLATE clause are - overriding the sort order in an ORDER BY clause, for + overriding the sort order in an ORDER BY clause, for example: SELECT a, b, c FROM tbl WHERE ... ORDER BY a COLLATE "C"; @@ -2071,14 +2071,14 @@ SELECT a, b, c FROM tbl WHERE ... ORDER BY a COLLATE "C"; SELECT * FROM tbl WHERE a > 'foo' COLLATE "C"; - Note that in the latter case the COLLATE clause is + Note that in the latter case the COLLATE clause is attached to an input argument of the operator we wish to affect. It doesn't matter which argument of the operator or function call the - COLLATE clause is attached to, because the collation that is + COLLATE clause is attached to, because the collation that is applied by the operator or function is derived by considering all - arguments, and an explicit COLLATE clause will override the + arguments, and an explicit COLLATE clause will override the collations of all other arguments. (Attaching non-matching - COLLATE clauses to more than one argument, however, is an + COLLATE clauses to more than one argument, however, is an error. For more details see .) Thus, this gives the same result as the previous example: @@ -2089,8 +2089,8 @@ SELECT * FROM tbl WHERE a COLLATE "C" > 'foo'; SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C"; because it attempts to apply a collation to the result of the - > operator, which is of the non-collatable data type - boolean. + > operator, which is of the non-collatable data type + boolean. @@ -2143,8 +2143,8 @@ SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) array value using values for its member elements. A simple array constructor consists of the key word ARRAY, a left square bracket - [, a list of expressions (separated by commas) for the - array element values, and finally a right square bracket ]. + [, a list of expressions (separated by commas) for the + array element values, and finally a right square bracket ]. For example: SELECT ARRAY[1,2,3+4]; @@ -2155,8 +2155,8 @@ SELECT ARRAY[1,2,3+4]; By default, the array element type is the common type of the member expressions, - determined using the same rules as for UNION or - CASE constructs (see ). + determined using the same rules as for UNION or + CASE constructs (see ). You can override this by explicitly casting the array constructor to the desired type, for example: @@ -2193,13 +2193,13 @@ SELECT ARRAY[[1,2],[3,4]]; Since multidimensional arrays must be rectangular, inner constructors at the same level must produce sub-arrays of identical dimensions. - Any cast applied to the outer ARRAY constructor propagates + Any cast applied to the outer ARRAY constructor propagates automatically to all the inner constructors. Multidimensional array constructor elements can be anything yielding - an array of the proper kind, not only a sub-ARRAY construct. + an array of the proper kind, not only a sub-ARRAY construct. 
For example: CREATE TABLE arr(f1 int[], f2 int[]); @@ -2291,7 +2291,7 @@ SELECT ARRAY(SELECT ARRAY[i, i*2] FROM generate_series(1,5) AS a(i)); SELECT ROW(1,2.5,'this is a test'); - The key word ROW is optional when there is more than one + The key word ROW is optional when there is more than one expression in the list. @@ -2299,10 +2299,10 @@ SELECT ROW(1,2.5,'this is a test'); A row constructor can include the syntax rowvalue.*, which will be expanded to a list of the elements of the row value, - just as occurs when the .* syntax is used at the top level - of a SELECT list (see ). - For example, if table t has - columns f1 and f2, these are the same: + just as occurs when the .* syntax is used at the top level + of a SELECT list (see ). + For example, if table t has + columns f1 and f2, these are the same: SELECT ROW(t.*, 42) FROM t; SELECT ROW(t.f1, t.f2, 42) FROM t; @@ -2313,19 +2313,19 @@ SELECT ROW(t.f1, t.f2, 42) FROM t; Before PostgreSQL 8.2, the .* syntax was not expanded in row constructors, so - that writing ROW(t.*, 42) created a two-field row whose first + that writing ROW(t.*, 42) created a two-field row whose first field was another row value. The new behavior is usually more useful. If you need the old behavior of nested row values, write the inner row value without .*, for instance - ROW(t, 42). + ROW(t, 42). - By default, the value created by a ROW expression is of + By default, the value created by a ROW expression is of an anonymous record type. If necessary, it can be cast to a named composite type — either the row type of a table, or a composite type - created with CREATE TYPE AS. An explicit cast might be needed + created with CREATE TYPE AS. An explicit cast might be needed to avoid ambiguity. For example: CREATE TABLE mytable(f1 int, f2 float, f3 text); @@ -2366,7 +2366,7 @@ SELECT getf1(CAST(ROW(11,'this is a test',2.5) AS myrowtype)); in a composite-type table column, or to be passed to a function that accepts a composite parameter. Also, it is possible to compare two row values or test a row with - IS NULL or IS NOT NULL, for example: + IS NULL or IS NOT NULL, for example: SELECT ROW(1,2.5,'this is a test') = ROW(1, 3, 'not the same'); @@ -2413,18 +2413,18 @@ SELECT somefunc() OR true; As a consequence, it is unwise to use functions with side effects as part of complex expressions. It is particularly dangerous to - rely on side effects or evaluation order in WHERE and HAVING clauses, + rely on side effects or evaluation order in WHERE and HAVING clauses, since those clauses are extensively reprocessed as part of developing an execution plan. Boolean - expressions (AND/OR/NOT combinations) in those clauses can be reorganized + expressions (AND/OR/NOT combinations) in those clauses can be reorganized in any manner allowed by the laws of Boolean algebra. - When it is essential to force evaluation order, a CASE + When it is essential to force evaluation order, a CASE construct (see ) can be used. For example, this is an untrustworthy way of trying to - avoid division by zero in a WHERE clause: + avoid division by zero in a WHERE clause: SELECT ... WHERE x > 0 AND y/x > 1.5; @@ -2432,14 +2432,14 @@ SELECT ... WHERE x > 0 AND y/x > 1.5; SELECT ... WHERE CASE WHEN x > 0 THEN y/x > 1.5 ELSE false END; - A CASE construct used in this fashion will defeat optimization + A CASE construct used in this fashion will defeat optimization attempts, so it should only be done when necessary. 
(In this particular example, it would be better to sidestep the problem by writing - y > 1.5*x instead.) + y > 1.5*x instead.) - CASE is not a cure-all for such issues, however. + CASE is not a cure-all for such issues, however. One limitation of the technique illustrated above is that it does not prevent early evaluation of constant subexpressions. As described in , functions and @@ -2450,8 +2450,8 @@ SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END FROM tab; is likely to result in a division-by-zero failure due to the planner trying to simplify the constant subexpression, - even if every row in the table has x > 0 so that the - ELSE arm would never be entered at run time. + even if every row in the table has x > 0 so that the + ELSE arm would never be entered at run time. @@ -2459,17 +2459,17 @@ SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END FROM tab; obviously involve constants can occur in queries executed within functions, since the values of function arguments and local variables can be inserted into queries as constants for planning purposes. - Within PL/pgSQL functions, for example, using an - IF-THEN-ELSE statement to protect + Within PL/pgSQL functions, for example, using an + IF-THEN-ELSE statement to protect a risky computation is much safer than just nesting it in a - CASE expression. + CASE expression. - Another limitation of the same kind is that a CASE cannot + Another limitation of the same kind is that a CASE cannot prevent evaluation of an aggregate expression contained within it, because aggregate expressions are computed before other - expressions in a SELECT list or HAVING clause + expressions in a SELECT list or HAVING clause are considered. For example, the following query can cause a division-by-zero error despite seemingly having protected against it: @@ -2478,12 +2478,12 @@ SELECT CASE WHEN min(employees) > 0 END FROM departments; - The min() and avg() aggregates are computed + The min() and avg() aggregates are computed concurrently over all the input rows, so if any row - has employees equal to zero, the division-by-zero error + has employees equal to zero, the division-by-zero error will occur before there is any opportunity to test the result of - min(). Instead, use a WHERE - or FILTER clause to prevent problematic input rows from + min(). Instead, use a WHERE + or FILTER clause to prevent problematic input rows from reaching an aggregate function in the first place. @@ -2657,7 +2657,7 @@ SELECT concat_lower_or_upper('Hello', 'World', uppercase => true); In the above query, the arguments a and b are specified positionally, while - uppercase is specified by name. In this example, + uppercase is specified by name. In this example, that adds little except documentation. With a more complex function having numerous parameters that have default values, named or mixed notation can save a great deal of writing and reduce chances for error. diff --git a/doc/src/sgml/tablefunc.sgml b/doc/src/sgml/tablefunc.sgml index 90f6df9545..7cfae4d316 100644 --- a/doc/src/sgml/tablefunc.sgml +++ b/doc/src/sgml/tablefunc.sgml @@ -8,7 +8,7 @@ - The tablefunc module includes various functions that return + The tablefunc module includes various functions that return tables (that is, multiple rows). These functions are useful both in their own right and as examples of how to write C functions that return multiple rows. @@ -23,7 +23,7 @@
- <filename>tablefunc</> Functions + <filename>tablefunc</filename> Functions @@ -35,46 +35,46 @@ normal_rand(int numvals, float8 mean, float8 stddev) - setof float8 + setof float8 Produces a set of normally distributed random values crosstab(text sql) - setof record + setof record - Produces a pivot table containing - row names plus N value columns, where - N is determined by the row type specified in the calling + Produces a pivot table containing + row names plus N value columns, where + N is determined by the row type specified in the calling query - crosstabN(text sql) - setof table_crosstab_N + crosstabN(text sql) + setof table_crosstab_N - Produces a pivot table containing - row names plus N value columns. - crosstab2, crosstab3, and - crosstab4 are predefined, but you can create additional - crosstabN functions as described below + Produces a pivot table containing + row names plus N value columns. + crosstab2, crosstab3, and + crosstab4 are predefined, but you can create additional + crosstabN functions as described below crosstab(text source_sql, text category_sql) - setof record + setof record - Produces a pivot table + Produces a pivot table with the value columns specified by a second query crosstab(text sql, int N) - setof record + setof record - Obsolete version of crosstab(text). - The parameter N is now ignored, since the number of + Obsolete version of crosstab(text). + The parameter N is now ignored, since the number of value columns is always determined by the calling query @@ -88,7 +88,7 @@ connectby - setof record + setof record Produces a representation of a hierarchical tree structure @@ -109,7 +109,7 @@ normal_rand(int numvals, float8 mean, float8 stddev) returns setof float8 - normal_rand produces a set of normally distributed random + normal_rand produces a set of normally distributed random values (Gaussian distribution). @@ -157,7 +157,7 @@ crosstab(text sql, int N) - The crosstab function is used to produce pivot + The crosstab function is used to produce pivot displays, wherein data is listed across the page rather than down. For example, we might have data like @@ -176,7 +176,7 @@ row1 val11 val12 val13 ... row2 val21 val22 val23 ... ... - The crosstab function takes a text parameter that is a SQL + The crosstab function takes a text parameter that is a SQL query producing raw data formatted in the first way, and produces a table formatted in the second way. @@ -209,9 +209,9 @@ row2 val21 val22 val23 ... - The crosstab function is declared to return setof + The crosstab function is declared to return setof record, so the actual names and types of the output columns must be - defined in the FROM clause of the calling SELECT + defined in the FROM clause of the calling SELECT statement, for example: SELECT * FROM crosstab('...') AS ct(row_name text, category_1 text, category_2 text); @@ -227,30 +227,30 @@ SELECT * FROM crosstab('...') AS ct(row_name text, category_1 text, category_2 t - The FROM clause must define the output as one - row_name column (of the same data type as the first result - column of the SQL query) followed by N value columns + The FROM clause must define the output as one + row_name column (of the same data type as the first result + column of the SQL query) followed by N value columns (all of the same data type as the third result column of the SQL query). You can set up as many output value columns as you wish. The names of the output columns are up to you. 
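Putting these rules together, here is a small self-contained example; the sales table and its contents are invented, and the expected output is shown in the trailing comment:

CREATE EXTENSION IF NOT EXISTS tablefunc;

CREATE TABLE sales (region text, quarter text, amount int);
INSERT INTO sales VALUES
  ('north', 'q1', 10), ('north', 'q2', 20),
  ('south', 'q1', 15), ('south', 'q2', 25);

SELECT *
FROM crosstab('SELECT region, quarter, amount FROM sales ORDER BY 1,2')
     AS ct(region text, q1 int, q2 int);

--  region | q1 | q2
-- --------+----+----
--  north  | 10 | 20
--  south  | 15 | 25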
- The crosstab function produces one output row for each + The crosstab function produces one output row for each consecutive group of input rows with the same row_name value. It fills the output - value columns, left to right, with the + value columns, left to right, with the value fields from these rows. If there - are fewer rows in a group than there are output value + are fewer rows in a group than there are output value columns, the extra output columns are filled with nulls; if there are more rows, the extra input rows are skipped. - In practice the SQL query should always specify ORDER BY 1,2 + In practice the SQL query should always specify ORDER BY 1,2 to ensure that the input rows are properly ordered, that is, values with the same row_name are brought together and - correctly ordered within the row. Notice that crosstab + correctly ordered within the row. Notice that crosstab itself does not pay any attention to the second column of the query result; it's just there to be ordered by, to control the order in which the third-column values appear across the page. @@ -286,41 +286,41 @@ AS ct(row_name text, category_1 text, category_2 text, category_3 text); - You can avoid always having to write out a FROM clause to + You can avoid always having to write out a FROM clause to define the output columns, by setting up a custom crosstab function that has the desired output row type wired into its definition. This is described in the next section. Another possibility is to embed the - required FROM clause in a view definition. + required FROM clause in a view definition. See also the \crosstabview - command in psql, which provides functionality similar - to crosstab(). + command in psql, which provides functionality similar + to crosstab(). - <function>crosstab<replaceable>N</>(text)</function> + <function>crosstab<replaceable>N</replaceable>(text)</function> crosstab -crosstabN(text sql) +crosstabN(text sql) - The crosstabN functions are examples of how - to set up custom wrappers for the general crosstab function, + The crosstabN functions are examples of how + to set up custom wrappers for the general crosstab function, so that you need not write out column names and types in the calling - SELECT query. The tablefunc module includes - crosstab2, crosstab3, and - crosstab4, whose output row types are defined as + SELECT query. The tablefunc module includes + crosstab2, crosstab3, and + crosstab4, whose output row types are defined as @@ -337,10 +337,10 @@ CREATE TYPE tablefunc_crosstab_N AS ( Thus, these functions can be used directly when the input query produces - row_name and value columns of type - text, and you want 2, 3, or 4 output values columns. + row_name and value columns of type + text, and you want 2, 3, or 4 output values columns. In all other ways they behave exactly as described above for the - general crosstab function. + general crosstab function. @@ -359,7 +359,7 @@ FROM crosstab3( These functions are provided mostly for illustration purposes. You can create your own return types and functions based on the - underlying crosstab() function. There are two ways + underlying crosstab() function. There are two ways to do it: @@ -367,13 +367,13 @@ FROM crosstab3( Create a composite type describing the desired output columns, similar to the examples in - contrib/tablefunc/tablefunc--1.0.sql. + contrib/tablefunc/tablefunc--1.0.sql. 
Then define a - unique function name accepting one text parameter and returning - setof your_type_name, but linking to the same underlying - crosstab C function. For example, if your source data - produces row names that are text, and values that are - float8, and you want 5 value columns: + unique function name accepting one text parameter and returning + setof your_type_name, but linking to the same underlying + crosstab C function. For example, if your source data + produces row names that are text, and values that are + float8, and you want 5 value columns: CREATE TYPE my_crosstab_float8_5_cols AS ( my_row_name text, @@ -393,7 +393,7 @@ CREATE OR REPLACE FUNCTION crosstab_float8_5_cols(text) - Use OUT parameters to define the return type implicitly. + Use OUT parameters to define the return type implicitly. The same example could also be done this way: CREATE OR REPLACE FUNCTION crosstab_float8_5_cols( @@ -426,12 +426,12 @@ crosstab(text source_sql, text category_sql) - The main limitation of the single-parameter form of crosstab + The main limitation of the single-parameter form of crosstab is that it treats all values in a group alike, inserting each value into the first available column. If you want the value columns to correspond to specific categories of data, and some groups might not have data for some of the categories, that doesn't work well. - The two-parameter form of crosstab handles this case by + The two-parameter form of crosstab handles this case by providing an explicit list of the categories corresponding to the output columns. @@ -447,7 +447,7 @@ crosstab(text source_sql, text category_sql) category and value columns must be the last two columns, in that order. Any columns between row_name and - category are treated as extra. + category are treated as extra. The extra columns are expected to be the same for all rows with the same row_name value. @@ -489,9 +489,9 @@ SELECT DISTINCT cat FROM foo ORDER BY 1; - The crosstab function is declared to return setof + The crosstab function is declared to return setof record, so the actual names and types of the output columns must be - defined in the FROM clause of the calling SELECT + defined in the FROM clause of the calling SELECT statement, for example: @@ -512,25 +512,25 @@ row_name extra cat1 cat2 cat3 cat4 - The FROM clause must define the proper number of output - columns of the proper data types. If there are N - columns in the source_sql query's result, the first - N-2 of them must match up with the first - N-2 output columns. The remaining output columns - must have the type of the last column of the source_sql + The FROM clause must define the proper number of output + columns of the proper data types. If there are N + columns in the source_sql query's result, the first + N-2 of them must match up with the first + N-2 output columns. The remaining output columns + must have the type of the last column of the source_sql query's result, and there must be exactly as many of them as there are rows in the category_sql query's result. - The crosstab function produces one output row for each + The crosstab function produces one output row for each consecutive group of input rows with the same row_name value. The output - row_name column, plus any extra + row_name column, plus any extra columns, are copied from the first row of the group. The output - value columns are filled with the + value columns are filled with the value fields from rows having matching - category values. If a row's category + category values. 
If a row's category does not match any output of the category_sql query, its value is ignored. Output columns whose matching category is not present in any input row @@ -539,7 +539,7 @@ row_name extra cat1 cat2 cat3 cat4 In practice the source_sql query should always - specify ORDER BY 1 to ensure that values with the same + specify ORDER BY 1 to ensure that values with the same row_name are brought together. However, ordering of the categories within a group is not important. Also, it is essential to be sure that the order of the @@ -619,7 +619,7 @@ AS You can create predefined functions to avoid having to write out the result column names and types in each query. See the examples in the previous section. The underlying C function for this form - of crosstab is named crosstab_hash. + of crosstab is named crosstab_hash. @@ -638,10 +638,10 @@ connectby(text relname, text keyid_fld, text parent_keyid_fld - The connectby function produces a display of hierarchical + The connectby function produces a display of hierarchical data that is stored in a table. The table must have a key field that uniquely identifies rows, and a parent-key field that references the - parent (if any) of each row. connectby can display the + parent (if any) of each row. connectby can display the sub-tree descending from any row. @@ -694,14 +694,14 @@ connectby(text relname, text keyid_fld, text parent_keyid_fld The key and parent-key fields can be any data type, but they must be - the same type. Note that the start_with value must be + the same type. Note that the start_with value must be entered as a text string, regardless of the type of the key field. - The connectby function is declared to return setof + The connectby function is declared to return setof record, so the actual names and types of the output columns must be - defined in the FROM clause of the calling SELECT + defined in the FROM clause of the calling SELECT statement, for example: @@ -714,15 +714,15 @@ SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2' The first two output columns are used for the current row's key and its parent row's key; they must match the type of the table's key field. The third output column is the depth in the tree and must be of type - integer. If a branch_delim parameter was + integer. If a branch_delim parameter was given, the next output column is the branch display and must be of type - text. Finally, if an orderby_fld + text. Finally, if an orderby_fld parameter was given, the last output column is a serial number, and must - be of type integer. + be of type integer. - The branch output column shows the path of keys taken to + The branch output column shows the path of keys taken to reach the current row. The keys are separated by the specified branch_delim string. If no branch display is wanted, omit both the branch_delim parameter @@ -740,7 +740,7 @@ SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2' The parameters representing table and field names are copied as-is - into the SQL queries that connectby generates internally. + into the SQL queries that connectby generates internally. Therefore, include double quotes if the names are mixed-case or contain special characters. You may also need to schema-qualify the table name. 
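For instance, a sketch of why the quoting matters (the "MySchema"."Tree" name is hypothetical; the argument list and output column definition mirror the connectby_tree example used elsewhere in this section):

<programlisting>
-- table and field names are interpolated verbatim into the generated SQL,
-- so a mixed-case or schema-qualified name must carry its own double quotes
SELECT *
FROM connectby('"MySchema"."Tree"', 'keyid', 'parent_keyid', 'pos',
               'row2', 0, '~')
  AS t(keyid text, parent_keyid text, level int, branch text, pos int);
</programlisting>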
@@ -752,10 +752,10 @@ SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2' It is important that the branch_delim string - not appear in any key values, else connectby may incorrectly + not appear in any key values, else connectby may incorrectly report an infinite-recursion error. Note that if branch_delim is not provided, a default value - of ~ is used for recursion detection purposes. + of ~ is used for recursion detection purposes. diff --git a/doc/src/sgml/tablesample-method.sgml b/doc/src/sgml/tablesample-method.sgml index 22f8bbe19a..9ac28ceb4c 100644 --- a/doc/src/sgml/tablesample-method.sgml +++ b/doc/src/sgml/tablesample-method.sgml @@ -12,11 +12,11 @@ - PostgreSQL's implementation of the TABLESAMPLE + PostgreSQL's implementation of the TABLESAMPLE clause supports custom table sampling methods, in addition to - the BERNOULLI and SYSTEM methods that are required + the BERNOULLI and SYSTEM methods that are required by the SQL standard. The sampling method determines which rows of the - table will be selected when the TABLESAMPLE clause is used. + table will be selected when the TABLESAMPLE clause is used. @@ -26,18 +26,18 @@ method_name(internal) RETURNS tsm_handler The name of the function is the same method name appearing in the - TABLESAMPLE clause. The internal argument is a dummy + TABLESAMPLE clause. The internal argument is a dummy (always having value zero) that simply serves to prevent this function from being called directly from a SQL command. The result of the function must be a palloc'd struct of - type TsmRoutine, which contains pointers to support functions for + type TsmRoutine, which contains pointers to support functions for the sampling method. These support functions are plain C functions and are not visible or callable at the SQL level. The support functions are described in . - In addition to function pointers, the TsmRoutine struct must + In addition to function pointers, the TsmRoutine struct must provide these additional fields: @@ -47,9 +47,9 @@ method_name(internal) RETURNS tsm_handler This is an OID list containing the data type OIDs of the parameter(s) - that will be accepted by the TABLESAMPLE clause when this + that will be accepted by the TABLESAMPLE clause when this sampling method is used. For example, for the built-in methods, this - list contains a single item with value FLOAT4OID, which + list contains a single item with value FLOAT4OID, which represents the sampling percentage. Custom sampling methods can have more or different parameters. @@ -60,11 +60,11 @@ method_name(internal) RETURNS tsm_handler bool repeatable_across_queries - If true, the sampling method can deliver identical samples + If true, the sampling method can deliver identical samples across successive queries, if the same parameters - and REPEATABLE seed value are supplied each time and the - table contents have not changed. When this is false, - the REPEATABLE clause is not accepted for use with the + and REPEATABLE seed value are supplied each time and the + table contents have not changed. When this is false, + the REPEATABLE clause is not accepted for use with the sampling method. @@ -74,10 +74,10 @@ method_name(internal) RETURNS tsm_handler bool repeatable_across_scans - If true, the sampling method can deliver identical samples + If true, the sampling method can deliver identical samples across successive scans in the same query (assuming unchanging parameters, seed value, and snapshot). 
- When this is false, the planner will not select plans that + When this is false, the planner will not select plans that would require scanning the sampled table more than once, since that might result in inconsistent query output. @@ -86,16 +86,16 @@ method_name(internal) RETURNS tsm_handler - The TsmRoutine struct type is declared - in src/include/access/tsmapi.h, which see for additional + The TsmRoutine struct type is declared + in src/include/access/tsmapi.h, which see for additional details. The table sampling methods included in the standard distribution are good references when trying to write your own. Look into - the src/backend/access/tablesample subdirectory of the source - tree for the built-in sampling methods, and into the contrib + the src/backend/access/tablesample subdirectory of the source + tree for the built-in sampling methods, and into the contrib subdirectory for add-on methods. @@ -103,7 +103,7 @@ method_name(internal) RETURNS tsm_handler Sampling Method Support Functions - The TSM handler function returns a palloc'd TsmRoutine struct + The TSM handler function returns a palloc'd TsmRoutine struct containing pointers to the support functions described below. Most of the functions are required, but some are optional, and those pointers can be NULL. @@ -123,16 +123,16 @@ SampleScanGetSampleSize (PlannerInfo *root, relation pages that will be read during a sample scan, and the number of tuples that will be selected by the scan. (For example, these might be determined by estimating the sampling fraction, and then multiplying - the baserel->pages and baserel->tuples + the baserel->pages and baserel->tuples numbers by that, being sure to round the results to integral values.) - The paramexprs list holds the expression(s) that are - parameters to the TABLESAMPLE clause. It is recommended to - use estimate_expression_value() to try to reduce these + The paramexprs list holds the expression(s) that are + parameters to the TABLESAMPLE clause. It is recommended to + use estimate_expression_value() to try to reduce these expressions to constants, if their values are needed for estimation purposes; but the function must provide size estimates even if they cannot be reduced, and it should not fail even if the values appear invalid (remember that they're only estimates of what the run-time values will be). - The pages and tuples parameters are outputs. + The pages and tuples parameters are outputs. @@ -145,29 +145,29 @@ InitSampleScan (SampleScanState *node, Initialize for execution of a SampleScan plan node. This is called during executor startup. It should perform any initialization needed before processing can start. - The SampleScanState node has already been created, but - its tsm_state field is NULL. - The InitSampleScan function can palloc whatever internal + The SampleScanState node has already been created, but + its tsm_state field is NULL. + The InitSampleScan function can palloc whatever internal state data is needed by the sampling method, and store a pointer to - it in node->tsm_state. + it in node->tsm_state. Information about the table to scan is accessible through other fields - of the SampleScanState node (but note that the - node->ss.ss_currentScanDesc scan descriptor is not set + of the SampleScanState node (but note that the + node->ss.ss_currentScanDesc scan descriptor is not set up yet). - eflags contains flag bits describing the executor's + eflags contains flag bits describing the executor's operating mode for this plan node. 
- When (eflags & EXEC_FLAG_EXPLAIN_ONLY) is true, + When (eflags & EXEC_FLAG_EXPLAIN_ONLY) is true, the scan will not actually be performed, so this function should only do - the minimum required to make the node state valid for EXPLAIN - and EndSampleScan. + the minimum required to make the node state valid for EXPLAIN + and EndSampleScan. This function can be omitted (set the pointer to NULL), in which case - BeginSampleScan must perform all initialization needed + BeginSampleScan must perform all initialization needed by the sampling method. @@ -184,32 +184,32 @@ BeginSampleScan (SampleScanState *node, This is called just before the first attempt to fetch a tuple, and may be called again if the scan needs to be restarted. Information about the table to scan is accessible through fields - of the SampleScanState node (but note that the - node->ss.ss_currentScanDesc scan descriptor is not set + of the SampleScanState node (but note that the + node->ss.ss_currentScanDesc scan descriptor is not set up yet). - The params array, of length nparams, contains the - values of the parameters supplied in the TABLESAMPLE clause. + The params array, of length nparams, contains the + values of the parameters supplied in the TABLESAMPLE clause. These will have the number and types specified in the sampling method's parameterTypes list, and have been checked to not be null. - seed contains a seed to use for any random numbers generated + seed contains a seed to use for any random numbers generated within the sampling method; it is either a hash derived from the - REPEATABLE value if one was given, or the result - of random() if not. + REPEATABLE value if one was given, or the result + of random() if not. - This function may adjust the fields node->use_bulkread - and node->use_pagemode. - If node->use_bulkread is true, which it is by + This function may adjust the fields node->use_bulkread + and node->use_pagemode. + If node->use_bulkread is true, which it is by default, the scan will use a buffer access strategy that encourages recycling buffers after use. It might be reasonable to set this - to false if the scan will visit only a small fraction of the + to false if the scan will visit only a small fraction of the table's pages. - If node->use_pagemode is true, which it is by + If node->use_pagemode is true, which it is by default, the scan will perform visibility checking in a single pass for all tuples on each visited page. It might be reasonable to set this - to false if the scan will select only a small fraction of the + to false if the scan will select only a small fraction of the tuples on each visited page. That will result in fewer tuple visibility checks being performed, though each one will be more expensive because it will require more locking. @@ -219,8 +219,8 @@ BeginSampleScan (SampleScanState *node, If the sampling method is marked repeatable_across_scans, it must be able to select the same set of tuples during a rescan as it did originally, that is - a fresh call of BeginSampleScan must lead to selecting the - same tuples as before (if the TABLESAMPLE parameters + a fresh call of BeginSampleScan must lead to selecting the + same tuples as before (if the TABLESAMPLE parameters and seed don't change). @@ -231,7 +231,7 @@ NextSampleBlock (SampleScanState *node); Returns the block number of the next page to be scanned, or - InvalidBlockNumber if no pages remain to be scanned. + InvalidBlockNumber if no pages remain to be scanned. 
@@ -251,34 +251,34 @@ NextSampleTuple (SampleScanState *node, Returns the offset number of the next tuple to be sampled on the - specified page, or InvalidOffsetNumber if no tuples remain to - be sampled. maxoffset is the largest offset number in use + specified page, or InvalidOffsetNumber if no tuples remain to + be sampled. maxoffset is the largest offset number in use on the page. - NextSampleTuple is not explicitly told which of the offset - numbers in the range 1 .. maxoffset actually contain valid + NextSampleTuple is not explicitly told which of the offset + numbers in the range 1 .. maxoffset actually contain valid tuples. This is not normally a problem since the core code ignores requests to sample missing or invisible tuples; that should not result in any bias in the sample. However, if necessary, the function can - examine node->ss.ss_currentScanDesc->rs_vistuples[] + examine node->ss.ss_currentScanDesc->rs_vistuples[] to identify which tuples are valid and visible. (This - requires node->use_pagemode to be true.) + requires node->use_pagemode to be true.) - NextSampleTuple must not assume - that blockno is the same page number returned by the most - recent NextSampleBlock call. It was returned by some - previous NextSampleBlock call, but the core code is allowed - to call NextSampleBlock in advance of actually scanning + NextSampleTuple must not assume + that blockno is the same page number returned by the most + recent NextSampleBlock call. It was returned by some + previous NextSampleBlock call, but the core code is allowed + to call NextSampleBlock in advance of actually scanning pages, so as to support prefetching. It is OK to assume that once - sampling of a given page begins, successive NextSampleTuple - calls all refer to the same page until InvalidOffsetNumber is + sampling of a given page begins, successive NextSampleTuple + calls all refer to the same page until InvalidOffsetNumber is returned. diff --git a/doc/src/sgml/tcn.sgml b/doc/src/sgml/tcn.sgml index 623094183d..8cc55efd29 100644 --- a/doc/src/sgml/tcn.sgml +++ b/doc/src/sgml/tcn.sgml @@ -12,16 +12,16 @@ - The tcn module provides a trigger function that notifies + The tcn module provides a trigger function that notifies listeners of changes to any table on which it is attached. It must be - used as an AFTER trigger FOR EACH ROW. + used as an AFTER trigger FOR EACH ROW. Only one parameter may be supplied to the function in a - CREATE TRIGGER statement, and that is optional. If supplied + CREATE TRIGGER statement, and that is optional. If supplied it will be used for the channel name for the notifications. If omitted - tcn will be used for the channel name. + tcn will be used for the channel name. diff --git a/doc/src/sgml/test-decoding.sgml b/doc/src/sgml/test-decoding.sgml index 4f4fd41e32..310a2d6974 100644 --- a/doc/src/sgml/test-decoding.sgml +++ b/doc/src/sgml/test-decoding.sgml @@ -8,13 +8,13 @@ - test_decoding is an example of a logical decoding + test_decoding is an example of a logical decoding output plugin. It doesn't do anything especially useful, but can serve as a starting point for developing your own decoder. - test_decoding receives WAL through the logical decoding + test_decoding receives WAL through the logical decoding mechanism and decodes it into text representations of the operations performed. 
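A rough usage sketch, assuming wal_level is set to logical and a replication slot is available (the data table is hypothetical):

<programlisting>
SELECT * FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');

INSERT INTO data (id, txt) VALUES (1, 'hello');      -- some change to decode

-- consume the decoded changes as text
SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);

SELECT pg_drop_replication_slot('test_slot');        -- clean up
</programlisting>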
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml index d5bde5c6c0..7b4912dd5e 100644 --- a/doc/src/sgml/textsearch.sgml +++ b/doc/src/sgml/textsearch.sgml @@ -16,7 +16,7 @@ Full Text Searching (or just text search) provides - the capability to identify natural-language documents that + the capability to identify natural-language documents that satisfy a query, and optionally to sort them by relevance to the query. The most common type of search is to find all documents containing given query terms @@ -73,13 +73,13 @@ - Parsing documents into tokens. It is + Parsing documents into tokens. It is useful to identify various classes of tokens, e.g., numbers, words, complex words, email addresses, so that they can be processed differently. In principle token classes depend on the specific application, but for most purposes it is adequate to use a predefined set of classes. - PostgreSQL uses a parser to + PostgreSQL uses a parser to perform this step. A standard parser is provided, and custom parsers can be created for specific needs. @@ -87,19 +87,19 @@ - Converting tokens into lexemes. + Converting tokens into lexemes. A lexeme is a string, just like a token, but it has been - normalized so that different forms of the same word + normalized so that different forms of the same word are made alike. For example, normalization almost always includes folding upper-case letters to lower-case, and often involves removal - of suffixes (such as s or es in English). + of suffixes (such as s or es in English). This allows searches to find variant forms of the same word, without tediously entering all the possible variants. - Also, this step typically eliminates stop words, which + Also, this step typically eliminates stop words, which are words that are so common that they are useless for searching. (In short, then, tokens are raw fragments of the document text, while lexemes are words that are believed useful for indexing and searching.) - PostgreSQL uses dictionaries to + PostgreSQL uses dictionaries to perform this step. Various standard dictionaries are provided, and custom ones can be created for specific needs. @@ -112,7 +112,7 @@ as a sorted array of normalized lexemes. Along with the lexemes it is often desirable to store positional information to use for proximity ranking, so that a document that - contains a more dense region of query words is + contains a more dense region of query words is assigned a higher rank than one with scattered query words. @@ -132,7 +132,7 @@ - Map synonyms to a single word using Ispell. + Map synonyms to a single word using Ispell. @@ -145,14 +145,14 @@ Map different variations of a word to a canonical form using - an Ispell dictionary. + an Ispell dictionary. Map different variations of a word to a canonical form using - Snowball stemmer rules. + Snowball stemmer rules. @@ -178,7 +178,7 @@ - A document is the unit of searching in a full text search + A document is the unit of searching in a full text search system; for example, a magazine article or email message. The text search engine must be able to parse documents and store associations of lexemes (key words) with their parent document. Later, these associations are @@ -226,11 +226,11 @@ WHERE mid = did AND mid = 12; For text search purposes, each document must be reduced to the - preprocessed tsvector format. Searching and ranking - are performed entirely on the tsvector representation + preprocessed tsvector format. 
Searching and ranking + are performed entirely on the tsvector representation of a document — the original text need only be retrieved when the document has been selected for display to a user. - We therefore often speak of the tsvector as being the + We therefore often speak of the tsvector as being the document, but of course it is only a compact representation of the full document. @@ -265,11 +265,11 @@ SELECT 'fat & cow'::tsquery @@ 'a fat cat sat on a mat and ate a fat rat'::t contains search terms, which must be already-normalized lexemes, and may combine multiple terms using AND, OR, NOT, and FOLLOWED BY operators. (For syntax details see .) There are - functions to_tsquery, plainto_tsquery, - and phraseto_tsquery + functions to_tsquery, plainto_tsquery, + and phraseto_tsquery that are helpful in converting user-written text into a proper tsquery, primarily by normalizing words appearing in - the text. Similarly, to_tsvector is used to parse and + the text. Similarly, to_tsvector is used to parse and normalize a document string. So in practice a text search match would look more like this: @@ -289,15 +289,15 @@ SELECT 'fat cats ate fat rats'::tsvector @@ to_tsquery('fat & rat'); f - since here no normalization of the word rats will occur. - The elements of a tsvector are lexemes, which are assumed - already normalized, so rats does not match rat. + since here no normalization of the word rats will occur. + The elements of a tsvector are lexemes, which are assumed + already normalized, so rats does not match rat. The @@ operator also supports text input, allowing explicit conversion of a text - string to tsvector or tsquery to be skipped + string to tsvector or tsquery to be skipped in simple cases. The variants available are: @@ -317,19 +317,19 @@ text @@ text - Within a tsquery, the & (AND) operator + Within a tsquery, the & (AND) operator specifies that both its arguments must appear in the document to have a match. Similarly, the | (OR) operator specifies that - at least one of its arguments must appear, while the ! (NOT) - operator specifies that its argument must not appear in + at least one of its arguments must appear, while the ! (NOT) + operator specifies that its argument must not appear in order to have a match. - For example, the query fat & ! rat matches documents that - contain fat but not rat. + For example, the query fat & ! rat matches documents that + contain fat but not rat. Searching for phrases is possible with the help of - the <-> (FOLLOWED BY) tsquery operator, which + the <-> (FOLLOWED BY) tsquery operator, which matches only if its arguments have matches that are adjacent and in the given order. For example: @@ -346,13 +346,13 @@ SELECT to_tsvector('error is not fatal') @@ to_tsquery('fatal <-> error'); There is a more general version of the FOLLOWED BY operator having the - form <N>, - where N is an integer standing for the difference between + form <N>, + where N is an integer standing for the difference between the positions of the matching lexemes. <1> is - the same as <->, while <2> + the same as <->, while <2> allows exactly one other lexeme to appear between the matches, and so - on. The phraseto_tsquery function makes use of this - operator to construct a tsquery that can match a multi-word + on. The phraseto_tsquery function makes use of this + operator to construct a tsquery that can match a multi-word phrase when some of the words are stop words. 
For example: @@ -374,7 +374,7 @@ SELECT phraseto_tsquery('the cats ate the rats'); - Parentheses can be used to control nesting of the tsquery + Parentheses can be used to control nesting of the tsquery operators. Without parentheses, | binds least tightly, then &, then <->, and ! most tightly. @@ -384,20 +384,20 @@ SELECT phraseto_tsquery('the cats ate the rats'); It's worth noticing that the AND/OR/NOT operators mean something subtly different when they are within the arguments of a FOLLOWED BY operator than when they are not, because within FOLLOWED BY the exact position of - the match is significant. For example, normally !x matches - only documents that do not contain x anywhere. - But !x <-> y matches y if it is not - immediately after an x; an occurrence of x + the match is significant. For example, normally !x matches + only documents that do not contain x anywhere. + But !x <-> y matches y if it is not + immediately after an x; an occurrence of x elsewhere in the document does not prevent a match. Another example is - that x & y normally only requires that x - and y both appear somewhere in the document, but - (x & y) <-> z requires x - and y to match at the same place, immediately before - a z. Thus this query behaves differently from - x <-> z & y <-> z, which will match a - document containing two separate sequences x z and - y z. (This specific query is useless as written, - since x and y could not match at the same place; + that x & y normally only requires that x + and y both appear somewhere in the document, but + (x & y) <-> z requires x + and y to match at the same place, immediately before + a z. Thus this query behaves differently from + x <-> z & y <-> z, which will match a + document containing two separate sequences x z and + y z. (This specific query is useless as written, + since x and y could not match at the same place; but with more complex situations such as prefix-match patterns, a query of this form could be useful.) @@ -412,26 +412,26 @@ SELECT phraseto_tsquery('the cats ate the rats'); skip indexing certain words (stop words), process synonyms, and use sophisticated parsing, e.g., parse based on more than just white space. This functionality is controlled by text search - configurations. PostgreSQL comes with predefined + configurations. PostgreSQL comes with predefined configurations for many languages, and you can easily create your own - configurations. (psql's \dF command + configurations. (psql's \dF command shows all available configurations.) During installation an appropriate configuration is selected and is set accordingly - in postgresql.conf. If you are using the same text search + in postgresql.conf. If you are using the same text search configuration for the entire cluster you can use the value in - postgresql.conf. To use different configurations + postgresql.conf. To use different configurations throughout the cluster but the same configuration within any one database, - use ALTER DATABASE ... SET. Otherwise, you can set + use ALTER DATABASE ... SET. Otherwise, you can set default_text_search_config in each session. Each text search function that depends on a configuration has an optional - regconfig argument, so that the configuration to use can be + regconfig argument, so that the configuration to use can be specified explicitly. default_text_search_config is used only when this argument is omitted. 
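A short sketch of the two styles (the english configuration shipped with the distribution is used here purely for illustration):

<programlisting>
-- rely on the default configuration
SET default_text_search_config = 'pg_catalog.english';
SELECT to_tsvector('The quick brown foxes');

-- or name the configuration explicitly through the regconfig argument
SELECT to_tsvector('english', 'The quick brown foxes');
</programlisting>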
@@ -439,28 +439,28 @@ SELECT phraseto_tsquery('the cats ate the rats'); To make it easier to build custom text search configurations, a configuration is built up from simpler database objects. - PostgreSQL's text search facility provides + PostgreSQL's text search facility provides four types of configuration-related database objects: - Text search parsers break documents into tokens + Text search parsers break documents into tokens and classify each token (for example, as words or numbers). - Text search dictionaries convert tokens to normalized + Text search dictionaries convert tokens to normalized form and reject stop words. - Text search templates provide the functions underlying + Text search templates provide the functions underlying dictionaries. (A dictionary simply specifies a template and a set of parameters for the template.) @@ -468,7 +468,7 @@ SELECT phraseto_tsquery('the cats ate the rats'); - Text search configurations select a parser and a set + Text search configurations select a parser and a set of dictionaries to use to normalize the tokens produced by the parser. @@ -478,8 +478,8 @@ SELECT phraseto_tsquery('the cats ate the rats'); Text search parsers and templates are built from low-level C functions; therefore it requires C programming ability to develop new ones, and superuser privileges to install one into a database. (There are examples - of add-on parsers and templates in the contrib/ area of the - PostgreSQL distribution.) Since dictionaries and + of add-on parsers and templates in the contrib/ area of the + PostgreSQL distribution.) Since dictionaries and configurations just parameterize and connect together some underlying parsers and templates, no special privilege is needed to create a new dictionary or configuration. Examples of creating custom dictionaries and @@ -504,8 +504,8 @@ SELECT phraseto_tsquery('the cats ate the rats'); It is possible to do a full text search without an index. A simple query - to print the title of each row that contains the word - friend in its body field is: + to print the title of each row that contains the word + friend in its body field is: SELECT title @@ -513,13 +513,13 @@ FROM pgweb WHERE to_tsvector('english', body) @@ to_tsquery('english', 'friend'); - This will also find related words such as friends - and friendly, since all these are reduced to the same + This will also find related words such as friends + and friendly, since all these are reduced to the same normalized lexeme. - The query above specifies that the english configuration + The query above specifies that the english configuration is to be used to parse and normalize the strings. Alternatively we could omit the configuration parameters: @@ -535,8 +535,8 @@ WHERE to_tsvector(body) @@ to_tsquery('friend'); A more complex example is to - select the ten most recent documents that contain create and - table in the title or body: + select the ten most recent documents that contain create and + table in the title or body: SELECT title @@ -577,7 +577,7 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body)); This is because the index contents must be unaffected by . If they were affected, the index contents might be inconsistent because different entries could - contain tsvectors that were created with different text search + contain tsvectors that were created with different text search configurations, and there would be no way to guess which was which. It would be impossible to dump and restore such an index correctly. 
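One quick way to confirm which query shapes can use such an index is to inspect the plan; this is only a sketch (reusing the pgweb example table), and the plan actually chosen depends on table size and statistics:

<programlisting>
CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body));

EXPLAIN
SELECT title FROM pgweb
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'friend');
-- on a sufficiently large table, expect a Bitmap Index Scan on pgweb_idx
</programlisting>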
@@ -587,8 +587,8 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body)); used in the index above, only a query reference that uses the 2-argument version of to_tsvector with the same configuration name will use that index. That is, WHERE - to_tsvector('english', body) @@ 'a & b' can use the index, - but WHERE to_tsvector(body) @@ 'a & b' cannot. + to_tsvector('english', body) @@ 'a & b' can use the index, + but WHERE to_tsvector(body) @@ 'a & b' cannot. This ensures that an index will be used only with the same configuration used to create the index entries. @@ -601,13 +601,13 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body)); CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector(config_name, body)); - where config_name is a column in the pgweb + where config_name is a column in the pgweb table. This allows mixed configurations in the same index while recording which configuration was used for each index entry. This would be useful, for example, if the document collection contained documents in different languages. Again, queries that are meant to use the index must be phrased to match, e.g., - WHERE to_tsvector(config_name, body) @@ 'a & b'. + WHERE to_tsvector(config_name, body) @@ 'a & b'. @@ -619,11 +619,11 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', title || ' ' | - Another approach is to create a separate tsvector column - to hold the output of to_tsvector. This example is a + Another approach is to create a separate tsvector column + to hold the output of to_tsvector. This example is a concatenation of title and body, - using coalesce to ensure that one field will still be - indexed when the other is NULL: + using coalesce to ensure that one field will still be + indexed when the other is NULL: ALTER TABLE pgweb ADD COLUMN textsearchable_index_col tsvector; @@ -649,10 +649,10 @@ LIMIT 10; - When using a separate column to store the tsvector + When using a separate column to store the tsvector representation, - it is necessary to create a trigger to keep the tsvector - column current anytime title or body changes. + it is necessary to create a trigger to keep the tsvector + column current anytime title or body changes. explains how to do that. @@ -661,13 +661,13 @@ LIMIT 10; is that it is not necessary to explicitly specify the text search configuration in queries in order to make use of the index. As shown in the example above, the query can depend on - default_text_search_config. Another advantage is that + default_text_search_config. Another advantage is that searches will be faster, since it will not be necessary to redo the - to_tsvector calls to verify index matches. (This is more + to_tsvector calls to verify index matches. (This is more important when using a GiST index than a GIN index; see .) The expression-index approach is simpler to set up, however, and it requires less disk space since the - tsvector representation is not stored explicitly. + tsvector representation is not stored explicitly. @@ -701,7 +701,7 @@ LIMIT 10; -to_tsvector( config regconfig, document text) returns tsvector +to_tsvector( config regconfig, document text) returns tsvector @@ -734,12 +734,12 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); each token. For each token, a list of dictionaries () is consulted, where the list can vary depending on the token type. 
The first dictionary - that recognizes the token emits one or more normalized + that recognizes the token emits one or more normalized lexemes to represent the token. For example, rats became rat because one of the dictionaries recognized that the word rats is a plural form of rat. Some words are recognized as - stop words (), which + stop words (), which causes them to be ignored since they occur too frequently to be useful in searching. In our example these are a, on, and it. @@ -758,9 +758,9 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); The function setweight can be used to label the - entries of a tsvector with a given weight, - where a weight is one of the letters A, B, - C, or D. + entries of a tsvector with a given weight, + where a weight is one of the letters A, B, + C, or D. This is typically used to mark entries coming from different parts of a document, such as title versus body. Later, this information can be used for ranking of search results. @@ -783,8 +783,8 @@ UPDATE tt SET ti = Here we have used setweight to label the source of each lexeme in the finished tsvector, and then merged - the labeled tsvector values using the tsvector - concatenation operator ||. (tsvector values using the tsvector + concatenation operator ||. ( gives details about these operations.) @@ -811,20 +811,20 @@ UPDATE tt SET ti = -to_tsquery( config regconfig, querytext text) returns tsquery +to_tsquery( config regconfig, querytext text) returns tsquery - to_tsquery creates a tsquery value from + to_tsquery creates a tsquery value from querytext, which must consist of single tokens - separated by the tsquery operators & (AND), + separated by the tsquery operators & (AND), | (OR), ! (NOT), and <-> (FOLLOWED BY), possibly grouped using parentheses. In other words, the input to to_tsquery must already follow the general rules for - tsquery input, as described in tsquery input, as described in . The difference is that while basic - tsquery input takes the tokens at face value, + tsquery input takes the tokens at face value, to_tsquery normalizes each token into a lexeme using the specified or default configuration, and discards any tokens that are stop words according to the configuration. For example: @@ -836,8 +836,8 @@ SELECT to_tsquery('english', 'The & Fat & Rats'); 'fat' & 'rat' - As in basic tsquery input, weight(s) can be attached to each - lexeme to restrict it to match only tsvector lexemes of those + As in basic tsquery input, weight(s) can be attached to each + lexeme to restrict it to match only tsvector lexemes of those weight(s). For example: @@ -847,7 +847,7 @@ SELECT to_tsquery('english', 'Fat | Rats:AB'); 'fat' | 'rat':AB - Also, * can be attached to a lexeme to specify prefix matching: + Also, * can be attached to a lexeme to specify prefix matching: SELECT to_tsquery('supern:*A & star:A*B'); @@ -856,7 +856,7 @@ SELECT to_tsquery('supern:*A & star:A*B'); 'supern':*A & 'star':*AB - Such a lexeme will match any word in a tsvector that begins + Such a lexeme will match any word in a tsvector that begins with the given string. @@ -884,13 +884,13 @@ SELECT to_tsquery('''supernovae stars'' & !crab'); -plainto_tsquery( config regconfig, querytext text) returns tsquery +plainto_tsquery( config regconfig, querytext text) returns tsquery - plainto_tsquery transforms the unformatted text + plainto_tsquery transforms the unformatted text querytext to a tsquery value. 
- The text is parsed and normalized much as for to_tsvector, + The text is parsed and normalized much as for to_tsvector, then the & (AND) tsquery operator is inserted between surviving words. @@ -905,7 +905,7 @@ SELECT plainto_tsquery('english', 'The Fat Rats'); 'fat' & 'rat' - Note that plainto_tsquery will not + Note that plainto_tsquery will not recognize tsquery operators, weight labels, or prefix-match labels in its input: @@ -924,16 +924,16 @@ SELECT plainto_tsquery('english', 'The Fat & Rats:C'); -phraseto_tsquery( config regconfig, querytext text) returns tsquery +phraseto_tsquery( config regconfig, querytext text) returns tsquery - phraseto_tsquery behaves much like - plainto_tsquery, except that it inserts + phraseto_tsquery behaves much like + plainto_tsquery, except that it inserts the <-> (FOLLOWED BY) operator between surviving words instead of the & (AND) operator. Also, stop words are not simply discarded, but are accounted for by - inserting <N> operators rather + inserting <N> operators rather than <-> operators. This function is useful when searching for exact lexeme sequences, since the FOLLOWED BY operators check lexeme order not just the presence of all the lexemes. @@ -949,8 +949,8 @@ SELECT phraseto_tsquery('english', 'The Fat Rats'); 'fat' <-> 'rat' - Like plainto_tsquery, the - phraseto_tsquery function will not + Like plainto_tsquery, the + phraseto_tsquery function will not recognize tsquery operators, weight labels, or prefix-match labels in its input: @@ -994,7 +994,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); ts_rank - ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 + ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 @@ -1011,7 +1011,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); ts_rank_cd - ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 + ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 @@ -1020,19 +1020,19 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); ranking for the given document vector and query, as described in Clarke, Cormack, and Tudhope's "Relevance Ranking for One to Three Term Queries" in the journal "Information Processing and Management", - 1999. Cover density is similar to ts_rank ranking + 1999. Cover density is similar to ts_rank ranking except that the proximity of matching lexemes to each other is taken into consideration. This function requires lexeme positional information to perform - its calculation. Therefore, it ignores any stripped - lexemes in the tsvector. If there are no unstripped + its calculation. Therefore, it ignores any stripped + lexemes in the tsvector. If there are no unstripped lexemes in the input, the result will be zero. (See for more information - about the strip function and positional information - in tsvectors.) + about the strip function and positional information + in tsvectors.) @@ -1094,7 +1094,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); 4 divides the rank by the mean harmonic distance between extents - (this is implemented only by ts_rank_cd) + (this is implemented only by ts_rank_cd) @@ -1189,7 +1189,7 @@ LIMIT 10; To present search results it is ideal to show a part of each document and how it is related to the query. Usually, search engines show fragments of - the document with marked search terms. 
PostgreSQL + the document with marked search terms. PostgreSQL provides a function ts_headline that implements this functionality. @@ -1199,7 +1199,7 @@ LIMIT 10; -ts_headline( config regconfig, document text, query tsquery , options text ) returns text +ts_headline( config regconfig, document text, query tsquery , options text ) returns text @@ -1215,13 +1215,13 @@ ts_headline( config If an options string is specified it must consist of a comma-separated list of one or more - option=value pairs. + option=value pairs. The available options are: - StartSel, StopSel: the strings with + StartSel, StopSel: the strings with which to delimit query words appearing in the document, to distinguish them from other excerpted words. You must double-quote these strings if they contain spaces or commas. @@ -1229,7 +1229,7 @@ ts_headline( config - MaxWords, MinWords: these numbers + MaxWords, MinWords: these numbers determine the longest and shortest headlines to output. @@ -1256,10 +1256,10 @@ ts_headline( config MaxWords and - words of length ShortWord or less are dropped at the start + each side. Each fragment will be of at most MaxWords and + words of length ShortWord or less are dropped at the start and end of each fragment. If not all query words are found in the - document, then a single fragment of the first MinWords + document, then a single fragment of the first MinWords in the document will be displayed. @@ -1312,7 +1312,7 @@ query.', - ts_headline uses the original document, not a + ts_headline uses the original document, not a tsvector summary, so it can be slow and should be used with care. @@ -1334,10 +1334,10 @@ query.', showed how raw textual - documents can be converted into tsvector values. + documents can be converted into tsvector values. PostgreSQL also provides functions and operators that can be used to manipulate documents that are already - in tsvector form. + in tsvector form. @@ -1349,18 +1349,18 @@ query.', tsvector concatenation - tsvector || tsvector + tsvector || tsvector - The tsvector concatenation operator + The tsvector concatenation operator returns a vector which combines the lexemes and positional information of the two vectors given as arguments. Positions and weight labels are retained during the concatenation. Positions appearing in the right-hand vector are offset by the largest position mentioned in the left-hand vector, so that the result is - nearly equivalent to the result of performing to_tsvector + nearly equivalent to the result of performing to_tsvector on the concatenation of the two original document strings. (The equivalence is not exact, because any stop-words removed from the end of the left-hand argument will not affect the result, whereas @@ -1370,11 +1370,11 @@ query.', One advantage of using concatenation in the vector form, rather than - concatenating text before applying to_tsvector, is that + concatenating text before applying to_tsvector, is that you can use different configurations to parse different sections - of the document. Also, because the setweight function + of the document. Also, because the setweight function marks all lexemes of the given vector the same way, it is necessary - to parse the text and do setweight before concatenating + to parse the text and do setweight before concatenating if you want to label different parts of the document with different weights. 
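A minimal sketch of that ordering, using a hypothetical docs table with title and body columns: each part is labeled with setweight first, and the labeled vectors are concatenated afterwards:

<programlisting>
SELECT setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
       setweight(to_tsvector('english', coalesce(body,  '')), 'D')
FROM docs;
</programlisting>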
@@ -1388,13 +1388,13 @@ query.', setweight - setweight(vector tsvector, weight "char") returns tsvector + setweight(vector tsvector, weight "char") returns tsvector - setweight returns a copy of the input vector in which every - position has been labeled with the given weight, either + setweight returns a copy of the input vector in which every + position has been labeled with the given weight, either A, B, C, or D. (D is the default for new vectors and as such is not displayed on output.) These labels are @@ -1403,9 +1403,9 @@ query.', - Note that weight labels apply to positions, not - lexemes. If the input vector has been stripped of - positions then setweight does nothing. + Note that weight labels apply to positions, not + lexemes. If the input vector has been stripped of + positions then setweight does nothing. @@ -1416,7 +1416,7 @@ query.', length(tsvector) - length(vector tsvector) returns integer + length(vector tsvector) returns integer @@ -1433,7 +1433,7 @@ query.', strip - strip(vector tsvector) returns tsvector + strip(vector tsvector) returns tsvector @@ -1443,7 +1443,7 @@ query.', smaller than an unstripped vector, but it is also less useful. Relevance ranking does not work as well on stripped vectors as unstripped ones. Also, - the <-> (FOLLOWED BY) tsquery operator + the <-> (FOLLOWED BY) tsquery operator will never match stripped input, since it cannot determine the distance between lexeme occurrences. @@ -1454,7 +1454,7 @@ query.', - A full list of tsvector-related functions is available + A full list of tsvector-related functions is available in . @@ -1465,10 +1465,10 @@ query.', showed how raw textual - queries can be converted into tsquery values. + queries can be converted into tsquery values. PostgreSQL also provides functions and operators that can be used to manipulate queries that are already - in tsquery form. + in tsquery form. @@ -1476,7 +1476,7 @@ query.', - tsquery && tsquery + tsquery && tsquery @@ -1490,7 +1490,7 @@ query.', - tsquery || tsquery + tsquery || tsquery @@ -1504,7 +1504,7 @@ query.', - !! tsquery + !! tsquery @@ -1518,15 +1518,15 @@ query.', - tsquery <-> tsquery + tsquery <-> tsquery Returns a query that searches for a match to the first given query immediately followed by a match to the second given query, using - the <-> (FOLLOWED BY) - tsquery operator. For example: + the <-> (FOLLOWED BY) + tsquery operator. For example: SELECT to_tsquery('fat') <-> to_tsquery('cat | rat'); @@ -1546,7 +1546,7 @@ SELECT to_tsquery('fat') <-> to_tsquery('cat | rat'); tsquery_phrase - tsquery_phrase(query1 tsquery, query2 tsquery [, distance integer ]) returns tsquery + tsquery_phrase(query1 tsquery, query2 tsquery [, distance integer ]) returns tsquery @@ -1554,8 +1554,8 @@ SELECT to_tsquery('fat') <-> to_tsquery('cat | rat'); Returns a query that searches for a match to the first given query followed by a match to the second given query at a distance of at distance lexemes, using - the <N> - tsquery operator. For example: + the <N> + tsquery operator. For example: SELECT tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10); @@ -1575,13 +1575,13 @@ SELECT tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10); numnode - numnode(query tsquery) returns integer + numnode(query tsquery) returns integer Returns the number of nodes (lexemes plus operators) in a - tsquery. This function is useful + tsquery. This function is useful to determine if the query is meaningful (returns > 0), or contains only stop words (returns 0). 
Examples: @@ -1609,12 +1609,12 @@ SELECT numnode('foo & bar'::tsquery); querytree - querytree(query tsquery) returns text + querytree(query tsquery) returns text - Returns the portion of a tsquery that can be used for + Returns the portion of a tsquery that can be used for searching an index. This function is useful for detecting unindexable queries, for example those containing only stop words or only negated terms. For example: @@ -1640,16 +1640,16 @@ SELECT querytree(to_tsquery('!defined')); The ts_rewrite family of functions search a - given tsquery for occurrences of a target + given tsquery for occurrences of a target subquery, and replace each occurrence with a substitute subquery. In essence this operation is a - tsquery-specific version of substring replacement. + tsquery-specific version of substring replacement. A target and substitute combination can be - thought of as a query rewrite rule. A collection + thought of as a query rewrite rule. A collection of such rewrite rules can be a powerful search aid. For example, you can expand the search using synonyms - (e.g., new york, big apple, nyc, - gotham) or narrow the search to direct the user to some hot + (e.g., new york, big apple, nyc, + gotham) or narrow the search to direct the user to some hot topic. There is some overlap in functionality between this feature and thesaurus dictionaries (). However, you can modify a set of rewrite rules on-the-fly without @@ -1662,12 +1662,12 @@ SELECT querytree(to_tsquery('!defined')); - ts_rewrite (query tsquery, target tsquery, substitute tsquery) returns tsquery + ts_rewrite (query tsquery, target tsquery, substitute tsquery) returns tsquery - This form of ts_rewrite simply applies a single + This form of ts_rewrite simply applies a single rewrite rule: target is replaced by substitute wherever it appears in - ts_rewrite (query tsquery, select text) returns tsquery + ts_rewrite (query tsquery, select text) returns tsquery - This form of ts_rewrite accepts a starting - query and a SQL select command, which - is given as a text string. The select must yield two - columns of tsquery type. For each row of the - select result, occurrences of the first column value + This form of ts_rewrite accepts a starting + query and a SQL select command, which + is given as a text string. The select must yield two + columns of tsquery type. For each row of the + select result, occurrences of the first column value (the target) are replaced by the second column value (the substitute) - within the current query value. For example: + within the current query value. For example: CREATE TABLE aliases (t tsquery PRIMARY KEY, s tsquery); @@ -1713,7 +1713,7 @@ SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases'); Note that when multiple rewrite rules are applied in this way, the order of application can be important; so in practice you will - want the source query to ORDER BY some ordering key. + want the source query to ORDER BY some ordering key. @@ -1777,9 +1777,9 @@ SELECT ts_rewrite('a & b'::tsquery, - When using a separate column to store the tsvector representation + When using a separate column to store the tsvector representation of your documents, it is necessary to create a trigger to update the - tsvector column when the document content columns change. + tsvector column when the document content columns change. Two built-in trigger functions are available for this, or you can write your own. 
@@ -1790,9 +1790,9 @@ tsvector_update_trigger_column(tsvector_column_na - These trigger functions automatically compute a tsvector + These trigger functions automatically compute a tsvector column from one or more textual columns, under the control of - parameters specified in the CREATE TRIGGER command. + parameters specified in the CREATE TRIGGER command. An example of their use is: @@ -1819,24 +1819,24 @@ SELECT title, body FROM messages WHERE tsv @@ to_tsquery('title & body'); title here | the body text is here - Having created this trigger, any change in title or - body will automatically be reflected into - tsv, without the application having to worry about it. + Having created this trigger, any change in title or + body will automatically be reflected into + tsv, without the application having to worry about it. - The first trigger argument must be the name of the tsvector + The first trigger argument must be the name of the tsvector column to be updated. The second argument specifies the text search configuration to be used to perform the conversion. For - tsvector_update_trigger, the configuration name is simply + tsvector_update_trigger, the configuration name is simply given as the second trigger argument. It must be schema-qualified as shown above, so that the trigger behavior will not change with changes - in search_path. For - tsvector_update_trigger_column, the second trigger argument + in search_path. For + tsvector_update_trigger_column, the second trigger argument is the name of another table column, which must be of type - regconfig. This allows a per-row selection of configuration + regconfig. This allows a per-row selection of configuration to be made. The remaining argument(s) are the names of textual columns - (of type text, varchar, or char). These + (of type text, varchar, or char). These will be included in the document in the order given. NULL values will be skipped (but the other columns will still be indexed). @@ -1865,9 +1865,9 @@ CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE Keep in mind that it is important to specify the configuration name - explicitly when creating tsvector values inside triggers, + explicitly when creating tsvector values inside triggers, so that the column's contents will not be affected by changes to - default_text_search_config. Failure to do this is likely to + default_text_search_config. Failure to do this is likely to lead to problems such as search results changing after a dump and reload. @@ -1881,38 +1881,38 @@ CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE - The function ts_stat is useful for checking your + The function ts_stat is useful for checking your configuration and for finding stop-word candidates. -ts_stat(sqlquery text, weights text, - OUT word text, OUT ndoc integer, - OUT nentry integer) returns setof record +ts_stat(sqlquery text, weights text, + OUT word text, OUT ndoc integer, + OUT nentry integer) returns setof record sqlquery is a text value containing an SQL query which must return a single tsvector column. - ts_stat executes the query and returns statistics about + ts_stat executes the query and returns statistics about each distinct lexeme (word) contained in the tsvector data. 
The columns returned are - word text — the value of a lexeme + word text — the value of a lexeme - ndoc integer — number of documents - (tsvectors) the word occurred in + ndoc integer — number of documents + (tsvectors) the word occurred in - nentry integer — total number of + nentry integer — total number of occurrences of the word @@ -1931,8 +1931,8 @@ ORDER BY nentry DESC, ndoc DESC, word LIMIT 10; - The same, but counting only word occurrences with weight A - or B: + The same, but counting only word occurrences with weight A + or B: SELECT * FROM ts_stat('SELECT vector FROM apod', 'ab') @@ -1950,7 +1950,7 @@ LIMIT 10; Text search parsers are responsible for splitting raw document text - into tokens and identifying each token's type, where + into tokens and identifying each token's type, where the set of possible types is defined by the parser itself. Note that a parser does not modify the text at all — it simply identifies plausible word boundaries. Because of this limited scope, @@ -1961,7 +1961,7 @@ LIMIT 10; - The built-in parser is named pg_catalog.default. + The built-in parser is named pg_catalog.default. It recognizes 23 token types, shown in . @@ -1977,119 +1977,119 @@ LIMIT 10; - asciiword + asciiword Word, all ASCII letters elephant - word + word Word, all letters mañana - numword + numword Word, letters and digits beta1 - asciihword + asciihword Hyphenated word, all ASCII up-to-date - hword + hword Hyphenated word, all letters lógico-matemática - numhword + numhword Hyphenated word, letters and digits postgresql-beta1 - hword_asciipart + hword_asciipart Hyphenated word part, all ASCII postgresql in the context postgresql-beta1 - hword_part + hword_part Hyphenated word part, all letters lógico or matemática in the context lógico-matemática - hword_numpart + hword_numpart Hyphenated word part, letters and digits beta1 in the context postgresql-beta1 - email + email Email address foo@example.com - protocol + protocol Protocol head http:// - url + url URL example.com/stuff/index.html - host + host Host example.com - url_path + url_path URL path /stuff/index.html, in the context of a URL - file + file File or path name /usr/local/foo.txt, if not within a URL - sfloat + sfloat Scientific notation -1.234e56 - float + float Decimal notation -1.234 - int + int Signed integer -1234 - uint + uint Unsigned integer 1234 - version + version Version number 8.3.0 - tag + tag XML tag <a href="dictionaries.html"> - entity + entity XML entity &amp; - blank + blank Space symbols (any whitespace or punctuation not otherwise recognized) @@ -2099,16 +2099,16 @@ LIMIT 10; - The parser's notion of a letter is determined by the database's - locale setting, specifically lc_ctype. Words containing + The parser's notion of a letter is determined by the database's + locale setting, specifically lc_ctype. Words containing only the basic ASCII letters are reported as a separate token type, since it is sometimes useful to distinguish them. In most European - languages, token types word and asciiword + languages, token types word and asciiword should be treated alike. - email does not support all valid email characters as + email does not support all valid email characters as defined by RFC 5322. Specifically, the only non-alphanumeric characters supported for email user names are period, dash, and underscore. 
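   As a quick, illustrative way to see how the default parser classifies
   tokens (the input string here is arbitrary, and only a few of the rows
   returned are shown as comments):

SELECT alias, token
FROM ts_debug('simple', 'foo@example.com released postgresql-beta1');
-- among the rows returned: an email token for foo@example.com,
-- an asciiword token for released, and a numhword token for
-- postgresql-beta1 together with its hyphenated-word parts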
@@ -2154,9 +2154,9 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h Dictionaries are used to eliminate words that should not be considered in a - search (stop words), and to normalize words so + search (stop words), and to normalize words so that different derived forms of the same word will match. A successfully - normalized word is called a lexeme. Aside from + normalized word is called a lexeme. Aside from improving search quality, normalization and removal of stop words reduce the size of the tsvector representation of a document, thereby improving performance. Normalization does not always have linguistic meaning @@ -2229,10 +2229,10 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h - a single lexeme with the TSL_FILTER flag set, to replace + a single lexeme with the TSL_FILTER flag set, to replace the original token with a new token to be passed to subsequent dictionaries (a dictionary that does this is called a - filtering dictionary) + filtering dictionary) @@ -2254,7 +2254,7 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h used to create new dictionaries with custom parameters. Each predefined dictionary template is described below. If no existing template is suitable, it is possible to create new ones; see the - contrib/ area of the PostgreSQL distribution + contrib/ area of the PostgreSQL distribution for examples. @@ -2267,7 +2267,7 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h until some dictionary recognizes it as a known word. If it is identified as a stop word, or if no dictionary recognizes the token, it will be discarded and not indexed or searched for. - Normally, the first dictionary that returns a non-NULL + Normally, the first dictionary that returns a non-NULL output determines the result, and any remaining dictionaries are not consulted; but a filtering dictionary can replace the given word with a modified word, which is then passed to subsequent dictionaries. @@ -2277,11 +2277,11 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h The general rule for configuring a list of dictionaries is to place first the most narrow, most specific dictionary, then the more general dictionaries, finishing with a very general dictionary, like - a Snowball stemmer or simple, which + a Snowball stemmer or simple, which recognizes everything. For example, for an astronomy-specific search (astro_en configuration) one could bind token type asciiword (ASCII word) to a synonym dictionary of astronomical - terms, a general English dictionary and a Snowball English + terms, a general English dictionary and a Snowball English stemmer: @@ -2305,7 +2305,7 @@ ALTER TEXT SEARCH CONFIGURATION astro_en Stop words are words that are very common, appear in almost every document, and have no discrimination value. Therefore, they can be ignored in the context of full text searching. For example, every English text - contains words like a and the, so it is + contains words like a and the, so it is useless to store them in an index. However, stop words do affect the positions in tsvector, which in turn affect ranking: @@ -2347,7 +2347,7 @@ SELECT ts_rank_cd (to_tsvector('english','list stop words'), to_tsquery('list &a Simple Dictionary - The simple dictionary template operates by converting the + The simple dictionary template operates by converting the input token to lower case and checking it against a file of stop words. 
If it is found in the file then an empty array is returned, causing the token to be discarded. If not, the lower-cased form of the word @@ -2357,7 +2357,7 @@ SELECT ts_rank_cd (to_tsvector('english','list stop words'), to_tsquery('list &a - Here is an example of a dictionary definition using the simple + Here is an example of a dictionary definition using the simple template: @@ -2369,11 +2369,11 @@ CREATE TEXT SEARCH DICTIONARY public.simple_dict ( Here, english is the base name of a file of stop words. The file's full name will be - $SHAREDIR/tsearch_data/english.stop, - where $SHAREDIR means the + $SHAREDIR/tsearch_data/english.stop, + where $SHAREDIR means the PostgreSQL installation's shared-data directory, - often /usr/local/share/postgresql (use pg_config - --sharedir to determine it if you're not sure). + often /usr/local/share/postgresql (use pg_config + --sharedir to determine it if you're not sure). The file format is simply a list of words, one per line. Blank lines and trailing spaces are ignored, and upper case is folded to lower case, but no other processing is done @@ -2397,10 +2397,10 @@ SELECT ts_lexize('public.simple_dict','The'); - We can also choose to return NULL, instead of the lower-cased + We can also choose to return NULL, instead of the lower-cased word, if it is not found in the stop words file. This behavior is - selected by setting the dictionary's Accept parameter to - false. Continuing the example: + selected by setting the dictionary's Accept parameter to + false. Continuing the example: ALTER TEXT SEARCH DICTIONARY public.simple_dict ( Accept = false ); @@ -2418,17 +2418,17 @@ SELECT ts_lexize('public.simple_dict','The'); - With the default setting of Accept = true, - it is only useful to place a simple dictionary at the end + With the default setting of Accept = true, + it is only useful to place a simple dictionary at the end of a list of dictionaries, since it will never pass on any token to - a following dictionary. Conversely, Accept = false + a following dictionary. Conversely, Accept = false is only useful when there is at least one following dictionary. Most types of dictionaries rely on configuration files, such as files of - stop words. These files must be stored in UTF-8 encoding. + stop words. These files must be stored in UTF-8 encoding. They will be translated to the actual database encoding, if that is different, when they are read into the server. @@ -2439,8 +2439,8 @@ SELECT ts_lexize('public.simple_dict','The'); Normally, a database session will read a dictionary configuration file only once, when it is first used within the session. If you modify a configuration file and want to force existing sessions to pick up the - new contents, issue an ALTER TEXT SEARCH DICTIONARY command - on the dictionary. This can be a dummy update that doesn't + new contents, issue an ALTER TEXT SEARCH DICTIONARY command + on the dictionary. This can be a dummy update that doesn't actually change any parameter values. @@ -2457,7 +2457,7 @@ SELECT ts_lexize('public.simple_dict','The'); dictionary can be used to overcome linguistic problems, for example, to prevent an English stemmer dictionary from reducing the word Paris to pari. It is enough to have a Paris paris line in the - synonym dictionary and put it before the english_stem + synonym dictionary and put it before the english_stem dictionary. 
For example: @@ -2483,24 +2483,24 @@ SELECT * FROM ts_debug('english', 'Paris'); - The only parameter required by the synonym template is - SYNONYMS, which is the base name of its configuration file - — my_synonyms in the above example. + The only parameter required by the synonym template is + SYNONYMS, which is the base name of its configuration file + — my_synonyms in the above example. The file's full name will be - $SHAREDIR/tsearch_data/my_synonyms.syn - (where $SHAREDIR means the - PostgreSQL installation's shared-data directory). + $SHAREDIR/tsearch_data/my_synonyms.syn + (where $SHAREDIR means the + PostgreSQL installation's shared-data directory). The file format is just one line per word to be substituted, with the word followed by its synonym, separated by white space. Blank lines and trailing spaces are ignored. - The synonym template also has an optional parameter - CaseSensitive, which defaults to false. When - CaseSensitive is false, words in the synonym file + The synonym template also has an optional parameter + CaseSensitive, which defaults to false. When + CaseSensitive is false, words in the synonym file are folded to lower case, as are input tokens. When it is - true, words and tokens are not folded to lower case, + true, words and tokens are not folded to lower case, but are compared as-is. @@ -2513,7 +2513,7 @@ SELECT * FROM ts_debug('english', 'Paris'); the prefix match marker (see ). For example, suppose we have these entries in - $SHAREDIR/tsearch_data/synonym_sample.syn: + $SHAREDIR/tsearch_data/synonym_sample.syn: postgres pgsql postgresql pgsql @@ -2573,7 +2573,7 @@ mydb=# SELECT 'indexes are very useful'::tsvector @@ to_tsquery('tst','indices') Basically a thesaurus dictionary replaces all non-preferred terms by one preferred term and, optionally, preserves the original terms for indexing - as well. PostgreSQL's current implementation of the + as well. PostgreSQL's current implementation of the thesaurus dictionary is an extension of the synonym dictionary with added phrase support. A thesaurus dictionary requires a configuration file of the following format: @@ -2597,7 +2597,7 @@ more sample word(s) : more indexed word(s) recognize a word. In that case, you should remove the use of the word or teach the subdictionary about it. You can place an asterisk (*) at the beginning of an indexed word to skip applying - the subdictionary to it, but all sample words must be known + the subdictionary to it, but all sample words must be known to the subdictionary. @@ -2609,16 +2609,16 @@ more sample word(s) : more indexed word(s) Specific stop words recognized by the subdictionary cannot be - specified; instead use ? to mark the location where any - stop word can appear. For example, assuming that a and - the are stop words according to the subdictionary: + specified; instead use ? to mark the location where any + stop word can appear. For example, assuming that a and + the are stop words according to the subdictionary: ? one ? two : swsw - matches a one the two and the one a two; - both would be replaced by swsw. + matches a one the two and the one a two; + both would be replaced by swsw. @@ -2628,7 +2628,7 @@ more sample word(s) : more indexed word(s) accumulation. The thesaurus dictionary must be configured carefully. 
For example, if the thesaurus dictionary is assigned to handle only the asciiword token, then a thesaurus dictionary - definition like one 7 will not work since token type + definition like one 7 will not work since token type uint is not assigned to the thesaurus dictionary. @@ -2645,7 +2645,7 @@ more sample word(s) : more indexed word(s) Thesaurus Configuration - To define a new thesaurus dictionary, use the thesaurus + To define a new thesaurus dictionary, use the thesaurus template. For example: @@ -2667,8 +2667,8 @@ CREATE TEXT SEARCH DICTIONARY thesaurus_simple ( mythesaurus is the base name of the thesaurus configuration file. - (Its full name will be $SHAREDIR/tsearch_data/mythesaurus.ths, - where $SHAREDIR means the installation shared-data + (Its full name will be $SHAREDIR/tsearch_data/mythesaurus.ths, + where $SHAREDIR means the installation shared-data directory.) @@ -2752,7 +2752,7 @@ SELECT to_tsquery('''supernova star'''); Notice that supernova star matches supernovae stars in thesaurus_astro because we specified the english_stem stemmer in the thesaurus definition. - The stemmer removed the e and s. + The stemmer removed the e and s. @@ -2774,21 +2774,21 @@ SELECT plainto_tsquery('supernova star'); - <application>Ispell</> Dictionary + <application>Ispell</application> Dictionary - The Ispell dictionary template supports - morphological dictionaries, which can normalize many + The Ispell dictionary template supports + morphological dictionaries, which can normalize many different linguistic forms of a word into the same lexeme. For example, - an English Ispell dictionary can match all declensions and + an English Ispell dictionary can match all declensions and conjugations of the search term bank, e.g., - banking, banked, banks, - banks', and bank's. + banking, banked, banks, + banks', and bank's. The standard PostgreSQL distribution does - not include any Ispell configuration files. + not include any Ispell configuration files. Dictionaries for a large number of languages are available from Ispell. Also, some more modern dictionary file formats are supported — - To create an Ispell dictionary perform these steps: + To create an Ispell dictionary perform these steps: - download dictionary configuration files. OpenOffice - extension files have the .oxt extension. It is necessary - to extract .aff and .dic files, change - extensions to .affix and .dict. For some + download dictionary configuration files. OpenOffice + extension files have the .oxt extension. It is necessary + to extract .aff and .dic files, change + extensions to .affix and .dict. For some dictionary files it is also needed to convert characters to the UTF-8 encoding with commands (for example, for a Norwegian language dictionary): @@ -2819,7 +2819,7 @@ iconv -f ISO_8859-1 -t UTF-8 -o nn_no.dict nn_NO.dic - copy files to the $SHAREDIR/tsearch_data directory + copy files to the $SHAREDIR/tsearch_data directory @@ -2837,10 +2837,10 @@ CREATE TEXT SEARCH DICTIONARY english_hunspell ( - Here, DictFile, AffFile, and StopWords + Here, DictFile, AffFile, and StopWords specify the base names of the dictionary, affixes, and stop-words files. The stop-words file has the same format explained above for the - simple dictionary type. The format of the other files is + simple dictionary type. The format of the other files is not specified here but is available from the above-mentioned web sites. 
@@ -2851,7 +2851,7 @@ CREATE TEXT SEARCH DICTIONARY english_hunspell ( - The .affix file of Ispell has the following + The .affix file of Ispell has the following structure: prefixes @@ -2866,7 +2866,7 @@ flag T: - And the .dict file has the following structure: + And the .dict file has the following structure: lapse/ADGRS lard/DGRS @@ -2876,14 +2876,14 @@ lark/MRS - Format of the .dict file is: + Format of the .dict file is: basic_form/affix_class_name - In the .affix file every affix flag is described in the + In the .affix file every affix flag is described in the following format: condition > [-stripping_letters,] adding_affix @@ -2892,12 +2892,12 @@ condition > [-stripping_letters,] adding_affix Here, condition has a format similar to the format of regular expressions. - It can use groupings [...] and [^...]. - For example, [AEIOU]Y means that the last letter of the word - is "y" and the penultimate letter is "a", - "e", "i", "o" or "u". - [^EY] means that the last letter is neither "e" - nor "y". + It can use groupings [...] and [^...]. + For example, [AEIOU]Y means that the last letter of the word + is "y" and the penultimate letter is "a", + "e", "i", "o" or "u". + [^EY] means that the last letter is neither "e" + nor "y". @@ -2922,8 +2922,8 @@ SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk'); - MySpell format is a subset of Hunspell. - The .affix file of Hunspell has the following + MySpell format is a subset of Hunspell. + The .affix file of Hunspell has the following structure: PFX A Y 1 @@ -2970,8 +2970,8 @@ SFX T 0 est [^ey] - The .dict file looks like the .dict file of - Ispell: + The .dict file looks like the .dict file of + Ispell: larder/M lardy/RT @@ -2982,8 +2982,8 @@ largehearted - MySpell does not support compound words. - Hunspell has sophisticated support for compound words. At + MySpell does not support compound words. + Hunspell has sophisticated support for compound words. At present, PostgreSQL implements only the basic compound word operations of Hunspell. @@ -2992,18 +2992,18 @@ largehearted - <application>Snowball</> Dictionary + <application>Snowball</application> Dictionary - The Snowball dictionary template is based on a project + The Snowball dictionary template is based on a project by Martin Porter, inventor of the popular Porter's stemming algorithm for the English language. Snowball now provides stemming algorithms for many languages (see the Snowball site for more information). Each algorithm understands how to reduce common variant forms of words to a base, or stem, spelling within - its language. A Snowball dictionary requires a language + its language. A Snowball dictionary requires a language parameter to identify which stemmer to use, and optionally can specify a - stopword file name that gives a list of words to eliminate. + stopword file name that gives a list of words to eliminate. (PostgreSQL's standard stopword lists are also provided by the Snowball project.) For example, there is a built-in definition equivalent to @@ -3020,7 +3020,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( - A Snowball dictionary recognizes everything, whether + A Snowball dictionary recognizes everything, whether or not it is able to simplify the word, so it should be placed at the end of the dictionary list. It is useless to have it before any other dictionary because a token will never pass through it to @@ -3047,7 +3047,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( one used by text search functions if an explicit configuration parameter is omitted. 
It can be set in postgresql.conf, or set for an - individual session using the SET command. + individual session using the SET command. @@ -3061,7 +3061,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( As an example we will create a configuration pg, starting by duplicating the built-in - english configuration: + english configuration: CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = pg_catalog.english ); @@ -3088,7 +3088,7 @@ CREATE TEXT SEARCH DICTIONARY pg_dict ( ); - Next we register the Ispell dictionary + Next we register the Ispell dictionary english_ispell, which has its own configuration files: @@ -3101,7 +3101,7 @@ CREATE TEXT SEARCH DICTIONARY english_ispell ( Now we can set up the mappings for words in configuration - pg: + pg: ALTER TEXT SEARCH CONFIGURATION pg @@ -3133,7 +3133,7 @@ version of our software. The next step is to set the session to use the new configuration, which was - created in the public schema: + created in the public schema: => \dF @@ -3177,18 +3177,18 @@ SHOW default_text_search_config; -ts_debug( config regconfig, document text, - OUT alias text, - OUT description text, - OUT token text, - OUT dictionaries regdictionary[], - OUT dictionary regdictionary, - OUT lexemes text[]) +ts_debug( config regconfig, document text, + OUT alias text, + OUT description text, + OUT token text, + OUT dictionaries regdictionary[], + OUT dictionary regdictionary, + OUT lexemes text[]) returns setof record - ts_debug displays information about every token of + ts_debug displays information about every token of document as produced by the parser and processed by the configured dictionaries. It uses the configuration specified by config re - ts_debug returns one row for each token identified in the text + ts_debug returns one row for each token identified in the text by the parser. The columns returned are - alias text — short name of the token type + alias text — short name of the token type - description text — description of the + description text — description of the token type - token text — text of the token + token text — text of the token - dictionaries regdictionary[] — the + dictionaries regdictionary[] — the dictionaries selected by the configuration for this token type - dictionary regdictionary — the dictionary - that recognized the token, or NULL if none did + dictionary regdictionary — the dictionary + that recognized the token, or NULL if none did - lexemes text[] — the lexeme(s) produced - by the dictionary that recognized the token, or NULL if - none did; an empty array ({}) means it was recognized as a + lexemes text[] — the lexeme(s) produced + by the dictionary that recognized the token, or NULL if + none did; an empty array ({}) means it was recognized as a stop word @@ -3307,10 +3307,10 @@ SELECT * FROM ts_debug('public.english','The Brightest supernovaes'); - In this example, the word Brightest was recognized by the + In this example, the word Brightest was recognized by the parser as an ASCII word (alias asciiword). For this token type the dictionary list is - english_ispell and + english_ispell and english_stem. The word was recognized by english_ispell, which reduced it to the noun bright. 
The word supernovaes is @@ -3360,14 +3360,14 @@ FROM ts_debug('public.english','The Brightest supernovaes'); -ts_parse(parser_name text, document text, - OUT tokid integer, OUT token text) returns setof record -ts_parse(parser_oid oid, document text, - OUT tokid integer, OUT token text) returns setof record +ts_parse(parser_name text, document text, + OUT tokid integer, OUT token text) returns setof record +ts_parse(parser_oid oid, document text, + OUT tokid integer, OUT token text) returns setof record - ts_parse parses the given document + ts_parse parses the given document and returns a series of records, one for each token produced by parsing. Each record includes a tokid showing the assigned token type and a token which is the text of the @@ -3391,14 +3391,14 @@ SELECT * FROM ts_parse('default', '123 - a number'); -ts_token_type(parser_name text, OUT tokid integer, - OUT alias text, OUT description text) returns setof record -ts_token_type(parser_oid oid, OUT tokid integer, - OUT alias text, OUT description text) returns setof record +ts_token_type(parser_name text, OUT tokid integer, + OUT alias text, OUT description text) returns setof record +ts_token_type(parser_oid oid, OUT tokid integer, + OUT alias text, OUT description text) returns setof record - ts_token_type returns a table which describes each type of + ts_token_type returns a table which describes each type of token the specified parser can recognize. For each token type, the table gives the integer tokid that the parser uses to label a token of that type, the alias that names the token type @@ -3441,7 +3441,7 @@ SELECT * FROM ts_token_type('default'); Dictionary Testing - The ts_lexize function facilitates dictionary testing. + The ts_lexize function facilitates dictionary testing. @@ -3449,11 +3449,11 @@ SELECT * FROM ts_token_type('default'); -ts_lexize(dict regdictionary, token text) returns text[] +ts_lexize(dict regdictionary, token text) returns text[] - ts_lexize returns an array of lexemes if the input + ts_lexize returns an array of lexemes if the input token is known to the dictionary, or an empty array if the token is known to the dictionary but it is a stop word, or @@ -3490,9 +3490,9 @@ SELECT ts_lexize('thesaurus_astro','supernovae stars') is null; The thesaurus dictionary thesaurus_astro does know the - phrase supernovae stars, but ts_lexize + phrase supernovae stars, but ts_lexize fails since it does not parse the input text but treats it as a single - token. Use plainto_tsquery or to_tsvector to + token. Use plainto_tsquery or to_tsvector to test thesaurus dictionaries, for example: @@ -3540,7 +3540,7 @@ SELECT plainto_tsquery('supernovae stars'); Creates a GIN (Generalized Inverted Index)-based index. - The column must be of tsvector type. + The column must be of tsvector type. @@ -3560,8 +3560,8 @@ SELECT plainto_tsquery('supernovae stars'); Creates a GiST (Generalized Search Tree)-based index. - The column can be of tsvector or - tsquery type. + The column can be of tsvector or + tsquery type. @@ -3575,7 +3575,7 @@ SELECT plainto_tsquery('supernovae stars'); compressed list of matching locations. Multi-word searches can find the first match, then use the index to remove rows that are lacking additional words. GIN indexes store only the words (lexemes) of - tsvector values, and not their weight labels. Thus a table + tsvector values, and not their weight labels. Thus a table row recheck is needed when using a query that involves weights. 
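    A minimal sketch of creating such indexes (table and column names are
    hypothetical):

-- GIN index on a tsvector column
CREATE INDEX docs_tsv_gin ON docs USING GIN (tsv);

-- GiST alternative; also usable for tsquery columns
CREATE INDEX docs_tsv_gist ON docs USING GIST (tsv);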
@@ -3622,7 +3622,7 @@ SELECT plainto_tsquery('supernovae stars'); - <application>psql</> Support + <application>psql</application> Support Information about text search configuration objects can be obtained @@ -3666,7 +3666,7 @@ SELECT plainto_tsquery('supernovae stars'); \dF+ PATTERN - List text search configurations (add + for more detail). + List text search configurations (add + for more detail). => \dF russian List of text search configurations @@ -3707,7 +3707,7 @@ Parser: "pg_catalog.default" \dFd+ PATTERN - List text search dictionaries (add + for more detail). + List text search dictionaries (add + for more detail). => \dFd List of text search dictionaries @@ -3738,7 +3738,7 @@ Parser: "pg_catalog.default" \dFp+ PATTERN - List text search parsers (add + for more detail). + List text search parsers (add + for more detail). => \dFp List of text search parsers @@ -3791,7 +3791,7 @@ Parser: "pg_catalog.default" \dFt+ PATTERN - List text search templates (add + for more detail). + List text search templates (add + for more detail). => \dFt List of text search templates @@ -3830,12 +3830,12 @@ Parser: "pg_catalog.default" 264 - Position values in tsvector must be greater than 0 and + Position values in tsvector must be greater than 0 and no more than 16,383 - The match distance in a <N> - (FOLLOWED BY) tsquery operator cannot be more than + The match distance in a <N> + (FOLLOWED BY) tsquery operator cannot be more than 16,384 @@ -3851,7 +3851,7 @@ Parser: "pg_catalog.default" For comparison, the PostgreSQL 8.1 documentation contained 10,441 unique words, a total of 335,420 words, and the most - frequent word postgresql was mentioned 6,127 times in 655 + frequent word postgresql was mentioned 6,127 times in 655 documents. diff --git a/doc/src/sgml/trigger.sgml b/doc/src/sgml/trigger.sgml index f5f74af5a1..b0e160acf6 100644 --- a/doc/src/sgml/trigger.sgml +++ b/doc/src/sgml/trigger.sgml @@ -53,7 +53,7 @@ On views, triggers can be defined to execute instead of INSERT, UPDATE, or - DELETE operations. INSTEAD OF triggers + DELETE operations. INSTEAD OF triggers are fired once for each row that needs to be modified in the view. It is the responsibility of the trigger's function to perform the necessary modifications to the @@ -67,9 +67,9 @@ The trigger function must be defined before the trigger itself can be created. The trigger function must be declared as a - function taking no arguments and returning type trigger. + function taking no arguments and returning type trigger. (The trigger function receives its input through a specially-passed - TriggerData structure, not in the form of ordinary function + TriggerData structure, not in the form of ordinary function arguments.) @@ -81,8 +81,8 @@ - PostgreSQL offers both per-row - triggers and per-statement triggers. With a per-row + PostgreSQL offers both per-row + triggers and per-statement triggers. With a per-row trigger, the trigger function is invoked once for each row that is affected by the statement that fired the trigger. In contrast, a per-statement trigger is @@ -90,27 +90,27 @@ regardless of the number of rows affected by that statement. In particular, a statement that affects zero rows will still result in the execution of any applicable per-statement triggers. These - two types of triggers are sometimes called row-level - triggers and statement-level triggers, + two types of triggers are sometimes called row-level + triggers and statement-level triggers, respectively. 
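    The difference can be sketched as follows (table, function, and trigger
    names are hypothetical; both triggers call the same function):

-- fired once for every row changed by an UPDATE
CREATE TRIGGER log_each_row
    AFTER UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE log_change();

-- fired once per UPDATE statement, even if it touches no rows
CREATE TRIGGER log_each_statement
    AFTER UPDATE ON accounts
    FOR EACH STATEMENT EXECUTE PROCEDURE log_change();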
Triggers on TRUNCATE may only be defined at statement level, not per-row. Triggers are also classified according to whether they fire - before, after, or - instead of the operation. These are referred to - as BEFORE triggers, AFTER triggers, and - INSTEAD OF triggers respectively. - Statement-level BEFORE triggers naturally fire before the - statement starts to do anything, while statement-level AFTER + before, after, or + instead of the operation. These are referred to + as BEFORE triggers, AFTER triggers, and + INSTEAD OF triggers respectively. + Statement-level BEFORE triggers naturally fire before the + statement starts to do anything, while statement-level AFTER triggers fire at the very end of the statement. These types of triggers may be defined on tables, views, or foreign tables. Row-level - BEFORE triggers fire immediately before a particular row is - operated on, while row-level AFTER triggers fire at the end of - the statement (but before any statement-level AFTER triggers). + BEFORE triggers fire immediately before a particular row is + operated on, while row-level AFTER triggers fire at the end of + the statement (but before any statement-level AFTER triggers). These types of triggers may only be defined on non-partitioned tables and - foreign tables, not views. INSTEAD OF triggers may only be + foreign tables, not views. INSTEAD OF triggers may only be defined on views, and only at row level; they fire immediately as each row in the view is identified as needing to be operated on. @@ -125,31 +125,31 @@ If an INSERT contains an ON CONFLICT - DO UPDATE clause, it is possible that the effects of - row-level BEFORE INSERT triggers and + DO UPDATE clause, it is possible that the effects of + row-level BEFORE INSERT triggers and row-level BEFORE UPDATE triggers can both be applied in a way that is apparent from the final state of - the updated row, if an EXCLUDED column is referenced. - There need not be an EXCLUDED column reference for + the updated row, if an EXCLUDED column is referenced. + There need not be an EXCLUDED column reference for both sets of row-level BEFORE triggers to execute, though. The possibility of surprising outcomes should be considered when there - are both BEFORE INSERT and - BEFORE UPDATE row-level triggers + are both BEFORE INSERT and + BEFORE UPDATE row-level triggers that change a row being inserted/updated (this can be problematic even if the modifications are more or less equivalent, if they're not also idempotent). Note that statement-level UPDATE triggers are executed when ON - CONFLICT DO UPDATE is specified, regardless of whether or not + CONFLICT DO UPDATE is specified, regardless of whether or not any rows were affected by the UPDATE (and regardless of whether the alternative UPDATE path was ever taken). An INSERT with an - ON CONFLICT DO UPDATE clause will execute - statement-level BEFORE INSERT - triggers first, then statement-level BEFORE + ON CONFLICT DO UPDATE clause will execute + statement-level BEFORE INSERT + triggers first, then statement-level BEFORE UPDATE triggers, followed by statement-level - AFTER UPDATE triggers and finally - statement-level AFTER INSERT + AFTER UPDATE triggers and finally + statement-level AFTER INSERT triggers. @@ -164,7 +164,7 @@ - It can return NULL to skip the operation for the + It can return NULL to skip the operation for the current row. This instructs the executor to not perform the row-level operation that invoked the trigger (the insertion, modification, or deletion of a particular table row). 
@@ -182,7 +182,7 @@ - A row-level BEFORE trigger that does not intend to cause + A row-level BEFORE trigger that does not intend to cause either of these behaviors must be careful to return as its result the same row that was passed in (that is, the NEW row for INSERT and UPDATE @@ -191,8 +191,8 @@ - A row-level INSTEAD OF trigger should either return - NULL to indicate that it did not modify any data from + A row-level INSTEAD OF trigger should either return + NULL to indicate that it did not modify any data from the view's underlying base tables, or it should return the view row that was passed in (the NEW row for INSERT and UPDATE @@ -201,66 +201,66 @@ used to signal that the trigger performed the necessary data modifications in the view. This will cause the count of the number of rows affected by the command to be incremented. For - INSERT and UPDATE operations, the trigger - may modify the NEW row before returning it. This will + INSERT and UPDATE operations, the trigger + may modify the NEW row before returning it. This will change the data returned by - INSERT RETURNING or UPDATE RETURNING, + INSERT RETURNING or UPDATE RETURNING, and is useful when the view will not show exactly the same data that was provided. The return value is ignored for row-level triggers fired after an - operation, and so they can return NULL. + operation, and so they can return NULL. If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by - trigger name. In the case of BEFORE and - INSTEAD OF triggers, the possibly-modified row returned by + trigger name. In the case of BEFORE and + INSTEAD OF triggers, the possibly-modified row returned by each trigger becomes the input to the next trigger. If any - BEFORE or INSTEAD OF trigger returns - NULL, the operation is abandoned for that row and subsequent + BEFORE or INSTEAD OF trigger returns + NULL, the operation is abandoned for that row and subsequent triggers are not fired (for that row). - A trigger definition can also specify a Boolean WHEN + A trigger definition can also specify a Boolean WHEN condition, which will be tested to see whether the trigger should - be fired. In row-level triggers the WHEN condition can + be fired. In row-level triggers the WHEN condition can examine the old and/or new values of columns of the row. (Statement-level - triggers can also have WHEN conditions, although the feature - is not so useful for them.) In a BEFORE trigger, the - WHEN + triggers can also have WHEN conditions, although the feature + is not so useful for them.) In a BEFORE trigger, the + WHEN condition is evaluated just before the function is or would be executed, - so using WHEN is not materially different from testing the + so using WHEN is not materially different from testing the same condition at the beginning of the trigger function. However, in - an AFTER trigger, the WHEN condition is evaluated + an AFTER trigger, the WHEN condition is evaluated just after the row update occurs, and it determines whether an event is queued to fire the trigger at the end of statement. So when an - AFTER trigger's - WHEN condition does not return true, it is not necessary + AFTER trigger's + WHEN condition does not return true, it is not necessary to queue an event nor to re-fetch the row at end of statement. This can result in significant speedups in statements that modify many rows, if the trigger only needs to be fired for a few of the rows. - INSTEAD OF triggers do not support - WHEN conditions. 
+ INSTEAD OF triggers do not support + WHEN conditions. - Typically, row-level BEFORE triggers are used for checking or + Typically, row-level BEFORE triggers are used for checking or modifying the data that will be inserted or updated. For example, - a BEFORE trigger might be used to insert the current time into a + a BEFORE trigger might be used to insert the current time into a timestamp column, or to check that two elements of the row are - consistent. Row-level AFTER triggers are most sensibly + consistent. Row-level AFTER triggers are most sensibly used to propagate the updates to other tables, or make consistency checks against other tables. The reason for this division of labor is - that an AFTER trigger can be certain it is seeing the final - value of the row, while a BEFORE trigger cannot; there might - be other BEFORE triggers firing after it. If you have no - specific reason to make a trigger BEFORE or - AFTER, the BEFORE case is more efficient, since + that an AFTER trigger can be certain it is seeing the final + value of the row, while a BEFORE trigger cannot; there might + be other BEFORE triggers firing after it. If you have no + specific reason to make a trigger BEFORE or + AFTER, the BEFORE case is more efficient, since the information about the operation doesn't have to be saved until end of statement. @@ -279,8 +279,8 @@ - trigger - arguments for trigger functions + trigger + arguments for trigger functions When a trigger is being defined, arguments can be specified for it. The purpose of including arguments in the @@ -303,7 +303,7 @@ for making the trigger input data available to the trigger function. This input data includes the type of trigger event (e.g., INSERT or UPDATE) as well as any - arguments that were listed in CREATE TRIGGER. + arguments that were listed in CREATE TRIGGER. For a row-level trigger, the input data also includes the NEW row for INSERT and UPDATE triggers, and/or the OLD row @@ -313,9 +313,9 @@ By default, statement-level triggers do not have any way to examine the individual row(s) modified by the statement. But an AFTER - STATEMENT trigger can request that transition tables + STATEMENT trigger can request that transition tables be created to make the sets of affected rows available to the trigger. - AFTER ROW triggers can also request transition tables, so + AFTER ROW triggers can also request transition tables, so that they can see the total changes in the table as well as the change in the individual row they are currently being fired for. The method for examining the transition tables again depends on the programming language @@ -343,7 +343,7 @@ Statement-level triggers follow simple visibility rules: none of the changes made by a statement are visible to statement-level BEFORE triggers, whereas all - modifications are visible to statement-level AFTER + modifications are visible to statement-level AFTER triggers. @@ -352,14 +352,14 @@ The data change (insertion, update, or deletion) causing the trigger to fire is naturally not visible - to SQL commands executed in a row-level BEFORE trigger, + to SQL commands executed in a row-level BEFORE trigger, because it hasn't happened yet. - However, SQL commands executed in a row-level BEFORE + However, SQL commands executed in a row-level BEFORE trigger will see the effects of data changes for rows previously processed in the same outer command. 
This requires caution, since the ordering of these @@ -370,15 +370,15 @@ - Similarly, a row-level INSTEAD OF trigger will see the + Similarly, a row-level INSTEAD OF trigger will see the effects of data changes made by previous firings of INSTEAD - OF triggers in the same outer command. + OF triggers in the same outer command. - When a row-level AFTER trigger is fired, all data + When a row-level AFTER trigger is fired, all data changes made by the outer command are already complete, and are visible to the invoked trigger function. @@ -390,8 +390,8 @@ If your trigger function is written in any of the standard procedural languages, then the above statements apply only if the function is - declared VOLATILE. Functions that are declared - STABLE or IMMUTABLE will not see changes made by + declared VOLATILE. Functions that are declared + STABLE or IMMUTABLE will not see changes made by the calling command in any case. @@ -426,14 +426,14 @@ - Trigger functions must use the version 1 function manager + Trigger functions must use the version 1 function manager interface. When a function is called by the trigger manager, it is not passed - any normal arguments, but it is passed a context - pointer pointing to a TriggerData structure. C + any normal arguments, but it is passed a context + pointer pointing to a TriggerData structure. C functions can check whether they were called from the trigger manager or not by executing the macro: @@ -444,10 +444,10 @@ CALLED_AS_TRIGGER(fcinfo) ((fcinfo)->context != NULL && IsA((fcinfo)->context, TriggerData)) If this returns true, then it is safe to cast - fcinfo->context to type TriggerData + fcinfo->context to type TriggerData * and make use of the pointed-to - TriggerData structure. The function must - not alter the TriggerData + TriggerData structure. The function must + not alter the TriggerData structure or any of the data it points to. @@ -475,7 +475,7 @@ typedef struct TriggerData - type + type Always T_TriggerData. @@ -484,7 +484,7 @@ typedef struct TriggerData - tg_event + tg_event Describes the event for which the function is called. You can use the @@ -577,24 +577,24 @@ typedef struct TriggerData - tg_relation + tg_relation A pointer to a structure describing the relation that the trigger fired for. - Look at utils/rel.h for details about + Look at utils/rel.h for details about this structure. The most interesting things are - tg_relation->rd_att (descriptor of the relation - tuples) and tg_relation->rd_rel->relname - (relation name; the type is not char* but - NameData; use - SPI_getrelname(tg_relation) to get a char* if you + tg_relation->rd_att (descriptor of the relation + tuples) and tg_relation->rd_rel->relname + (relation name; the type is not char* but + NameData; use + SPI_getrelname(tg_relation) to get a char* if you need a copy of the name). - tg_trigtuple + tg_trigtuple A pointer to the row for which the trigger was fired. This is @@ -610,11 +610,11 @@ typedef struct TriggerData - tg_newtuple + tg_newtuple A pointer to the new version of the row, if the trigger was - fired for an UPDATE, and NULL if + fired for an UPDATE, and NULL if it is for an INSERT or a DELETE. 
This is what you have to return from the function if the event is an UPDATE @@ -626,11 +626,11 @@ typedef struct TriggerData - tg_trigger + tg_trigger - A pointer to a structure of type Trigger, - defined in utils/reltrigger.h: + A pointer to a structure of type Trigger, + defined in utils/reltrigger.h: typedef struct Trigger @@ -656,9 +656,9 @@ typedef struct Trigger } Trigger; - where tgname is the trigger's name, - tgnargs is the number of arguments in - tgargs, and tgargs is an array of + where tgname is the trigger's name, + tgnargs is the number of arguments in + tgargs, and tgargs is an array of pointers to the arguments specified in the CREATE TRIGGER statement. The other members are for internal use only. @@ -667,7 +667,7 @@ typedef struct Trigger - tg_trigtuplebuf + tg_trigtuplebuf The buffer containing tg_trigtuple, or InvalidBuffer if there @@ -677,7 +677,7 @@ typedef struct Trigger - tg_newtuplebuf + tg_newtuplebuf The buffer containing tg_newtuple, or InvalidBuffer if there @@ -687,24 +687,24 @@ typedef struct Trigger - tg_oldtable + tg_oldtable A pointer to a structure of type Tuplestorestate containing zero or more rows in the format specified by - tg_relation, or a NULL pointer + tg_relation, or a NULL pointer if there is no OLD TABLE transition relation. - tg_newtable + tg_newtable A pointer to a structure of type Tuplestorestate containing zero or more rows in the format specified by - tg_relation, or a NULL pointer + tg_relation, or a NULL pointer if there is no NEW TABLE transition relation. @@ -720,10 +720,10 @@ typedef struct Trigger A trigger function must return either a - HeapTuple pointer or a NULL pointer - (not an SQL null value, that is, do not set isNull true). + HeapTuple pointer or a NULL pointer + (not an SQL null value, that is, do not set isNull true). Be careful to return either - tg_trigtuple or tg_newtuple, + tg_trigtuple or tg_newtuple, as appropriate, if you don't want to modify the row being operated on. @@ -738,10 +738,10 @@ typedef struct Trigger - The function trigf reports the number of rows in the - table ttest and skips the actual operation if the + The function trigf reports the number of rows in the + table ttest and skips the actual operation if the command attempts to insert a null value into the column - x. (So the trigger acts as a not-null constraint but + x. (So the trigger acts as a not-null constraint but doesn't abort the transaction.) @@ -838,7 +838,7 @@ trigf(PG_FUNCTION_ARGS) linkend="dfunc">), declare the function and the triggers: CREATE FUNCTION trigf() RETURNS trigger - AS 'filename' + AS 'filename' LANGUAGE C; CREATE TRIGGER tbefore BEFORE INSERT OR UPDATE OR DELETE ON ttest diff --git a/doc/src/sgml/tsm-system-rows.sgml b/doc/src/sgml/tsm-system-rows.sgml index 93aa536664..8504ee1281 100644 --- a/doc/src/sgml/tsm-system-rows.sgml +++ b/doc/src/sgml/tsm-system-rows.sgml @@ -8,9 +8,9 @@ - The tsm_system_rows module provides the table sampling method + The tsm_system_rows module provides the table sampling method SYSTEM_ROWS, which can be used in - the TABLESAMPLE clause of a + the TABLESAMPLE clause of a command. @@ -38,7 +38,7 @@ Here is an example of selecting a sample of a table with - SYSTEM_ROWS. First install the extension: + SYSTEM_ROWS. 
First install the extension: @@ -55,7 +55,7 @@ SELECT * FROM my_table TABLESAMPLE SYSTEM_ROWS(100); This command will return a sample of 100 rows from the - table my_table (unless the table does not have 100 + table my_table (unless the table does not have 100 visible rows, in which case all its rows are returned). diff --git a/doc/src/sgml/tsm-system-time.sgml b/doc/src/sgml/tsm-system-time.sgml index 3f8ff1a026..525292bb7c 100644 --- a/doc/src/sgml/tsm-system-time.sgml +++ b/doc/src/sgml/tsm-system-time.sgml @@ -8,9 +8,9 @@ - The tsm_system_time module provides the table sampling method + The tsm_system_time module provides the table sampling method SYSTEM_TIME, which can be used in - the TABLESAMPLE clause of a + the TABLESAMPLE clause of a command. @@ -40,7 +40,7 @@ Here is an example of selecting a sample of a table with - SYSTEM_TIME. First install the extension: + SYSTEM_TIME. First install the extension: @@ -56,7 +56,7 @@ SELECT * FROM my_table TABLESAMPLE SYSTEM_TIME(1000); - This command will return as large a sample of my_table as + This command will return as large a sample of my_table as it can read in 1 second (1000 milliseconds). Of course, if the whole table can be read in under 1 second, all its rows will be returned. diff --git a/doc/src/sgml/typeconv.sgml b/doc/src/sgml/typeconv.sgml index 63d41f03f3..5c99e3adaf 100644 --- a/doc/src/sgml/typeconv.sgml +++ b/doc/src/sgml/typeconv.sgml @@ -40,7 +40,7 @@ has an associated data type which determines its behavior and allowed usage. PostgreSQL has an extensible type system that is more general and flexible than other SQL implementations. Hence, most type conversion behavior in PostgreSQL -is governed by general rules rather than by ad hoc +is governed by general rules rather than by ad hoc heuristics. This allows the use of mixed-type expressions even with user-defined types. @@ -124,11 +124,11 @@ with, and perhaps converted to, the types of the target columns. Since all query results from a unionized SELECT statement must appear in a single set of columns, the types of the results of each -SELECT clause must be matched up and converted to a uniform set. -Similarly, the result expressions of a CASE construct must be -converted to a common type so that the CASE expression as a whole -has a known output type. The same holds for ARRAY constructs, -and for the GREATEST and LEAST functions. +SELECT clause must be matched up and converted to a uniform set. +Similarly, the result expressions of a CASE construct must be +converted to a common type so that the CASE expression as a whole +has a known output type. The same holds for ARRAY constructs, +and for the GREATEST and LEAST functions. @@ -345,7 +345,7 @@ Some examples follow. Factorial Operator Type Resolution -There is only one factorial operator (postfix !) +There is only one factorial operator (postfix !) defined in the standard catalog, and it takes an argument of type bigint. The scanner assigns an initial type of integer to the argument @@ -423,11 +423,11 @@ type to resolve the unknown-type literals as. The PostgreSQL operator catalog has several -entries for the prefix operator @, all of which implement +entries for the prefix operator @, all of which implement absolute-value operations for various numeric data types. One of these entries is for type float8, which is the preferred type in the numeric category. 
Therefore, PostgreSQL -will use that entry when faced with an unknown input: +will use that entry when faced with an unknown input: SELECT @ '-4.5' AS "abs"; abs @@ -446,9 +446,9 @@ ERROR: "-4.5e500" is out of range for type double precision -On the other hand, the prefix operator ~ (bitwise negation) +On the other hand, the prefix operator ~ (bitwise negation) is defined only for integer data types, not for float8. So, if we -try a similar case with ~, we get: +try a similar case with ~, we get: SELECT ~ '20' AS "negation"; @@ -457,7 +457,7 @@ HINT: Could not choose a best candidate operator. You might need to add explicit type casts. This happens because the system cannot decide which of the several -possible ~ operators should be preferred. We can help +possible ~ operators should be preferred. We can help it out with an explicit cast: SELECT ~ CAST('20' AS int8) AS "negation"; @@ -485,10 +485,10 @@ SELECT array[1,2] <@ '{1,2,3}' as "is subset"; (1 row) The PostgreSQL operator catalog has several -entries for the infix operator <@, but the only two that +entries for the infix operator <@, but the only two that could possibly accept an integer array on the left-hand side are -array inclusion (anyarray <@ anyarray) -and range inclusion (anyelement <@ anyrange). +array inclusion (anyarray <@ anyarray) +and range inclusion (anyelement <@ anyrange). Since none of these polymorphic pseudo-types (see ) are considered preferred, the parser cannot resolve the ambiguity on that basis. @@ -518,19 +518,19 @@ CREATE TABLE mytable (val mytext); SELECT * FROM mytable WHERE val = 'foo'; This query will not use the custom operator. The parser will first see if -there is a mytext = mytext operator +there is a mytext = mytext operator (), which there is not; -then it will consider the domain's base type text, and see if -there is a text = text operator +then it will consider the domain's base type text, and see if +there is a text = text operator (), which there is; -so it resolves the unknown-type literal as text and -uses the text = text operator. +so it resolves the unknown-type literal as text and +uses the text = text operator. The only way to get the custom operator to be used is to explicitly cast the literal: SELECT * FROM mytable WHERE val = text 'foo'; -so that the mytext = text operator is found +so that the mytext = text operator is found immediately according to the exact-match rule. If the best-match rules are reached, they actively discriminate against operators on domain types. If they did not, such an operator would create too many ambiguous-operator @@ -580,8 +580,8 @@ search path position. -If a function is declared with a VARIADIC array parameter, and -the call does not use the VARIADIC keyword, then the function +If a function is declared with a VARIADIC array parameter, and +the call does not use the VARIADIC keyword, then the function is treated as if the array parameter were replaced by one or more occurrences of its element type, as needed to match the call. After such expansion the function might have effective argument types identical to some non-variadic @@ -599,7 +599,7 @@ search path is used. 
If there are two or more such functions in the same schema with identical parameter types in the non-defaulted positions (which is possible if they have different sets of defaultable parameters), the system will not be able to determine which to prefer, and so an ambiguous -function call error will result if no better match to the call can be +function call error will result if no better match to the call can be found. @@ -626,7 +626,7 @@ an unknown-type literal, or a type that is binary-coercible to the named data type, or a type that could be converted to the named data type by applying that type's I/O functions (that is, the conversion is either to or from one of the standard string types). When these conditions are met, -the function call is treated as a form of CAST specification. +the function call is treated as a form of CAST specification. The reason for this step is to support function-style cast specifications @@ -709,7 +709,7 @@ Otherwise, fail. -Note that the best match rules are identical for operator and +Note that the best match rules are identical for operator and function type resolution. Some examples follow. @@ -790,7 +790,7 @@ SELECT substr(CAST (varchar '1234' AS text), 3); -The parser learns from the pg_cast catalog that +The parser learns from the pg_cast catalog that text and varchar are binary-compatible, meaning that one can be passed to a function that accepts the other without doing any physical conversion. Therefore, no @@ -809,8 +809,8 @@ HINT: No function matches the given name and argument types. You might need to add explicit type casts. -This does not work because integer does not have an implicit cast -to text. An explicit cast will work, however: +This does not work because integer does not have an implicit cast +to text. An explicit cast will work, however: SELECT substr(CAST (1234 AS text), 3); @@ -845,8 +845,8 @@ Check for an exact match with the target. Otherwise, try to convert the expression to the target type. This is possible -if an assignment cast between the two types is registered in the -pg_cast catalog (see ). +if an assignment cast between the two types is registered in the +pg_cast catalog (see ). Alternatively, if the expression is an unknown-type literal, the contents of the literal string will be fed to the input conversion routine for the target type. @@ -857,12 +857,12 @@ type. Check to see if there is a sizing cast for the target type. A sizing cast is a cast from that type to itself. If one is found in the -pg_cast catalog, apply it to the expression before storing +pg_cast catalog, apply it to the expression before storing into the destination column. The implementation function for such a cast always takes an extra parameter of type integer, which receives -the destination column's atttypmod value (typically its -declared length, although the interpretation of atttypmod -varies for different data types), and it may take a third boolean +the destination column's atttypmod value (typically its +declared length, although the interpretation of atttypmod +varies for different data types), and it may take a third boolean parameter that says whether the cast is explicit or implicit. The cast function is responsible for applying any length-dependent semantics such as size @@ -896,11 +896,11 @@ What has really happened here is that the two unknown literals are resolved to text by default, allowing the || operator to be resolved as text concatenation. 
Then the text result of the operator is converted to bpchar (blank-padded -char, the internal name of the character data type) to match the target +char, the internal name of the character data type) to match the target column type. (Since the conversion from text to bpchar is binary-coercible, this conversion does not insert any real function call.) Finally, the sizing function -bpchar(bpchar, integer, boolean) is found in the system catalog +bpchar(bpchar, integer, boolean) is found in the system catalog and applied to the operator's result and the stored column length. This type-specific function performs the required length check and addition of padding spaces. @@ -942,13 +942,13 @@ padding spaces. -SQL UNION constructs must match up possibly dissimilar +SQL UNION constructs must match up possibly dissimilar types to become a single result set. The resolution algorithm is applied separately to each output column of a union query. The -INTERSECT and EXCEPT constructs resolve -dissimilar types in the same way as UNION. The -CASE, ARRAY, VALUES, -GREATEST and LEAST constructs use the identical +INTERSECT and EXCEPT constructs resolve +dissimilar types in the same way as UNION. The +CASE, ARRAY, VALUES, +GREATEST and LEAST constructs use the identical algorithm to match up their component expressions and select a result data type. @@ -972,7 +972,7 @@ domain's base type for all subsequent steps. Somewhat like the treatment of domain inputs for operators and functions, this behavior allows a domain type to be preserved through - a UNION or similar construct, so long as the user is + a UNION or similar construct, so long as the user is careful to ensure that all inputs are implicitly or explicitly of that exact type. Otherwise the domain's base type will be preferred. @@ -1053,9 +1053,9 @@ SELECT 1.2 AS "numeric" UNION SELECT 1; 1.2 (2 rows) -The literal 1.2 is of type numeric, -and the integer value 1 can be cast implicitly to -numeric, so that type is used. +The literal 1.2 is of type numeric, +and the integer value 1 can be cast implicitly to +numeric, so that type is used. @@ -1072,9 +1072,9 @@ SELECT 1 AS "real" UNION SELECT CAST('2.2' AS REAL); 2.2 (2 rows) -Here, since type real cannot be implicitly cast to integer, -but integer can be implicitly cast to real, the union -result type is resolved as real. +Here, since type real cannot be implicitly cast to integer, +but integer can be implicitly cast to real, the union +result type is resolved as real. @@ -1089,38 +1089,38 @@ result type is resolved as real. The rules given in the preceding sections will result in assignment -of non-unknown data types to all expressions in a SQL query, +of non-unknown data types to all expressions in a SQL query, except for unspecified-type literals that appear as simple output -columns of a SELECT command. For example, in +columns of a SELECT command. For example, in SELECT 'Hello World'; there is nothing to identify what type the string literal should be -taken as. In this situation PostgreSQL will fall back -to resolving the literal's type as text. +taken as. In this situation PostgreSQL will fall back +to resolving the literal's type as text. -When the SELECT is one arm of a UNION -(or INTERSECT or EXCEPT) construct, or when it -appears within INSERT ... SELECT, this rule is not applied +When the SELECT is one arm of a UNION +(or INTERSECT or EXCEPT) construct, or when it +appears within INSERT ... SELECT, this rule is not applied since rules given in preceding sections take precedence. 
The type of an -unspecified-type literal can be taken from the other UNION arm +unspecified-type literal can be taken from the other UNION arm in the first case, or from the destination column in the second case. -RETURNING lists are treated the same as SELECT +RETURNING lists are treated the same as SELECT output lists for this purpose. - Prior to PostgreSQL 10, this rule did not exist, and - unspecified-type literals in a SELECT output list were - left as type unknown. That had assorted bad consequences, + Prior to PostgreSQL 10, this rule did not exist, and + unspecified-type literals in a SELECT output list were + left as type unknown. That had assorted bad consequences, so it's been changed. diff --git a/doc/src/sgml/unaccent.sgml b/doc/src/sgml/unaccent.sgml index d5cf98f6c1..a7f5f53041 100644 --- a/doc/src/sgml/unaccent.sgml +++ b/doc/src/sgml/unaccent.sgml @@ -8,7 +8,7 @@ - unaccent is a text search dictionary that removes accents + unaccent is a text search dictionary that removes accents (diacritic signs) from lexemes. It's a filtering dictionary, which means its output is always passed to the next dictionary (if any), unlike the normal @@ -17,7 +17,7 @@ - The current implementation of unaccent cannot be used as a + The current implementation of unaccent cannot be used as a normalizing dictionary for the thesaurus dictionary. @@ -25,17 +25,17 @@ Configuration - An unaccent dictionary accepts the following options: + An unaccent dictionary accepts the following options: - RULES is the base name of the file containing the list of + RULES is the base name of the file containing the list of translation rules. This file must be stored in - $SHAREDIR/tsearch_data/ (where $SHAREDIR means - the PostgreSQL installation's shared-data directory). - Its name must end in .rules (which is not to be included in - the RULES parameter). + $SHAREDIR/tsearch_data/ (where $SHAREDIR means + the PostgreSQL installation's shared-data directory). + Its name must end in .rules (which is not to be included in + the RULES parameter). @@ -72,15 +72,15 @@ - Actually, each character can be any string not containing - whitespace, so unaccent dictionaries could be used for + Actually, each character can be any string not containing + whitespace, so unaccent dictionaries could be used for other sorts of substring substitutions besides diacritic removal. - As with other PostgreSQL text search configuration files, + As with other PostgreSQL text search configuration files, the rules file must be stored in UTF-8 encoding. The data is automatically translated into the current database's encoding when loaded. Any lines containing untranslatable characters are silently @@ -92,8 +92,8 @@ A more complete example, which is directly useful for most European - languages, can be found in unaccent.rules, which is installed - in $SHAREDIR/tsearch_data/ when the unaccent + languages, can be found in unaccent.rules, which is installed + in $SHAREDIR/tsearch_data/ when the unaccent module is installed. This rules file translates characters with accents to the same characters without accents, and it also expands ligatures into the equivalent series of simple characters (for example, Æ to @@ -105,11 +105,11 @@ Usage - Installing the unaccent extension creates a text - search template unaccent and a dictionary unaccent - based on it. The unaccent dictionary has the default - parameter setting RULES='unaccent', which makes it immediately - usable with the standard unaccent.rules file. 
+ Installing the unaccent extension creates a text + search template unaccent and a dictionary unaccent + based on it. The unaccent dictionary has the default + parameter setting RULES='unaccent', which makes it immediately + usable with the standard unaccent.rules file. If you wish, you can alter the parameter, for example @@ -132,7 +132,7 @@ mydb=# select ts_lexize('unaccent','Hôtel'); Here is an example showing how to insert the - unaccent dictionary into a text search configuration: + unaccent dictionary into a text search configuration: mydb=# CREATE TEXT SEARCH CONFIGURATION fr ( COPY = french ); mydb=# ALTER TEXT SEARCH CONFIGURATION fr @@ -163,9 +163,9 @@ mydb=# select ts_headline('fr','Hôtel de la Mer',to_tsquery('fr','Hotels') Functions - The unaccent() function removes accents (diacritic signs) from + The unaccent() function removes accents (diacritic signs) from a given string. Basically, it's a wrapper around - unaccent-type dictionaries, but it can be used outside normal + unaccent-type dictionaries, but it can be used outside normal text search contexts. @@ -179,7 +179,7 @@ unaccent(dictionary, If the dictionary argument is - omitted, unaccent is assumed. + omitted, unaccent is assumed. diff --git a/doc/src/sgml/user-manag.sgml b/doc/src/sgml/user-manag.sgml index 46989f0169..2416bfd03d 100644 --- a/doc/src/sgml/user-manag.sgml +++ b/doc/src/sgml/user-manag.sgml @@ -5,18 +5,18 @@ PostgreSQL manages database access permissions - using the concept of roles. A role can be thought of as + using the concept of roles. A role can be thought of as either a database user, or a group of database users, depending on how the role is set up. Roles can own database objects (for example, tables and functions) and can assign privileges on those objects to other roles to control who has access to which objects. Furthermore, it is possible - to grant membership in a role to another role, thus + to grant membership in a role to another role, thus allowing the member role to use privileges assigned to another role. - The concept of roles subsumes the concepts of users and - groups. In PostgreSQL versions + The concept of roles subsumes the concepts of users and + groups. In PostgreSQL versions before 8.1, users and groups were distinct kinds of entities, but now there are only roles. Any role can act as a user, a group, or both. @@ -59,7 +59,7 @@ CREATE ROLE name; name follows the rules for SQL identifiers: either unadorned without special characters, or double-quoted. (In practice, you will usually want to add additional - options, such as LOGIN, to the command. More details appear + options, such as LOGIN, to the command. More details appear below.) To remove an existing role, use the analogous command: @@ -87,19 +87,19 @@ dropuser name - To determine the set of existing roles, examine the pg_roles + To determine the set of existing roles, examine the pg_roles system catalog, for example SELECT rolname FROM pg_roles; - The program's \du meta-command + The program's \du meta-command is also useful for listing the existing roles. In order to bootstrap the database system, a freshly initialized system always contains one predefined role. This role is always - a superuser, and by default (unless altered when running + a superuser, and by default (unless altered when running initdb) it will have the same name as the operating system user that initialized the database cluster. 
Customarily, this role will be named @@ -118,7 +118,7 @@ SELECT rolname FROM pg_roles; command line option to indicate the role to connect as. Many applications assume the name of the current operating system user by default (including - createuser and psql). Therefore it + createuser and psql). Therefore it is often convenient to maintain a naming correspondence between roles and operating system users. @@ -145,27 +145,27 @@ SELECT rolname FROM pg_roles; - login privilegelogin privilege + login privilegelogin privilege - Only roles that have the LOGIN attribute can be used + Only roles that have the LOGIN attribute can be used as the initial role name for a database connection. A role with - the LOGIN attribute can be considered the same - as a database user. To create a role with login privilege, + the LOGIN attribute can be considered the same + as a database user. To create a role with login privilege, use either: CREATE ROLE name LOGIN; CREATE USER name; - (CREATE USER is equivalent to CREATE ROLE - except that CREATE USER assumes LOGIN by - default, while CREATE ROLE does not.) + (CREATE USER is equivalent to CREATE ROLE + except that CREATE USER assumes LOGIN by + default, while CREATE ROLE does not.) - superuser statussuperuser + superuser statussuperuser A database superuser bypasses all permission checks, except the right @@ -179,7 +179,7 @@ CREATE USER name; - database creationdatabaseprivilege to create + database creationdatabaseprivilege to create A role must be explicitly given permission to create databases @@ -191,30 +191,30 @@ CREATE USER name; - role creationroleprivilege to create + role creationroleprivilege to create A role must be explicitly given permission to create more roles (except for superusers, since those bypass all permission checks). To create such a role, use CREATE ROLE name CREATEROLE. - A role with CREATEROLE privilege can alter and drop + A role with CREATEROLE privilege can alter and drop other roles, too, as well as grant or revoke membership in them. However, to create, alter, drop, or change membership of a superuser role, superuser status is required; - CREATEROLE is insufficient for that. + CREATEROLE is insufficient for that. - initiating replicationroleprivilege to initiate replication + initiating replicationroleprivilege to initiate replication A role must explicitly be given permission to initiate streaming replication (except for superusers, since those bypass all permission checks). A role used for streaming replication must - have LOGIN permission as well. To create such a role, use + have LOGIN permission as well. To create such a role, use CREATE ROLE name REPLICATION LOGIN. @@ -222,32 +222,32 @@ CREATE USER name; - passwordpassword + passwordpassword A password is only significant if the client authentication method requires the user to supply a password when connecting - to the database. The and + authentication methods make use of passwords. Database passwords are separate from operating system passwords. Specify a password upon role creation with CREATE ROLE - name PASSWORD 'string'. + name PASSWORD 'string'. A role's attributes can be modified after creation with - ALTER ROLE.ALTER ROLE + ALTER ROLE.ALTER ROLE See the reference pages for the and commands for details. 
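As a hedged illustration of combining these attributes (the role name app_admin and the password strings are placeholders, not recommendations), one might create a login role that can create databases and other roles, and later adjust it with ALTER ROLE:

CREATE ROLE app_admin LOGIN CREATEDB CREATEROLE PASSWORD 'initial-secret';
ALTER ROLE app_admin PASSWORD 'rotated-secret';   -- rotate the password later
ALTER ROLE app_admin NOCREATEROLE;                -- attributes can be revoked as well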
- It is good practice to create a role that has the CREATEDB - and CREATEROLE privileges, but is not a superuser, and then + It is good practice to create a role that has the CREATEDB + and CREATEROLE privileges, but is not a superuser, and then use this role for all routine management of databases and roles. This approach avoids the dangers of operating as a superuser for tasks that do not really require it. @@ -269,9 +269,9 @@ ALTER ROLE myname SET enable_indexscan TO off; just before the session started. You can still alter this setting during the session; it will only be the default. To remove a role-specific default setting, use - ALTER ROLE rolename RESET varname. + ALTER ROLE rolename RESET varname. Note that role-specific defaults attached to roles without - LOGIN privilege are fairly useless, since they will never + LOGIN privilege are fairly useless, since they will never be invoked. @@ -280,7 +280,7 @@ ALTER ROLE myname SET enable_indexscan TO off; Role Membership - rolemembership in + rolemembership in @@ -288,7 +288,7 @@ ALTER ROLE myname SET enable_indexscan TO off; management of privileges: that way, privileges can be granted to, or revoked from, a group as a whole. In PostgreSQL this is done by creating a role that represents the group, and then - granting membership in the group role to individual user + granting membership in the group role to individual user roles. @@ -297,7 +297,7 @@ ALTER ROLE myname SET enable_indexscan TO off; CREATE ROLE name; - Typically a role being used as a group would not have the LOGIN + Typically a role being used as a group would not have the LOGIN attribute, though you can set it if you wish. @@ -320,11 +320,11 @@ REVOKE group_role FROM role1 to - temporarily become the group role. In this state, the + temporarily become the group role. In this state, the database session has access to the privileges of the group role rather than the original login role, and any database objects created are considered owned by the group role not the login role. Second, member - roles that have the INHERIT attribute automatically have use + roles that have the INHERIT attribute automatically have use of the privileges of roles of which they are members, including any privileges inherited by those roles. As an example, suppose we have done: @@ -335,25 +335,25 @@ CREATE ROLE wheel NOINHERIT; GRANT admin TO joe; GRANT wheel TO admin; - Immediately after connecting as role joe, a database - session will have use of privileges granted directly to joe - plus any privileges granted to admin, because joe - inherits admin's privileges. However, privileges - granted to wheel are not available, because even though - joe is indirectly a member of wheel, the - membership is via admin which has the NOINHERIT + Immediately after connecting as role joe, a database + session will have use of privileges granted directly to joe + plus any privileges granted to admin, because joe + inherits admin's privileges. However, privileges + granted to wheel are not available, because even though + joe is indirectly a member of wheel, the + membership is via admin which has the NOINHERIT attribute. After: SET ROLE admin; the session would have use of only those privileges granted to - admin, and not those granted to joe. After: + admin, and not those granted to joe. After: SET ROLE wheel; the session would have use of only those privileges granted to - wheel, and not those granted to either joe - or admin. 
The original privilege state can be restored + wheel, and not those granted to either joe + or admin. The original privilege state can be restored with any of: SET ROLE joe; @@ -364,10 +364,10 @@ RESET ROLE; - The SET ROLE command always allows selecting any role + The SET ROLE command always allows selecting any role that the original login role is directly or indirectly a member of. Thus, in the above example, it is not necessary to become - admin before becoming wheel. + admin before becoming wheel. @@ -376,26 +376,26 @@ RESET ROLE; In the SQL standard, there is a clear distinction between users and roles, and users do not automatically inherit privileges while roles do. This behavior can be obtained in PostgreSQL by giving - roles being used as SQL roles the INHERIT attribute, while - giving roles being used as SQL users the NOINHERIT attribute. + roles being used as SQL roles the INHERIT attribute, while + giving roles being used as SQL users the NOINHERIT attribute. However, PostgreSQL defaults to giving all roles - the INHERIT attribute, for backward compatibility with pre-8.1 + the INHERIT attribute, for backward compatibility with pre-8.1 releases in which users always had use of permissions granted to groups they were members of. - The role attributes LOGIN, SUPERUSER, - CREATEDB, and CREATEROLE can be thought of as + The role attributes LOGIN, SUPERUSER, + CREATEDB, and CREATEROLE can be thought of as special privileges, but they are never inherited as ordinary privileges - on database objects are. You must actually SET ROLE to a + on database objects are. You must actually SET ROLE to a specific role having one of these attributes in order to make use of the attribute. Continuing the above example, we might choose to - grant CREATEDB and CREATEROLE to the - admin role. Then a session connecting as role joe + grant CREATEDB and CREATEROLE to the + admin role. Then a session connecting as role joe would not have these privileges immediately, only after doing - SET ROLE admin. + SET ROLE admin. @@ -425,16 +425,16 @@ DROP ROLE name; Ownership of objects can be transferred one at a time - using ALTER commands, for example: + using ALTER commands, for example: ALTER TABLE bobs_table OWNER TO alice; Alternatively, the command can be used to reassign ownership of all objects owned by the role-to-be-dropped - to a single other role. Because REASSIGN OWNED cannot access + to a single other role. Because REASSIGN OWNED cannot access objects in other databases, it is necessary to run it in each database that contains objects owned by the role. (Note that the first - such REASSIGN OWNED will change the ownership of any + such REASSIGN OWNED will change the ownership of any shared-across-databases objects, that is databases or tablespaces, that are owned by the role-to-be-dropped.) @@ -445,17 +445,17 @@ ALTER TABLE bobs_table OWNER TO alice; the command. Again, this command cannot access objects in other databases, so it is necessary to run it in each database that contains objects owned by the role. Also, DROP - OWNED will not drop entire databases or tablespaces, so it is + OWNED will not drop entire databases or tablespaces, so it is necessary to do that manually if the role owns any databases or tablespaces that have not been transferred to new owners. - DROP OWNED also takes care of removing any privileges granted + DROP OWNED also takes care of removing any privileges granted to the target role for objects that do not belong to it. 
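As a small hedged sketch of that last point (the table name accounts is hypothetical), a privilege granted to the role is revoked by DROP OWNED even though the role does not own the object:

GRANT SELECT ON accounts TO doomed_role;   -- doomed_role does not own accounts
DROP OWNED BY doomed_role;                 -- revokes that grant in addition to dropping owned objects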
- Because REASSIGN OWNED does not touch such objects, it's - typically necessary to run both REASSIGN OWNED - and DROP OWNED (in that order!) to fully remove the + Because REASSIGN OWNED does not touch such objects, it's + typically necessary to run both REASSIGN OWNED + and DROP OWNED (in that order!) to fully remove the dependencies of a role to be dropped. @@ -477,7 +477,7 @@ DROP ROLE doomed_role; - If DROP ROLE is attempted while dependent objects still + If DROP ROLE is attempted while dependent objects still remain, it will issue messages identifying which objects need to be reassigned or dropped. @@ -487,7 +487,7 @@ DROP ROLE doomed_role; Default Roles - role + role @@ -589,7 +589,7 @@ GRANT pg_signal_backend TO admin_user; possible to change the server's internal data structures. Hence, among many other things, such functions can circumvent any system access controls. Function languages that allow such access - are considered untrusted, and + are considered untrusted, and PostgreSQL allows only superusers to create functions written in those languages. diff --git a/doc/src/sgml/uuid-ossp.sgml b/doc/src/sgml/uuid-ossp.sgml index 227d4a839c..b1c1cd6f0a 100644 --- a/doc/src/sgml/uuid-ossp.sgml +++ b/doc/src/sgml/uuid-ossp.sgml @@ -8,7 +8,7 @@ - The uuid-ossp module provides functions to generate universally + The uuid-ossp module provides functions to generate universally unique identifiers (UUIDs) using one of several standard algorithms. There are also functions to produce certain special UUID constants. @@ -63,7 +63,7 @@ This function generates a version 3 UUID in the given namespace using the specified input name. The namespace should be one of the special - constants produced by the uuid_ns_*() functions shown + constants produced by the uuid_ns_*() functions shown in . (It could be any UUID in theory.) The name is an identifier in the selected namespace. @@ -114,7 +114,7 @@ SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org'); uuid_nil() - A nil UUID constant, which does not occur as a real UUID. + A nil UUID constant, which does not occur as a real UUID. @@ -140,7 +140,7 @@ SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org'); Constant designating the ISO object identifier (OID) namespace for UUIDs. (This pertains to ASN.1 OIDs, which are unrelated to the OIDs - used in PostgreSQL.) + used in PostgreSQL.) @@ -159,33 +159,33 @@ SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org'); - Building <filename>uuid-ossp</> + Building <filename>uuid-ossp</filename> Historically this module depended on the OSSP UUID library, which accounts for the module's name. While the OSSP UUID library can still be found at , it is not well maintained, and is becoming increasingly difficult to port to newer - platforms. uuid-ossp can now be built without the OSSP + platforms. uuid-ossp can now be built without the OSSP library on some platforms. On FreeBSD, NetBSD, and some other BSD-derived platforms, suitable UUID creation functions are included in the - core libc library. On Linux, macOS, and some other - platforms, suitable functions are provided in the libuuid - library, which originally came from the e2fsprogs project + core libc library. On Linux, macOS, and some other + platforms, suitable functions are provided in the libuuid + library, which originally came from the e2fsprogs project (though on modern Linux it is considered part - of util-linux-ng). When invoking configure, + of util-linux-ng). 
When invoking configure, specify to use the BSD functions, or to - use e2fsprogs' libuuid, or + use e2fsprogs' libuuid, or to use the OSSP UUID library. More than one of these libraries might be available on a particular - machine, so configure does not automatically choose one. + machine, so configure does not automatically choose one. If you only need randomly-generated (version 4) UUIDs, - consider using the gen_random_uuid() function + consider using the gen_random_uuid() function from the module instead. diff --git a/doc/src/sgml/vacuumlo.sgml b/doc/src/sgml/vacuumlo.sgml index 9da61c93fe..190ed9880b 100644 --- a/doc/src/sgml/vacuumlo.sgml +++ b/doc/src/sgml/vacuumlo.sgml @@ -28,17 +28,17 @@ Description - vacuumlo is a simple utility program that will remove any - orphaned large objects from a - PostgreSQL database. An orphaned large object (LO) is - considered to be any LO whose OID does not appear in any oid or - lo data column of the database. + vacuumlo is a simple utility program that will remove any + orphaned large objects from a + PostgreSQL database. An orphaned large object (LO) is + considered to be any LO whose OID does not appear in any oid or + lo data column of the database. - If you use this, you may also be interested in the lo_manage + If you use this, you may also be interested in the lo_manage trigger in the module. - lo_manage is useful to try + lo_manage is useful to try to avoid creating orphaned LOs in the first place. @@ -55,10 +55,10 @@ - limit + limit - Remove no more than limit large objects per + Remove no more than limit large objects per transaction (default 1000). Since the server acquires a lock per LO removed, removing too many LOs in one transaction risks exceeding . Set the limit to @@ -82,8 +82,8 @@ - - + + Print the vacuumlo version and exit. @@ -92,8 +92,8 @@ - - + + Show help about vacuumlo command line @@ -110,29 +110,29 @@ - hostname + hostname Database server's host. - port + port Database server's port. - username + username User name to connect as. - - + + Never issue a password prompt. If the server requires password @@ -158,7 +158,7 @@ for a password if the server demands password authentication. However, vacuumlo will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -172,10 +172,10 @@ vacuumlo works by the following method: - First, vacuumlo builds a temporary table which contains all + First, vacuumlo builds a temporary table which contains all of the OIDs of the large objects in the selected database. It then scans through all columns in the database that are of type - oid or lo, and removes matching entries from the temporary + oid or lo, and removes matching entries from the temporary table. (Note: Only types with these names are considered; in particular, domains over them are not considered.) The remaining entries in the temporary table identify orphaned LOs. These are removed. diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml index ddcef5fbf5..f9febe916f 100644 --- a/doc/src/sgml/wal.sgml +++ b/doc/src/sgml/wal.sgml @@ -13,7 +13,7 @@ Reliability is an important property of any serious database - system, and PostgreSQL does everything possible to + system, and PostgreSQL does everything possible to guarantee reliable operation. 
One aspect of reliable operation is that all data recorded by a committed transaction should be stored in a nonvolatile area that is safe from power loss, operating @@ -34,21 +34,21 @@ First, there is the operating system's buffer cache, which caches frequently requested disk blocks and combines disk writes. Fortunately, all operating systems give applications a way to force writes from - the buffer cache to disk, and PostgreSQL uses those + the buffer cache to disk, and PostgreSQL uses those features. (See the parameter to adjust how this is done.) Next, there might be a cache in the disk drive controller; this is - particularly common on RAID controller cards. Some of - these caches are write-through, meaning writes are sent + particularly common on RAID controller cards. Some of + these caches are write-through, meaning writes are sent to the drive as soon as they arrive. Others are - write-back, meaning data is sent to the drive at + write-back, meaning data is sent to the drive at some later time. Such caches can be a reliability hazard because the memory in the disk controller cache is volatile, and will lose its contents in a power failure. Better controller cards have - battery-backup units (BBUs), meaning + battery-backup units (BBUs), meaning the card has a battery that maintains power to the cache in case of system power loss. After power is restored the data will be written to the disk drives. @@ -71,22 +71,22 @@ - On Linux, IDE and SATA drives can be queried using + On Linux, IDE and SATA drives can be queried using hdparm -I; write caching is enabled if there is - a * next to Write cache. hdparm -W 0 + a * next to Write cache. hdparm -W 0 can be used to turn off write caching. SCSI drives can be queried - using sdparm. + using sdparm. Use sdparm --get=WCE to check - whether the write cache is enabled and sdparm --clear=WCE + whether the write cache is enabled and sdparm --clear=WCE to disable it. - On FreeBSD, IDE drives can be queried using + On FreeBSD, IDE drives can be queried using atacontrol and write caching turned off using - hw.ata.wc=0 in /boot/loader.conf; + hw.ata.wc=0 in /boot/loader.conf; SCSI drives can be queried using camcontrol identify, and the write cache both queried and changed using sdparm when available. @@ -95,20 +95,20 @@ - On Solaris, the disk write cache is controlled by - format -e. - (The Solaris ZFS file system is safe with disk write-cache + On Solaris, the disk write cache is controlled by + format -e. + (The Solaris ZFS file system is safe with disk write-cache enabled because it issues its own disk cache flush commands.) - On Windows, if wal_sync_method is - open_datasync (the default), write caching can be disabled - by unchecking My Computer\Open\disk drive\Properties\Hardware\Properties\Policies\Enable write caching on the disk. + On Windows, if wal_sync_method is + open_datasync (the default), write caching can be disabled + by unchecking My Computer\Open\disk drive\Properties\Hardware\Properties\Policies\Enable write caching on the disk. Alternatively, set wal_sync_method to - fsync or fsync_writethrough, which prevent + fsync or fsync_writethrough, which prevent write caching. @@ -116,21 +116,21 @@ On macOS, write caching can be prevented by - setting wal_sync_method to fsync_writethrough. + setting wal_sync_method to fsync_writethrough. 
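A hedged sketch of changing wal_sync_method from SQL rather than by editing postgresql.conf directly (superuser access assumed; the value is only an example):

ALTER SYSTEM SET wal_sync_method = 'fsync_writethrough';
SELECT pg_reload_conf();   -- reload the configuration to apply the change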
- Recent SATA drives (those following ATAPI-6 or later) - offer a drive cache flush command (FLUSH CACHE EXT), + Recent SATA drives (those following ATAPI-6 or later) + offer a drive cache flush command (FLUSH CACHE EXT), while SCSI drives have long supported a similar command - SYNCHRONIZE CACHE. These commands are not directly - accessible to PostgreSQL, but some file systems - (e.g., ZFS, ext4) can use them to flush + SYNCHRONIZE CACHE. These commands are not directly + accessible to PostgreSQL, but some file systems + (e.g., ZFS, ext4) can use them to flush data to the platters on write-back-enabled drives. Unfortunately, such file systems behave suboptimally when combined with battery-backup unit - (BBU) disk controllers. In such setups, the synchronize + (BBU) disk controllers. In such setups, the synchronize command forces all data from the controller cache to the disks, eliminating much of the benefit of the BBU. You can run the program to see @@ -164,13 +164,13 @@ commonly 512 bytes each. Every physical read or write operation processes a whole sector. When a write request arrives at the drive, it might be for some multiple - of 512 bytes (PostgreSQL typically writes 8192 bytes, or + of 512 bytes (PostgreSQL typically writes 8192 bytes, or 16 sectors, at a time), and the process of writing could fail due to power loss at any time, meaning some of the 512-byte sectors were written while others were not. To guard against such failures, - PostgreSQL periodically writes full page images to - permanent WAL storage before modifying the actual page on - disk. By doing this, during crash recovery PostgreSQL can + PostgreSQL periodically writes full page images to + permanent WAL storage before modifying the actual page on + disk. By doing this, during crash recovery PostgreSQL can restore partially-written pages from WAL. If you have file-system software that prevents partial page writes (e.g., ZFS), you can turn off this page imaging by turning off the - PostgreSQL also protects against some kinds of data corruption + PostgreSQL also protects against some kinds of data corruption on storage devices that may occur because of hardware errors or media failure over time, such as reading/writing garbage data. @@ -195,7 +195,7 @@ Data pages are not currently checksummed by default, though full page images recorded in WAL records will be protected; see initdb + linkend="app-initdb-data-checksums">initdb for details about enabling data page checksums. @@ -224,7 +224,7 @@ - PostgreSQL does not protect against correctable memory errors + PostgreSQL does not protect against correctable memory errors and it is assumed you will operate using RAM that uses industry standard Error Correcting Codes (ECC) or better protection. @@ -267,7 +267,7 @@ causes file system data to be flushed to disk. Fortunately, data flushing during journaling can often be disabled with a file system mount option, e.g. - data=writeback on a Linux ext3 file system. + data=writeback on a Linux ext3 file system. Journaled file systems do improve boot speed after a crash. @@ -313,7 +313,7 @@ - Asynchronous commit is an option that allows transactions + Asynchronous commit is an option that allows transactions to complete more quickly, at the cost that the most recent transactions may be lost if the database should crash. In many applications this is an acceptable trade-off. 
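Data page checksums are chosen cluster-wide when the cluster is initialized (the initdb option referenced above). As a quick hedged check on an existing cluster, the read-only data_checksums setting reports whether they are enabled:

SHOW data_checksums;   -- 'on' if the cluster was initialized with data page checksums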
@@ -321,7 +321,7 @@ As described in the previous section, transaction commit is normally - synchronous: the server waits for the transaction's + synchronous: the server waits for the transaction's WAL records to be flushed to permanent storage before returning a success indication to the client. The client is therefore guaranteed that a transaction reported to be committed will @@ -374,22 +374,22 @@ - Certain utility commands, for instance DROP TABLE, are + Certain utility commands, for instance DROP TABLE, are forced to commit synchronously regardless of the setting of synchronous_commit. This is to ensure consistency between the server's file system and the logical state of the database. The commands supporting two-phase commit, such as PREPARE - TRANSACTION, are also always synchronous. + TRANSACTION, are also always synchronous. If the database crashes during the risk window between an asynchronous commit and the writing of the transaction's WAL records, - then changes made during that transaction will be lost. + then changes made during that transaction will be lost. The duration of the risk window is limited because a background process (the WAL - writer) flushes unwritten WAL records to disk + writer) flushes unwritten WAL records to disk every milliseconds. The actual maximum duration of the risk window is three times wal_writer_delay because the WAL writer is @@ -408,10 +408,10 @@ = off. fsync is a server-wide setting that will alter the behavior of all transactions. It disables - all logic within PostgreSQL that attempts to synchronize + all logic within PostgreSQL that attempts to synchronize writes to different portions of the database, and therefore a system crash (that is, a hardware or operating system crash, not a failure of - PostgreSQL itself) could result in arbitrarily bad + PostgreSQL itself) could result in arbitrarily bad corruption of the database state. In many scenarios, asynchronous commit provides most of the performance improvement that could be obtained by turning off fsync, but without the risk @@ -437,14 +437,14 @@ <acronym>WAL</acronym> Configuration - There are several WAL-related configuration parameters that + There are several WAL-related configuration parameters that affect database performance. This section explains their use. Consult for general information about setting server configuration parameters. - Checkpointscheckpoint + Checkpointscheckpoint are points in the sequence of transactions at which it is guaranteed that the heap and index data files have been updated with all information written before that checkpoint. At checkpoint time, all @@ -477,7 +477,7 @@ whichever comes first. The default settings are 5 minutes and 1 GB, respectively. If no WAL has been written since the previous checkpoint, new checkpoints - will be skipped even if checkpoint_timeout has passed. + will be skipped even if checkpoint_timeout has passed. (If WAL archiving is being used and you want to put a lower limit on how often files are archived in order to bound potential data loss, you should adjust the parameter rather than the @@ -509,13 +509,13 @@ don't happen too often. As a simple sanity check on your checkpointing parameters, you can set the parameter. If checkpoints happen closer together than - checkpoint_warning seconds, + checkpoint_warning seconds, a message will be output to the server log recommending increasing max_wal_size. 
Occasional appearance of such a message is not cause for alarm, but if it appears often then the checkpoint control parameters should be increased. Bulk operations such - as large COPY transfers might cause a number of such warnings - to appear if you have not set max_wal_size high + as large COPY transfers might cause a number of such warnings + to appear if you have not set max_wal_size high enough. @@ -530,7 +530,7 @@ checkpoint_timeout seconds have elapsed, or before max_wal_size is exceeded, whichever is sooner. With the default value of 0.5, - PostgreSQL can be expected to complete each checkpoint + PostgreSQL can be expected to complete each checkpoint in about half the time before the next checkpoint starts. On a system that's very close to maximum I/O throughput during normal operation, you might want to increase checkpoint_completion_target @@ -550,19 +550,19 @@ allows forcing the OS to flush pages written by the checkpoint to disk after a configurable number of bytes. Otherwise, these pages may be kept in the OS's page cache, inducing a stall when - fsync is issued at the end of a checkpoint. This setting will + fsync is issued at the end of a checkpoint. This setting will often help to reduce transaction latency, but it also can have an adverse effect on performance; particularly for workloads that are bigger than , but smaller than the OS's page cache. - The number of WAL segment files in pg_wal directory depends on - min_wal_size, max_wal_size and + The number of WAL segment files in pg_wal directory depends on + min_wal_size, max_wal_size and the amount of WAL generated in previous checkpoint cycles. When old log segment files are no longer needed, they are removed or recycled (that is, renamed to become future segments in the numbered sequence). If, due to a - short-term peak of log output rate, max_wal_size is + short-term peak of log output rate, max_wal_size is exceeded, the unneeded segment files will be removed until the system gets back under this limit. Below that limit, the system recycles enough WAL files to cover the estimated need until the next checkpoint, and @@ -570,7 +570,7 @@ of WAL files used in previous checkpoint cycles. The moving average is increased immediately if the actual usage exceeds the estimate, so it accommodates peak usage rather than average usage to some extent. - min_wal_size puts a minimum on the amount of WAL files + min_wal_size puts a minimum on the amount of WAL files recycled for future usage; that much WAL is always recycled for future use, even if the system is idle and the WAL usage estimate suggests that little WAL is needed. @@ -582,7 +582,7 @@ kept at all times. Also, if WAL archiving is used, old segments can not be removed or recycled until they are archived. If WAL archiving cannot keep up with the pace that WAL is generated, or if archive_command - fails repeatedly, old WAL files will accumulate in pg_wal + fails repeatedly, old WAL files will accumulate in pg_wal until the situation is resolved. A slow or failed standby server that uses a replication slot will have the same effect (see ).
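As a hedged example of adjusting the parameters discussed in this section (the values are purely illustrative, not recommendations, and superuser access is assumed):

ALTER SYSTEM SET max_wal_size = '4GB';
ALTER SYSTEM SET min_wal_size = '1GB';
ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
SELECT pg_reload_conf();   -- these settings take effect on a configuration reload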
@@ -590,21 +590,21 @@ In archive recovery or standby mode, the server periodically performs - restartpoints,restartpoint + restartpoints,restartpoint which are similar to checkpoints in normal operation: the server forces - all its state to disk, updates the pg_control file to + all its state to disk, updates the pg_control file to indicate that the already-processed WAL data need not be scanned again, - and then recycles any old log segment files in the pg_wal + and then recycles any old log segment files in the pg_wal directory. Restartpoints can't be performed more frequently than checkpoints in the master because restartpoints can only be performed at checkpoint records. A restartpoint is triggered when a checkpoint record is reached if at - least checkpoint_timeout seconds have passed since the last + least checkpoint_timeout seconds have passed since the last restartpoint, or if WAL size is about to exceed - max_wal_size. However, because of limitations on when a - restartpoint can be performed, max_wal_size is often exceeded + max_wal_size. However, because of limitations on when a + restartpoint can be performed, max_wal_size is often exceeded during recovery, by up to one checkpoint cycle's worth of WAL. - (max_wal_size is never a hard limit anyway, so you should + (max_wal_size is never a hard limit anyway, so you should always leave plenty of headroom to avoid running out of disk space.) @@ -631,7 +631,7 @@ one should increase the number of WAL buffers by modifying the parameter. When is set and the system is very busy, - setting wal_buffers higher will help smooth response times + setting wal_buffers higher will help smooth response times during the period immediately following each checkpoint. @@ -686,7 +686,7 @@ will consist only of sessions that reach the point where they need to flush their commit records during the window in which the previous flush operation (if any) is occurring. At higher client counts a - gangway effect tends to occur, so that the effects of group + gangway effect tends to occur, so that the effects of group commit become significant even when commit_delay is zero, and thus explicitly setting commit_delay tends to help less. Setting commit_delay can only help @@ -702,7 +702,7 @@ PostgreSQL will ask the kernel to force WAL updates out to disk. All the options should be the same in terms of reliability, with - the exception of fsync_writethrough, which can sometimes + the exception of fsync_writethrough, which can sometimes force a flush of the disk cache even when other options do not do so. However, it's quite platform-specific which one will be the fastest. You can test the speeds of different options using the LSN) that is a byte offset into the logs, increasing monotonically with each new record. LSN values are returned as the datatype - pg_lsn. Values can be + pg_lsn. Values can be compared to calculate the volume of WAL data that separates them, so they are used to measure the progress of replication and recovery. @@ -752,9 +752,9 @@ WAL logs are stored in the directory pg_wal under the data directory, as a set of segment files, normally each 16 MB in size (but the size can be changed - by altering the initdb option). Each segment is divided into pages, normally 8 kB each (this size can be changed via the - configure option). The log record headers are described in access/xlogrecord.h; the record content is dependent on the type of event that is being logged. 
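A hedged illustration of working with the pg_lsn values mentioned above, using the monitoring functions as named in PostgreSQL 10 and later:

SELECT pg_current_wal_lsn();                          -- the current WAL write location
SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), '0/0');  -- bytes of WAL separating two locations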
Segment files are given ever-increasing numbers as names, starting at @@ -774,7 +774,7 @@ The aim of WAL is to ensure that the log is written before database records are altered, but this can be subverted by - disk drivesdisk drive that falsely report a + disk drivesdisk drive that falsely report a successful write to the kernel, when in fact they have only cached the data and not yet stored it on the disk. A power failure in such a situation might lead to diff --git a/doc/src/sgml/xaggr.sgml b/doc/src/sgml/xaggr.sgml index 9e6a6648dc..f99dbb6510 100644 --- a/doc/src/sgml/xaggr.sgml +++ b/doc/src/sgml/xaggr.sgml @@ -41,10 +41,10 @@ If we define an aggregate that does not use a final function, we have an aggregate that computes a running function of - the column values from each row. sum is an - example of this kind of aggregate. sum starts at + the column values from each row. sum is an + example of this kind of aggregate. sum starts at zero and always adds the current row's value to - its running total. For example, if we want to make a sum + its running total. For example, if we want to make a sum aggregate to work on a data type for complex numbers, we only need the addition function for that data type. The aggregate definition would be: @@ -69,7 +69,7 @@ SELECT sum(a) FROM test_complex; (Notice that we are relying on function overloading: there is more than - one aggregate named sum, but + one aggregate named sum, but PostgreSQL can figure out which kind of sum applies to a column of type complex.) @@ -83,17 +83,17 @@ SELECT sum(a) FROM test_complex; value is null. Ordinarily this would mean that the sfunc would need to check for a null state-value input. But for sum and some other simple aggregates like - max and min, + max and min, it is sufficient to insert the first nonnull input value into the state variable and then start applying the transition function at the second nonnull input value. PostgreSQL will do that automatically if the initial state value is null and - the transition function is marked strict (i.e., not to be called + the transition function is marked strict (i.e., not to be called for null inputs). - Another bit of default behavior for a strict transition function + Another bit of default behavior for a strict transition function is that the previous state value is retained unchanged whenever a null input value is encountered. Thus, null values are ignored. If you need some other behavior for null inputs, do not declare your @@ -102,7 +102,7 @@ SELECT sum(a) FROM test_complex; - avg (average) is a more complex example of an aggregate. + avg (average) is a more complex example of an aggregate. It requires two pieces of running state: the sum of the inputs and the count of the number of inputs. The final result is obtained by dividing @@ -124,16 +124,16 @@ CREATE AGGREGATE avg (float8) - float8_accum requires a three-element array, not just + float8_accum requires a three-element array, not just two elements, because it accumulates the sum of squares as well as the sum and count of the inputs. This is so that it can be used for - some other aggregates as well as avg. + some other aggregates as well as avg. - Aggregate function calls in SQL allow DISTINCT - and ORDER BY options that control which rows are fed + Aggregate function calls in SQL allow DISTINCT + and ORDER BY options that control which rows are fed to the aggregate's transition function and in what order. 
These options are implemented behind the scenes and are not the concern of the aggregate's support functions. @@ -159,16 +159,16 @@ CREATE AGGREGATE avg (float8) Aggregate functions can optionally support moving-aggregate - mode, which allows substantially faster execution of aggregate + mode, which allows substantially faster execution of aggregate functions within windows with moving frame starting points. (See and for information about use of aggregate functions as window functions.) - The basic idea is that in addition to a normal forward + The basic idea is that in addition to a normal forward transition function, the aggregate provides an inverse - transition function, which allows rows to be removed from the + transition function, which allows rows to be removed from the aggregate's running state value when they exit the window frame. - For example a sum aggregate, which uses addition as the + For example a sum aggregate, which uses addition as the forward transition function, would use subtraction as the inverse transition function. Without an inverse transition function, the window function mechanism must recalculate the aggregate from scratch each time @@ -193,7 +193,7 @@ CREATE AGGREGATE avg (float8) - As an example, we could extend the sum aggregate given above + As an example, we could extend the sum aggregate given above to support moving-aggregate mode like this: @@ -209,10 +209,10 @@ CREATE AGGREGATE sum (complex) ); - The parameters whose names begin with m define the + The parameters whose names begin with m define the moving-aggregate implementation. Except for the inverse transition - function minvfunc, they correspond to the plain-aggregate - parameters without m. + function minvfunc, they correspond to the plain-aggregate + parameters without m. @@ -224,10 +224,10 @@ CREATE AGGREGATE sum (complex) current frame starting position. This convention allows moving-aggregate mode to be used in situations where there are some infrequent cases that are impractical to reverse out of the running state value. The inverse - transition function can punt on these cases, and yet still come + transition function can punt on these cases, and yet still come out ahead so long as it can work for most cases. As an example, an aggregate working with floating-point numbers might choose to punt when - a NaN (not a number) input has to be removed from the running + a NaN (not a number) input has to be removed from the running state value. @@ -238,8 +238,8 @@ CREATE AGGREGATE sum (complex) in results depending on whether the moving-aggregate mode is used. An example of an aggregate for which adding an inverse transition function seems easy at first, yet where this requirement cannot be met - is sum over float4 or float8 inputs. A - naive declaration of sum(float8) could be + is sum over float4 or float8 inputs. A + naive declaration of sum(float8) could be CREATE AGGREGATE unsafe_sum (float8) @@ -262,13 +262,13 @@ FROM (VALUES (1, 1.0e20::float8), (2, 1.0::float8)) AS v (n,x); - This query returns 0 as its second result, rather than the - expected answer of 1. The cause is the limited precision of - floating-point values: adding 1 to 1e20 results - in 1e20 again, and so subtracting 1e20 from that - yields 0, not 1. Note that this is a limitation + This query returns 0 as its second result, rather than the + expected answer of 1. The cause is the limited precision of + floating-point values: adding 1 to 1e20 results + in 1e20 again, and so subtracting 1e20 from that + yields 0, not 1. 
Note that this is a limitation of floating-point arithmetic in general, not a limitation - of PostgreSQL. + of PostgreSQL. @@ -309,7 +309,7 @@ CREATE AGGREGATE array_accum (anyelement) Here, the actual state type for any given aggregate call is the array type having the actual input type as elements. The behavior of the aggregate is to concatenate all the inputs into an array of that type. - (Note: the built-in aggregate array_agg provides similar + (Note: the built-in aggregate array_agg provides similar functionality, with better performance than this definition would have.) @@ -344,19 +344,19 @@ SELECT attrelid::regclass, array_accum(atttypid::regtype) polymorphic state type, as in the above example. This is necessary because otherwise the final function cannot be declared sensibly: it would need to have a polymorphic result type but no polymorphic argument - type, which CREATE FUNCTION will reject on the grounds that + type, which CREATE FUNCTION will reject on the grounds that the result type cannot be deduced from a call. But sometimes it is inconvenient to use a polymorphic state type. The most common case is where the aggregate support functions are to be written in C and the - state type should be declared as internal because there is + state type should be declared as internal because there is no SQL-level equivalent for it. To address this case, it is possible to - declare the final function as taking extra dummy arguments + declare the final function as taking extra dummy arguments that match the input arguments of the aggregate. Such dummy arguments are always passed as null values since no specific value is available when the final function is called. Their only use is to allow a polymorphic final function's result type to be connected to the aggregate's input type(s). For example, the definition of the built-in - aggregate array_agg is equivalent to + aggregate array_agg is equivalent to CREATE FUNCTION array_agg_transfn(internal, anynonarray) @@ -373,30 +373,30 @@ CREATE AGGREGATE array_agg (anynonarray) ); - Here, the finalfunc_extra option specifies that the final + Here, the finalfunc_extra option specifies that the final function receives, in addition to the state value, extra dummy argument(s) corresponding to the aggregate's input argument(s). - The extra anynonarray argument allows the declaration - of array_agg_finalfn to be valid. + The extra anynonarray argument allows the declaration + of array_agg_finalfn to be valid. An aggregate function can be made to accept a varying number of arguments - by declaring its last argument as a VARIADIC array, in much + by declaring its last argument as a VARIADIC array, in much the same fashion as for regular functions; see . The aggregate's transition function(s) must have the same array type as their last argument. The - transition function(s) typically would also be marked VARIADIC, + transition function(s) typically would also be marked VARIADIC, but this is not strictly required. Variadic aggregates are easily misused in connection with - the ORDER BY option (see ), + the ORDER BY option (see ), since the parser cannot tell whether the wrong number of actual arguments have been given in such a combination. Keep in mind that everything to - the right of ORDER BY is a sort key, not an argument to the + the right of ORDER BY is a sort key, not an argument to the aggregate. For example, in SELECT myaggregate(a ORDER BY a, b, c) FROM ... @@ -406,7 +406,7 @@ SELECT myaggregate(a ORDER BY a, b, c) FROM ... 
SELECT myaggregate(a, b, c ORDER BY a) FROM ... - If myaggregate is variadic, both these calls could be + If myaggregate is variadic, both these calls could be perfectly valid. @@ -427,19 +427,19 @@ SELECT myaggregate(a, b, c ORDER BY a) FROM ... - The aggregates we have been describing so far are normal - aggregates. PostgreSQL also - supports ordered-set aggregates, which differ from + The aggregates we have been describing so far are normal + aggregates. PostgreSQL also + supports ordered-set aggregates, which differ from normal aggregates in two key ways. First, in addition to ordinary aggregated arguments that are evaluated once per input row, an - ordered-set aggregate can have direct arguments that are + ordered-set aggregate can have direct arguments that are evaluated only once per aggregation operation. Second, the syntax for the ordinary aggregated arguments specifies a sort ordering for them explicitly. An ordered-set aggregate is usually used to implement a computation that depends on a specific row ordering, for instance rank or percentile, so that the sort ordering is a required aspect of any call. For example, the built-in - definition of percentile_disc is equivalent to: + definition of percentile_disc is equivalent to: CREATE FUNCTION ordered_set_transition(internal, anyelement) @@ -456,7 +456,7 @@ CREATE AGGREGATE percentile_disc (float8 ORDER BY anyelement) ); - This aggregate takes a float8 direct argument (the percentile + This aggregate takes a float8 direct argument (the percentile fraction) and an aggregated input that can be of any sortable data type. It could be used to obtain a median household income like this: @@ -467,31 +467,31 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; 50489 - Here, 0.5 is a direct argument; it would make no sense + Here, 0.5 is a direct argument; it would make no sense for the percentile fraction to be a value varying across rows. Unlike the case for normal aggregates, the sorting of input rows for - an ordered-set aggregate is not done behind the scenes, + an ordered-set aggregate is not done behind the scenes, but is the responsibility of the aggregate's support functions. The typical implementation approach is to keep a reference to - a tuplesort object in the aggregate's state value, feed the + a tuplesort object in the aggregate's state value, feed the incoming rows into that object, and then complete the sorting and read out the data in the final function. This design allows the final function to perform special operations such as injecting - additional hypothetical rows into the data to be sorted. + additional hypothetical rows into the data to be sorted. While normal aggregates can often be implemented with support functions written in PL/pgSQL or another PL language, ordered-set aggregates generally have to be written in C, since their state values aren't definable as any SQL data type. (In the above example, notice that the state value is declared as - type internal — this is typical.) + type internal — this is typical.) Also, because the final function performs the sort, it is not possible to continue adding input rows by executing the transition function again - later. This means the final function is not READ_ONLY; + later. This means the final function is not READ_ONLY; it must be declared in - as READ_WRITE, or as SHARABLE if it's + as READ_WRITE, or as SHARABLE if it's possible for additional final-function calls to make use of the already-sorted state. 
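As a usage sketch of the "injecting additional hypothetical rows" idea above, the built-in hypothetical-set aggregate rank sorts the aggregated input in its final function and inserts its direct argument as an extra, hypothetical row. Reusing the households example from above (the value 72000 is arbitrary, chosen only for illustration):

-- illustrative only: rank() is a built-in hypothetical-set aggregate; its final
-- function performs the sort and injects the hypothetical value 72000 into it
SELECT rank(72000) WITHIN GROUP (ORDER BY income) FROM households;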
@@ -503,9 +503,9 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; same definition as for normal aggregates, but note that the direct arguments (if any) are not provided. The final function receives the last state value, the values of the direct arguments if any, - and (if finalfunc_extra is specified) null values + and (if finalfunc_extra is specified) null values corresponding to the aggregated input(s). As with normal - aggregates, finalfunc_extra is only really useful if the + aggregates, finalfunc_extra is only really useful if the aggregate is polymorphic; then the extra dummy argument(s) are needed to connect the final function's result type to the aggregate's input type(s). @@ -528,7 +528,7 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; Optionally, an aggregate function can support partial - aggregation. The idea of partial aggregation is to run the aggregate's + aggregation. The idea of partial aggregation is to run the aggregate's state transition function over different subsets of the input data independently, and then to combine the state values resulting from those subsets to produce the same state value that would have resulted from @@ -543,7 +543,7 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; To support partial aggregation, the aggregate definition must provide - a combine function, which takes two values of the + a combine function, which takes two values of the aggregate's state type (representing the results of aggregating over two subsets of the input rows) and produces a new value of the state type, representing what the state would have been after aggregating over the @@ -554,10 +554,10 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; - As simple examples, MAX and MIN aggregates can be + As simple examples, MAX and MIN aggregates can be made to support partial aggregation by specifying the combine function as the same greater-of-two or lesser-of-two comparison function that is used - as their transition function. SUM aggregates just need an + as their transition function. SUM aggregates just need an addition function as combine function. (Again, this is the same as their transition function, unless the state value is wider than the input data type.) @@ -568,26 +568,26 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; happens to take a value of the state type, not of the underlying input type, as its second argument. In particular, the rules for dealing with null values and strict functions are similar. Also, if the aggregate - definition specifies a non-null initcond, keep in mind that + definition specifies a non-null initcond, keep in mind that that will be used not only as the initial state for each partial aggregation run, but also as the initial state for the combine function, which will be called to combine each partial result into that state. - If the aggregate's state type is declared as internal, it is + If the aggregate's state type is declared as internal, it is the combine function's responsibility that its result is allocated in the correct memory context for aggregate state values. This means in - particular that when the first input is NULL it's invalid + particular that when the first input is NULL it's invalid to simply return the second input, as that value will be in the wrong context and will not have sufficient lifespan. 
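To make the combine-function discussion above concrete, here is a minimal sketch (the aggregate name my_max is hypothetical) that reuses the built-in greater-of-two function int4larger both as transition function and as combine function, which is all a MAX-style aggregate needs in order to support partial aggregation:

-- a sketch, not a definitive implementation: merging two partial maxima is the
-- same operation as advancing the running maximum by one more input value
CREATE AGGREGATE my_max (integer)
(
    sfunc       = int4larger,
    stype       = integer,
    combinefunc = int4larger,
    parallel    = safe
);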
- When the aggregate's state type is declared as internal, it is + When the aggregate's state type is declared as internal, it is usually also appropriate for the aggregate definition to provide a - serialization function and a deserialization - function, which allow such a state value to be copied from one process + serialization function and a deserialization + function, which allow such a state value to be copied from one process to another. Without these functions, parallel aggregation cannot be performed, and future applications such as local/remote aggregation will probably not work either. @@ -595,11 +595,11 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; A serialization function must take a single argument of - type internal and return a result of type bytea, which + type internal and return a result of type bytea, which represents the state value packaged up into a flat blob of bytes. Conversely, a deserialization function reverses that conversion. It must - take two arguments of types bytea and internal, and - return a result of type internal. (The second argument is unused + take two arguments of types bytea and internal, and + return a result of type internal. (The second argument is unused and is always zero, but it is required for type-safety reasons.) The result of the deserialization function should simply be allocated in the current memory context, as unlike the combine function's result, it is not @@ -608,7 +608,7 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; Worth noting also is that for an aggregate to be executed in parallel, - the aggregate itself must be marked PARALLEL SAFE. The + the aggregate itself must be marked PARALLEL SAFE. The parallel-safety markings on its support functions are not consulted. @@ -625,14 +625,14 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; A function written in C can detect that it is being called as an aggregate support function by calling - AggCheckCallContext, for example: + AggCheckCallContext, for example: if (AggCheckCallContext(fcinfo, NULL)) One reason for checking this is that when it is true, the first input must be a temporary state value and can therefore safely be modified in-place rather than allocating a new copy. - See int8inc() for an example. + See int8inc() for an example. (While aggregate transition functions are always allowed to modify the transition value in-place, aggregate final functions are generally discouraged from doing so; if they do so, the behavior must be declared @@ -641,14 +641,14 @@ if (AggCheckCallContext(fcinfo, NULL)) - The second argument of AggCheckCallContext can be used to + The second argument of AggCheckCallContext can be used to retrieve the memory context in which aggregate state values are being kept. - This is useful for transition functions that wish to use expanded + This is useful for transition functions that wish to use expanded objects (see ) as their state values. On first call, the transition function should return an expanded object whose memory context is a child of the aggregate state context, and then keep returning the same expanded object on subsequent calls. See - array_append() for an example. (array_append() + array_append() for an example. (array_append() is not the transition function of any built-in aggregate, but it is written to behave efficiently when used as transition function of a custom aggregate.) 
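Tying the serialization discussion above together, the following sketch is modeled on how the built-in avg(numeric) aggregate is wired up in the system catalogs; the exact support-function names and the privileges required to create such an aggregate are catalog details that can differ between versions, so treat this as illustrative only:

-- hedged sketch: the internal state cannot cross process boundaries by itself,
-- so serialfunc/deserialfunc convert it to and from bytea for parallel workers
CREATE AGGREGATE my_avg (numeric)
(
    sfunc        = numeric_avg_accum,
    stype        = internal,
    finalfunc    = numeric_avg,
    combinefunc  = numeric_avg_combine,
    serialfunc   = numeric_avg_serialize,
    deserialfunc = numeric_avg_deserialize,
    parallel     = safe
);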
@@ -656,12 +656,12 @@ if (AggCheckCallContext(fcinfo, NULL)) Another support routine available to aggregate functions written in C - is AggGetAggref, which returns the Aggref + is AggGetAggref, which returns the Aggref parse node that defines the aggregate call. This is mainly useful for ordered-set aggregates, which can inspect the substructure of - the Aggref node to find out what sort ordering they are + the Aggref node to find out what sort ordering they are supposed to implement. Examples can be found - in orderedsetaggs.c in the PostgreSQL + in orderedsetaggs.c in the PostgreSQL source code. diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index 7475288354..b6f33037ff 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -22,7 +22,7 @@ procedural language functions (functions written in, for - example, PL/pgSQL or PL/Tcl) + example, PL/pgSQL or PL/Tcl) () @@ -66,7 +66,7 @@ page of the command to understand the examples better. Some examples from this chapter can be found in funcs.sql and - funcs.c in the src/tutorial + funcs.c in the src/tutorial directory in the PostgreSQL source distribution. @@ -87,7 +87,7 @@ In the simple (non-set) case, the first row of the last query's result will be returned. (Bear in mind that the first row of a multirow - result is not well-defined unless you use ORDER BY.) + result is not well-defined unless you use ORDER BY.) If the last query happens to return no rows at all, the null value will be returned. @@ -95,8 +95,8 @@ Alternatively, an SQL function can be declared to return a set (that is, multiple rows) by specifying the function's return type as SETOF - sometype, or equivalently by declaring it as - RETURNS TABLE(columns). In this case + sometype, or equivalently by declaring it as + RETURNS TABLE(columns). In this case all rows of the last query's result are returned. Further details appear below. @@ -105,9 +105,9 @@ The body of an SQL function must be a list of SQL statements separated by semicolons. A semicolon after the last statement is optional. Unless the function is declared to return - void, the last statement must be a SELECT, - or an INSERT, UPDATE, or DELETE - that has a RETURNING clause. + void, the last statement must be a SELECT, + or an INSERT, UPDATE, or DELETE + that has a RETURNING clause. @@ -117,16 +117,16 @@ modification queries (INSERT, UPDATE, and DELETE), as well as other SQL commands. (You cannot use transaction control commands, e.g. - COMMIT, SAVEPOINT, and some utility - commands, e.g. VACUUM, in SQL functions.) + COMMIT, SAVEPOINT, and some utility + commands, e.g. VACUUM, in SQL functions.) However, the final command - must be a SELECT or have a RETURNING + must be a SELECT or have a RETURNING clause that returns whatever is specified as the function's return type. Alternatively, if you want to define a SQL function that performs actions but has no - useful value to return, you can define it as returning void. + useful value to return, you can define it as returning void. For example, this function removes rows with negative salaries from - the emp table: + the emp table: CREATE FUNCTION clean_emp() RETURNS void AS ' @@ -147,13 +147,13 @@ SELECT clean_emp(); The entire body of a SQL function is parsed before any of it is executed. While a SQL function can contain commands that alter - the system catalogs (e.g., CREATE TABLE), the effects + the system catalogs (e.g., CREATE TABLE), the effects of such commands will not be visible during parse analysis of later commands in the function. 
Thus, for example, CREATE TABLE foo (...); INSERT INTO foo VALUES(...); will not work as desired if packaged up into a single SQL function, - since foo won't exist yet when the INSERT - command is parsed. It's recommended to use PL/pgSQL + since foo won't exist yet when the INSERT + command is parsed. It's recommended to use PL/pgSQL instead of a SQL function in this type of situation. @@ -164,8 +164,8 @@ SELECT clean_emp(); most convenient to use dollar quoting (see ) for the string constant. If you choose to use regular single-quoted string constant syntax, - you must double single quote marks (') and backslashes - (\) (assuming escape string syntax) in the body of + you must double single quote marks (') and backslashes + (\) (assuming escape string syntax) in the body of the function (see ). @@ -189,7 +189,7 @@ SELECT clean_emp(); is the same as any column name in the current SQL command within the function, the column name will take precedence. To override this, qualify the argument name with the name of the function itself, that is - function_name.argument_name. + function_name.argument_name. (If this would conflict with a qualified column name, again the column name wins. You can avoid the ambiguity by choosing a different alias for the table within the SQL command.) @@ -197,15 +197,15 @@ SELECT clean_emp(); In the older numeric approach, arguments are referenced using the syntax - $n: $1 refers to the first input - argument, $2 to the second, and so on. This will work + $n: $1 refers to the first input + argument, $2 to the second, and so on. This will work whether or not the particular argument was declared with a name. If an argument is of a composite type, then the dot notation, - e.g., argname.fieldname or - $1.fieldname, can be used to access attributes of the + e.g., argname.fieldname or + $1.fieldname, can be used to access attributes of the argument. Again, you might need to qualify the argument's name with the function name to make the form with an argument name unambiguous. @@ -226,7 +226,7 @@ INSERT INTO $1 VALUES (42); The ability to use names to reference SQL function arguments was added in PostgreSQL 9.2. Functions to be used in - older servers must use the $n notation. + older servers must use the $n notation. @@ -258,9 +258,9 @@ SELECT one(); Notice that we defined a column alias within the function body for the result of the function - (with the name result), but this column alias is not visible - outside the function. Hence, the result is labeled one - instead of result. + (with the name result), but this column alias is not visible + outside the function. Hence, the result is labeled one + instead of result. @@ -319,11 +319,11 @@ SELECT tf1(17, 100.0); - In this example, we chose the name accountno for the first + In this example, we chose the name accountno for the first argument, but this is the same as the name of a column in the - bank table. Within the UPDATE command, - accountno refers to the column bank.accountno, - so tf1.accountno must be used to refer to the argument. + bank table. Within the UPDATE command, + accountno refers to the column bank.accountno, + so tf1.accountno must be used to refer to the argument. We could of course avoid this by using a different name for the argument. @@ -342,7 +342,7 @@ $$ LANGUAGE SQL; which adjusts the balance and returns the new balance. 
- The same thing could be done in one command using RETURNING: + The same thing could be done in one command using RETURNING: CREATE FUNCTION tf1 (accountno integer, debit numeric) RETURNS numeric AS $$ @@ -394,8 +394,8 @@ SELECT name, double_salary(emp.*) AS dream Notice the use of the syntax $1.salary to select one field of the argument row value. Also notice - how the calling SELECT command - uses table_name.* to select + how the calling SELECT command + uses table_name.* to select the entire current row of a table as a composite value. The table row can alternatively be referenced using just the table name, like this: @@ -411,7 +411,7 @@ SELECT name, double_salary(emp) AS dream Sometimes it is handy to construct a composite argument value - on-the-fly. This can be done with the ROW construct. + on-the-fly. This can be done with the ROW construct. For example, we could adjust the data being passed to the function: SELECT name, double_salary(ROW(name, salary*1.1, age, cubicle)) AS dream @@ -473,7 +473,7 @@ CREATE FUNCTION new_emp() RETURNS emp AS $$ $$ LANGUAGE SQL; - Here we wrote a SELECT that returns just a single + Here we wrote a SELECT that returns just a single column of the correct composite type. This isn't really better in this situation, but it is a handy alternative in some cases — for example, if we need to compute the result by calling @@ -564,7 +564,7 @@ SELECT getname(new_emp()); - <acronym>SQL</> Functions with Output Parameters + <acronym>SQL</acronym> Functions with Output Parameters function @@ -573,7 +573,7 @@ SELECT getname(new_emp()); An alternative way of describing a function's results is to define it - with output parameters, as in this example: + with output parameters, as in this example: CREATE FUNCTION add_em (IN x int, IN y int, OUT sum int) @@ -587,7 +587,7 @@ SELECT add_em(3,7); (1 row) - This is not essentially different from the version of add_em + This is not essentially different from the version of add_em shown in . The real value of output parameters is that they provide a convenient way of defining functions that return several columns. For example, @@ -639,18 +639,18 @@ DROP FUNCTION sum_n_product (int, int); - Parameters can be marked as IN (the default), - OUT, INOUT, or VARIADIC. - An INOUT + Parameters can be marked as IN (the default), + OUT, INOUT, or VARIADIC. + An INOUT parameter serves as both an input parameter (part of the calling argument list) and an output parameter (part of the result record type). - VARIADIC parameters are input parameters, but are treated + VARIADIC parameters are input parameters, but are treated specially as described next. - <acronym>SQL</> Functions with Variable Numbers of Arguments + <acronym>SQL</acronym> Functions with Variable Numbers of Arguments function @@ -663,10 +663,10 @@ DROP FUNCTION sum_n_product (int, int); SQL functions can be declared to accept - variable numbers of arguments, so long as all the optional + variable numbers of arguments, so long as all the optional arguments are of the same data type. The optional arguments will be passed to the function as an array. The function is declared by - marking the last parameter as VARIADIC; this parameter + marking the last parameter as VARIADIC; this parameter must be declared as being of an array type. 
For example: @@ -682,7 +682,7 @@ SELECT mleast(10, -1, 5, 4.4); Effectively, all the actual arguments at or beyond the - VARIADIC position are gathered up into a one-dimensional + VARIADIC position are gathered up into a one-dimensional array, as if you had written @@ -691,7 +691,7 @@ SELECT mleast(ARRAY[10, -1, 5, 4.4]); -- doesn't work You can't actually write that, though — or at least, it will not match this function definition. A parameter marked - VARIADIC matches one or more occurrences of its element + VARIADIC matches one or more occurrences of its element type, not of its own type. @@ -699,7 +699,7 @@ SELECT mleast(ARRAY[10, -1, 5, 4.4]); -- doesn't work Sometimes it is useful to be able to pass an already-constructed array to a variadic function; this is particularly handy when one variadic function wants to pass on its array parameter to another one. You can - do that by specifying VARIADIC in the call: + do that by specifying VARIADIC in the call: SELECT mleast(VARIADIC ARRAY[10, -1, 5, 4.4]); @@ -707,21 +707,21 @@ SELECT mleast(VARIADIC ARRAY[10, -1, 5, 4.4]); This prevents expansion of the function's variadic parameter into its element type, thereby allowing the array argument value to match - normally. VARIADIC can only be attached to the last + normally. VARIADIC can only be attached to the last actual argument of a function call. - Specifying VARIADIC in the call is also the only way to + Specifying VARIADIC in the call is also the only way to pass an empty array to a variadic function, for example: SELECT mleast(VARIADIC ARRAY[]::numeric[]); - Simply writing SELECT mleast() does not work because a + Simply writing SELECT mleast() does not work because a variadic parameter must match at least one actual argument. - (You could define a second function also named mleast, + (You could define a second function also named mleast, with no parameters, if you wanted to allow such calls.) @@ -730,7 +730,7 @@ SELECT mleast(VARIADIC ARRAY[]::numeric[]); treated as not having any names of their own. This means it is not possible to call a variadic function using named arguments (), except when you specify - VARIADIC. For example, this will work: + VARIADIC. For example, this will work: SELECT mleast(VARIADIC arr => ARRAY[10, -1, 5, 4.4]); @@ -746,7 +746,7 @@ SELECT mleast(arr => ARRAY[10, -1, 5, 4.4]); - <acronym>SQL</> Functions with Default Values for Arguments + <acronym>SQL</acronym> Functions with Default Values for Arguments function @@ -804,7 +804,7 @@ ERROR: function foo() does not exist <acronym>SQL</acronym> Functions as Table Sources - All SQL functions can be used in the FROM clause of a query, + All SQL functions can be used in the FROM clause of a query, but it is particularly useful for functions returning composite types. If the function is defined to return a base type, the table function produces a one-column table. If the function is defined to return @@ -839,7 +839,7 @@ SELECT *, upper(fooname) FROM getfoo(1) AS t1; Note that we only got one row out of the function. This is because - we did not use SETOF. That is described in the next section. + we did not use SETOF. That is described in the next section. @@ -853,16 +853,16 @@ SELECT *, upper(fooname) FROM getfoo(1) AS t1; When an SQL function is declared as returning SETOF - sometype, the function's final + sometype, the function's final query is executed to completion, and each row it outputs is returned as an element of the result set. 
- This feature is normally used when calling the function in the FROM + This feature is normally used when calling the function in the FROM clause. In this case each row returned by the function becomes a row of the table seen by the query. For example, assume that - table foo has the same contents as above, and we say: + table foo has the same contents as above, and we say: CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$ @@ -906,17 +906,17 @@ SELECT * FROM sum_n_product_with_tab(10); (4 rows) - The key point here is that you must write RETURNS SETOF record + The key point here is that you must write RETURNS SETOF record to indicate that the function returns multiple rows instead of just one. If there is only one output parameter, write that parameter's type - instead of record. + instead of record. It is frequently useful to construct a query's result by invoking a set-returning function multiple times, with the parameters for each invocation coming from successive rows of a table or subquery. The - preferred way to do this is to use the LATERAL key word, + preferred way to do this is to use the LATERAL key word, which is described in . Here is an example using a set-returning function to enumerate elements of a tree structure: @@ -990,17 +990,17 @@ SELECT name, listchildren(name) FROM nodes; In the last SELECT, - notice that no output row appears for Child2, Child3, etc. + notice that no output row appears for Child2, Child3, etc. This happens because listchildren returns an empty set for those arguments, so no result rows are generated. This is the same behavior as we got from an inner join to the function result when using - the LATERAL syntax. + the LATERAL syntax. - PostgreSQL's behavior for a set-returning function in a + PostgreSQL's behavior for a set-returning function in a query's select list is almost exactly the same as if the set-returning - function had been written in a LATERAL FROM-clause item + function had been written in a LATERAL FROM-clause item instead. For example, SELECT x, generate_series(1,5) AS g FROM tab; @@ -1010,20 +1010,20 @@ SELECT x, generate_series(1,5) AS g FROM tab; SELECT x, g FROM tab, LATERAL generate_series(1,5) AS g; It would be exactly the same, except that in this specific example, - the planner could choose to put g on the outside of the - nestloop join, since g has no actual lateral dependency - on tab. That would result in a different output row + the planner could choose to put g on the outside of the + nestloop join, since g has no actual lateral dependency + on tab. That would result in a different output row order. Set-returning functions in the select list are always evaluated as though they are on the inside of a nestloop join with the rest of - the FROM clause, so that the function(s) are run to - completion before the next row from the FROM clause is + the FROM clause, so that the function(s) are run to + completion before the next row from the FROM clause is considered. If there is more than one set-returning function in the query's select list, the behavior is similar to what you get from putting the functions - into a single LATERAL ROWS FROM( ... ) FROM-clause + into a single LATERAL ROWS FROM( ... ) FROM-clause item. For each row from the underlying query, there is an output row using the first result from each function, then an output row using the second result, and so on. 
If some of the set-returning functions @@ -1031,48 +1031,48 @@ SELECT x, g FROM tab, LATERAL generate_series(1,5) AS g; missing data, so that the total number of rows emitted for one underlying row is the same as for the set-returning function that produced the most outputs. Thus the set-returning functions - run in lockstep until they are all exhausted, and then + run in lockstep until they are all exhausted, and then execution continues with the next underlying row. Set-returning functions can be nested in a select list, although that is - not allowed in FROM-clause items. In such cases, each level + not allowed in FROM-clause items. In such cases, each level of nesting is treated separately, as though it were - a separate LATERAL ROWS FROM( ... ) item. For example, in + a separate LATERAL ROWS FROM( ... ) item. For example, in SELECT srf1(srf2(x), srf3(y)), srf4(srf5(z)) FROM tab; - the set-returning functions srf2, srf3, - and srf5 would be run in lockstep for each row - of tab, and then srf1 and srf4 + the set-returning functions srf2, srf3, + and srf5 would be run in lockstep for each row + of tab, and then srf1 and srf4 would be applied in lockstep to each row produced by the lower functions. Set-returning functions cannot be used within conditional-evaluation - constructs, such as CASE or COALESCE. For + constructs, such as CASE or COALESCE. For example, consider SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab; It might seem that this should produce five repetitions of input rows - that have x > 0, and a single repetition of those that do - not; but actually, because generate_series(1, 5) would be - run in an implicit LATERAL FROM item before - the CASE expression is ever evaluated, it would produce five + that have x > 0, and a single repetition of those that do + not; but actually, because generate_series(1, 5) would be + run in an implicit LATERAL FROM item before + the CASE expression is ever evaluated, it would produce five repetitions of every input row. To reduce confusion, such cases produce a parse-time error instead. - If a function's last command is INSERT, UPDATE, - or DELETE with RETURNING, that command will + If a function's last command is INSERT, UPDATE, + or DELETE with RETURNING, that command will always be executed to completion, even if the function is not declared - with SETOF or the calling query does not fetch all the - result rows. Any extra rows produced by the RETURNING + with SETOF or the calling query does not fetch all the + result rows. Any extra rows produced by the RETURNING clause are silently dropped, but the commanded table modifications still happen (and are all completed before returning from the function). @@ -1080,7 +1080,7 @@ SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab; - Before PostgreSQL 10, putting more than one + Before PostgreSQL 10, putting more than one set-returning function in the same select list did not behave very sensibly unless they always produced equal numbers of rows. Otherwise, what you got was a number of output rows equal to the least common @@ -1089,10 +1089,10 @@ SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab; described above; instead, a set-returning function could have at most one set-returning argument, and each nest of set-returning functions was run independently. 
Also, conditional execution (set-returning - functions inside CASE etc) was previously allowed, + functions inside CASE etc) was previously allowed, complicating things even more. - Use of the LATERAL syntax is recommended when writing - queries that need to work in older PostgreSQL versions, + Use of the LATERAL syntax is recommended when writing + queries that need to work in older PostgreSQL versions, because that will give consistent results across different versions. If you have a query that is relying on conditional execution of a set-returning function, you may be able to fix it by moving the @@ -1115,13 +1115,13 @@ END$$ LANGUAGE plpgsql; SELECT x, case_generate_series(y > 0, 1, z, 5) FROM tab; This formulation will work the same in all versions - of PostgreSQL. + of PostgreSQL. - <acronym>SQL</acronym> Functions Returning <literal>TABLE</> + <acronym>SQL</acronym> Functions Returning <literal>TABLE</literal> function @@ -1131,12 +1131,12 @@ SELECT x, case_generate_series(y > 0, 1, z, 5) FROM tab; There is another way to declare a function as returning a set, which is to use the syntax - RETURNS TABLE(columns). - This is equivalent to using one or more OUT parameters plus - marking the function as returning SETOF record (or - SETOF a single output parameter's type, as appropriate). + RETURNS TABLE(columns). + This is equivalent to using one or more OUT parameters plus + marking the function as returning SETOF record (or + SETOF a single output parameter's type, as appropriate). This notation is specified in recent versions of the SQL standard, and - thus may be more portable than using SETOF. + thus may be more portable than using SETOF. @@ -1150,9 +1150,9 @@ RETURNS TABLE(sum int, product int) AS $$ $$ LANGUAGE SQL; - It is not allowed to use explicit OUT or INOUT - parameters with the RETURNS TABLE notation — you must - put all the output columns in the TABLE list. + It is not allowed to use explicit OUT or INOUT + parameters with the RETURNS TABLE notation — you must + put all the output columns in the TABLE list. @@ -1270,8 +1270,8 @@ SELECT concat_values('|', 1, 4, 2); <acronym>SQL</acronym> Functions with Collations - collation - in SQL functions + collation + in SQL functions @@ -1283,21 +1283,21 @@ SELECT concat_values('|', 1, 4, 2); then all the collatable parameters are treated as having that collation implicitly. This will affect the behavior of collation-sensitive operations within the function. For example, using the - anyleast function described above, the result of + anyleast function described above, the result of SELECT anyleast('abc'::text, 'ABC'); - will depend on the database's default collation. In C locale - the result will be ABC, but in many other locales it will - be abc. The collation to use can be forced by adding - a COLLATE clause to any of the arguments, for example + will depend on the database's default collation. In C locale + the result will be ABC, but in many other locales it will + be abc. The collation to use can be forced by adding + a COLLATE clause to any of the arguments, for example SELECT anyleast('abc'::text, 'ABC' COLLATE "C"); Alternatively, if you wish a function to operate with a particular collation regardless of what it is called with, insert - COLLATE clauses as needed in the function definition. - This version of anyleast would always use en_US + COLLATE clauses as needed in the function definition. 
+ This version of anyleast would always use en_US locale to compare strings: CREATE FUNCTION anyleast (VARIADIC anyarray) RETURNS anyelement AS $$ @@ -1358,24 +1358,24 @@ CREATE FUNCTION test(smallint, double precision) RETURNS ... A function that takes a single argument of a composite type should generally not have the same name as any attribute (field) of that type. - Recall that attribute(table) + Recall that attribute(table) is considered equivalent - to table.attribute. + to table.attribute. In the case that there is an ambiguity between a function on a composite type and an attribute of the composite type, the attribute will always be used. It is possible to override that choice by schema-qualifying the function name - (that is, schema.func(table) + (that is, schema.func(table) ) but it's better to avoid the problem by not choosing conflicting names. Another possible conflict is between variadic and non-variadic functions. - For instance, it is possible to create both foo(numeric) and - foo(VARIADIC numeric[]). In this case it is unclear which one + For instance, it is possible to create both foo(numeric) and + foo(VARIADIC numeric[]). In this case it is unclear which one should be matched to a call providing a single numeric argument, such as - foo(10.1). The rule is that the function appearing + foo(10.1). The rule is that the function appearing earlier in the search path is used, or if the two functions are in the same schema, the non-variadic one is preferred. @@ -1388,15 +1388,15 @@ CREATE FUNCTION test(smallint, double precision) RETURNS ... rule is violated, the behavior is not portable. You might get a run-time linker error, or one of the functions will get called (usually the internal one). The alternative form of the - AS clause for the SQL CREATE + AS clause for the SQL CREATE FUNCTION command decouples the SQL function name from the function name in the C source code. For instance: CREATE FUNCTION test(int) RETURNS int - AS 'filename', 'test_1arg' + AS 'filename', 'test_1arg' LANGUAGE C; CREATE FUNCTION test(int, int) RETURNS int - AS 'filename', 'test_2arg' + AS 'filename', 'test_2arg' LANGUAGE C; The names of the C functions here reflect one of many possible conventions. @@ -1421,9 +1421,9 @@ CREATE FUNCTION test(int, int) RETURNS int - Every function has a volatility classification, with - the possibilities being VOLATILE, STABLE, or - IMMUTABLE. VOLATILE is the default if the + Every function has a volatility classification, with + the possibilities being VOLATILE, STABLE, or + IMMUTABLE. VOLATILE is the default if the command does not specify a category. The volatility category is a promise to the optimizer about the behavior of the function: @@ -1431,7 +1431,7 @@ CREATE FUNCTION test(int, int) RETURNS int - A VOLATILE function can do anything, including modifying + A VOLATILE function can do anything, including modifying the database. It can return different results on successive calls with the same arguments. The optimizer makes no assumptions about the behavior of such functions. A query using a volatile function will @@ -1440,26 +1440,26 @@ CREATE FUNCTION test(int, int) RETURNS int - A STABLE function cannot modify the database and is + A STABLE function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement. This category allows the optimizer to optimize multiple calls of the function to a single call. 
In particular, it is safe to use an expression containing such a function in an index scan condition. (Since an index scan will evaluate the comparison value only once, not once at each - row, it is not valid to use a VOLATILE function in an + row, it is not valid to use a VOLATILE function in an index scan condition.) - An IMMUTABLE function cannot modify the database and is + An IMMUTABLE function cannot modify the database and is guaranteed to return the same results given the same arguments forever. This category allows the optimizer to pre-evaluate the function when a query calls it with constant arguments. For example, a query like - SELECT ... WHERE x = 2 + 2 can be simplified on sight to - SELECT ... WHERE x = 4, because the function underlying - the integer addition operator is marked IMMUTABLE. + SELECT ... WHERE x = 2 + 2 can be simplified on sight to + SELECT ... WHERE x = 4, because the function underlying + the integer addition operator is marked IMMUTABLE. @@ -1471,32 +1471,32 @@ CREATE FUNCTION test(int, int) RETURNS int - Any function with side-effects must be labeled - VOLATILE, so that calls to it cannot be optimized away. + Any function with side-effects must be labeled + VOLATILE, so that calls to it cannot be optimized away. Even a function with no side-effects needs to be labeled - VOLATILE if its value can change within a single query; - some examples are random(), currval(), - timeofday(). + VOLATILE if its value can change within a single query; + some examples are random(), currval(), + timeofday(). - Another important example is that the current_timestamp - family of functions qualify as STABLE, since their values do + Another important example is that the current_timestamp + family of functions qualify as STABLE, since their values do not change within a transaction. - There is relatively little difference between STABLE and - IMMUTABLE categories when considering simple interactive + There is relatively little difference between STABLE and + IMMUTABLE categories when considering simple interactive queries that are planned and immediately executed: it doesn't matter a lot whether a function is executed once during planning or once during query execution startup. But there is a big difference if the plan is - saved and reused later. Labeling a function IMMUTABLE when + saved and reused later. Labeling a function IMMUTABLE when it really isn't might allow it to be prematurely folded to a constant during planning, resulting in a stale value being re-used during subsequent uses of the plan. This is a hazard when using prepared statements or when using function languages that cache plans (such as - PL/pgSQL). + PL/pgSQL). @@ -1504,12 +1504,12 @@ CREATE FUNCTION test(int, int) RETURNS int languages, there is a second important property determined by the volatility category, namely the visibility of any data changes that have been made by the SQL command that is calling the function. A - VOLATILE function will see such changes, a STABLE - or IMMUTABLE function will not. This behavior is implemented + VOLATILE function will see such changes, a STABLE + or IMMUTABLE function will not. This behavior is implemented using the snapshotting behavior of MVCC (see ): - STABLE and IMMUTABLE functions use a snapshot + STABLE and IMMUTABLE functions use a snapshot established as of the start of the calling query, whereas - VOLATILE functions obtain a fresh snapshot at the start of + VOLATILE functions obtain a fresh snapshot at the start of each query they execute. 
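As a compact illustration of the three volatility categories described above (the table names tab and call_log are assumptions made for this example, not part of the surrounding text):

-- pure computation: the same arguments always give the same result
CREATE FUNCTION plus_one(integer) RETURNS integer
    AS 'SELECT $1 + 1' LANGUAGE SQL IMMUTABLE;

-- reads the database but does not modify it; results are fixed within one statement
CREATE FUNCTION rows_in_tab() RETURNS bigint
    AS 'SELECT count(*) FROM tab' LANGUAGE SQL STABLE;

-- has side effects, so it must be VOLATILE to keep calls from being optimized away
CREATE FUNCTION log_call() RETURNS void
    AS 'INSERT INTO call_log DEFAULT VALUES' LANGUAGE SQL VOLATILE;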
@@ -1522,41 +1522,41 @@ CREATE FUNCTION test(int, int) RETURNS int Because of this snapshotting behavior, - a function containing only SELECT commands can safely be - marked STABLE, even if it selects from tables that might be + a function containing only SELECT commands can safely be + marked STABLE, even if it selects from tables that might be undergoing modifications by concurrent queries. PostgreSQL will execute all commands of a - STABLE function using the snapshot established for the + STABLE function using the snapshot established for the calling query, and so it will see a fixed view of the database throughout that query. - The same snapshotting behavior is used for SELECT commands - within IMMUTABLE functions. It is generally unwise to select - from database tables within an IMMUTABLE function at all, + The same snapshotting behavior is used for SELECT commands + within IMMUTABLE functions. It is generally unwise to select + from database tables within an IMMUTABLE function at all, since the immutability will be broken if the table contents ever change. However, PostgreSQL does not enforce that you do not do that. - A common error is to label a function IMMUTABLE when its + A common error is to label a function IMMUTABLE when its results depend on a configuration parameter. For example, a function that manipulates timestamps might well have results that depend on the setting. For safety, such functions should - be labeled STABLE instead. + be labeled STABLE instead. - PostgreSQL requires that STABLE - and IMMUTABLE functions contain no SQL commands other - than SELECT to prevent data modification. + PostgreSQL requires that STABLE + and IMMUTABLE functions contain no SQL commands other + than SELECT to prevent data modification. (This is not a completely bulletproof test, since such functions could - still call VOLATILE functions that modify the database. - If you do that, you will find that the STABLE or - IMMUTABLE function does not notice the database changes + still call VOLATILE functions that modify the database. + If you do that, you will find that the STABLE or + IMMUTABLE function does not notice the database changes applied by the called function, since they are hidden from its snapshot.) @@ -1569,7 +1569,7 @@ CREATE FUNCTION test(int, int) RETURNS int PostgreSQL allows user-defined functions to be written in other languages besides SQL and C. These other languages are generically called procedural - languages (PLs). + languages (PLs). Procedural languages aren't built into the PostgreSQL server; they are offered by loadable modules. @@ -1581,7 +1581,7 @@ CREATE FUNCTION test(int, int) RETURNS int Internal Functions - functioninternal + functioninternal Internal functions are functions written in C that have been statically @@ -1635,8 +1635,8 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision be made compatible with C, such as C++). Such functions are compiled into dynamically loadable objects (also called shared libraries) and are loaded by the server on demand. The dynamic - loading feature is what distinguishes C language functions - from internal functions — the actual coding conventions + loading feature is what distinguishes C language functions + from internal functions — the actual coding conventions are essentially the same for both. (Hence, the standard internal function library is a rich source of coding examples for user-defined C functions.) 
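At the SQL level, a dynamically loaded C function is declared roughly as in the sketch below; 'funcs' (the shared-library name) and double_it (the exported C symbol) are placeholders, and the rules for resolving the library name to an actual file are described next:

-- a sketch: the AS clause names the shared library and, optionally, the C symbol
CREATE FUNCTION double_it(integer) RETURNS integer
    AS 'funcs', 'double_it'
    LANGUAGE C STRICT;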
@@ -1683,9 +1683,9 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision If the name starts with the string $libdir, - that part is replaced by the PostgreSQL package + that part is replaced by the PostgreSQL package library directory - name, which is determined at build time.$libdir + name, which is determined at build time.$libdir @@ -1693,7 +1693,7 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision If the name does not contain a directory part, the file is searched for in the path specified by the configuration variable - .dynamic_library_path + .dynamic_library_path @@ -1742,7 +1742,7 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision PostgreSQL will not compile a C function automatically. The object file must be compiled before it is referenced in a CREATE - FUNCTION command. See for additional + FUNCTION command. See for additional information. @@ -1754,12 +1754,12 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision To ensure that a dynamically loaded object file is not loaded into an incompatible server, PostgreSQL checks that the - file contains a magic block with the appropriate contents. + file contains a magic block with the appropriate contents. This allows the server to detect obvious incompatibilities, such as code compiled for a different major version of PostgreSQL. To include a magic block, write this in one (and only one) of the module source files, after having - included the header fmgr.h: + included the header fmgr.h: PG_MODULE_MAGIC; @@ -1790,12 +1790,12 @@ PG_MODULE_MAGIC; Optionally, a dynamically loaded file can contain initialization and finalization functions. If the file includes a function named - _PG_init, that function will be called immediately after + _PG_init, that function will be called immediately after loading the file. The function receives no parameters and should return void. If the file includes a function named - _PG_fini, that function will be called immediately before + _PG_fini, that function will be called immediately before unloading the file. Likewise, the function receives no parameters and - should return void. Note that _PG_fini will only be called + should return void. Note that _PG_fini will only be called during an unload of the file, not during process termination. (Presently, unloads are disabled and will never occur, but this may change in the future.) @@ -1915,7 +1915,7 @@ typedef struct - Never modify the contents of a pass-by-reference input + Never modify the contents of a pass-by-reference input value. If you do so you are likely to corrupt on-disk data, since the pointer you are given might point directly into a disk buffer. The sole exception to this rule is explained in @@ -1934,7 +1934,7 @@ typedef struct { } text; - The [FLEXIBLE_ARRAY_MEMBER] notation means that the actual + The [FLEXIBLE_ARRAY_MEMBER] notation means that the actual length of the data part is not specified by this declaration. @@ -1942,7 +1942,7 @@ typedef struct { When manipulating variable-length types, we must be careful to allocate the correct amount of memory and set the length field correctly. 
- For example, if we wanted to store 40 bytes in a text + For example, if we wanted to store 40 bytes in a text structure, we might use a code fragment like this: data, buffer, 40); ]]> - VARHDRSZ is the same as sizeof(int32), but - it's considered good style to use the macro VARHDRSZ + VARHDRSZ is the same as sizeof(int32), but + it's considered good style to use the macro VARHDRSZ to refer to the size of the overhead for a variable-length type. - Also, the length field must be set using the - SET_VARSIZE macro, not by simple assignment. + Also, the length field must be set using the + SET_VARSIZE macro, not by simple assignment. specifies which C type corresponds to which SQL type when writing a C-language function - that uses a built-in type of PostgreSQL. + that uses a built-in type of PostgreSQL. The Defined In column gives the header file that needs to be included to get the type definition. (The actual definition might be in a different file that is included by the @@ -2175,8 +2175,8 @@ PG_FUNCTION_INFO_V1(funcname); must appear in the same source file. (Conventionally, it's written just before the function itself.) This macro call is not - needed for internal-language functions, since - PostgreSQL assumes that all internal functions + needed for internal-language functions, since + PostgreSQL assumes that all internal functions use the version-1 convention. It is, however, required for dynamically-loaded functions. @@ -2332,8 +2332,8 @@ CREATE FUNCTION concat_text(text, text) RETURNS text directory of the shared library file (for instance the PostgreSQL tutorial directory, which contains the code for the examples used in this section). - (Better style would be to use just 'funcs' in the - AS clause, after having added + (Better style would be to use just 'funcs' in the + AS clause, after having added DIRECTORY to the search path. In any case, we can omit the system-specific extension for a shared library, commonly .so.) @@ -2350,16 +2350,16 @@ CREATE FUNCTION concat_text(text, text) RETURNS text At first glance, the version-1 coding conventions might appear to be just - pointless obscurantism, over using plain C calling - conventions. They do however allow to deal with NULLable + pointless obscurantism, over using plain C calling + conventions. They do however allow to deal with NULLable arguments/return values, and toasted (compressed or out-of-line) values. - The macro PG_ARGISNULL(n) + The macro PG_ARGISNULL(n) allows a function to test whether each input is null. (Of course, doing - this is only necessary in functions not declared strict.) + this is only necessary in functions not declared strict.) As with the PG_GETARG_xxx() macros, the input arguments are counted beginning at zero. Note that one @@ -2394,8 +2394,8 @@ CREATE FUNCTION concat_text(text, text) RETURNS text ALTER TABLE tablename ALTER COLUMN colname SET STORAGE storagetype. storagetype is one of - plain, external, extended, - or main.) + plain, external, extended, + or main.) @@ -2433,8 +2433,8 @@ CREATE FUNCTION concat_text(text, text) RETURNS text Use pg_config - --includedir-serverpg_configwith user-defined C functions - to find out where the PostgreSQL server header + --includedir-serverpg_configwith user-defined C functions + to find out where the PostgreSQL server header files are installed on your system (or the system that your users will be running on). 
@@ -2452,7 +2452,7 @@ CREATE FUNCTION concat_text(text, text) RETURNS text - Remember to define a magic block for your shared library, + Remember to define a magic block for your shared library, as described in . @@ -2461,7 +2461,7 @@ CREATE FUNCTION concat_text(text, text) RETURNS text When allocating memory, use the PostgreSQL functions - pallocpalloc and pfreepfree + pallocpalloc and pfreepfree instead of the corresponding C library functions malloc and free. The memory allocated by palloc will be @@ -2472,8 +2472,8 @@ CREATE FUNCTION concat_text(text, text) RETURNS text - Always zero the bytes of your structures using memset - (or allocate them with palloc0 in the first place). + Always zero the bytes of your structures using memset + (or allocate them with palloc0 in the first place). Even if you assign to each field of your structure, there might be alignment padding (holes in the structure) that contain garbage values. Without this, it's difficult to @@ -2493,7 +2493,7 @@ CREATE FUNCTION concat_text(text, text) RETURNS text (PG_FUNCTION_ARGS, etc.) are in fmgr.h, so you will need to include at least these two files. For portability reasons it's best to - include postgres.h first, + include postgres.h first, before any other system or user header files. Including postgres.h will also include elog.h and palloc.h @@ -2539,7 +2539,7 @@ SELECT name, c_overpaid(emp, 1500) AS overpaid Using the version-1 calling conventions, we can define - c_overpaid as: + c_overpaid as: - Notice we have used STRICT so that we did not have to + Notice we have used STRICT so that we did not have to check whether the input arguments were NULL. @@ -2619,87 +2619,87 @@ CREATE FUNCTION c_overpaid(emp, integer) RETURNS boolean There are two ways you can build a composite data value (henceforth - a tuple): you can build it from an array of Datum values, + a tuple): you can build it from an array of Datum values, or from an array of C strings that can be passed to the input conversion functions of the tuple's column data types. In either - case, you first need to obtain or construct a TupleDesc + case, you first need to obtain or construct a TupleDesc descriptor for the tuple structure. When working with Datums, you - pass the TupleDesc to BlessTupleDesc, - and then call heap_form_tuple for each row. When working - with C strings, you pass the TupleDesc to - TupleDescGetAttInMetadata, and then call - BuildTupleFromCStrings for each row. In the case of a + pass the TupleDesc to BlessTupleDesc, + and then call heap_form_tuple for each row. When working + with C strings, you pass the TupleDesc to + TupleDescGetAttInMetadata, and then call + BuildTupleFromCStrings for each row. In the case of a function returning a set of tuples, the setup steps can all be done once during the first call of the function. Several helper functions are available for setting up the needed - TupleDesc. The recommended way to do this in most + TupleDesc. The recommended way to do this in most functions returning composite values is to call: TypeFuncClass get_call_result_type(FunctionCallInfo fcinfo, Oid *resultTypeId, TupleDesc *resultTupleDesc) - passing the same fcinfo struct passed to the calling function + passing the same fcinfo struct passed to the calling function itself. (This of course requires that you use the version-1 - calling conventions.) resultTypeId can be specified - as NULL or as the address of a local variable to receive the - function's result type OID. 
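As a SQL-level aside on where get_call_result_type gets its information: a C function declared simply as RETURNS record has no fixed column set of its own. When it is called in FROM with a column definition list, that list is what supplies the row type; calling it without one produces the "cannot accept type record" error mentioned above. The function name below is hypothetical:

-- hypothetical call: the AS t(...) column definition list provides the row type
-- that the function's C code can then resolve via get_call_result_type()
SELECT * FROM my_c_record_fn('x') AS t(a int, b text);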
resultTupleDesc should be the - address of a local TupleDesc variable. Check that the - result is TYPEFUNC_COMPOSITE; if so, - resultTupleDesc has been filled with the needed - TupleDesc. (If it is not, you can report an error along + calling conventions.) resultTypeId can be specified + as NULL or as the address of a local variable to receive the + function's result type OID. resultTupleDesc should be the + address of a local TupleDesc variable. Check that the + result is TYPEFUNC_COMPOSITE; if so, + resultTupleDesc has been filled with the needed + TupleDesc. (If it is not, you can report an error along the lines of function returning record called in context that cannot accept type record.) - get_call_result_type can resolve the actual type of a + get_call_result_type can resolve the actual type of a polymorphic function result; so it is useful in functions that return scalar polymorphic results, not only functions that return composites. - The resultTypeId output is primarily useful for functions + The resultTypeId output is primarily useful for functions returning polymorphic scalars. - get_call_result_type has a sibling - get_expr_result_type, which can be used to resolve the + get_call_result_type has a sibling + get_expr_result_type, which can be used to resolve the expected output type for a function call represented by an expression tree. This can be used when trying to determine the result type from outside the function itself. There is also - get_func_result_type, which can be used when only the + get_func_result_type, which can be used when only the function's OID is available. However these functions are not able - to deal with functions declared to return record, and - get_func_result_type cannot resolve polymorphic types, - so you should preferentially use get_call_result_type. + to deal with functions declared to return record, and + get_func_result_type cannot resolve polymorphic types, + so you should preferentially use get_call_result_type. Older, now-deprecated functions for obtaining - TupleDescs are: + TupleDescs are: TupleDesc RelationNameGetTupleDesc(const char *relname) - to get a TupleDesc for the row type of a named relation, + to get a TupleDesc for the row type of a named relation, and: TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases) - to get a TupleDesc based on a type OID. This can - be used to get a TupleDesc for a base or + to get a TupleDesc based on a type OID. This can + be used to get a TupleDesc for a base or composite type. It will not work for a function that returns - record, however, and it cannot resolve polymorphic + record, however, and it cannot resolve polymorphic types. - Once you have a TupleDesc, call: + Once you have a TupleDesc, call: TupleDesc BlessTupleDesc(TupleDesc tupdesc) @@ -2709,8 +2709,8 @@ AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc) if you plan to work with C strings. If you are writing a function returning set, you can save the results of these functions in the - FuncCallContext structure — use the - tuple_desc or attinmeta field + FuncCallContext structure — use the + tuple_desc or attinmeta field respectively. @@ -2719,7 +2719,7 @@ AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc) HeapTuple heap_form_tuple(TupleDesc tupdesc, Datum *values, bool *isnull) - to build a HeapTuple given user data in Datum form. + to build a HeapTuple given user data in Datum form. 
@@ -2727,24 +2727,24 @@ HeapTuple heap_form_tuple(TupleDesc tupdesc, Datum *values, bool *isnull) HeapTuple BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values) - to build a HeapTuple given user data + to build a HeapTuple given user data in C string form. values is an array of C strings, one for each attribute of the return row. Each C string should be in the form expected by the input function of the attribute data type. In order to return a null value for one of the attributes, - the corresponding pointer in the values array - should be set to NULL. This function will need to + the corresponding pointer in the values array + should be set to NULL. This function will need to be called again for each row you return. Once you have built a tuple to return from your function, it - must be converted into a Datum. Use: + must be converted into a Datum. Use: HeapTupleGetDatum(HeapTuple tuple) - to convert a HeapTuple into a valid Datum. This - Datum can be returned directly if you intend to return + to convert a HeapTuple into a valid Datum. This + Datum can be returned directly if you intend to return just a single row, or it can be used as the current return value in a set-returning function. @@ -2767,13 +2767,13 @@ HeapTupleGetDatum(HeapTuple tuple) - A set-returning function (SRF) is called - once for each item it returns. The SRF must + A set-returning function (SRF) is called + once for each item it returns. The SRF must therefore save enough state to remember what it was doing and return the next item on each call. - The structure FuncCallContext is provided to help - control this process. Within a function, fcinfo->flinfo->fn_extra - is used to hold a pointer to FuncCallContext + The structure FuncCallContext is provided to help + control this process. Within a function, fcinfo->flinfo->fn_extra + is used to hold a pointer to FuncCallContext across calls. typedef struct FuncCallContext @@ -2847,9 +2847,9 @@ typedef struct FuncCallContext - An SRF uses several functions and macros that - automatically manipulate the FuncCallContext - structure (and expect to find it via fn_extra). Use: + An SRF uses several functions and macros that + automatically manipulate the FuncCallContext + structure (and expect to find it via fn_extra). Use: SRF_IS_FIRSTCALL() @@ -2858,12 +2858,12 @@ SRF_IS_FIRSTCALL() SRF_FIRSTCALL_INIT() - to initialize the FuncCallContext. On every function call, + to initialize the FuncCallContext. On every function call, including the first, use: SRF_PERCALL_SETUP() - to properly set up for using the FuncCallContext + to properly set up for using the FuncCallContext and clearing any previously returned data left over from the previous pass. @@ -2873,27 +2873,27 @@ SRF_PERCALL_SETUP() SRF_RETURN_NEXT(funcctx, result) - to return it to the caller. (result must be of type - Datum, either a single value or a tuple prepared as + to return it to the caller. (result must be of type + Datum, either a single value or a tuple prepared as described above.) Finally, when your function is finished returning data, use: SRF_RETURN_DONE(funcctx) - to clean up and end the SRF. + to clean up and end the SRF. - The memory context that is current when the SRF is called is + The memory context that is current when the SRF is called is a transient context that will be cleared between calls. This means - that you do not need to call pfree on everything - you allocated using palloc; it will go away anyway. 
However, if you want to allocate + that you do not need to call pfree on everything + you allocated using palloc; it will go away anyway. However, if you want to allocate any data structures to live across calls, you need to put them somewhere else. The memory context referenced by - multi_call_memory_ctx is a suitable location for any - data that needs to survive until the SRF is finished running. In most + multi_call_memory_ctx is a suitable location for any + data that needs to survive until the SRF is finished running. In most cases, this means that you should switch into - multi_call_memory_ctx while doing the first-call setup. + multi_call_memory_ctx while doing the first-call setup. @@ -2904,8 +2904,8 @@ SRF_RETURN_DONE(funcctx) PG_GETARG_xxx macro) in the transient context then the detoasted copies will be freed on each cycle. Accordingly, if you keep references to such values in - your user_fctx, you must either copy them into the - multi_call_memory_ctx after detoasting, or ensure + your user_fctx, you must either copy them into the + multi_call_memory_ctx after detoasting, or ensure that you detoast the values only in that context. @@ -2959,7 +2959,7 @@ my_set_returning_function(PG_FUNCTION_ARGS) - A complete example of a simple SRF returning a composite type + A complete example of a simple SRF returning a composite type looks like: filename', 'retcomposite' + AS 'filename', 'retcomposite' LANGUAGE C IMMUTABLE STRICT; A different way is to use OUT parameters: @@ -3067,15 +3067,15 @@ CREATE OR REPLACE FUNCTION retcomposite(integer, integer) CREATE OR REPLACE FUNCTION retcomposite(IN integer, IN integer, OUT f1 integer, OUT f2 integer, OUT f3 integer) RETURNS SETOF record - AS 'filename', 'retcomposite' + AS 'filename', 'retcomposite' LANGUAGE C IMMUTABLE STRICT; Notice that in this method the output type of the function is formally - an anonymous record type. + an anonymous record type. - The directory contrib/tablefunc + The directory contrib/tablefunc module in the source distribution contains more examples of set-returning functions. @@ -3093,20 +3093,20 @@ CREATE OR REPLACE FUNCTION retcomposite(IN integer, IN integer, of polymorphic functions. When function arguments or return types are defined as polymorphic types, the function author cannot know in advance what data type it will be called with, or - need to return. There are two routines provided in fmgr.h + need to return. There are two routines provided in fmgr.h to allow a version-1 C function to discover the actual data types of its arguments and the type it is expected to return. The routines are - called get_fn_expr_rettype(FmgrInfo *flinfo) and - get_fn_expr_argtype(FmgrInfo *flinfo, int argnum). + called get_fn_expr_rettype(FmgrInfo *flinfo) and + get_fn_expr_argtype(FmgrInfo *flinfo, int argnum). They return the result or argument type OID, or InvalidOid if the information is not available. - The structure flinfo is normally accessed as - fcinfo->flinfo. The parameter argnum - is zero based. get_call_result_type can also be used - as an alternative to get_fn_expr_rettype. - There is also get_fn_expr_variadic, which can be used to + The structure flinfo is normally accessed as + fcinfo->flinfo. The parameter argnum + is zero based. get_call_result_type can also be used + as an alternative to get_fn_expr_rettype. + There is also get_fn_expr_variadic, which can be used to find out whether variadic arguments have been merged into an array. 
- This is primarily useful for VARIADIC "any" functions, + This is primarily useful for VARIADIC "any" functions, since such merging will always have occurred for variadic functions taking ordinary array types. @@ -3174,23 +3174,23 @@ CREATE FUNCTION make_array(anyelement) RETURNS anyarray There is a variant of polymorphism that is only available to C-language functions: they can be declared to take parameters of type - "any". (Note that this type name must be double-quoted, + "any". (Note that this type name must be double-quoted, since it's also a SQL reserved word.) This works like - anyelement except that it does not constrain different - "any" arguments to be the same type, nor do they help + anyelement except that it does not constrain different + "any" arguments to be the same type, nor do they help determine the function's result type. A C-language function can also - declare its final parameter to be VARIADIC "any". This will + declare its final parameter to be VARIADIC "any". This will match one or more actual arguments of any type (not necessarily the same - type). These arguments will not be gathered into an array + type). These arguments will not be gathered into an array as happens with normal variadic functions; they will just be passed to - the function separately. The PG_NARGS() macro and the + the function separately. The PG_NARGS() macro and the methods described above must be used to determine the number of actual arguments and their types when using this feature. Also, users of such - a function might wish to use the VARIADIC keyword in their + a function might wish to use the VARIADIC keyword in their function call, with the expectation that the function would treat the array elements as separate arguments. The function itself must implement - that behavior if wanted, after using get_fn_expr_variadic to - detect that the actual argument was marked with VARIADIC. + that behavior if wanted, after using get_fn_expr_variadic to + detect that the actual argument was marked with VARIADIC. @@ -3200,22 +3200,22 @@ CREATE FUNCTION make_array(anyelement) RETURNS anyarray Some function calls can be simplified during planning based on properties specific to the function. For example, - int4mul(n, 1) could be simplified to just n. + int4mul(n, 1) could be simplified to just n. To define such function-specific optimizations, write a - transform function and place its OID in the - protransform field of the primary function's - pg_proc entry. The transform function must have the SQL - signature protransform(internal) RETURNS internal. The - argument, actually FuncExpr *, is a dummy node representing a + transform function and place its OID in the + protransform field of the primary function's + pg_proc entry. The transform function must have the SQL + signature protransform(internal) RETURNS internal. The + argument, actually FuncExpr *, is a dummy node representing a call to the primary function. If the transform function's study of the expression tree proves that a simplified expression tree can substitute for all possible concrete calls represented thereby, build and return - that simplified expression. Otherwise, return a NULL - pointer (not a SQL null). + that simplified expression. Otherwise, return a NULL + pointer (not a SQL null). - We make no guarantee that PostgreSQL will never call the + We make no guarantee that PostgreSQL will never call the primary function in cases that the transform function could simplify. 
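 To make the shape of such a function concrete, here is a rough, hypothetical sketch (the function name and the node handling are simplified, and installing the function's OID into pg_proc.protransform is a separate catalog step not shown) that simplifies a two-argument call whose second argument is the integer constant 1:

#include "postgres.h"
#include "fmgr.h"
#include "catalog/pg_type.h"
#include "nodes/nodeFuncs.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_mul_transform);

/*
 * Hypothetical transform function: simplify f(x, 1) to just x.
 * Returns a NULL pointer (not an SQL null) when no simplification applies.
 */
Datum
my_mul_transform(PG_FUNCTION_ARGS)
{
    FuncExpr   *expr = (FuncExpr *) PG_GETARG_POINTER(0);
    Node       *ret = NULL;

    Assert(IsA(expr, FuncExpr));

    if (list_length(expr->args) == 2)
    {
        Node       *arg1 = (Node *) linitial(expr->args);
        Node       *arg2 = (Node *) lsecond(expr->args);

        if (IsA(arg2, Const) &&
            ((Const *) arg2)->consttype == INT4OID &&
            !((Const *) arg2)->constisnull &&
            DatumGetInt32(((Const *) arg2)->constvalue) == 1)
            ret = arg1;         /* f(x, 1) simplifies to x */
    }

    PG_RETURN_POINTER(ret);
}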
Ensure rigorous equivalence between the simplified expression and an actual call to the primary function. @@ -3235,26 +3235,26 @@ CREATE FUNCTION make_array(anyelement) RETURNS anyarray Add-ins can reserve LWLocks and an allocation of shared memory on server startup. The add-in's shared library must be preloaded by specifying it in - shared_preload_libraries. + shared_preload_libraries. Shared memory is reserved by calling: void RequestAddinShmemSpace(int size) - from your _PG_init function. + from your _PG_init function. LWLocks are reserved by calling: void RequestNamedLWLockTranche(const char *tranche_name, int num_lwlocks) - from _PG_init. This will ensure that an array of - num_lwlocks LWLocks is available under the name - tranche_name. Use GetNamedLWLockTranche + from _PG_init. This will ensure that an array of + num_lwlocks LWLocks is available under the name + tranche_name. Use GetNamedLWLockTranche to get a pointer to this array. To avoid possible race-conditions, each backend should use the LWLock - AddinShmemInitLock when connecting to and initializing + AddinShmemInitLock when connecting to and initializing its allocation of shared memory, as shown here: static mystruct *ptr = NULL; @@ -3294,7 +3294,7 @@ if (!ptr) All functions accessed by the backend must present a C interface to the backend; these C functions can then call C++ functions. - For example, extern C linkage is required for + For example, extern C linkage is required for backend-accessed functions. This is also necessary for any functions that are passed as pointers between the backend and C++ code. @@ -3303,30 +3303,30 @@ if (!ptr) Free memory using the appropriate deallocation method. For example, - most backend memory is allocated using palloc(), so use - pfree() to free it. Using C++ - delete in such cases will fail. + most backend memory is allocated using palloc(), so use + pfree() to free it. Using C++ + delete in such cases will fail. Prevent exceptions from propagating into the C code (use a catch-all - block at the top level of all extern C functions). This + block at the top level of all extern C functions). This is necessary even if the C++ code does not explicitly throw any exceptions, because events like out-of-memory can still throw exceptions. Any exceptions must be caught and appropriate errors passed back to the C interface. If possible, compile C++ with - to eliminate exceptions entirely; in such cases, you must check for failures in your C++ code, e.g. check for - NULL returned by new(). + NULL returned by new(). If calling backend functions from C++ code, be sure that the C++ call stack contains only plain old data structures - (POD). This is necessary because backend errors - generate a distant longjmp() that does not properly + (POD). This is necessary because backend errors + generate a distant longjmp() that does not properly unroll a C++ call stack with non-POD objects. @@ -3335,7 +3335,7 @@ if (!ptr) In summary, it is best to place C++ code behind a wall of - extern C functions that interface to the backend, + extern C functions that interface to the backend, and avoid exception, memory, and call stack leakage. diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index b951a58e0a..520eab8e99 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -12,14 +12,14 @@ The procedures described thus far let you define new types, new functions, and new operators. However, we cannot yet define an index on a column of a new data type. 
To do this, we must define an - operator class for the new data type. Later in this + operator class for the new data type. Later in this section, we will illustrate this concept in an example: a new operator class for the B-tree index method that stores and sorts complex numbers in ascending absolute value order. - Operator classes can be grouped into operator families + Operator classes can be grouped into operator families to show the relationships between semantically compatible classes. When only a single data type is involved, an operator class is sufficient, so we'll focus on that case first and then return to operator families. @@ -43,16 +43,16 @@ The routines for an index method do not directly know anything about the data types that the index method will operate on. Instead, an operator - classoperator class + classoperator class identifies the set of operations that the index method needs to use to work with a particular data type. Operator classes are so called because one thing they specify is the set of - WHERE-clause operators that can be used with an index + WHERE-clause operators that can be used with an index (i.e., can be converted into an index-scan qualification). An operator class can also specify some support - procedures that are needed by the internal operations of the + procedures that are needed by the internal operations of the index method, but do not directly correspond to any - WHERE-clause operator that can be used with the index. + WHERE-clause operator that can be used with the index. @@ -83,17 +83,17 @@ The operators associated with an operator class are identified by - strategy numbers, which serve to identify the semantics of + strategy numbers, which serve to identify the semantics of each operator within the context of its operator class. For example, B-trees impose a strict ordering on keys, lesser to greater, - and so operators like less than and greater than or equal - to are interesting with respect to a B-tree. + and so operators like less than and greater than or equal + to are interesting with respect to a B-tree. Because PostgreSQL allows the user to define operators, PostgreSQL cannot look at the name of an operator - (e.g., < or >=) and tell what kind of + (e.g., < or >=) and tell what kind of comparison it is. Instead, the index method defines a set of - strategies, which can be thought of as generalized operators. + strategies, which can be thought of as generalized operators. Each operator class specifies which actual operator corresponds to each strategy for a particular data type and interpretation of the index semantics. @@ -163,11 +163,11 @@ GiST indexes are more flexible: they do not have a fixed set of - strategies at all. Instead, the consistency support routine + strategies at all. Instead, the consistency support routine of each particular GiST operator class interprets the strategy numbers however it likes. As an example, several of the built-in GiST index operator classes index two-dimensional geometric objects, providing - the R-tree strategies shown in + the R-tree strategies shown in . Four of these are true two-dimensional tests (overlaps, same, contains, contained by); four of them consider only the X direction; and the other four @@ -175,7 +175,7 @@
- GiST Two-Dimensional <quote>R-tree</> Strategies + GiST Two-Dimensional <quote>R-tree</quote> Strategies @@ -327,7 +327,7 @@ don't have a fixed set of strategies either. Instead the support routines of each operator class interpret the strategy numbers according to the operator class's definition. As an example, the strategy numbers used by - the built-in Minmax operator classes are shown in + the built-in Minmax operator classes are shown in . @@ -369,8 +369,8 @@ Notice that all the operators listed above return Boolean values. In practice, all operators defined as index method search operators must return type boolean, since they must appear at the top - level of a WHERE clause to be used with an index. - (Some index access methods also support ordering operators, + level of a WHERE clause to be used with an index. + (Some index access methods also support ordering operators, which typically don't return Boolean values; that feature is discussed in .) @@ -396,7 +396,7 @@ functions should play each of these roles for a given data type and semantic interpretation. The index method defines the set of functions it needs, and the operator class identifies the correct - functions to use by assigning them to the support function numbers + functions to use by assigning them to the support function numbers specified by the index method. @@ -427,7 +427,7 @@ Return the addresses of C-callable sort support function(s), - as documented in utils/sortsupport.h (optional) + as documented in utils/sortsupport.h (optional) 2 @@ -485,52 +485,52 @@ - consistent + consistent determine whether key satisfies the query qualifier 1 - union + union compute union of a set of keys 2 - compress + compress compute a compressed representation of a key or value to be indexed 3 - decompress + decompress compute a decompressed representation of a compressed key 4 - penalty + penalty compute penalty for inserting new key into subtree with given subtree's key 5 - picksplit + picksplit determine which entries of a page are to be moved to the new page and compute the union keys for resulting pages 6 - equal + equal compare two keys and return true if they are equal 7 - distance + distance determine distance from key to query value (optional) 8 - fetch + fetch compute original representation of a compressed key for index-only scans (optional) 9 @@ -557,28 +557,28 @@ - config + config provide basic information about the operator class 1 - choose + choose determine how to insert a new value into an inner tuple 2 - picksplit + picksplit determine how to partition a set of values 3 - inner_consistent + inner_consistent determine which sub-partitions need to be searched for a query 4 - leaf_consistent + leaf_consistent determine whether key satisfies the query qualifier 5 @@ -605,7 +605,7 @@ - compare + compare compare two keys and return an integer less than zero, zero, or greater than zero, indicating whether the first key is less than, @@ -614,17 +614,17 @@ 1 - extractValue + extractValue extract keys from a value to be indexed 2 - extractQuery + extractQuery extract keys from a query condition 3 - consistent + consistent determine whether value matches query condition (Boolean variant) (optional if support function 6 is present) @@ -632,7 +632,7 @@ 4 - comparePartial + comparePartial compare partial key from query and key from index, and return an integer less than zero, zero, @@ -642,7 +642,7 @@ 5 - triConsistent + triConsistent determine whether value matches query condition (ternary variant) (optional if support function 
4 is present) @@ -672,7 +672,7 @@ - opcInfo + opcInfo return internal information describing the indexed columns' summary data @@ -680,17 +680,17 @@ 1 - add_value + add_value add a new value to an existing summary index tuple 2 - consistent + consistent determine whether value matches query condition 3 - union + union compute union of two summary tuples @@ -730,11 +730,11 @@ B-trees, the operators we require are: - absolute-value less-than (strategy 1) - absolute-value less-than-or-equal (strategy 2) - absolute-value equal (strategy 3) - absolute-value greater-than-or-equal (strategy 4) - absolute-value greater-than (strategy 5) + absolute-value less-than (strategy 1) + absolute-value less-than-or-equal (strategy 2) + absolute-value equal (strategy 3) + absolute-value greater-than-or-equal (strategy 4) + absolute-value greater-than (strategy 5) @@ -817,7 +817,7 @@ CREATE OPERATOR < ( type we'd probably want = to be the ordinary equality operation for complex numbers (and not the equality of the absolute values). In that case, we'd need to use some other - operator name for complex_abs_eq. + operator name for complex_abs_eq. @@ -894,7 +894,7 @@ CREATE OPERATOR CLASS complex_abs_ops The above example assumes that you want to make this new operator class the default B-tree operator class for the complex data type. - If you don't, just leave out the word DEFAULT. + If you don't, just leave out the word DEFAULT. @@ -917,11 +917,11 @@ CREATE OPERATOR CLASS complex_abs_ops To handle these needs, PostgreSQL uses the concept of an operator - familyoperator family. + familyoperator family. An operator family contains one or more operator classes, and can also contain indexable operators and corresponding support functions that belong to the family as a whole but not to any single class within the - family. We say that such operators and functions are loose + family. We say that such operators and functions are loose within the family, as opposed to being bound into a specific class. Typically each operator class contains single-data-type operators while cross-data-type operators are loose in the family. @@ -947,10 +947,10 @@ CREATE OPERATOR CLASS complex_abs_ops As an example, PostgreSQL has a built-in - B-tree operator family integer_ops, which includes operator - classes int8_ops, int4_ops, and - int2_ops for indexes on bigint (int8), - integer (int4), and smallint (int2) + B-tree operator family integer_ops, which includes operator + classes int8_ops, int4_ops, and + int2_ops for indexes on bigint (int8), + integer (int4), and smallint (int2) columns respectively. The family also contains cross-data-type comparison operators allowing any two of these types to be compared, so that an index on one of these types can be searched using a comparison value of another @@ -1043,7 +1043,7 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD ]]> - Notice that this definition overloads the operator strategy and + Notice that this definition overloads the operator strategy and support function numbers: each number occurs multiple times within the family. This is allowed so long as each instance of a particular number has distinct input data types. The instances that have @@ -1056,8 +1056,8 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD In a B-tree operator family, all the operators in the family must sort compatibly, meaning that the transitive laws hold across all the data types - supported by the family: if A = B and B = C, then A = C, - and if A < B and B < C, then A < C. 
Moreover, implicit + supported by the family: if A = B and B = C, then A = C, + and if A < B and B < C, then A < C. Moreover, implicit or binary coercion casts between types represented in the operator family must not change the associated sort ordering. For each operator in the family there must be a support function having the same @@ -1094,7 +1094,7 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD In BRIN, the requirements depends on the framework that provides the - operator classes. For operator classes based on minmax, + operator classes. For operator classes based on minmax, the behavior required is the same as for B-tree operator families: all the operators in the family must sort compatibly, and casts must not change the associated sort ordering. @@ -1128,14 +1128,14 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD - In particular, there are SQL features such as ORDER BY and - DISTINCT that require comparison and sorting of values. + In particular, there are SQL features such as ORDER BY and + DISTINCT that require comparison and sorting of values. To implement these features on a user-defined data type, PostgreSQL looks for the default B-tree operator - class for the data type. The equals member of this operator + class for the data type. The equals member of this operator class defines the system's notion of equality of values for - GROUP BY and DISTINCT, and the sort ordering - imposed by the operator class defines the default ORDER BY + GROUP BY and DISTINCT, and the sort ordering + imposed by the operator class defines the default ORDER BY ordering. @@ -1153,7 +1153,7 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD When there is no default operator class for a data type, you will get - errors like could not identify an ordering operator if you + errors like could not identify an ordering operator if you try to use these SQL features with the data type. @@ -1161,7 +1161,7 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD In PostgreSQL versions before 7.4, sorting and grouping operations would implicitly use operators named - =, <, and >. The new + =, <, and >. The new behavior of relying on default operator classes avoids having to make any assumption about the behavior of operators with particular names. @@ -1180,22 +1180,22 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD Some index access methods (currently, only GiST) support the concept of - ordering operators. What we have been discussing so far - are search operators. A search operator is one for which + ordering operators. What we have been discussing so far + are search operators. A search operator is one for which the index can be searched to find all rows satisfying - WHERE - indexed_column - operator - constant. + WHERE + indexed_column + operator + constant. Note that nothing is promised about the order in which the matching rows will be returned. In contrast, an ordering operator does not restrict the set of rows that can be returned, but instead determines their order. An ordering operator is one for which the index can be scanned to return rows in the order represented by - ORDER BY - indexed_column - operator - constant. + ORDER BY + indexed_column + operator + constant. The reason for defining ordering operators that way is that it supports nearest-neighbor searches, if the operator is one that measures distance. For example, a query like @@ -1205,7 +1205,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; finds the ten places closest to a given target point. 
A GiST index on the location column can do this efficiently because - <-> is an ordering operator. + <-> is an ordering operator. @@ -1217,17 +1217,17 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; a B-tree operator family that specifies the sort ordering of the result data type. As was stated in the previous section, B-tree operator families define PostgreSQL's notion of ordering, so - this is a natural representation. Since the point <-> - operator returns float8, it could be specified in an operator + this is a natural representation. Since the point <-> + operator returns float8, it could be specified in an operator class creation command like this: (point, point) FOR ORDER BY float_ops ]]> - where float_ops is the built-in operator family that includes - operations on float8. This declaration states that the index + where float_ops is the built-in operator family that includes + operations on float8. This declaration states that the index is able to return rows in order of increasing values of the - <-> operator. + <-> operator. @@ -1243,21 +1243,21 @@ OPERATOR 15 <-> (point, point) FOR ORDER BY float_ops Normally, declaring an operator as a member of an operator class (or family) means that the index method can retrieve exactly the set of rows - that satisfy a WHERE condition using the operator. For example: + that satisfy a WHERE condition using the operator. For example: SELECT * FROM table WHERE integer_column < 4; can be satisfied exactly by a B-tree index on the integer column. But there are cases where an index is useful as an inexact guide to the matching rows. For example, if a GiST index stores only bounding boxes - for geometric objects, then it cannot exactly satisfy a WHERE + for geometric objects, then it cannot exactly satisfy a WHERE condition that tests overlap between nonrectangular objects such as polygons. Yet we could use the index to find objects whose bounding box overlaps the bounding box of the target object, and then do the exact overlap test only on the objects found by the index. If this - scenario applies, the index is said to be lossy for the + scenario applies, the index is said to be lossy for the operator. Lossy index searches are implemented by having the index - method return a recheck flag when a row might or might + method return a recheck flag when a row might or might not really satisfy the query condition. The core system will then test the original query condition on the retrieved row to see whether it should be returned as a valid match. This approach works if @@ -1274,8 +1274,8 @@ SELECT * FROM table WHERE integer_column < 4; the bounding box of a complex object such as a polygon. In this case there's not much value in storing the whole polygon in the index entry — we might as well store just a simpler object of type - box. This situation is expressed by the STORAGE - option in CREATE OPERATOR CLASS: we'd write something like: + box. This situation is expressed by the STORAGE + option in CREATE OPERATOR CLASS: we'd write something like: CREATE OPERATOR CLASS polygon_ops @@ -1285,16 +1285,16 @@ CREATE OPERATOR CLASS polygon_ops At present, only the GiST, GIN and BRIN index methods support a - STORAGE type that's different from the column data type. - The GiST compress and decompress support - routines must deal with data-type conversion when STORAGE - is used. 
In GIN, the STORAGE type identifies the type of - the key values, which normally is different from the type + STORAGE type that's different from the column data type. + The GiST compress and decompress support + routines must deal with data-type conversion when STORAGE + is used. In GIN, the STORAGE type identifies the type of + the key values, which normally is different from the type of the indexed column — for example, an operator class for integer-array columns might have keys that are just integers. The - GIN extractValue and extractQuery support + GIN extractValue and extractQuery support routines are responsible for extracting keys from indexed values. - BRIN is similar to GIN: the STORAGE type identifies the + BRIN is similar to GIN: the STORAGE type identifies the type of the stored summary values, and operator classes' support procedures are responsible for interpreting the summary values correctly. diff --git a/doc/src/sgml/xml2.sgml b/doc/src/sgml/xml2.sgml index 9bbc9e75d7..35e1ccb7a1 100644 --- a/doc/src/sgml/xml2.sgml +++ b/doc/src/sgml/xml2.sgml @@ -8,7 +8,7 @@ - The xml2 module provides XPath querying and + The xml2 module provides XPath querying and XSLT functionality. @@ -16,7 +16,7 @@ Deprecation Notice - From PostgreSQL 8.3 on, there is XML-related + From PostgreSQL 8.3 on, there is XML-related functionality based on the SQL/XML standard in the core server. That functionality covers XML syntax checking and XPath queries, which is what this module does, and more, but the API is @@ -36,7 +36,7 @@ shows the functions provided by this module. These functions provide straightforward XML parsing and XPath queries. - All arguments are of type text, so for brevity that is not shown. + All arguments are of type text, so for brevity that is not shown.
@@ -63,8 +63,8 @@ This parses the document text in its parameter and returns true if the document is well-formed XML. (Note: this is an alias for the standard - PostgreSQL function xml_is_well_formed(). The - name xml_valid() is technically incorrect since validity + PostgreSQL function xml_is_well_formed(). The + name xml_valid() is technically incorrect since validity and well-formedness have different meanings in XML.) @@ -124,7 +124,7 @@ <itemtag>Value 2....</itemtag> </toptag> - If either toptag or itemtag is an empty string, the relevant tag is omitted. + If either toptag or itemtag is an empty string, the relevant tag is omitted. @@ -139,7 +139,7 @@ - Like xpath_nodeset(document, query, toptag, itemtag) but result omits both tags. + Like xpath_nodeset(document, query, toptag, itemtag) but result omits both tags. @@ -154,7 +154,7 @@ - Like xpath_nodeset(document, query, toptag, itemtag) but result omits toptag. + Like xpath_nodeset(document, query, toptag, itemtag) but result omits toptag. @@ -170,8 +170,8 @@ This function returns multiple values separated by the specified - separator, for example Value 1,Value 2,Value 3 if - separator is ,. + separator, for example Value 1,Value 2,Value 3 if + separator is ,. @@ -185,7 +185,7 @@ text - This is a wrapper for the above function that uses , + This is a wrapper for the above function that uses , as the separator. @@ -206,7 +206,7 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) - xpath_table is a table function that evaluates a set of XPath + xpath_table is a table function that evaluates a set of XPath queries on each of a set of documents and returns the results as a table. The primary key field from the original document table is returned as the first column of the result so that the result set @@ -228,7 +228,7 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) key - the name of the key field — this is just a field to be used as + the name of the key field — this is just a field to be used as the first column of the output table, i.e., it identifies the record from which each output row came (see note below about multiple values) @@ -285,7 +285,7 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) - so those parameters can be anything valid in those particular + so those parameters can be anything valid in those particular locations. The result from this SELECT needs to return exactly two columns (which it will unless you try to list multiple fields for key or document). Beware that this simplistic approach requires that you @@ -293,8 +293,8 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) - The function has to be used in a FROM expression, with an - AS clause to specify the output columns; for example + The function has to be used in a FROM expression, with an + AS clause to specify the output columns; for example SELECT * FROM xpath_table('article_id', @@ -304,8 +304,8 @@ xpath_table('article_id', 'date_entered > ''2003-01-01'' ') AS t(article_id integer, author text, page_count integer, title text); - The AS clause defines the names and types of the columns in the - output table. The first is the key field and the rest correspond + The AS clause defines the names and types of the columns in the + output table. The first is the key field and the rest correspond to the XPath queries. If there are more XPath queries than result columns, the extra queries will be ignored. 
If there are more result columns @@ -313,19 +313,19 @@ AS t(article_id integer, author text, page_count integer, title text); - Notice that this example defines the page_count result + Notice that this example defines the page_count result column as an integer. The function deals internally with string representations, so when you say you want an integer in the output, it will take the string representation of the XPath result and use PostgreSQL input - functions to transform it into an integer (or whatever type the AS + functions to transform it into an integer (or whatever type the AS clause requests). An error will result if it can't do this — for example if the result is empty — so you may wish to just stick to - text as the column type if you think your data has any problems. + text as the column type if you think your data has any problems. - The calling SELECT statement doesn't necessarily have to be - just SELECT * — it can reference the output + The calling SELECT statement doesn't necessarily have to be + just SELECT * — it can reference the output columns by name or join them to other tables. The function produces a virtual table with which you can perform any operation you wish (e.g. aggregation, joining, sorting etc). So we could also have: @@ -346,7 +346,7 @@ WHERE t.author_id = p.person_id; Multivalued Results - The xpath_table function assumes that the results of each XPath query + The xpath_table function assumes that the results of each XPath query might be multivalued, so the number of rows returned by the function may not be the same as the number of input documents. The first row returned contains the first result from each query, the second row the @@ -393,8 +393,8 @@ WHERE id = 1 ORDER BY doc_num, line_num - To get doc_num on every line, the solution is to use two invocations - of xpath_table and join the results: + To get doc_num on every line, the solution is to use two invocations + of xpath_table and join the results: SELECT t.*,i.doc_num FROM @@ -437,15 +437,15 @@ xslt_process(text document, text stylesheet, text paramlist) returns text This function applies the XSL stylesheet to the document and returns - the transformed result. The paramlist is a list of parameter + the transformed result. The paramlist is a list of parameter assignments to be used in the transformation, specified in the form - a=1,b=2. Note that the + a=1,b=2. Note that the parameter parsing is very simple-minded: parameter values cannot contain commas! - There is also a two-parameter version of xslt_process which + There is also a two-parameter version of xslt_process which does not pass any parameters to the transformation. diff --git a/doc/src/sgml/xoper.sgml b/doc/src/sgml/xoper.sgml index d484d80105..4b0716951a 100644 --- a/doc/src/sgml/xoper.sgml +++ b/doc/src/sgml/xoper.sgml @@ -65,12 +65,12 @@ SELECT (a + b) AS c FROM test_complex; We've shown how to create a binary operator here. To create unary - operators, just omit one of leftarg (for left unary) or - rightarg (for right unary). The procedure + operators, just omit one of leftarg (for left unary) or + rightarg (for right unary). The procedure clause and the argument clauses are the only required items in - CREATE OPERATOR. The commutator + CREATE OPERATOR. The commutator clause shown in the example is an optional hint to the query - optimizer. Further details about commutator and other + optimizer. Further details about commutator and other optimizer hints appear in the next section. 
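 The procedure named in such a CREATE OPERATOR definition is an ordinary function; for a C-language implementation, a sketch along the lines of the tutorial's complex_add (assuming the Complex structure used throughout the tutorial) looks like this:

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

typedef struct Complex
{
    double      x;
    double      y;
} Complex;

PG_FUNCTION_INFO_V1(complex_add);

/* Underlying function for the + operator on type complex. */
Datum
complex_add(PG_FUNCTION_ARGS)
{
    Complex    *a = (Complex *) PG_GETARG_POINTER(0);
    Complex    *b = (Complex *) PG_GETARG_POINTER(1);
    Complex    *result = (Complex *) palloc(sizeof(Complex));

    result->x = a->x + b->x;
    result->y = a->y + b->y;
    PG_RETURN_POINTER(result);
}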
@@ -98,16 +98,16 @@ SELECT (a + b) AS c FROM test_complex; - <literal>COMMUTATOR</> + <literal>COMMUTATOR</literal> - The COMMUTATOR clause, if provided, names an operator that is the + The COMMUTATOR clause, if provided, names an operator that is the commutator of the operator being defined. We say that operator A is the commutator of operator B if (x A y) equals (y B x) for all possible input values x, y. Notice that B is also the commutator of A. For example, - operators < and > for a particular data type are usually each others' - commutators, and operator + is usually commutative with itself. - But operator - is usually not commutative with anything. + operators < and > for a particular data type are usually each others' + commutators, and operator + is usually commutative with itself. + But operator - is usually not commutative with anything. @@ -115,23 +115,23 @@ SELECT (a + b) AS c FROM test_complex; right operand type of its commutator, and vice versa. So the name of the commutator operator is all that PostgreSQL needs to be given to look up the commutator, and that's all that needs to - be provided in the COMMUTATOR clause. + be provided in the COMMUTATOR clause. It's critical to provide commutator information for operators that will be used in indexes and join clauses, because this allows the - query optimizer to flip around such a clause to the forms + query optimizer to flip around such a clause to the forms needed for different plan types. For example, consider a query with - a WHERE clause like tab1.x = tab2.y, where tab1.x - and tab2.y are of a user-defined type, and suppose that - tab2.y is indexed. The optimizer cannot generate an + a WHERE clause like tab1.x = tab2.y, where tab1.x + and tab2.y are of a user-defined type, and suppose that + tab2.y is indexed. The optimizer cannot generate an index scan unless it can determine how to flip the clause around to - tab2.y = tab1.x, because the index-scan machinery expects + tab2.y = tab1.x, because the index-scan machinery expects to see the indexed column on the left of the operator it is given. - PostgreSQL will not simply + PostgreSQL will not simply assume that this is a valid transformation — the creator of the - = operator must specify that it is valid, by marking the + = operator must specify that it is valid, by marking the operator with commutator information. @@ -145,20 +145,20 @@ SELECT (a + b) AS c FROM test_complex; - One way is to omit the COMMUTATOR clause in the first operator that + One way is to omit the COMMUTATOR clause in the first operator that you define, and then provide one in the second operator's definition. Since PostgreSQL knows that commutative operators come in pairs, when it sees the second definition it will - automatically go back and fill in the missing COMMUTATOR clause in + automatically go back and fill in the missing COMMUTATOR clause in the first definition. - The other, more straightforward way is just to include COMMUTATOR clauses + The other, more straightforward way is just to include COMMUTATOR clauses in both definitions. When PostgreSQL processes - the first definition and realizes that COMMUTATOR refers to a nonexistent + the first definition and realizes that COMMUTATOR refers to a nonexistent operator, the system will make a dummy entry for that operator in the system catalog. 
This dummy entry will have valid data only for the operator name, left and right operand types, and result type, @@ -175,15 +175,15 @@ SELECT (a + b) AS c FROM test_complex; - <literal>NEGATOR</> + <literal>NEGATOR</literal> - The NEGATOR clause, if provided, names an operator that is the + The NEGATOR clause, if provided, names an operator that is the negator of the operator being defined. We say that operator A is the negator of operator B if both return Boolean results and (x A y) equals NOT (x B y) for all possible inputs x, y. Notice that B is also the negator of A. - For example, < and >= are a negator pair for most data types. + For example, < and >= are a negator pair for most data types. An operator can never validly be its own negator. @@ -195,15 +195,15 @@ SELECT (a + b) AS c FROM test_complex; An operator's negator must have the same left and/or right operand types - as the operator to be defined, so just as with COMMUTATOR, only the operator - name need be given in the NEGATOR clause. + as the operator to be defined, so just as with COMMUTATOR, only the operator + name need be given in the NEGATOR clause. Providing a negator is very helpful to the query optimizer since - it allows expressions like NOT (x = y) to be simplified into - x <> y. This comes up more often than you might think, because - NOT operations can be inserted as a consequence of other rearrangements. + it allows expressions like NOT (x = y) to be simplified into + x <> y. This comes up more often than you might think, because + NOT operations can be inserted as a consequence of other rearrangements. @@ -214,13 +214,13 @@ SELECT (a + b) AS c FROM test_complex; - <literal>RESTRICT</> + <literal>RESTRICT</literal> - The RESTRICT clause, if provided, names a restriction selectivity + The RESTRICT clause, if provided, names a restriction selectivity estimation function for the operator. (Note that this is a function - name, not an operator name.) RESTRICT clauses only make sense for - binary operators that return boolean. The idea behind a restriction + name, not an operator name.) RESTRICT clauses only make sense for + binary operators that return boolean. The idea behind a restriction selectivity estimator is to guess what fraction of the rows in a table will satisfy a WHERE-clause condition of the form: @@ -228,10 +228,10 @@ column OP constant for the current operator and a particular constant value. This assists the optimizer by - giving it some idea of how many rows will be eliminated by WHERE + giving it some idea of how many rows will be eliminated by WHERE clauses that have this form. (What happens if the constant is on the left, you might be wondering? Well, that's one of the things that - COMMUTATOR is for...) + COMMUTATOR is for...) @@ -240,12 +240,12 @@ column OP constant one of the system's standard estimators for many of your own operators. These are the standard restriction estimators: - eqsel for = - neqsel for <> - scalarltsel for < - scalarlesel for <= - scalargtsel for > - scalargesel for >= + eqsel for = + neqsel for <> + scalarltsel for < + scalarlesel for <= + scalargtsel for > + scalargesel for >= @@ -258,14 +258,14 @@ column OP constant - You can use scalarltsel, scalarlesel, - scalargtsel and scalargesel for comparisons on + You can use scalarltsel, scalarlesel, + scalargtsel and scalargesel for comparisons on data types that have some sensible means of being converted into numeric scalars for range comparisons. 
If possible, add the data type to those understood by the function convert_to_scalar() in src/backend/utils/adt/selfuncs.c. (Eventually, this function should be replaced by per-data-type functions - identified through a column of the pg_type system catalog; but that hasn't happened + identified through a column of the pg_type system catalog; but that hasn't happened yet.) If you do not do this, things will still work, but the optimizer's estimates won't be as good as they could be. @@ -279,15 +279,15 @@ column OP constant - <literal>JOIN</> + <literal>JOIN</literal> - The JOIN clause, if provided, names a join selectivity + The JOIN clause, if provided, names a join selectivity estimation function for the operator. (Note that this is a function - name, not an operator name.) JOIN clauses only make sense for + name, not an operator name.) JOIN clauses only make sense for binary operators that return boolean. The idea behind a join selectivity estimator is to guess what fraction of the rows in a - pair of tables will satisfy a WHERE-clause condition of the form: + pair of tables will satisfy a WHERE-clause condition of the form: table1.column1 OP table2.column2 @@ -301,27 +301,27 @@ table1.column1 OP table2.column2 a join selectivity estimator function, but will just suggest that you use one of the standard estimators if one is applicable: - eqjoinsel for = - neqjoinsel for <> - scalarltjoinsel for < - scalarlejoinsel for <= - scalargtjoinsel for > - scalargejoinsel for >= - areajoinsel for 2D area-based comparisons - positionjoinsel for 2D position-based comparisons - contjoinsel for 2D containment-based comparisons + eqjoinsel for = + neqjoinsel for <> + scalarltjoinsel for < + scalarlejoinsel for <= + scalargtjoinsel for > + scalargejoinsel for >= + areajoinsel for 2D area-based comparisons + positionjoinsel for 2D position-based comparisons + contjoinsel for 2D containment-based comparisons - <literal>HASHES</> + <literal>HASHES</literal> The HASHES clause, if present, tells the system that it is permissible to use the hash join method for a join based on this - operator. HASHES only makes sense for a binary operator that - returns boolean, and in practice the operator must represent + operator. HASHES only makes sense for a binary operator that + returns boolean, and in practice the operator must represent equality for some data type or pair of data types. @@ -336,7 +336,7 @@ table1.column1 OP table2.column2 hashing for operators that take the same data type on both sides. However, sometimes it is possible to design compatible hash functions for two or more data types; that is, functions that will generate the - same hash codes for equal values, even though the values + same hash codes for equal values, even though the values have different representations. For example, it's fairly simple to arrange this property when hashing integers of different widths. @@ -357,10 +357,10 @@ table1.column1 OP table2.column2 are machine-dependent ways in which it might fail to do the right thing. For example, if your data type is a structure in which there might be uninteresting pad bits, you cannot simply pass the whole structure to - hash_any. (Unless you write your other operators and + hash_any. (Unless you write your other operators and functions to ensure that the unused bits are always zero, which is the recommended strategy.) 
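 As a hedged illustration of the safe case, assume a hypothetical fixed-size type made of two int32 fields, so the struct has no pad bits and none of the floating-point caveats discussed next:

#include "postgres.h"
#include "fmgr.h"
#include "access/hash.h"

PG_MODULE_MAGIC;

typedef struct IntPair          /* hypothetical type: two int32s, no padding */
{
    int32       a;
    int32       b;
} IntPair;

PG_FUNCTION_INFO_V1(intpair_hash);

Datum
intpair_hash(PG_FUNCTION_ARGS)
{
    IntPair    *v = (IntPair *) PG_GETARG_POINTER(0);

    /* Hashing the raw bytes is safe only because this struct has no pad bits. */
    return hash_any((unsigned char *) v, sizeof(IntPair));
}

 Such a function would be registered as support function 1 of a hash operator class for the type.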
- Another example is that on machines that meet the IEEE + Another example is that on machines that meet the IEEE floating-point standard, negative zero and positive zero are different values (different bit patterns) but they are defined to compare equal. If a float value might contain negative zero then extra steps are needed @@ -392,8 +392,8 @@ table1.column1 OP table2.column2 strict, the function must also be complete: that is, it should return true or false, never null, for any two nonnull inputs. If this rule is - not followed, hash-optimization of IN operations might - generate wrong results. (Specifically, IN might return + not followed, hash-optimization of IN operations might + generate wrong results. (Specifically, IN might return false where the correct answer according to the standard would be null; or it might yield an error complaining that it wasn't prepared for a null result.) @@ -403,13 +403,13 @@ table1.column1 OP table2.column2 - <literal>MERGES</> + <literal>MERGES</literal> The MERGES clause, if present, tells the system that it is permissible to use the merge-join method for a join based on this - operator. MERGES only makes sense for a binary operator that - returns boolean, and in practice the operator must represent + operator. MERGES only makes sense for a binary operator that + returns boolean, and in practice the operator must represent equality for some data type or pair of data types. @@ -418,7 +418,7 @@ table1.column1 OP table2.column2 into order and then scanning them in parallel. So, both data types must be capable of being fully ordered, and the join operator must be one that can only succeed for pairs of values that fall at the - same place + same place in the sort order. In practice this means that the join operator must behave like equality. But it is possible to merge-join two distinct data types so long as they are logically compatible. For @@ -430,7 +430,7 @@ table1.column1 OP table2.column2 To be marked MERGES, the join operator must appear - as an equality member of a btree index operator family. + as an equality member of a btree index operator family. This is not enforced when you create the operator, since of course the referencing operator family couldn't exist yet. But the operator will not actually be used for merge joins @@ -445,7 +445,7 @@ table1.column1 OP table2.column2 if they are different) that appears in the same operator family. If this is not the case, planner errors might occur when the operator is used. Also, it is a good idea (but not strictly required) for - a btree operator family that supports multiple data types to provide + a btree operator family that supports multiple data types to provide equality operators for every combination of the data types; this allows better optimization. diff --git a/doc/src/sgml/xplang.sgml b/doc/src/sgml/xplang.sgml index 4460c8f361..60d0cc6190 100644 --- a/doc/src/sgml/xplang.sgml +++ b/doc/src/sgml/xplang.sgml @@ -11,7 +11,7 @@ PostgreSQL allows user-defined functions to be written in other languages besides SQL and C. These other languages are generically called procedural - languages (PLs). For a function + languages (PLs). For a function written in a procedural language, the database server has no built-in knowledge about how to interpret the function's source text. Instead, the task is passed to a special handler that knows @@ -44,9 +44,9 @@ A procedural language must be installed into each database where it is to be used. 
But procedural languages installed in - the database template1 are automatically available in all + the database template1 are automatically available in all subsequently created databases, since their entries in - template1 will be copied by CREATE DATABASE. + template1 will be copied by CREATE DATABASE. So the database administrator can decide which languages are available in which databases and can make some languages available by default if desired. @@ -54,8 +54,8 @@ For the languages supplied with the standard distribution, it is - only necessary to execute CREATE EXTENSION - language_name to install the language into the + only necessary to execute CREATE EXTENSION + language_name to install the language into the current database. The manual procedure described below is only recommended for installing languages that have not been packaged as extensions. @@ -70,7 +70,7 @@ A procedural language is installed in a database in five steps, which must be carried out by a database superuser. In most cases the required SQL commands should be packaged as the installation script - of an extension, so that CREATE EXTENSION can be + of an extension, so that CREATE EXTENSION can be used to execute them. @@ -103,7 +103,7 @@ CREATE FUNCTION handler_function_name() - Optionally, the language handler can provide an inline + Optionally, the language handler can provide an inline handler function that executes anonymous code blocks ( commands) written in this language. If an inline handler function @@ -119,10 +119,10 @@ CREATE FUNCTION inline_function_name(internal) - Optionally, the language handler can provide a validator + Optionally, the language handler can provide a validator function that checks a function definition for correctness without actually executing it. The validator function is called by - CREATE FUNCTION if it exists. If a validator function + CREATE FUNCTION if it exists. If a validator function is provided by the language, declare it with a command like CREATE FUNCTION validator_function_name(oid) @@ -217,13 +217,13 @@ CREATE TRUSTED PROCEDURAL LANGUAGE plperl is built and installed into the library directory; furthermore, the PL/pgSQL language itself is installed in all databases. - If Tcl support is configured in, the handlers for - PL/Tcl and PL/TclU are built and installed + If Tcl support is configured in, the handlers for + PL/Tcl and PL/TclU are built and installed in the library directory, but the language itself is not installed in any database by default. - Likewise, the PL/Perl and PL/PerlU + Likewise, the PL/Perl and PL/PerlU handlers are built and installed if Perl support is configured, and the - PL/PythonU handler is installed if Python support is + PL/PythonU handler is installed if Python support is configured, but these languages are not installed by default. diff --git a/doc/src/sgml/xtypes.sgml b/doc/src/sgml/xtypes.sgml index ac0b8a2943..2f90c1d42c 100644 --- a/doc/src/sgml/xtypes.sgml +++ b/doc/src/sgml/xtypes.sgml @@ -12,7 +12,7 @@ As described in , PostgreSQL can be extended to support new data types. This section describes how to define new base types, - which are data types defined below the level of the SQL + which are data types defined below the level of the SQL language. Creating a new base type requires implementing functions to operate on the type in a low-level language, usually C. @@ -20,8 +20,8 @@ The examples in this section can be found in complex.sql and complex.c - in the src/tutorial directory of the source distribution. 
diff --git a/doc/src/sgml/xtypes.sgml b/doc/src/sgml/xtypes.sgml
index ac0b8a2943..2f90c1d42c 100644
--- a/doc/src/sgml/xtypes.sgml
+++ b/doc/src/sgml/xtypes.sgml
@@ -12,7 +12,7 @@
   As described in , PostgreSQL can be
   extended to support new data types.  This section describes how to
   define new base types,
-  which are data types defined below the level of the SQL
+  which are data types defined below the level of the SQL
   language.  Creating a new base type requires implementing functions
   to operate on the type in a low-level language, usually C.
@@ -20,8 +20,8 @@
   The examples in this section can be found in
   complex.sql and complex.c
-  in the src/tutorial directory of the source distribution.
-  See the README file in that directory for instructions
+  in the src/tutorial directory of the source distribution.
+  See the README file in that directory for instructions
   about running the examples.
@@ -45,7 +45,7 @@
-  Suppose we want to define a type complex that represents
+  Suppose we want to define a type complex that represents
   complex numbers.  A natural way to represent a complex number in
   memory would be the following C structure:
@@ -57,7 +57,7 @@ typedef struct Complex {
   We will need to make this a pass-by-reference type, since it's too
-  large to fit into a single Datum value.
+  large to fit into a single Datum value.
@@ -130,7 +130,7 @@ complex_out(PG_FUNCTION_ARGS)
   external binary representation is.  Most of the built-in data types
   try to provide a machine-independent binary representation.  For
   complex, we will piggy-back on the binary I/O converters
-  for type float8:
+  for type float8:
   PostgreSQL automatically provides support for arrays of that type.
   The array type typically has the same name as the base type with the
   underscore character
-  (_) prepended.
+  (_) prepended.
@@ -237,7 +237,7 @@ CREATE TYPE complex (
   If the internal representation of the data type is variable-length, the
   internal representation must follow the standard layout for variable-length
   data: the first four bytes must be a char[4] field which is
-  never accessed directly (customarily named vl_len_).  You
+  never accessed directly (customarily named vl_len_).  You
   must use the SET_VARSIZE() macro to store the total size of the datum
   (including the length field itself) in this field and
   VARSIZE() to retrieve it.  (These macros exist
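To make the varlena rules in the hunk above concrete, here is a sketch of a
hypothetical variable-length type (none of these names appear in the patch) with the
char[4] vl_len_ header, a constructor that records the total size with SET_VARSIZE(),
and the customary type-specific GETARG macro that hides PG_DETOAST_DATUM(), which the
following hunk discusses.

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/* Hypothetical variable-length type: a simple list of double values. */
typedef struct MyPoints
{
    char        vl_len_[4];     /* varlena header; never access directly */
    int32       npoints;
    double      points[FLEXIBLE_ARRAY_MEMBER];
} MyPoints;

/* Customary type-specific detoast macros, as the documentation suggests. */
#define DatumGetMyPointsP(X)    ((MyPoints *) PG_DETOAST_DATUM(X))
#define PG_GETARG_MYPOINTS_P(n) DatumGetMyPointsP(PG_GETARG_DATUM(n))

/* Constructor: allocate, then stamp the total size into the header. */
static MyPoints *
mypoints_alloc(int32 npoints)
{
    Size        size = offsetof(MyPoints, points) + npoints * sizeof(double);
    MyPoints   *result = (MyPoints *) palloc0(size);

    SET_VARSIZE(result, size);
    result->npoints = npoints;
    return result;
}

PG_FUNCTION_INFO_V1(mypoints_empty);

Datum
mypoints_empty(PG_FUNCTION_ARGS)
{
    PG_RETURN_POINTER(mypoints_alloc(0));
}

PG_FUNCTION_INFO_V1(mypoints_count);

Datum
mypoints_count(PG_FUNCTION_ARGS)
{
    /* PG_GETARG_MYPOINTS_P unpacks any TOASTed input before we touch it. */
    MyPoints   *mp = PG_GETARG_MYPOINTS_P(0);

    PG_RETURN_INT32(mp->npoints);
}

At CREATE TYPE time such a type would be declared with a variable internal length and
a storage option other than plain, exactly as the next hunk explains.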
@@ -258,41 +258,41 @@ CREATE TYPE complex (
   If the values of your data type vary in size (in internal form), it's
-  usually desirable to make the data type TOAST-able (see
+  usually desirable to make the data type TOAST-able (see
   ).  You should do this even if the values are always
   too small to be compressed or stored externally, because
-  TOAST can save space on small data too, by reducing header
+  TOAST can save space on small data too, by reducing header
   overhead.
-  To support TOAST storage, the C functions operating on the data
+  To support TOAST storage, the C functions operating on the data
   type must always be careful to unpack any toasted values they are handed
-  by using PG_DETOAST_DATUM.  (This detail is customarily hidden
+  by using PG_DETOAST_DATUM.  (This detail is customarily hidden
   by defining type-specific GETARG_DATATYPE_P macros.)
   Then, when running the CREATE TYPE command, specify the
-  internal length as variable and select some appropriate storage
-  option other than plain.
+  internal length as variable and select some appropriate storage
+  option other than plain.
   If data alignment is unimportant (either just for a specific function or
   because the data type specifies byte alignment anyway) then it's possible
-  to avoid some of the overhead of PG_DETOAST_DATUM.  You can use
-  PG_DETOAST_DATUM_PACKED instead (customarily hidden by
-  defining a GETARG_DATATYPE_PP macro) and using the macros
-  VARSIZE_ANY_EXHDR and VARDATA_ANY to access
+  to avoid some of the overhead of PG_DETOAST_DATUM.  You can use
+  PG_DETOAST_DATUM_PACKED instead (customarily hidden by
+  defining a GETARG_DATATYPE_PP macro) and using the macros
+  VARSIZE_ANY_EXHDR and VARDATA_ANY to access
   a potentially-packed datum.
   Again, the data returned by these macros is not aligned even if the data
   type definition specifies an alignment.  If the alignment is important you
-  must go through the regular PG_DETOAST_DATUM interface.
+  must go through the regular PG_DETOAST_DATUM interface.
-  Older code frequently declares vl_len_ as an
-  int32 field instead of char[4].  This is OK as long as
-  the struct definition has other fields that have at least int32
+  Older code frequently declares vl_len_ as an
+  int32 field instead of char[4].  This is OK as long as
+  the struct definition has other fields that have at least int32
   alignment.  But it is dangerous to use such a struct definition when
   working with a potentially unaligned datum; the compiler may take it as
   license to assume the datum actually is aligned, leading to core dumps on
@@ -301,28 +301,28 @@ CREATE TYPE complex (
-  Another feature that's enabled by TOAST support is the
-  possibility of having an expanded in-memory data
+  Another feature that's enabled by TOAST support is the
+  possibility of having an expanded in-memory data
   representation that is more convenient to work with than the format that
-  is stored on disk.  The regular or flat varlena storage format
+  is stored on disk.  The regular or flat varlena storage format
   is ultimately just a blob of bytes; it cannot for example contain
   pointers, since it may get copied to other locations in memory.  For complex
   data types, the flat format may be quite expensive to work
-  with, so PostgreSQL provides a way to expand
+  with, so PostgreSQL provides a way to expand
   the flat format into a representation that is more suited to computation,
   and then pass that format in-memory between functions of the data type.
   To use expanded storage, a data type must define an expanded format that
-  follows the rules given in src/include/utils/expandeddatum.h,
-  and provide functions to expand a flat varlena value into
-  expanded format and flatten the expanded format back to the
+  follows the rules given in src/include/utils/expandeddatum.h,
+  and provide functions to expand a flat varlena value into
+  expanded format and flatten the expanded format back to the
   regular varlena representation.  Then ensure that all C functions for the
   data type can accept either representation, possibly by converting one
   into the other immediately upon receipt.  This does not require fixing all
   existing functions for the data type at once, because the standard
-  PG_DETOAST_DATUM macro is defined to convert expanded inputs
+  PG_DETOAST_DATUM macro is defined to convert expanded inputs
   into regular flat format.  Therefore, existing functions that work with the
   flat varlena format will continue to work, though slightly inefficiently,
   with expanded inputs; they need not be converted until and
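The expanded-format material above, and the union approach in the hunk that follows,
can be hard to picture from prose alone. The fragment below is a heavily simplified
sketch built around the same hypothetical MyPoints type as earlier; a real
implementation must also supply the ExpandedObjectMethods callbacks (get_flat_size,
flatten_into) per src/include/utils/expandeddatum.h, and the caller is assumed to
have obtained the pointer from a detoast helper that leaves expanded inputs alone.

#include "postgres.h"
#include "utils/expandeddatum.h"

/* Flat (on-disk) varlena form of the hypothetical type. */
typedef struct MyPointsFlat
{
    char        vl_len_[4];     /* varlena header */
    int32       npoints;
    /* npoints double values follow, omitted here for brevity */
} MyPointsFlat;

/* Expanded in-memory form: must begin with an ExpandedObjectHeader. */
typedef struct MyPointsExpanded
{
    ExpandedObjectHeader hdr;
    int32       npoints;
    double     *points;         /* ordinary pointer, valid only in memory */
} MyPointsExpanded;

/* Union view of "either format", as described in the following hunk. */
typedef union AnyMyPoints
{
    MyPointsFlat     flt;
    MyPointsExpanded xpn;
} AnyMyPoints;

/* Caller-side dispatch on which representation was actually received. */
static int32
anymypoints_count(AnyMyPoints *value)
{
    if (VARATT_IS_EXPANDED_HEADER(value))
        return value->xpn.npoints;      /* expanded representation */
    else
        return value->flt.npoints;      /* regular flat varlena */
}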
@@ -344,14 +344,14 @@ CREATE TYPE complex (
   will detoast external, short-header, and compressed varlena inputs, but
   not expanded inputs.  Such a function can be defined as returning a
   pointer to a union of the flat varlena format and the expanded format.
-  Callers can use the VARATT_IS_EXPANDED_HEADER() macro to
+  Callers can use the VARATT_IS_EXPANDED_HEADER() macro to
   determine which format they received.
-  The TOAST infrastructure not only allows regular varlena
+  The TOAST infrastructure not only allows regular varlena
   values to be distinguished from expanded values, but also
-  distinguishes read-write and read-only pointers to
+  distinguishes read-write and read-only pointers to
   expanded values.  C functions that only need to examine an expanded
   value, or will only change it in safe and non-semantically-visible ways,
   need not care which type of pointer they receive.  C functions that
@@ -368,7 +368,7 @@ CREATE TYPE complex (
   For examples of working with expanded values, see the standard array
   infrastructure, particularly
-  src/backend/utils/adt/array_expanded.c.
+  src/backend/utils/adt/array_expanded.c.
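As a final illustration of the read-write versus read-only distinction in the
@@ -344 hunk: an expanded object set up through expandeddatum.h carries both flavors
of pointer, and the producer chooses which one to hand out. This is only a sketch and
assumes the EOHPGetRWDatum and EOHPGetRODatum macros from
src/include/utils/expandeddatum.h; consult array_expanded.c, as the documentation
says, for a complete implementation.

#include "postgres.h"
#include "fmgr.h"
#include "utils/expandeddatum.h"

/*
 * Hypothetical helpers: given an expanded object whose header has already been
 * initialized with EOH_init_header, hand out either a read-write or a
 * read-only datum.  Code that passes the value on to functions it does not
 * control should prefer the read-only form.
 */
static Datum
mypoints_get_rw_datum(ExpandedObjectHeader *hdr)
{
    return EOHPGetRWDatum(hdr);     /* recipient may modify the value in place */
}

static Datum
mypoints_get_ro_datum(ExpandedObjectHeader *hdr)
{
    return EOHPGetRODatum(hdr);     /* recipient must treat the value as immutable */
}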