Commit Graph

1064 Commits

Author SHA1 Message Date
Teodor Sigaev 1c1791e000 Add json(b)_to_tsvector function
Jsonb has a complex nature, so there isn't a single best-for-everything way to
convert it to tsvector for full text search. The existing to_tsvector(json(b))
converts only string values, but it can be useful to index keys, numeric values,
and even booleans. To address that, json(b)_to_tsvector takes a second, required
argument containing a list of the desired types of json fields. For now the
second argument is a jsonb scalar or array, with the possibility of adding new
options in the future.
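
For illustration, a usage sketch (the document and filter values are invented):
    SELECT jsonb_to_tsvector('english',
                             '{"a": "fat cats", "b": 123}'::jsonb,
                             '["string", "numeric"]');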

Bump catalog version

Author: Dmitry Dolgov with some editorization by me
Reviewed by: Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CA+q6zcXJQbS1b4kJ_HeAOoOc=unfnOrUEL=KGgE32QKDww7d8g@mail.gmail.com
2018-04-07 20:58:03 +03:00
Stephen Frost 0fdc8495bf Add default roles for file/program access
This patch adds new default roles named 'pg_read_server_files',
'pg_write_server_files', 'pg_execute_server_program' which
allow an administrator to GRANT to a non-superuser role the ability to
access server-side files or run programs through PostgreSQL (as the user
the database is running as).  Having one of these roles allows a
non-superuser to use server-side COPY to read from or write to files, or
to run a program, and to use file_fdw (if installed by a superuser and
GRANT'd USAGE on it) to read from files or run a program.
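
A sketch of how these roles are granted (the role name is invented):
    GRANT pg_read_server_files TO backup_reader;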

The existing misc file functions are also changed to allow a user with
the 'pg_read_server_files' default role to read any files on the
filesystem, matching the privileges given to that role through COPY and
file_fdw from above.

Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/20171231191939.GR2416%40tamriel.snowman.net
2018-04-06 14:47:10 -04:00
Magnus Hagander 1fde38beaa Allow on-line enabling and disabling of data checksums
This makes it possible to turn checksums on in a live cluster, without
the previous need for a dump/reload or logical replication (and to turn
them off again).

Enabling checksums starts a background process in the form of a
launcher/worker combination that goes through the entire database and
recalculates checksums on each and every page. Only when all pages have
been checksummed are checksums fully enabled in the cluster. Any failure of
the process will revert to checksums off, and the process has to be
started over.

This adds a new WAL record that indicates the state of checksums, so
the process works across replicated clusters.

Authors: Magnus Hagander and Daniel Gustafsson
Review: Tomas Vondra, Michael Banck, Heikki Linnakangas, Andrey Borodin
2018-04-05 22:04:48 +02:00
Teodor Sigaev 1664ae1978 Add websearch_to_tsquery
An error-tolerant conversion function with a web-search-like query syntax;
it makes it easier to build search queries through an interface that is
familiar to users.
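
A usage sketch (the query text is invented):
    SELECT websearch_to_tsquery('english', '"fat rats" or cats -dogs');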

Bump catalog version

Authors: Victor Drobny, Dmitry Ivanov with editorization by me
Reviewed by: Aleksander Alekseev, Tomas Vondra, Thomas Munro, Aleksandr Parfenov
Discussion: https://www.postgresql.org/message-id/flat/fe931111ff7e9ad79196486ada79e268@postgrespro.ru
2018-04-05 19:55:11 +03:00
Teodor Sigaev 710d90da1f Add prefix operator for TEXT type.
The prefix operator, along with SP-GiST indexes, can be used as an alternative
to LIKE 'word%' queries, and it doesn't have the string/prefix length
limitation that B-Tree has.
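
The operator added here is ^@; a usage sketch (table and column names invented):
    SELECT * FROM words WHERE word ^@ 'alph';   -- similar to word LIKE 'alph%'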

Bump catalog version

Author: Ildus Kurbangaliev with some editorization by me
Review by: Arthur Zakirov, Alexander Korotkov, and me
Discussion: https://www.postgresql.org/message-id/flat/20180202180327.222b04b3@wp.localdomain
2018-04-03 19:46:45 +03:00
Bruce Momjian 756dca8e7f doc: document "IS NOT DOCUMENT"
Reported-by: scott.ure@caseware.com

Discussion: https://postgr.es/m/152056505045.4963.16783351661813640274@wrigleys.postgresql.org

Author: Euler Taveira

Backpatch-through: 10
2018-03-30 10:39:48 -04:00
Andres Freund fb60478011 Improve JIT docs.
Author: John Naylor and Andres Freund
Discussion: https://postgr.es/m/CAJVSVGUs-VcwSY7-Kx-GQe__8hvWuA4Uhyf3gxoMXeiZqebE9g@mail.gmail.com
2018-03-29 16:13:40 -07:00
Andres Freund e6c039d13e Add documentation for the JIT feature.
As promised in earlier commits, this adds documentation about the new
build options, the new GUCs, about the planner logic when JIT is used,
and the benefits of JIT in general.

Also adds a more implementation oriented README.

I'm sure we're going to want to expand this further, but I think this
is a reasonable start.

Author: Andres Freund, with contributions by Thomas Munro
Reviewed-By: Thomas Munro
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-28 14:22:42 -07:00
Tom Lane ee4a2c4a03 Doc: remove extra comma in syntax summary for array_fill().
Noted by Scott Ure.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/152199346794.4544.1888397173908716912@wrigleys.postgresql.org
2018-03-25 12:38:21 -04:00
Peter Eisentraut 10cfce34c0 Add user-callable SHA-2 functions
Add the user-callable functions sha224, sha256, sha384, sha512.  We
already had these in the C code to support SCRAM, but there was no test
coverage outside of the SCRAM tests.  Adding these as user-callable
functions allows writing some tests.  Also, we have a user-callable md5
function but no more modern alternative, which led to wide use of md5 as
a general-purpose hash function, which leads to occasional complaints
about using md5.
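
A usage sketch:
    SELECT sha256('abc'::bytea);   -- returns the 32-byte digest as bytea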

Also mark the existing md5 functions as leak-proof.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-02-22 11:34:53 -05:00
Peter Eisentraut 2ac3e6acc2 doc: pg_function_is_visible also applies to aggregates and procedures 2018-02-13 15:13:44 -05:00
Peter Eisentraut 7a32ac8a66 Add procedure support to pg_get_functiondef
This also makes procedures work in psql's \ef and \sf commands.

Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
2018-02-13 15:13:44 -05:00
Tom Lane 0a459cec96 Support all SQL:2011 options for window frame clauses.
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
frame boundaries in window functions.  We'd punted on that back in the
original patch to add window functions, because it was not clear how to
do it in a reasonably data-type-extensible fashion.  That problem is
resolved here by adding the ability for btree operator classes to provide
an "in_range" support function that defines how to add or subtract the
RANGE offset value.  Factoring it this way also allows the operator class
to avoid overflow problems near the ends of the datatype's range, if it
wishes to expend effort on that.  (In the committed patch, the integer
opclasses handle that issue, but it did not seem worth the trouble to
avoid overflow failures for datetime types.)

The patch includes in_range support for the integer_ops opfamily
(int2/int4/int8) as well as the standard datetime types.  Support for
other numeric types has been requested, but that seems like suitable
material for a follow-on patch.

In addition, the patch adds GROUPS mode which counts the offset in
ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
options specified by SQL:2011.  As far as I can see, we are now fully
up to spec on window framing options.
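
A sketch of the new frame clauses (table and values invented):
    SELECT salary,
           avg(salary) OVER (ORDER BY salary
                             RANGE BETWEEN 100 PRECEDING AND 100 FOLLOWING),
           count(*)    OVER (ORDER BY salary
                             GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
    FROM empsalary;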

Existing behaviors remain unchanged, except that I changed the errcode
for a couple of existing error reports to meet the SQL spec's expectation
that negative "offset" values should be reported as SQLSTATE 22013.

Internally and in relevant parts of the documentation, we now consistently
use the terminology "offset PRECEDING/FOLLOWING" rather than "value
PRECEDING/FOLLOWING", since the term "value" is confusingly vague.

Oliver Ford, reviewed and whacked around some by me

Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com
2018-02-07 00:06:56 -05:00
Simon Riggs 9c7d06d606 Ability to advance replication slots
Ability to advance both physical and logical replication slots using a
new user function pg_replication_slot_advance().

For logical slots, advancing means records are consumed as fast as possible
and changes are not given to the output plugin for sending. This makes the
second phase (after we have reached SNAPBUILD_FULL_SNAPSHOT) of replication
slot creation faster, especially when there are big transactions, since the
reorder buffer does not have to deal with data changes and does not have to
spill to disk.
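
A usage sketch (the slot name is invented):
    SELECT pg_replication_slot_advance('my_slot', pg_current_wal_lsn());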

Author: Petr Jelinek
Reviewed-by: Simon Riggs
2018-01-17 11:38:34 +00:00
Andrew Dunstan 11b623dd0a Implement TZH and TZM timestamp format patterns
These are compatible with Oracle and required for the datetime template
language for jsonpath in an upcoming patch.
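
A usage sketch:
    SELECT to_char(now(), 'YYYY-MM-DD HH24:MI:SS TZH:TZM');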

Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.
2018-01-09 14:25:05 -05:00
Peter Eisentraut 3c49c6facb Convert documentation to DocBook XML
Since some preparation work had already been done, the only source
changes left were changing empty-element tags like <xref linkend="foo">
to <xref linkend="foo"/>, and changing the DOCTYPE.

The source files are still named *.sgml, but they are actually XML files
now.  Renaming could be considered later.

In the build system, the intermediate step to convert from SGML to XML
is removed.  Everything is built straight from the source files again.
The OpenSP (or the old SP) package is no longer needed.

The documentation toolchain instructions are updated and are much
simpler now.

Peter Eisentraut, Alexander Lakhin, Jürgen Purtz
2017-11-23 09:44:28 -05:00
Tom Lane 63ca86318d Fix quoted-substring handling in format parsing for to_char/to_number/etc.
This code evidently intended to treat backslash as an escape character
within double-quoted substrings, but it was sufficiently confused that
cases like ..."foo\\"... did not work right: the second backslash
managed to quote the double-quote after it, despite being quoted itself.
Rewrite to get that right, while preserving the existing behavior
outside double-quoted substrings, which is that backslash isn't special
except in the combination \".

Comparing to Oracle, it seems that their version of to_char() for
timestamps allows literal alphanumerics only within double quotes, while
non-alphanumerics are allowed outside quotes; backslashes aren't special
anywhere; there is no way at all to emit a literal double quote.
(Bizarrely, their to_char() for numbers is different; it doesn't allow
literal text at all AFAICT.)  The fact that they don't treat backslash
as special justifies our existing behavior for backslash outside double
quotes.  I considered making backslash inside double quotes act the same
way (ie, special only if before "), which in a green field would be a
more consistent behavior.  But that would likely break more existing SQL
code than what this patch does.

Add some test cases illustrating this behavior.  (Only the last new
case actually changes behavior in this commit.)

Little of this behavior was documented, either, so fix that.

Discussion: https://postgr.es/m/3626.1510949486@sss.pgh.pa.us
2017-11-18 12:16:37 -05:00
Tom Lane e87d4965bd Prevent to_number() from losing data when template doesn't match exactly.
Non-data template patterns would consume characters whether or not those
characters were what the pattern expected, for example
	SELECT TO_NUMBER('1234', '9,999');
produced 134 because the '2' got eaten by the comma pattern.  This seems
undesirable, not least because it doesn't happen in Oracle.  For the ','
and 'G' template patterns, we can fix this by consuming characters only
if they match what the pattern would output.  For non-data patterns such
as 'L' and 'TH', it seems impractical to tighten things up to the point of
consuming only exact matches to what the pattern would output; but we can
improve matters quite a lot by redefining the behavior as "consume only
characters that aren't digits, signs, decimal point, or comma".
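
Given the description above, a sketch of the corrected behavior:
    SELECT TO_NUMBER('1234', '9,999');   -- should now return 1234 rather than 134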

Also, fix it so that the behavior is to consume the number of *characters*
the pattern would output, not the number of *bytes*.  The old coding would
do surprising things with non-ASCII currency symbols, for example.  (It
would be good to apply that rule for literal text as well, but this commit
only fixes it for non-data patterns.)

Oliver Ford, reviewed by Thomas Munro and Nathan Wagner, and whacked around
a bit more by me

Discussion: https://postgr.es/m/CAGMVOdvpbMqPf9XWNzOwBpzJfErkydr_fEGhmuDGa015z97mwg@mail.gmail.com
2017-11-17 12:04:13 -05:00
Simon Riggs 4b0d28de06 Remove secondary checkpoint
Previously server reserved WAL for last two checkpoints,
which used too much disk space for small servers.

Bumps PG_CONTROL_VERSION

Author: Simon Riggs <simon@2ndQuadrant.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2017-11-07 12:56:30 -05:00
Peter Eisentraut 1ff01b3902 Convert SGML IDs to lower case
IDs in SGML are case insensitive, and we have accumulated a mix of upper
and lower case IDs, including different variants of the same ID.  In
XML, these will be case sensitive, so we need to fix up those
differences.  Going to all lower case seems most straightforward, and
the current build process already makes all anchors and lower case
anyway during the SGML->XML conversion, so this doesn't create any
difference in the output right now.  A future XML-only build process
would, however, maintain any mixed case ID spellings in the output, so
that is another reason to clean this up beforehand.

Author: Alexander Lakhin <exclusion@gmail.com>
2017-10-20 19:26:10 -04:00
Peter Eisentraut c29c578908 Don't use SGML empty tags
For DocBook XML compatibility, don't use SGML empty tags (</>) anymore,
replace by the full tag name.  Add a warning option to catch future
occurrences.

Alexander Lakhin, Jürgen Purtz
2017-10-17 15:10:33 -04:00
Peter Eisentraut 44b3230e82 Use lower-case SGML attribute values
for DocBook XML compatibility
2017-10-10 10:15:57 -04:00
Peter Eisentraut 22d9764646 Remove SGML marked sections
For XML compatibility, replace marked sections <![IGNORE[ ]]> with
comments <!-- -->.  In some cases it seemed better to remove the ignored
text altogether, and in one case the text should not have been ignored.
2017-09-28 16:17:28 -04:00
Peter Eisentraut 3012061b86 Apply pg_get_serial_sequence() to identity column sequences as well
Bug: #14813
2017-09-15 14:21:20 -04:00
Tom Lane ce5dcf54b9 Improve docs about numeric formatting patterns (to_char/to_number).
The explanation about "0" versus "9" format characters was confusing
and arguably wrong; the discussion of sign handling wasn't very good
either.  Notably, while it's accurate to say that "FM" strips leading
zeroes in date/time values, what it really does with numeric values
is to strip *trailing* zeroes, and then only if you wrote "9" rather
than "0".  Per gripes from Erwin Brandstetter.

Discussion: https://postgr.es/m/CAGHENJ7jgRbTn6nf48xNZ=FHgL2WQ4X8mYsUAU57f-vq8PubEw@mail.gmail.com
Discussion: https://postgr.es/m/CAGHENJ45ymd=GOCu1vwV9u7GmCR80_5tW0fP9C_gJKbruGMHvQ@mail.gmail.com
2017-08-29 09:34:21 -04:00
Robert Haas 449338cc64 Improve low-level backup documentation.
Our documentation hasn't really caught up with the fact that
non-exclusive backups can now be taken using pg_start_backup and
pg_stop_backup even on standbys.  Update, also correcting some
errors introduced by 52f8a59dd9.
Updates to the 9.6 documentation are needed as well, but that
will need a separate patch as some things are different on that
version.

David Steele, reviewed by Robert Haas and Michael Paquier

Discussion: http://postgr.es/m/d4d951b9-89c0-6bc1-b6ff-d0b2dd5a8966@pgmasters.net
2017-08-25 15:20:27 -04:00
Robert Haas 52f8a59dd9 Make pg_stop_backup's wait_for_archive flag work on standbys.
Previously, it had no effect.  Now, if archive_mode=always, it will
work, and if not, you'll get a warning.

Masahiko Sawada, Michael Paquier, and Robert Haas.  The patch as
submitted also changed the behavior so that we would write and remove
history files on standbys, but that seems like material for a separate
patch to me.

Discussion: http://postgr.es/m/CAD21AoC2Xw6M=ZJyejq_9d_iDkReC_=rpvQRw5QsyzKQdfYpkw@mail.gmail.com
2017-08-05 10:49:26 -04:00
Tom Lane 0b13b2a771 Rethink behavior of pg_import_system_collations().
Marco Atzeri reported that initdb would fail if "locale -a" reported
the same locale name more than once.  All previous versions of Postgres
implicitly de-duplicated the results of "locale -a", but the rewrite
to move the collation import logic into C had lost that property.
It had also lost the property that locale names matching built-in
collation names were silently ignored.

The simplest way to fix this is to make initdb run the function in
if-not-exists mode, which means that there's no real use-case for
non if-not-exists mode; we might as well just drop the boolean argument
and simplify the function's definition to be "add any collations not
already known".  This change also gets rid of some odd corner cases
caused by the fact that aliases were added in if-not-exists mode even
if the function argument said otherwise.

While at it, adjust the behavior so that pg_import_system_collations()
doesn't spew "collation foo already exists, skipping" messages during a
re-run; that's completely unhelpful, especially since there are often
hundreds of them.  And make it return a count of the number of collations
it did add, which seems like it might be helpful.

Also, re-integrate the previous coding's property that it would make a
deterministic selection of which alias to use if there were conflicting
possibilities.  This would only come into play if "locale -a" reports
multiple equivalent locale names, say "de_DE.utf8" and "de_DE.UTF-8",
but that hardly seems out of the question.

In passing, fix incorrect behavior in pg_import_system_collations()'s
ICU code path: it neglected CommandCounterIncrement, which would result
in failures if ICU returns duplicate names, and it would try to create
comments even if a new collation hadn't been created.

Also, reorder operations in initdb so that the 'ucs_basic' collation
is created before calling pg_import_system_collations() not after.
This prevents a failure if "locale -a" were to report a locale named
that.  There's no reason to think that that ever happens in the wild,
but the old coding would have survived it, so let's be equally robust.
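
With the simplified definition, a usage sketch:
    SELECT pg_import_system_collations('pg_catalog');   -- returns the number of collations added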

Discussion: https://postgr.es/m/20c74bc3-d6ca-243d-1bbc-12f17fa4fe9a@gmail.com
2017-06-23 14:19:58 -04:00
Peter Eisentraut bbaf9e8f84 Documentation spell checking and markup improvements 2017-06-18 14:02:12 -04:00
Peter Eisentraut bf6e4c3c82 Trim trailing whitespace 2017-06-12 09:51:18 -04:00
Tom Lane f04c9a6146 Standardize terminology for pg_statistic_ext entries.
Consistently refer to such an entry as a "statistics object", not just
"statistics" or "extended statistics".  Previously we had a mishmash of
terms, accompanied by utter confusion as to whether the term was
singular or plural.  That's not only grating (at least to the ear of
a native English speaker) but could be outright misleading, eg in error
messages that seemed to be referring to multiple objects where only one
could be meant.

This commit fixes the code and a lot of comments (though I may have
missed a few).  I also renamed two new SQL functions,
pg_get_statisticsextdef -> pg_get_statisticsobjdef
pg_statistic_ext_is_visible -> pg_statistics_obj_is_visible
to conform better with this terminology.

I have not touched the SGML docs other than fixing those function
names; the docs certainly need work but it seems like a separable task.

Discussion: https://postgr.es/m/22676.1494557205@sss.pgh.pa.us
2017-05-14 10:55:01 -04:00
Alvaro Herrera d99d58cdc8 Complete tab completion for DROP STATISTICS
Tab-completing DROP STATISTICS would only work if you started writing
the schema name containing the statistics object, because the visibility
clause was missing.  To add it, we need to add SQL-callable support for
testing visibility of a statistics object, like all other object types
already have.

Discussion: https://postgr.es/m/22676.1494557205@sss.pgh.pa.us
2017-05-13 01:05:48 -03:00
Peter Eisentraut 46052d9ef3 Replace another "transaction log" with "write-ahead log"
Reported-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
2017-05-12 13:53:24 -04:00
Peter Eisentraut c1a7f64b4a Replace "transaction log" with "write-ahead log"
This makes documentation and error messages match the renaming of "xlog"
to "wal" in APIs and file naming.
2017-05-12 11:52:43 -04:00
Tom Lane efa2c18f4e Doc fix: scale(numeric) returns integer, not numeric.
Thinko in commit abb173392, which introduced this function.

Report: https://postgr.es/m/20170511215234.1795.54347@wrigleys.postgresql.org
2017-05-11 18:09:22 -04:00
Tom Lane d10c626de4 Rename WAL-related functions and views to use "lsn" not "location".
Per discussion, "location" is a rather vague term that could refer to
multiple concepts.  "LSN" is an unambiguous term for WAL locations and
should be preferred.  Some function names, view column names, and function
output argument names used "lsn" already, but others used "location",
as well as yet other terms such as "wal_position".  Since we've already
renamed a lot of things in this area from "xlog" to "wal" for v10,
we may as well incur a bit more compatibility pain and make these names
all consistent.

David Rowley, minor additional docs hacking by me

Discussion: https://postgr.es/m/CAKJS1f8O0njDKe8ePFQ-LK5-EjwThsDws6ohJ-+c6nWK+oUxtg@mail.gmail.com
2017-05-11 11:49:59 -04:00
Tom Lane a9c6d70435 Document current_role.
This system function has been there a very long time, but somehow escaped
being listed in func.sgml.

Fabien Coelho and Tom Lane

Discussion: https://postgr.es/m/alpine.DEB.2.20.1705061027580.3896@lancre
2017-05-06 14:19:47 -04:00
Alvaro Herrera 73c1748d83 Fix XMLTABLE synopsis, add XMLNAMESPACES example
Add a missing comma in the synopsis after the XMLNAMESPACES clause.
Also, add an example illustrating the use of that clause.

Author: Arjen Nienhuis and Pavel Stěhule
2017-04-13 12:12:23 -03:00
Bruce Momjian 1c1a4726eb docs: Improve window function docs
Specifically, the behavior of general-purpose and statistical aggregates
as window functions was not clearly documented, and terms were
inconsistently used.  Also add docs about the difference between
cume_dist and percent_rank, rather than just the formulas.

Discussion: 20170406214918.GA5757@momjian.us
2017-04-11 10:47:47 -04:00
Tom Lane 511540dadf Move isolationtester's is-blocked query into C code for speed.
Commit 4deb41381 modified isolationtester's query to see whether a
session is blocked to also check for waits occurring in GetSafeSnapshot.
However, it did that in a way that enormously increased the query's
runtime under CLOBBER_CACHE_ALWAYS, causing the buildfarm members
that use that to run about four times slower than before, and in some
cases fail entirely.  To fix, push the entire logic into a dedicated
backend function.  This should actually reduce the CLOBBER_CACHE_ALWAYS
runtime from what it was previously, though I've not checked that.

In passing, expose a SQL function to check for safe-snapshot blockage,
comparable to pg_blocking_pids.  This is more or less free given the
infrastructure built to solve the other problem, so we might as well.

Thomas Munro

Discussion: https://postgr.es/m/20170407165749.pstcakbc637opkax@alap3.anarazel.de
2017-04-10 10:26:54 -04:00
Andrew Dunstan cf35346e81 Make json_populate_record and friends operate recursively
With this change array fields are populated from json(b) arrays, and
composite fields are populated from json(b) objects.

Along the way, some significant code refactoring is done to remove
redundancy in the way the populate_record[_set] and to_record[_set]
functions operate, and some significant efficiency gains are made by
caching tuple descriptors.

Nikita Glukhov, edited some by me.

Reviewed by Aleksander Alekseev and Tom Lane.
2017-04-06 22:22:13 -04:00
Andrew Dunstan e75a78656b Clarify documentation of to_tsvector(json(b))
Per gripe from Sven R. Kunze
2017-04-04 16:50:13 -04:00
Alvaro Herrera c655899ba9 BRIN de-summarization
When the BRIN summary tuple for a page range becomes too "wide" for the
values actually stored in the table (because the tuples that were
present originally are no longer present due to updates or deletes), it
can be useful to remove the outdated summary tuple, so that a future
summarization can install a tighter summary.

This commit introduces a SQL-callable interface to do so.
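
The function added is brin_desummarize_range(); a usage sketch (index name and
block number invented):
    SELECT brin_desummarize_range('my_brin_index', 0);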

Author: Álvaro Herrera
Reviewed-by: Eiji Seki
Discussion: https://postgr.es/m/20170228045643.n2ri74ara4fhhfxf@alvherre.pgsql
2017-04-01 16:10:04 -03:00
Alvaro Herrera 7526e10224 BRIN auto-summarization
Previously, only VACUUM would cause a page range to get initially
summarized by BRIN indexes, which for some use cases takes too long after
the inserts occur.  To avoid the delay, have brininsert request a
summarization run for the previous range as soon as the first tuple is
inserted into the first page of the next range.  Autovacuum is in charge
of processing these requests, after doing all the regular vacuuming/
analyzing work on tables.

This doesn't impose any new tasks on autovacuum, because autovacuum was
already in charge of doing summarizations.  The only actual effect is to
change the timing, i.e. that it occurs earlier.  For this reason, we
don't go to any great lengths to record these requests very robustly; if
they are lost because of a server crash or restart, they will happen at
a later time anyway.

Most of the new code here is in autovacuum, which can now be told about
"work items" to process.  This can be used for other things such as GIN
pending list cleaning, perhaps visibility map bit setting, both of which
are currently invoked during vacuum, but do not really depend on vacuum
taking place.

The requests are at the page range level, a granularity for which we did
not have SQL-level access; we only had index-level summarization
requests via brin_summarize_new_values().  It seems reasonable to add
SQL-level access to range-level summarization too, so add a function
brin_summarize_range() to do that.
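
A usage sketch of the new function (index name and block number invented):
    SELECT brin_summarize_range('my_brin_index', 0);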

Authors: Álvaro Herrera, based on sketch from Simon Riggs.
Reviewed-by: Thomas Munro.
Discussion: https://postgr.es/m/20170301045823.vneqdqkmsd4as4ds@alvherre.pgsql
2017-04-01 14:00:53 -03:00
Andrew Dunstan e306df7f9c Full Text Search support for json and jsonb
The new functions are ts_headline() and to_tsvector.
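
A usage sketch:
    SELECT to_tsvector('english', '{"a": "The Fat Rats", "b": "cats"}'::jsonb);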

Dmitry Dolgov, edited and documented by me.
2017-03-31 14:26:03 -04:00
Simon Riggs 25fff40798 Default monitoring roles
Three nologin roles with non-overlapping privs are created by default
* pg_read_all_settings - read all GUCs.
* pg_read_all_stats - pg_stat_*, pg_database_size(), pg_tablespace_size()
* pg_stat_scan_tables - may lock/scan tables

A top-level role, pg_monitor, includes all of the above by default, plus others
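
A sketch of granting one of these roles (the grantee name is invented):
    GRANT pg_monitor TO monitoring_user;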

Author: Dave Page
Reviewed-by: Stephen Frost, Robert Haas, Peter Eisentraut, Simon Riggs
2017-03-30 14:18:53 -04:00
Simon Riggs ff539da316 Cleanup slots during drop database
Automatically drop all logical replication slots associated with a
database when the database is dropped. Previously we threw an ERROR
if a slot existed. Now we throw ERROR only if a slot is active in
the database being dropped.

Craig Ringer
2017-03-28 10:05:21 -04:00
Alvaro Herrera 7b504eb282 Implement multivariate n-distinct coefficients
Add support for explicitly declared statistic objects (CREATE
STATISTICS), allowing collection of statistics on more complex
combinations than individual table columns.  Companion commands DROP
STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are
added too.  All this DDL has been designed so that more statistic types
can be added later on, such as multivariate most-common-values and
multivariate histograms between columns of a single table, leaving room
for permitting columns on multiple tables, too, as well as expressions.

This commit only adds support for collection of n-distinct coefficient
on user-specified sets of columns in a single table.  This is useful to
estimate number of distinct groups in GROUP BY and DISTINCT clauses;
estimation errors there can cause over-allocation of memory in hashed
aggregates, for instance, so it's a worthwhile problem to solve.  A new
special pseudo-type pg_ndistinct is used.

(num-distinct estimation was deemed sufficiently useful by itself that
this is worthwhile even if no further statistic types are added
immediately; so much so that another version of essentially the same
functionality was submitted by Kyotaro Horiguchi:
https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp
though this commit does not use that code.)

Author: Tomas Vondra.  Some code rework by Álvaro.
Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes,
    Ideriha Takeshi
Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
    https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
2017-03-24 14:06:10 -03:00
Robert Haas 857ee8e391 Add a txid_status function.
If your connection to the database server is lost while a COMMIT is
in progress, it may be difficult to figure out whether the COMMIT was
successful or not.  This function will tell you, provided that you
don't wait too long to ask.  It may be useful in other situations,
too.
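
A usage sketch:
    SELECT txid_status(txid_current());   -- 'in progress', 'committed', or 'aborted'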

Craig Ringer, reviewed by Simon Riggs and by me

Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
2017-03-24 12:00:53 -04:00
Peter Eisentraut eccfef81e1 ICU support
Add a column collprovider to pg_collation that determines which library
provides the collation data.  The existing choices are default and libc,
and this adds an icu choice, which uses the ICU4C library.

The pg_locale_t type is changed to a union that contains the
provider-specific locale handles.  Users of locale information are
changed to look into that struct for the appropriate handle to use.

Also add a collversion column that records the version of the collation
when it is created, and check at run time whether it is still the same.
This detects potentially incompatible library upgrades that can corrupt
indexes and other structures.  This is currently only supported by
ICU-provided collations.

initdb initializes the default collation set as before from the `locale
-a` output but also adds all available ICU locales with a "-x-icu"
appended.

Currently, ICU-provided collations can only be explicitly named
collations.  The global database locales are still always libc-provided.

ICU support is enabled by configure --with-icu.

Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
2017-03-23 15:28:48 -04:00
Stephen Frost 017e4f2588 Expose waitforarchive option through pg_stop_backup()
Internally, we have supported the option to either wait for all of the
WAL associated with a backup to be archived, or to return immediately.
This option is useful to users of pg_stop_backup() as well, when they
are reading the stop backup record position and checking that the WAL
they need has been archived independently.

This patch adds an additional, optional, argument to pg_stop_backup()
which allows the user to indicate if they wish to wait for the WAL to be
archived or not.  The default matches current behavior, which is to
wait.
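
A sketch of the new optional argument (the second argument is wait_for_archive):
    SELECT * FROM pg_stop_backup(false, true);   -- non-exclusive stop, wait for archiving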

Author: David Steele, with some minor changes, doc updates by me.
Reviewed by: Takayuki Tsunakawa, Fujii Masao
Discussion: https://postgr.es/m/758e3fd1-45b4-5e28-75cd-e9e7f93a4c02@pgmasters.net
2017-03-22 23:44:58 -04:00
Robert Haas 88e66d193f Rename "pg_clog" directory to "pg_xact".
Names containing the letters "log" sometimes confuse users into
believing that only non-critical data is present.  It is hoped
this renaming will discourage ill-considered removals of transaction
status data.

Michael Paquier

Discussion: http://postgr.es/m/CA+Tgmoa9xFQyjRZupbdEFuwUerFTvC6HjZq1ud6GYragGDFFgA@mail.gmail.com
2017-03-17 09:48:38 -04:00
Robert Haas b4ff8609db Fix grammar.
This would have been grammatical if the closing tag name were actually
part of the output, but of course it's not.
2017-03-16 20:29:42 -04:00
Robert Haas befd73c50f Add pg_ls_logdir() and pg_ls_waldir() functions.
These functions are intended to be used by monitoring tools, and,
unlike pg_ls_dir(), access to them can be granted to non-superusers,
so that those monitoring tools can observe the principle of least
privilege.
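
A usage sketch:
    SELECT * FROM pg_ls_waldir();
    SELECT * FROM pg_ls_logdir();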

Dave Page, revised by me, and also reviewed a bit by Thomas Munro.

Discussion: http://postgr.es/m/CA+OCxow-X=D2fWdKy+HP+vQ1LtrgbsYQ=CshzZBqyFT5jOYrFw@mail.gmail.com
2017-03-16 15:05:02 -04:00
Stephen Frost c7a9fa399d Add support for EUI-64 MAC addresses as macaddr8
This adds in support for EUI-64 MAC addresses by adding a new data type
called 'macaddr8' (using our usual convention of indicating the number
of bytes stored).

This was largely a copy-and-paste from the macaddr data type, with
appropriate adjustments for having 8 bytes instead of 6 and adding
support for converting a provided EUI-48 (6 byte format) to the EUI-64
format.  Conversion from EUI-48 to EUI-64 inserts FFFE as the 4th and
5th bytes but does not perform the IPv6 modified EUI-64 action of
flipping the 7th bit; instead, we add a function to perform that specific
action for the user, as it is commonly needed by users who wish to
calculate their IPv6 address based on their network prefix and 48-bit
MAC address.
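
A usage sketch of the type and the 7th-bit helper (addresses invented):
    SELECT '08:00:2b:01:02:03'::macaddr8;          -- stored as 08:00:2b:ff:fe:01:02:03
    SELECT macaddr8_set7bit('08:00:2b:01:02:03'::macaddr8);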

Author: Haribabu Kommi, with a good bit of rework of macaddr8_in by me.
Reviewed by: Vitaly Burovoy, Kuntal Ghosh

Discussion: https://postgr.es/m/CAJrrPGcUi8ZH+KkK+=TctNQ+EfkeCEHtMU_yo1mvX8hsk_ghNQ@mail.gmail.com
2017-03-15 11:16:25 -04:00
Alvaro Herrera fcec6caafa Support XMLTABLE query expression
XMLTABLE is defined by the SQL/XML standard as a feature that allows
turning XML-formatted data into relational form, so that it can be used
as a <table primary> in the FROM clause of a query.

This new construct provides significant simplicity and performance
benefit for XML data processing; what in a client-side custom
implementation was reported to take 20 minutes can be executed in 400ms
using XMLTABLE.  (The same functionality was said to take 10 seconds
using nested PostgreSQL XPath function calls, and 5 seconds using
XMLReader under PL/Python).

The implemented syntax deviates slightly from what the standard
requires.  First, the standard indicates that the PASSING clause is
optional and that multiple XML input documents may be given to it; we
make it mandatory and accept a single document only.  Second, we don't
currently support a default namespace to be specified.
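
A minimal usage sketch (the document and column names are invented):
    SELECT t.*
      FROM XMLTABLE('/rows/row'
                    PASSING '<rows><row><a>1</a><b>x</b></row></rows>'::xml
                    COLUMNS a int  PATH 'a',
                            b text PATH 'b') AS t;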

This implementation relies on a new executor node based on a hardcoded
method table.  (Because the grammar is fixed, there is no extensibility
in the current approach; further constructs can be implemented on top of
this such as JSON_TABLE, but they require changes to core code.)

Author: Pavel Stehule, Álvaro Herrera
Extensively reviewed by: Craig Ringer
Discussion: https://postgr.es/m/CAFj8pRAgfzMD-LoSmnMGybD0WsEznLHWap8DO79+-GTRAPR4qA@mail.gmail.com
2017-03-08 12:40:26 -03:00
Robert Haas 19dc233c32 Add pg_current_logfile() function.
The syslogger will write out the current stderr and csvlog names, if
it's running and there are any, to a new file in the data directory
called "current_logfiles".  We take care to remove this file when it
might no longer be valid (but not at shutdown).  The function
pg_current_logfile() can be used to read the entries in the file.
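
A usage sketch:
    SELECT pg_current_logfile();            -- current stderr log file, if any
    SELECT pg_current_logfile('csvlog');    -- current csvlog file, if any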

Gilles Darold, reviewed and modified by Karl O.  Pinc, Michael
Paquier, and me.  Further review by Álvaro Herrera and Christoph Berg.
2017-03-03 11:43:11 +05:30
Tom Lane d99706ed51 Update documentation of tsquery_phrase().
Missed in commit 028350f61.  Noted by Eiji Seki.
2017-03-02 09:34:35 -05:00
Alvaro Herrera 4461a9bfd1 Create <sect3> in the functions-xml section
This is a small change so that a new XMLTABLE sect3 can be added easily
later.

Author: Craig Ringer
Discussion: https://postgr.es/m/CAFj8pRAgfzMD-LoSmnMGybD0WsEznLHWap8DO79+-GTRAPR4qA@mail.gmail.com
2017-03-01 19:27:24 -03:00
Tom Lane d28aafb6dd Remove pg_control's enableIntTimes field.
We don't need it any more.

pg_controldata continues to report that date/time type storage is
"64-bit integers", but that's now a hard-wired behavior not something
it sees in the data.  This avoids breaking pg_upgrade, and perhaps other
utilities that inspect pg_control this way.  Ditto for pg_resetwal.

I chose to remove the "bigint_timestamps" output column of
pg_control_init(), though, as that function hasn't been around long
and probably doesn't have ossified users.

Discussion: https://postgr.es/m/26788.1487455319@sss.pgh.pa.us
2017-02-23 12:23:12 -05:00
Tom Lane 10257fc5ff Fix documentation of to_char/to_timestamp TZ, tz, OF formatting patterns.
These are only supported in to_char, not in the other direction, but the
documentation failed to mention that.  Also, describe TZ/tz as printing the
time zone "abbreviation", not "name", because what they print is elsewhere
referred to that way.  Per bug #14558.
2017-02-20 10:05:00 -05:00
Robert Haas 806091c96f Remove all references to "xlog" from SQL-callable functions in pg_proc.
Commit f82ec32ac3 renamed the pg_xlog
directory to pg_wal.  To make things consistent, and because "xlog" is
terrible terminology for either "transaction log" or "write-ahead log"
rename all SQL-callable functions that contain "xlog" in the name to
instead contain "wal".  (Note that this may pose an upgrade hazard for
some users.)

Similarly, rename the xlog_position argument of the functions that
create slots to be called wal_position.

Discussion: https://www.postgresql.org/message-id/CA+Tgmob=YmA=H3DbW1YuOXnFVgBheRmyDkWcD9M8f=5bGWYEoQ@mail.gmail.com
2017-02-09 15:10:09 -05:00
Peter Eisentraut 696af9ab0a doc: Document sequence function privileges better
Document the privileges required for each of the sequence functions.
This was already in the GRANT reference page, but also add it to the
function description for easier reference.
2017-02-06 15:27:01 -05:00
Peter Eisentraut 665d1fad99 Logical replication
- Add PUBLICATION catalogs and DDL
- Add SUBSCRIPTION catalog and DDL
- Define logical replication protocol and output plugin
- Add logical replication workers

From: Petr Jelinek <petr@2ndquadrant.com>
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Erik Rijkers <er@xs4all.nl>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
2017-01-20 09:04:49 -05:00
Magnus Hagander d00ca333c3 Implement array version of jsonb_delete and operator
This makes it possible to delete multiple keys from a jsonb value by
passing in an array of text values, which makes the operation much
faster than individually deleting the keys (which would require copying
the jsonb structure over and over again).
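
A sketch of the new operator:
    SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - '{a,c}'::text[];   -- {"b": 2}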

Reviewed by Dmitry Dolgov and Michael Paquier
2017-01-18 21:37:59 +01:00
Peter Eisentraut aa17c06fb5 Add function to import operating system collations
Move this logic out of initdb into a user-callable function.  This
simplifies the code and makes it possible to update the standard
collations later on if additional operating system collations appear.

Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Euler Taveira <euler@timbira.com.br>
2017-01-18 09:35:56 -05:00
Tom Lane 3c9d398484 Doc: improve index entry for "median".
We had an index entry for "median" attached to the percentile_cont function
entry, which was pretty useless because a person following the link would
never realize that that function was the one they were being hinted to use.

Instead, make the index entry point at the example in syntax-aggregates,
and add a <seealso> link to "percentile".

Also, since that example explicitly claims to be calculating the median,
make it use percentile_cont not percentile_disc.  This makes no difference
in terms of the larger goals of that section, but so far as I can find,
nearly everyone thinks that "median" means the continuous not discrete
calculation.
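
The example referred to, roughly (table and column names invented):
    SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households;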

Per gripe from Steven Winfield.  Back-patch to 9.4 where we introduced
percentile_cont.

Discussion: https://postgr.es/m/20161223102056.25614.1166@wrigleys.postgresql.org
2016-12-23 12:53:09 -05:00
Peter Eisentraut a924c327e2 Add support for temporary replication slots
This allows creating temporary replication slots that are removed
automatically at the end of the session or on error.
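
In the released SQL-level slot creation functions this is exposed as an optional
argument; a sketch (slot name invented):
    SELECT pg_create_logical_replication_slot('tmp_slot', 'test_decoding', true);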

From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
2016-12-12 08:38:17 -05:00
Tom Lane 213c0f2d78 Doc: improve description of trim() and related functions.
Per bug #14441 from Mark Pether, the documentation could be misread,
mainly because some of the examples failed to show what happens with
a multicharacter "characters to trim" string.  Also, while the text
description in most of these entries was fairly clear that the
"characters" argument is a set of characters not a substring to match,
some of them used variant wording that was a bit less clear.
trim() itself suffered from both deficiencies and was thus pretty
misinterpretable.

Also fix failure to explain which of LEADING/TRAILING/BOTH is the
default.
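
A sketch of the multicharacter "characters to trim" case:
    SELECT trim(both 'xyz' from 'yxTomxx');   -- 'Tom'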

Discussion: https://postgr.es/m/20161130011710.6539.53657@wrigleys.postgresql.org
2016-11-30 13:34:13 -05:00
Robert Haas f82ec32ac3 Rename "pg_xlog" directory to "pg_wal".
"xlog" is not a particularly clear abbreviation for "write-ahead log",
and it sometimes confuses users into believing that the contents of the
"pg_xlog" directory are not critical data, leading to unpleasant
consequences.  So, rename the directory to "pg_wal".

This patch modifies pg_upgrade and pg_basebackup to understand both
the old and new directory layouts; the former is necessary given the
purpose of the tool, while the latter merely avoids an unnecessary
backward-compatibility break.

We may wish to consider renaming other programs, switches, and
functions which still use the old "xlog" naming to also refer to
"wal".  However, that's still under discussion, so let's do just this
much for now.

Discussion: CAB7nPqTeC-8+zux8_-4ZD46V7YPwooeFxgndfsq5Rg8ibLVm1A@mail.gmail.com

Michael Paquier
2016-10-20 11:32:18 -04:00
Tom Lane d3cd36a133 Make to_timestamp() and to_date() range-check fields of their input.
Historically, something like to_date('2009-06-40','YYYY-MM-DD') would
return '2009-07-10' because there was no prohibition on out-of-range
month or day numbers.  This has been widely panned, and it also turns
out that Oracle throws an error in such cases.  Since these functions
are nominally Oracle-compatibility features, let's change that.

There's no particular restriction on year (modulo the fact that the
scanner may not believe that more than 4 digits are year digits,
a matter to be addressed separately if at all).  But we now check month,
day, hour, minute, second, and fractional-second fields, as well as
day-of-year and second-of-day fields if those are used.

Currently, no checks are made on ISO-8601-style week numbers or day
numbers; it's not very clear what the appropriate rules would be there,
and they're probably so little used that it's not worth sweating over.
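
With this change, the example above now fails instead of silently overflowing:
    SELECT to_date('2009-06-40', 'YYYY-MM-DD');   -- now raises an error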

Artur Zakirov, reviewed by Amul Sul, further adjustments by me

Discussion: <1873520224.1784572.1465833145330.JavaMail.yahoo@mail.yahoo.com>
See-Also: <57786490.9010201@wars-nicht.de>
2016-09-28 14:36:17 -04:00
Tom Lane a4afb2b5c0 Document has_type_privilege().
Evidently an oversight in commit 729205571.  Back-patch to 9.2 where
privileges for types were introduced.
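
A usage sketch (the role name is invented):
    SELECT has_type_privilege('app_user', 'numeric', 'USAGE');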

Report: <20160922173517.8214.88959@wrigleys.postgresql.org>
2016-09-26 11:50:35 -04:00
Tom Lane 5a7bae0699 Doc: fix examples of # operators so they actually work.
These worked as-is until around 7.0, but fail in newer versions because
there are more operators named "#".  Besides it's a bit inconsistent that
only two of the examples on this page lack type names on their constants.

Report: <20160923081530.1517.75670@wrigleys.postgresql.org>
2016-09-23 14:22:07 -04:00
Tom Lane 42fd984c0b Docs: assorted minor cleanups.
Standardize on "user_name" for a field name in related examples in
ddl.sgml; before we had variously "user_name", "username", and "user".
The last is flat wrong because it conflicts with a reserved word.

Be consistent about entry capitalization in a table in func.sgml.

Fix a typo in pgtrgm.sgml.

Back-patch to 9.6 and 9.5 as relevant.

Alexander Law
2016-09-12 19:19:41 -04:00
Tom Lane 39d866433e Make another editorial pass over the 9.6 release notes.
I think they're pretty much release-quality now.
2016-08-28 17:40:06 -04:00
Bruce Momjian ca9cb940d2 doc: more replacement of <literal> with something better
Reported-by: Alexander Law

Author: Alexander Law

Backpatch-through: 9.6
2016-08-24 21:11:44 -04:00
Robert Haas 86f31695f3 Add txid_current_ifassigned().
Add a variant of txid_current() that returns NULL if no transaction ID
is assigned.  This version can be used even on a standby server,
although it will always return NULL since no transaction IDs can be
assigned during recovery.

Craig Ringer, per suggestion from Jim Nasby.  Reviewed by Petr Jelinek
and by me.
2016-08-23 10:30:52 -04:00
Tom Lane cf9b0fea5f Implement regexp_match(), a simplified alternative to regexp_matches().
regexp_match() is like regexp_matches(), but it disallows the 'g' flag
and in consequence does not need to return a set.  Instead, it returns
a simple text array value, or NULL if there's no match.  Previously people
usually got that behavior with a sub-select, but this way is considerably
more efficient.

Documentation adjusted so that regexp_match() is presented first and then
regexp_matches() is introduced as a more complicated version.  This is
a bit historically revisionist but seems pedagogically better.
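
A usage sketch:
    SELECT regexp_match('foobarbequebaz', '(bar)(beque)');   -- {bar,beque}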

Still TODO: extend contrib/citext to support this function.

Emre Hasegeli, reviewed by David Johnston

Discussion: <CAE2gYzy42sna2ME_e3y1KLQ-4UBrB-eVF0SWn8QG39sQSeVhEw@mail.gmail.com>
2016-08-17 18:33:01 -04:00
Tom Lane ed0097e4f9 Add SQL-accessible functions for inspecting index AM properties.
Per discussion, we should provide such functions to replace the lost
ability to discover AM properties by inspecting pg_am (cf commit
65c5fcd35).  The added functionality is also meant to displace any code
that was looking directly at pg_index.indoption, since we'd rather not
believe that the bit meanings in that field are part of any client API
contract.

As future-proofing, define the SQL API to not assume that properties that
are currently AM-wide or index-wide will remain so unless they logically
must be; instead, expose them only when inquiring about a specific index
or even specific index column.  Also provide the ability for an index
AM to override the behavior.

In passing, document pg_am.amtype, overlooked in commit 473b93287.

Andrew Gierth, with kibitzing by me and others

Discussion: <87mvl5on7n.fsf@news-spur.riddles.org.uk>
2016-08-13 18:31:14 -04:00
Tom Lane f10eab73df Make array_to_tsvector() sort and de-duplicate the given strings.
This is required for the result to be a legal tsvector value.
Noted while fooling with Andreas Seltenreich's ts_delete() crash.
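
A sketch of the resulting behavior:
    SELECT array_to_tsvector('{fat,cat,rat,cat}'::text[]);   -- 'cat' 'fat' 'rat'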

Discussion: <87invhoj6e.fsf@credativ.de>
2016-08-05 16:09:06 -04:00
Peter Eisentraut 5676da2d01 Documentation spell checking and markup improvements 2016-07-28 22:46:15 -04:00
Tom Lane 4452000f31 Fix constant-folding of ROW(...) IS [NOT] NULL with composite fields.
The SQL standard appears to specify that IS [NOT] NULL's tests of field
nullness are non-recursive, ie, we shouldn't consider that a composite
field with value ROW(NULL,NULL) is null for this purpose.
ExecEvalNullTest got this right, but eval_const_expressions did not,
leading to weird inconsistencies depending on whether the expression
was such that the planner could apply constant folding.

Also, adjust the docs to mention that IS [NOT] DISTINCT FROM NULL can be
used as a substitute test if a simple null check is wanted for a rowtype
argument.  That motivated reordering things so that IS [NOT] DISTINCT FROM
is described before IS [NOT] NULL.  In HEAD, I went a bit further and added
a table showing all the comparison-related predicates.
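
A sketch of the non-recursive rule described above:
    SELECT ROW(NULL, NULL) IS NULL;        -- true: every top-level field is NULL
    SELECT ROW(ROW(NULL, NULL)) IS NULL;   -- false: the composite field itself is not NULL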

Per bug #14235.  Back-patch to all supported branches, since it's certainly
undesirable that constant-folding should change the semantics.

Report and patch by Andrew Gierth; assorted wordsmithing and revised
regression test cases by me.

Report: <20160708024746.1410.57282@wrigleys.postgresql.org>
2016-07-26 15:25:02 -04:00
Bruce Momjian 8a395e0b9a doc: add link for list-of-scalars mention
Reported-by: Manlio Perillo

Bug: 14016

Discussion: 20160311163928.6674.94707@wrigleys.postgresql.org

Reviewed-by: David G. Johnston
2016-06-28 16:16:06 -04:00
Bruce Momjian 675684fc23 doc: remove mention of UT1 in representing time
UT1 was incorrectly specified as our time representation.  (UT1 is
astronomical time.)  We are not actually UTC either because we ignore
leap seconds.

Reported-by: Thomas Munro

Discussion: CAEepm=3-TW9PLwGZhqjSSiEQ9UzJEKE-HELQDzRE0QUSCp8dgw@mail.gmail.com
2016-06-28 13:49:37 -04:00
Tom Lane 7feb60c1bb Clarify documentation of ceil/ceiling/floor functions.
Document these as "nearest integer >= argument" and "nearest integer <=
argument", which will hopefully be less confusing than the old formulation.
New wording is from Matlab via Dean Rasheed.
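
For example:
    SELECT ceil(-42.8), floor(-42.8);   -- -42 and -43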

I changed the pg_description entries as well as the SGML docs.  In the
back branches, this will only affect installations initdb'd in the future,
but it should be harmless otherwise.

Discussion: <CAEZATCW3yzJo-NMSiQs5jXNFbTsCEftZS-Og8=FvFdiU+kYuSA@mail.gmail.com>
2016-06-09 11:58:00 -04:00
Tom Lane 6581e930a8 Polish the documentation concerning phrase text search.
Fix grammar, improve examples, etc.

I did not attempt to document the current behavior concerning distance-zero
matches, because I think that's broken and needs to change, so I'm not
going to use up brain cells figuring out how to explain how it works now.
One way or the other, there's still more to write here.
2016-06-09 00:30:59 -04:00
Tom Lane 82eafabeaa Improve docs about using ORDER BY to control aggregate input order.
David Johnston pointed out that the original text here had been obsoleted
by SQL:2008, which allowed ORDER BY in subqueries.  We could weaken the
text to describe ORDER-BY-in-subqueries as an optional SQL feature that's
possibly unportable; but then the exact same statements would apply to
the alternative it's being compared to (ORDER-BY-in-aggregate-calls).
So really that would be pretty useless; let's just take out the sentence
entirely.  Instead, point out the hazard that any extra processing in the
upper query might cause the subquery output order to be destroyed.

Discussion: <CAKFQuwbAX=iO9QbpN7_jr+BnUWm9FYX8WbEPUvG0p+nZhp6TZg@mail.gmail.com>
2016-05-21 12:55:31 -04:00
Tom Lane 691d99de38 Docs: improve warnings about nextval() not producing gapless sequences.
In the documentation for nextval(), point out explicitly that INSERT ...
ON CONFLICT will call nextval() if needed for the insertion case, whether
or not it ends up following the ON CONFLICT path.  This seems to be a
matter of some confusion, cf bug #14126, so let's be clear about it.

Also mention the issue in the CREATE SEQUENCE reference page, since that
is another place where people might expect such things to be covered.

Minor wording improvements nearby, as well.

Back-patch to 9.5 where ON CONFLICT was introduced.
2016-05-07 13:16:50 -04:00
Tom Lane 0b9a234432 Rename tsvector delete() to ts_delete(), and filter() to ts_filter().
The similarity of the original names to SQL keywords seems like a bad
idea.  Rename them before we're stuck with 'em forever.

In passing, minor code and docs cleanup.
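
A usage sketch of the renamed functions:
    SELECT ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, 'fat');
    SELECT ts_filter('fat:2,4 cat:3b rat:5A'::tsvector, '{a,b}');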

Discussion: <4875.1462210058@sss.pgh.pa.us>
2016-05-05 19:43:32 -04:00
Tom Lane 21c2b1c611 Docs: improve index entries for new tsvector functions.
Fix typos, reword some overly general index entries.
2016-05-02 13:28:52 -04:00
Fujii Masao c8cb745323 Fix duplicated index entry in doc.
Commit cfe96ae corrected the name of pg_logical_emit_message()
in its index entry. But this typo fix caused a duplicate index entry,
because there was already another index entry for the function.

Spotted by Tom Lane.
2016-04-14 11:17:41 +09:00
Fujii Masao cfe96ae24c Fix documented return type of pg_logical_emit_message() in func.sgml. 2016-04-11 21:28:17 +09:00
Stephen Frost 7a542700df Create default roles
This creates an initial set of default roles which administrators may
use to grant access to, historically, superuser-only functions.  Using
these roles instead of granting superuser access reduces the number of
superuser roles required for a system.  Documentation for each of the
default roles has been added to user-manag.sgml.

Bump catversion to 201604082, as we had a commit that bumped it to
201604081 and another that set it back to 201604071...

Reviews by José Luis Tallón and Robert Haas
2016-04-08 16:56:27 -04:00
Robert Haas af025eed53 Add combine functions for various floating-point aggregates.
This allows parallel aggregation to use them.  It may seem surprising
that we use float8_combine for both float4_accum and float8_accum
transition functions, but that's because those functions differ only
in the type of the non-transition-state argument.

Haribabu Kommi, reviewed by David Rowley and Tomas Vondra
2016-04-08 13:47:06 -04:00
Robert Haas 752b948dfc Document which aggregates support partial mode.
David Rowley, reviewed by Tomas Vondra
2016-04-08 12:13:07 -04:00
Teodor Sigaev bb140506df Phrase full text search.
Patch introduces new text search operator (<-> or <DISTANCE>) into tsquery.
On-disk and binary in/out format of tsquery are backward compatible.
It has two side effects:
- changed ordering for tsquery, so users who have a btree index over tsquery
  should reindex it
- fewer parentheses in tsquery output, so tsquery becomes more
  readable
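
A sketch of the new operator:
    SELECT to_tsvector('english', 'a fat cat ate a rat') @@ to_tsquery('fat <-> cat');   -- true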

Authors: Teodor Sigaev, Oleg Bartunov, Dmitry Ivanov
Reviewers: Alexander Korotkov, Artur Zakirov
2016-04-07 18:44:18 +03:00
Stephen Frost 1574783b4c Use GRANT system to manage access to sensitive functions
Now that pg_dump will properly dump out any ACL changes made to
functions which exist in pg_catalog, switch to using the GRANT system
to manage access to those functions.

This means removing 'if (!superuser()) ereport()' checks from the
functions themselves and then REVOKEing EXECUTE right from 'public' for
these functions in system_views.sql.

Reviews by Alexander Korotkov, Jose Luis Tallon
2016-04-06 21:45:32 -04:00
Teodor Sigaev 0b62fd036e Add jsonb_insert
It inserts a new value into a jsonb array at an arbitrary position, or
adds a new key to a jsonb object.
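
A usage sketch:
    SELECT jsonb_insert('{"a": [0, 1, 2]}', '{a, 1}', '"new_value"');
    -- {"a": [0, "new_value", 1, 2]}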

Author: Dmitry Dolgov
Reviewers: Petr Jelinek, Vitaly Burovoy, Andrew Dunstan
2016-04-06 19:25:00 +03:00
Simon Riggs 3fe3511d05 Generic Messages for Logical Decoding
API and mechanism to allow generic messages to be inserted into WAL that are
intended to be read by logical decoding plugins. This commit adds an optional
new callback to the logical decoding API.

Messages are either text or bytea. Messages can be transactional, or not, and
are identified by a prefix to allow multiple concurrent decoding plugins.

(Not to be confused with Generic WAL records, which are intended to allow crash
recovery of extensible objects.)
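
The SQL-level entry point is pg_logical_emit_message(); a usage sketch (prefix
and payload invented):
    SELECT pg_logical_emit_message(true, 'my_prefix', 'some payload');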

Author: Petr Jelinek and Andres Freund
Reviewers: Artur Zakirov, Tomas Vondra, Simon Riggs
Discussion: 5685F999.6010202@2ndquadrant.com
2016-04-06 10:05:41 +01:00
Magnus Hagander 7117685461 Implement backup API functions for non-exclusive backups
Previously non-exclusive backups had to be done using the replication protocol
and pg_basebackup. With this commit it's now possible to make them using
pg_start_backup/pg_stop_backup as well, as long as the backup program can
maintain a persistent connection to the database.

Doing this, backup_label and tablespace_map are returned as results from
pg_stop_backup() instead of being written to the data directory. This makes
the server safe from a crash during an ongoing backup, which can be a problem
with exclusive backups.

The old syntax of the functions remains and works exactly as before, but since the
new syntax is safer it should eventually be deprecated and removed.

Only reference documentation is included. The main section on backup still needs
to be rewritten to cover this, but since that is already scheduled for a separate
large rewrite, it's not included in this patch.

Reviewed by David Steele and Amit Kapila
2016-04-05 20:03:49 +02:00
Tom Lane e511d878f3 Allow to_timestamp(float8) to convert float infinity to timestamp infinity.
With the original SQL-function implementation, such cases failed because
we don't support infinite intervals.  Converting the function to C lets
us bypass the interval representation, which should be a bit faster as
well as more flexible.
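
For example:
    SELECT to_timestamp('infinity'::float8);   -- timestamptz 'infinity'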

Vitaly Burovoy, reviewed by Anastasia Lubennikova
2016-03-29 17:09:29 -04:00
Tom Lane d12e5bb79b Code and docs review for commit 3187d6de0e.
Fix up check for high-bit-set characters, which provoked "comparison is
always true due to limited range of data type" warnings on some compilers,
and was unlike the way we do it elsewhere anyway.  Fix omission of "$"
from the set of valid identifier continuation characters.  Get rid of
sanitize_text(), which was utterly inconsistent with any other error report
anywhere in the system, and wasn't even well designed on its own terms
(double-quoting the result string without escaping contained double quotes
doesn't seem very well thought out).  Fix up error messages, which didn't
follow the message style guidelines very well, and were overly specific in
situations where the actual mistake might not be what they said.  Improve
documentation.

(I started out just intending to fix the compiler warning, but the more
I looked at the patch the less I liked it.)
2016-03-28 01:00:30 -04:00
Teodor Sigaev 61d2ebdbf9 Fix a typo
Erik Rijkers
2016-03-18 18:49:24 +03:00
Teodor Sigaev 3187d6de0e Introduce parse_ident()
SQL-layer function to split a qualified identifier into an array of its parts.
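
For example:

    SELECT parse_ident('"SomeSchema".someTable');
     -- {SomeSchema,sometable}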

Author: Pavel Stehule with minor editorization by me and Jim Nasby
2016-03-18 18:16:14 +03:00
Teodor Sigaev 6943a946c7 Tsvector editing functions
Adds several tsvector editing functions: convert tsvector to/from text array,
set weight for given lexemes, delete lexeme(s), unnest, and filter lexemes
with given weights.
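
A couple of rough examples:

    SELECT setweight('fat:2,4 cat:3 rat:5'::tsvector, 'A', '{cat}');
     -- 'cat':3A 'fat':2,4 'rat':5
    SELECT ts_delete('fat:2,4 cat:3 rat:5'::tsvector, 'fat');
     -- 'cat':3 'rat':5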

Author: Stas Kelvich with some editorization by me
Reviewers: Tomas Vondra, Teodor Sigaev
2016-03-11 19:22:36 +03:00
Joe Conway dc7d70ea05 Expose control file data via SQL accessible functions.
Add four new SQL accessible functions: pg_control_system(),
pg_control_checkpoint(), pg_control_recovery(), and pg_control_init()
which expose a subset of the control file data.
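
For example (each function returns a single row whose columns mirror
pg_controldata output):

    SELECT * FROM pg_control_system();
    SELECT * FROM pg_control_checkpoint();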

Along the way move the code to read and validate the control file to
src/common, where it can be shared by the new backend functions
and the original pg_controldata frontend program.

Patch by me, significant input, testing, and review by Michael Paquier.
2016-03-05 11:10:19 -08:00
Tom Lane 52f5d578d6 Create a function to reliably identify which sessions block which others.
This patch introduces "pg_blocking_pids(int) returns int[]", which returns
the PIDs of any sessions that are blocking the session with the given PID.
Historically people have obtained such information using a self-join on
the pg_locks view, but it's unreasonably tedious to do it that way with any
modicum of correctness, and the addition of parallel queries has pretty
much broken that approach altogether.  (Given some more columns in the view
than there are today, you could imagine handling parallel-query cases with
a 4-way join; but ugh.)
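
A rough example of the intended usage:

    SELECT pid, pg_blocking_pids(pid) AS blocked_by
      FROM pg_stat_activity
     WHERE cardinality(pg_blocking_pids(pid)) > 0;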

The new function has the following behaviors that are painful or impossible
to get right via pg_locks:

1. Correctly understands which lock modes block which other ones.

2. In soft-block situations (two processes both waiting for conflicting lock
modes), only the one that's in front in the wait queue is reported to
block the other.

3. In parallel-query cases, reports all sessions blocking any member of
the given PID's lock group, and reports a session by naming its leader
process's PID, which will be the pg_backend_pid() value visible to
clients.

The motivation for doing this right now is mostly to fix the isolation
tests.  Commit 38f8bdcac4 lobotomized
isolationtester's is-it-waiting query by removing its ability to recognize
nonconflicting lock modes, as a crude workaround for the inability to
handle soft-block situations properly.  But even without the lock mode
tests, the old query was excessively slow, particularly in
CLOBBER_CACHE_ALWAYS builds; some of our buildfarm animals fail the new
deadlock-hard test because the deadlock timeout elapses before they can
probe the waiting status of all eight sessions.  Replacing the pg_locks
self-join with use of pg_blocking_pids() is not only much more correct, but
a lot faster: I measure it at about 9X faster in a typical dev build with
Asserts, and 3X faster in CLOBBER_CACHE_ALWAYS builds.  That should provide
enough headroom for the slower CLOBBER_CACHE_ALWAYS animals to pass the
test, without having to lengthen deadlock_timeout yet more and thus slow
down the test for everyone else.
2016-02-22 14:31:43 -05:00
Tom Lane 64a169d131 Docs: make prose discussion match the ordering of Table 9-58.
The "Session Information Functions" table seems to be sorted mostly
alphabetically (although it's not perfect), which would be all right
if it didn't lead to some related functions being described in a
pretty nonintuitive order.  Also, the prose discussions after the table
were in an order that hardly matched the table at all.  Rearrange to
make things a bit easier to follow.
2016-02-21 15:23:17 -05:00
Dean Rasheed 53874c5228 Add pg_size_bytes() to parse human-readable size strings.
This will parse strings in the format produced by pg_size_pretty() and
return sizes in bytes. This allows queries to be written with clauses
like "pg_total_relation_size(oid) > pg_size_bytes('10 GB')".

Author: Pavel Stehule with various improvements by Vitaly Burovoy
Discussion: http://www.postgresql.org/message-id/CAFj8pRD-tGoDKnxdYgECzA4On01_uRqPrwF-8LdkSE-6bDHp0w@mail.gmail.com
Reviewed-by: Vitaly Burovoy, Oleksandr Shulgin, Kyotaro Horiguchi,
    Michael Paquier and Robert Haas
2016-02-20 09:57:27 +00:00
Tom Lane 6819514fca Add num_nulls() and num_nonnulls() to count NULL arguments.
An example use-case is "CHECK(num_nonnulls(a,b,c) = 1)" to assert that
exactly one of a,b,c isn't NULL.  The functions are variadic, so they
can also be pressed into service to count the number of null or nonnull
elements in an array.
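
Rough examples (table t and columns a, b, c are hypothetical):

    ALTER TABLE t ADD CONSTRAINT one_of_abc CHECK (num_nonnulls(a, b, c) = 1);
    SELECT num_nulls(VARIADIC ARRAY[1, NULL, 3, NULL]);   -- 2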

Marko Tiikkaja, reviewed by Pavel Stehule
2016-02-04 23:03:37 -05:00
Fujii Masao 7f46eaf035 Add gin_clean_pending_list function to clean up GIN pending list
This function cleans up the pending list of the GIN index by
moving entries in it to the main GIN data structure in bulk.
It returns the number of pages cleaned up from the pending list.

This function is useful, for example, when the pending list
needs to be cleaned up *quickly* to improve the performance of
the search using GIN index. VACUUM can do the same thing, too,
but it may take days to run on a large table.
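
Rough usage (the index name is made up):

    SELECT gin_clean_pending_list('my_gin_index'::regclass);   -- pages cleaned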

Jeff Janes,
reviewed by Julien Rouhaud, Jaime Casanova, Alvaro Herrera and me.

Discussion: CAMkU=1x8zFkpfnozXyt40zmR3Ub_kHu58LtRmwHUKRgQss7=iQ@mail.gmail.com
2016-01-28 12:57:52 +09:00
Tom Lane e1bd684a34 Add trigonometric functions that work in degrees.
The implementations go to some lengths to deliver exact results for values
where an exact result can be expected, such as sind(30) = 0.5 exactly.
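
For example:

    SELECT sind(30), cosd(60), tand(45);   -- 0.5, 0.5, 1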

Dean Rasheed, reviewed by Michael Paquier
2016-01-22 15:46:22 -05:00
Tom Lane 647d87c56a Make extract() do something more reasonable with infinite datetimes.
Historically, extract() just returned zero for any case involving an
infinite timestamp[tz] input; even cases in which the unit name was
invalid.  This is not very sensible.  Instead, return infinity or
-infinity as appropriate when the requested field is one that is
monotonically increasing (e.g., year, epoch), or NULL when it is not
(e.g., day, hour).  Also, throw the expected errors for bad unit names.
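
For example:

    SELECT extract(year FROM timestamp 'infinity');    -- Infinity
    SELECT extract(epoch FROM timestamp '-infinity');  -- -Infinity
    SELECT extract(hour FROM timestamp 'infinity');    -- NULL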

BACKWARDS INCOMPATIBLE CHANGE

Vitaly Burovoy, reviewed by Vik Fearing
2016-01-21 22:26:20 -05:00
Simon Riggs e63bb4549a Add new user fn pg_current_xlog_flush_location()
Tomas Vondra, reviewed by Michael Paquier and Amit Kapila
Minor edits by me
2016-01-12 07:54:52 +00:00
Alvaro Herrera abb1733922 Add scale(numeric)
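
For example:

    SELECT scale(8.4100);   -- 4
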
Author: Marko Tiikkaja
2016-01-05 19:02:13 -03:00
Tom Lane ea0d494dae Make the to_reg*() functions accept text not cstring.
Using cstring as the input type was a poor decision, because that's not
really a full-fledged type.  In particular, it lacks implicit coercions
from text or varchar, meaning that usages like to_regproc('foo'||'bar')
wouldn't work; basically the only case that did work without explicit
casting was a simple literal constant argument.
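
After this change, computed text works as expected, roughly:

    SELECT to_regproc('pg_catalog.' || 'now');   -- now
    SELECT to_regtype('int' || '4');             -- integer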

The lack of field complaints about this suggests that hardly anyone
is using these functions, so hopefully fixing it won't cause much of
a compatibility problem.  They've only been there since 9.4, anyway.

Petr Korobeinikov
2016-01-05 13:02:43 -05:00
Tom Lane 83be1844ac Add to_regnamespace() and to_regrole() to the documentation.
Commits cb9fa802b3 and 0c90f6769d added these functions,
but did not bother with documentation.
2016-01-05 12:35:18 -05:00
Tom Lane c6aeba353a Do some copy-editing on the docs for replication origins.
Minor grammar and markup improvements.
2016-01-03 16:03:42 -05:00
Alvaro Herrera ac443d1034 Document brin_summarize_new_pages
Pointed out by Jeff Janes
2015-12-28 15:28:19 -03:00
Tom Lane 54aaafe95f Document the exponentiation operator as associating left to right.
Common mathematical convention is that exponentiation associates right to
left.  We aren't going to change the parser for this, but we could note
it in the operator's description.  (It's already noted in the operator
precedence/associativity table, but users might not look there.)
Per bug #13829 from Henrik Pauli.
2015-12-28 12:09:00 -05:00
Tom Lane 71dd092c01 Docs: fix erroneously-given function name.
pg_replication_session_is_setup() exists nowhere; apparently this is
meant to refer to pg_replication_origin_session_is_setup().

Adrien Nayrat
2015-12-24 10:50:03 -05:00
Tom Lane fc0b893521 Remove obsolete advice about doubling backslashes in regex escapes.
Standard-conforming literals have been the default for long enough that
it no longer seems necessary to go out of our way to tell people to write
regex escapes illegibly.
2015-11-03 11:57:56 -05:00
Tom Lane 12c9a04008 Implement lookbehind constraints in our regular-expression engine.
A lookbehind constraint is like a lookahead constraint in that it consumes
no text; but it checks for existence (or nonexistence) of a match *ending*
at the current point in the string, rather than one *starting* at the
current point.  This is a long-requested feature since it exists in many
other regex libraries, but Henry Spencer had never got around to
implementing it in the code we use.
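
A couple of rough examples of the new syntax:

    SELECT regexp_matches('postgresql', '(?<=post)gres');   -- {gres}
    SELECT 'value: 42' ~ '(?<=: )\d+';                      -- true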

Just making it work is actually pretty trivial; but naive copying of the
logic for lookahead constraints leads to code that often spends O(N^2) time
to scan an N-character string, because we have to run the match engine
from string start to the current probe point each time the constraint is
checked.  In typical use-cases a lookbehind constraint will be written at
the start of the regex and hence will need to be checked at every character
--- so O(N^2) work overall.  To fix that, I introduced a third copy of the
core DFA matching loop, paralleling the existing longest() and shortest()
loops.  This version, matchuntil(), can suspend and resume matching given
a couple of pointers' worth of storage space.  So we need only run it
across the string once, stopping at each interesting probe point and then
resuming to advance to the next one.

I also put in an optimization that simplifies one-character lookahead and
lookbehind constraints, such as "(?=x)" or "(?<!\w)", into AHEAD and BEHIND
constraints, which already existed in the engine.  This avoids the overhead
of the LACON machinery entirely for these rather common cases.

The net result is that lookbehind constraints run a factor of three or so
slower than Perl's for multi-character constraints, but faster than Perl's
for one-character constraints ... and they work fine for variable-length
constraints, which Perl gives up on entirely.  So that's not bad from a
competitive perspective, and there's room for further optimization if
anyone cares.  (In reality, raw scan rate across a large input string is
probably not that big a deal for Postgres usage anyway; so I'm happy if
it's linear.)
2015-10-30 19:14:19 -04:00
Robert Haas 41562b14bb Fix typo in docs.
Pallavi Sontakke
2015-10-08 13:21:16 -04:00
Bruce Momjian b852dc4cbd docs: clarify JSONB operator descriptions
No catalog bump as the catalog changes are for SQL operator comments.

Backpatch through 9.5
2015-10-07 09:06:49 -04:00
Bruce Momjian 28b3a3d41a to_number(): allow 'V' to divide by 10^(the number of digits)
to_char('V') already multiplied in a similar manner.
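
For example:

    SELECT to_char(12.34, '999V99');      -- ' 1234'
    SELECT to_number('1234', '999V99');   -- 12.34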

Report by Jeremy Lowery
2015-10-05 21:03:38 -04:00
Tom Lane 558d4ada18 Docs: add disclaimer about hazards of using regexps from untrusted sources.
It's not terribly hard to devise regular expressions that take large
amounts of time and/or memory to process.  Recent testing by Greg Stark has
also shown that machines with small stack limits can be driven to stack
overflow by suitably crafted regexps.  While we intend to fix these things
as much as possible, it's probably impossible to eliminate slow-execution
cases altogether.  In any case we don't want to treat such things as
security issues.  The history of that code should already discourage
prudent DBAs from allowing execution of regexp patterns coming from
possibly-hostile sources, but it seems like a good idea to warn about the
hazard explicitly.

Currently, similar_escape() allows access to enough of the underlying
regexp behavior that the warning has to apply to SIMILAR TO as well.
We might be able to make it safer if we tightened things up to allow only
SQL-mandated capabilities in SIMILAR TO; but that would be a subtly
non-backwards-compatible change, so it requires discussion and probably
could not be back-patched.

Per discussion among pgsql-security list.
2015-10-02 13:30:42 -04:00
Tom Lane b1d5cc375b Docs: fix typo in to_char() example.
Per bug #13631 from KOIZUMI Satoru.
2015-09-22 10:40:25 -04:00
Tom Lane d0f18cde7e Fix documentation of regular expression character-entry escapes.
The docs claimed that \uhhhh would be interpreted as a Unicode value
regardless of the database encoding, but it's never been implemented
that way: \uhhhh and \xhhhh actually mean exactly the same thing, namely
the character that pg_mb2wchar translates to 0xhhhh.  Moreover we were
falsely dismissive of the usefulness of Unicode code points above FFFF.
Fix that.

It's been like this for ages, so back-patch to all supported branches.
2015-09-16 14:50:12 -04:00
Peter Eisentraut 103ef20211 doc: Spell checking 2015-09-10 21:35:06 -04:00
Tom Lane 94324abfb9 Docs: be explicit about datatype matching for lead/lag functions.
The default argument, if given, has to be of exactly the same datatype
as the first argument; but this was not stated in so many words, and
the error message you get about it might not lead your thought in the
right direction.  Per bug #13587 from Robert McGehee.

A quick scan says that these are the only two built-in functions with two
anyelement arguments and no other polymorphic arguments.  There are plenty
of cases of, eg, anyarray and anyelement, but those seem less likely to
confuse.  For instance this doesn't seem terribly hard to figure out:
"function array_remove(integer[], numeric) does not exist".  So I've
contented myself with fixing these two cases.
2015-08-25 19:11:17 -04:00
Peter Eisentraut 90a1d0aa76 doc: Whitespace and formatting fixes 2015-08-20 22:47:08 -04:00
Tom Lane 750fc78bca Fix broken markup, and copy-edit a bit.
Fix docs build failure introduced by commit 6fcd88511f.
I failed to resist the temptation to rearrange the description of
pg_create_physical_replication_slot(), too.
2015-08-11 10:46:51 -04:00
Andres Freund 6fcd88511f Allow pg_create_physical_replication_slot() to reserve WAL.
When creating a physical slot it's often useful to immediately reserve
the current WAL position instead of only doing after the first feedback
message arrives. That e.g. allows slots to guarantee that all the WAL
for a base backup will be available afterwards.
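
Rough usage (the slot name is made up):

    SELECT * FROM pg_create_physical_replication_slot('my_slot', true);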

Logical slots already have to reserve WAL during creation, so generalize
that logic into being usable for both physical and logical slots.

Catversion bump because of the new parameter.

Author: Gurjeet Singh
Reviewed-By: Andres Freund
Discussion: CABwTF4Wh_dBCzTU=49pFXR6coR4NW1ynb+vBqT+Po=7fuq5iCw@mail.gmail.com
2015-08-11 12:34:31 +02:00
Andres Freund 18e8613564 Address points made in post-commit review of replication origins.
Amit reviewed the replication origins patch and made some good
points. Address them. This fixes typos in error messages, docs and
comments and adds a missing error check (although in a
should-never-happen scenario).

Discussion: CAA4eK1JqUBVeWWKwUmBPryFaje4190ug0y-OAUHWQ6tD83V4xg@mail.gmail.com
Backpatch: 9.5, where replication origins were introduced.
2015-08-07 15:09:05 +02:00
Bruce Momjian e641d7b22f docs: HTML-escape '>' in '=>' using HTML entities 2015-08-05 23:03:45 -04:00
Tom Lane 1b5d34ca62 Docs: add an explicit example about controlling overall greediness of REs.
Per discussion of bug #13538.
2015-08-04 21:09:12 -04:00
Joe Conway 7b4bfc87d5 Plug RLS related information leak in pg_stats view.
The pg_stats view is supposed to be restricted to only show rows
about tables the user can read. However, it sometimes can leak
information which could not otherwise be seen when row level security
is enabled. Fix that by not showing pg_stats rows to users that would
be subject to RLS on the table the row is related to. This is done
by creating/using the newly introduced SQL visible function,
row_security_active().

Along the way, clean up three call sites of check_enable_rls(). The second
argument of that function should only be specified as other than
InvalidOid when we are checking as a different user than the current one,
as in when querying through a view. These sites were passing GetUserId()
instead of InvalidOid, which can cause the function to return incorrect
results if the current user has the BYPASSRLS privilege and row_security
has been set to OFF.

Additionally fix a bug causing RI Trigger error messages to unintentionally
leak information when RLS is enabled, and other minor cleanup and
improvements. Also add WITH (security_barrier) to the definition of pg_stats.

Bumped CATVERSION due to new SQL functions and pg_stats view definition.

Back-patch to 9.5 where RLS was introduced. Reported by Yaroslav.
Patch by Joe Conway and Dean Rasheed with review and input by
Michael Paquier and Stephen Frost.
2015-07-28 13:21:22 -07:00
Andrew Dunstan e02d44b8a7 Support JSON negative array subscripts everywhere
Previously, there was an inconsistency across json/jsonb operators that
operate on datums containing JSON arrays -- only some operators
supported negative array count-from-the-end subscripting.  Specifically,
only a new-to-9.5 jsonb deletion operator had support (the new "jsonb -
integer" operator).  This inconsistency seemed likely to be
counter-intuitive to users.  To fix, allow all places where the user can
supply an integer subscript to accept a negative subscript value,
including path-orientated operators and functions, as well as other
extraction operators.  This will need to be called out as an
incompatibility in the 9.5 release notes, since it's possible that users
are relying on certain established extraction operators changed here
yielding NULL in the event of a negative subscript.
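
For example, after this change both extraction styles accept a negative
subscript, roughly:

    SELECT ('["a", "b", "c"]'::jsonb) -> -1;     -- "c"
    SELECT '["a", "b", "c"]'::jsonb #> '{-1}';   -- "c"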

For the json type, this requires adding a way of cheaply getting the
total JSON array element count ahead of time when parsing arrays with a
negative subscript involved, necessitating an ad-hoc lex and parse.
This is followed by a "conversion" from a negative subscript to its
equivalent positive-wise value using the count.  From there on, it's as
if a positive-wise value was originally provided.

Note that there is still a minor inconsistency here across jsonb
deletion operators.  Unlike the aforementioned new "-" deletion operator
that accepts an integer on its right hand side, the new "#-" path
orientated deletion variant does not throw an error when it appears like
an array subscript (input that could be recognized as an integer
literal) is being used on an object, which is wrong-headed.  The reason
for not being stricter is that it could be the case that an object pair
happens to have a key value that looks like an integer; in general,
these two possibilities are impossible to differentiate with rhs path
text[] argument elements.  However, we still don't allow the "#-"
path-orientated deletion operator to perform array-style subscripting.
Rather, we just return the original left operand value in the event of a
negative subscript (which seems analogous to how the established
"jsonb/json #> text[]" path-orientated operator may yield NULL in the
event of an invalid subscript).

In passing, make SetArrayPath() stricter about not accepting cases where
there is trailing non-numeric garbage bytes rather than a clean NUL
byte.  This means, for example, that strings like "10e10" are now not
accepted as an array subscript of 10 by some new-to-9.5 path-orientated
jsonb operators (e.g. the new #- operator).  Finally, remove dead code
for jsonb subscript deletion; arguably, this should have been done in
commit b81c7b409.

Peter Geoghegan and Andrew Dunstan
2015-07-17 21:13:47 -04:00
Robert Haas a04bb65f70 Add new function pg_notification_queue_usage.
This tells you what fraction of NOTIFY's queue is currently filled.

Brendan Jurd, reviewed by Merlin Moncure and Gurjeet Singh.  A few
further tweaks by me.
2015-07-17 09:12:03 -04:00
Tom Lane 10fb48d66d Add an optional missing_ok argument to SQL function current_setting().
This allows convenient checking for existence of a GUC from SQL, which is
particularly useful when dealing with custom variables.
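
Rough usage (the GUC name is made up):

    SELECT current_setting('myapp.user_id', true);   -- NULL if not set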

David Christensen, reviewed by Jeevan Chalke
2015-07-02 16:41:07 -04:00
Heikki Linnakangas 5b1b6bf49b Use American spelling for "behavior".
For consistency with the rest of the docs.

Michael Paquier
2015-07-02 12:15:09 +03:00
Heikki Linnakangas 6ab4d38ab0 Fix markup in docs.
Oops. I could swear I built the docs before pushing, but I guess not..
2015-06-29 00:01:26 +03:00
Heikki Linnakangas cb2acb1081 Add missing_ok option to the SQL functions for reading files.
This makes it possible to use the functions without getting errors, if there
is a chance that the file might be removed or renamed concurrently.
pg_rewind needs to do just that, although this could be useful for other
purposes too. (The changes to pg_rewind to use these functions will come in
a separate commit.)
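
Rough usage of the new argument:

    SELECT pg_read_file('postmaster.pid', 0, 512, true);
     -- NULL, rather than an error, if the file has gone away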

The read_binary_file() function isn't very well-suited for extensions.c's
purposes anymore, if it ever was. So bite the bullet and make a copy of it
in extension.c, tailored for that use case. This seems better than the
accidental code reuse, even if it means a few more lines of code.

Michael Paquier, with plenty of kibitzing by me.
2015-06-28 21:35:46 +03:00
Tom Lane d759b7eb6a Docs: fix claim that to_char('FM') removes trailing zeroes.
Of course, what it removes is leading zeroes.  Seems to have been a thinko
in commit ffe92d15d5.  Noted by Hubert Depesz
Lubaczewski.
2015-06-25 10:44:03 -04:00
Fujii Masao 0b157a0dad Add index terms for functions jsonb_set and jsonb_pretty. 2015-06-24 22:30:19 +09:00
Andrew Dunstan 908e234733 Rename jsonb - text[] operator to #- to avoid ambiguity.
Following recent discussion  on -hackers. The underlying function is
also renamed to jsonb_delete_path. The regression tests now don't need
ugly type casts to avoid the ambiguity, so they are also removed.

Catalog version bumped.
2015-06-11 10:06:58 -04:00
Andrew Dunstan 94d6727dbe Clarify documentation of jsonb - text
Peter Geoghegan
2015-06-07 21:31:52 -04:00
Andrew Dunstan b81c7b4098 Desupport jsonb subscript deletion on objects
Supporting deletion of JSON pairs within jsonb objects using an
array-style integer subscript allowed for surprising outcomes.  This was
mostly due to the implementation-defined ordering of pairs within
objects for jsonb.

It also seems desirable to make jsonb integer subscript deletion
consistent with the 9.4 era general purpose integer subscripting
operator for jsonb (although that operator returns NULL when an object
is encountered, while we prefer here to throw an error).

Peter Geoghegan, following discussion on -hackers.
2015-06-07 20:46:00 -04:00
Peter Eisentraut d23a3a603b doc: Fix broken links in FOP build
FOP doesn't handle links to table rows, so put the link to a cell
instead.
2015-06-07 20:27:27 -04:00
Robert Haas 1c645da8eb docs: Fix list of object types pg_table_is_visible() can handle.
Materialized views and foreign tables were missing from the list,
probably because they are newer than the other object types that were
mentioned.

Etsuro Fujita
2015-06-04 17:48:00 -04:00
Fujii Masao 37013621f3 Minor improvement to txid_current() documentation.
Michael Paquier, reviewed by Christoph Berg and Naoya Anzai
2015-06-03 12:12:48 +09:00
Andrew Dunstan 37def42245 Rename jsonb_replace to jsonb_set and allow it to add new values
The function is given a fourth parameter, which defaults to true. When
this parameter is true, if the last element of the path is missing
in the original json, jsonb_set creates it in the result and assigns it
the new value. If it is false then the function does nothing unless all
elements of the path are present, including the last.
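
For example:

    SELECT jsonb_set('{"a": 1}', '{b}', '"new"');          -- {"a": 1, "b": "new"}
    SELECT jsonb_set('{"a": 1}', '{b}', '"new"', false);   -- {"a": 1}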

Based on some original code from Dmitry Dolgov, heavily modified by me.

Catalog version bumped.
2015-05-31 20:34:10 -04:00
Andres Freund f3d3118532 Support GROUPING SETS, CUBE and ROLLUP.
This SQL standard functionality allows data to be aggregated by several
different GROUP BY clauses in a single query. Each grouping set returns rows
with the columns it does not group by set to NULL.
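
A rough example (table and column names are hypothetical):

    SELECT brand, size, sum(sales)
      FROM items_sold
     GROUP BY GROUPING SETS ((brand), (size), ());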

This could previously be achieved by doing each grouping as a separate
query, conjoined by UNION ALLs. Besides being considerably more concise,
grouping sets will in many cases be faster, requiring only one scan over
the underlying data.

The current implementation of grouping sets only supports using sorting
for input. Individual sets that share a sort order are computed in one
pass. If there are sets that don't share a sort order, additional sort &
aggregation steps are performed. These additional passes are sourced by
the previous sort step; thus avoiding repeated scans of the source data.

The code is structured in a way that adding support for purely using
hash aggregation or a mix of hashing and sorting is possible. Sorting
was chosen to be supported first, as it is the most generic method of
implementation.

Instead of, as in earlier versions of the patch, representing the
chain of sort and aggregation steps as full blown planner and executor
nodes, all but the first sort are performed inside the aggregation node
itself. This avoids the need to do some unusual gymnastics to handle
having to return aggregated and non-aggregated tuples from underlying
nodes, as well as having to shut down underlying nodes early to limit
memory usage.  The optimizer still builds Sort/Agg node to describe each
phase, but they're not part of the plan tree, but instead additional
data for the aggregation node. They're a convenient and preexisting way
to describe aggregation and sorting.  The first (and possibly only) sort
step is still performed as a separate execution step. That retains
similarity with existing group by plans, makes rescans fairly simple,
avoids very deep plans (leading to slow explains) and easily allows skipping
the sorting step if the underlying data is sorted by other means.

A somewhat ugly side of this patch is having to deal with a grammar
ambiguity between the new CUBE keyword and the cube extension/functions
named cube (and rollup). To avoid breaking existing deployments of the
cube extension it has not been renamed, neither has cube been made a
reserved keyword. Instead precedence hacking is used to make GROUP BY
cube(..) refer to the CUBE grouping sets feature, and not the function
cube(). To actually group by a function cube(), unlikely as that might
be, the function name has to be quoted.

Needs a catversion bump because stored rules may change.

Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
    Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
2015-05-16 03:46:31 +02:00
Andrew Dunstan c6947010ce Additional functions and operators for jsonb
jsonb_pretty(jsonb) produces nicely indented json output.
jsonb || jsonb concatenates two jsonb values.
jsonb - text removes a key and its associated value from the json
jsonb - int removes the designated array element
jsonb - text[] removes a key and associated value or array element at
the designated path
jsonb_replace(jsonb,text[],jsonb) replaces the array element designated
by the path or the value associated with the key designated by the path
with the given value.
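
A few rough examples:

    SELECT '{"a": 1, "b": 2}'::jsonb || '{"c": 3}'::jsonb;  -- {"a": 1, "b": 2, "c": 3}
    SELECT '{"a": 1, "b": 2}'::jsonb - 'a';                 -- {"b": 2}
    SELECT '["x", "y", "z"]'::jsonb - 1;                    -- ["x", "z"]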

Original work by Dmitry Dolgov, adapted and reworked for PostgreSQL core
by Andrew Dunstan, reviewed and tidied up by Petr Jelinek.
2015-05-12 15:52:45 -04:00
Andrew Dunstan 72d422a522 Map basebackup tablespaces using a tablespace_map file
Windows can't reliably restore symbolic links from a tar format, so
instead during backup start we create a tablespace_map file, which is
used by the restoring postgres to create the correct links in pg_tblspc.
The backup protocol also now has an option to request this file to be
included in the backup stream, and this is used by pg_basebackup when
operating in tar mode.

This is done on all platforms, not just Windows.

This means that pg_basebackup will not work in tar mode against 9.4
and older servers, as this protocol option isn't implemented there.

Amit Kapila, reviewed by Dilip Kumar, with a little editing from me.
2015-05-12 09:29:10 -04:00
Alvaro Herrera b488c580ae Allow on-the-fly capture of DDL event details
This feature lets user code inspect and take action on DDL events.
Whenever a ddl_command_end event trigger is installed, DDL actions
executed are saved to a list which can be inspected during execution of
a function attached to ddl_command_end.

The set-returning function pg_event_trigger_ddl_commands can be used to
list actions so captured; it returns data about the type of command
executed, as well as the affected object.  This is sufficient for many
uses of this feature.  For the cases where it is not, we also provide a
"command" column of a new pseudo-type pg_ddl_command, which is a
pointer to a C structure that can be accessed by C code.  The struct
contains all the info necessary to completely inspect and even
reconstruct the executed command.

There is no actual deparse code here; that's expected to come later.
What we have is enough infrastructure that the deparsing can be done in
an external extension.  The intention is that we will add some deparsing
code in a later release, as an in-core extension.

A new test module is included.  It's probably insufficient as is, but it
should be sufficient as a starting point for a more complete and
future-proof approach.

Authors: Álvaro Herrera, with some help from Andres Freund, Ian Barwick,
Abhijit Menon-Sen.

Reviews by Andres Freund, Robert Haas, Amit Kapila, Michael Paquier,
Craig Ringer, David Steele.
Additional input from Chris Browne, Dimitri Fontaine, Stephen Frost,
Petr Jelínek, Tom Lane, Jim Nasby, Steven Singer, Pavel Stěhule.

Based on original work by Dimitri Fontaine, though I didn't use his
code.

Discussion:
  https://www.postgresql.org/message-id/m2txrsdzxa.fsf@2ndQuadrant.fr
  https://www.postgresql.org/message-id/20131108153322.GU5809@eldon.alvh.no-ip.org
  https://www.postgresql.org/message-id/20150215044814.GL3391@alvh.no-ip.org
2015-05-11 19:14:31 -03:00
Alvaro Herrera 3b6db1f445 Add geometry/range functions to support BRIN inclusion
This commit adds the following functions:
    box(point) -> box
    bound_box(box, box) -> box
    inet_same_family(inet, inet) -> bool
    inet_merge(inet, inet) -> cidr
    range_merge(anyrange, anyrange) -> anyrange

The first of these is also used to implement a new assignment cast from
point to box.
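
For example:

    SELECT range_merge('[1,5)'::int4range, '[10,20)'::int4range);   -- [1,20)
    SELECT inet_same_family('192.168.1.5/24', '::1');               -- false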

These functions are the first part of a base to implement an "inclusion"
operator class for BRIN, for multidimensional data types.

Author: Emre Hasegeli
Reviewed by: Andreas Karlsson
2015-05-05 15:22:24 -03:00
Peter Eisentraut ad8d6d064c Fix typos
Author: Erik Rijkers <er@xs4all.nl>
2015-05-04 20:40:19 -04:00
Andres Freund 2b22795b32 Copy editing of the replication origins patch.
Michael Paquier and myself.
2015-05-01 12:22:13 +02:00
Andres Freund 5aa2350426 Introduce replication progress tracking infrastructure.
When implementing a replication solution ontop of logical decoding, two
related problems exist:
* How to safely keep track of replication progress
* How to change replication behavior, based on the origin of a row;
  e.g. to avoid loops in bi-directional replication setups

The solution to these problems, as implemented here, consists of
three parts:

1) 'replication origins', which identify nodes in a replication setup.
2) 'replication progress tracking', which remembers, for each
   replication origin, how far replay has progressed in an efficient and
   crash-safe manner.
3) The ability to filter out changes performed at the behest of a
   replication origin during logical decoding; this allows complex
   replication topologies. E.g. by filtering all replayed changes out.

Most of this could also be implemented in "userspace", e.g. by inserting
additional rows containing origin information, but that ends up being much
less efficient and more complicated.  We don't want to require various
replication solutions to reimplement logic for this independently. The
infrastructure is intended to be generic enough to be reusable.

This infrastructure also replaces the 'nodeid' infrastructure of commit
timestamps. It is intended to provide all the former capabilities,
except that there's only 2^16 different origins; but now they integrate
with logical decoding. Additionally more functionality is accessible via
SQL.  Since the commit timestamp infrastructure has also been introduced
in 9.5 (commit 73c986add) changing the API is not a problem.

For now the number of origins for which the replication progress can be
tracked simultaneously is determined by the max_replication_slots
GUC. That GUC is not a perfect match to configure this, but there
doesn't seem to be sufficient reason to introduce a separate new one.

Bumps both catversion and wal page magic.

Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer
Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer
Discussion: 20150216002155.GI15326@awork2.anarazel.de,
    20140923182422.GA15776@alap3.anarazel.de,
    20131114172632.GE7522@alap2.anarazel.de
2015-04-29 19:30:53 +02:00
Bruce Momjian 5086dfceba doc: recommend use of GUC server_version_num for version checks
Patch by Craig Ringer
2015-04-28 20:31:08 -04:00
Alvaro Herrera e9a077cad3 pg_event_trigger_dropped_objects: add is_temp column
It now also reports temporary objects dropped that are local to the
backend.  Previously we weren't reporting any temp objects because it
was deemed unnecessary; but as it turns out, it is necessary if we want
to keep close track of DDL command execution inside one session.  Temp
objects are reported as living in schema pg_temp, which works because
such a schema-qualification always refers to the temp objects of the
current session.
2015-04-06 11:40:55 -03:00
Alvaro Herrera 97690ea6e8 Change array_offset to return subscripts, not offsets
... and rename it and its sibling array_offsets to array_position and
array_positions, to account for the changed behavior.
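
For example:

    SELECT array_position(ARRAY['sun','mon','tue','wed'], 'mon');   -- 2
    SELECT array_positions(ARRAY[1, 4, 3, 1, 2, 1], 1);             -- {1,4,6}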

Having the functions return subscripts better matches existing practice,
and is better suited to using the result value as a subscript into the
array directly.  For one-based arrays, the new definition is identical
to what was originally committed.

(We use the term "subscript" in the documentation, which is what we use
whenever we talk about arrays; but the functions themselves are named
using the word "position" to match the standard-defined POSITION()
functions.)

Author: Pavel Stěhule
Behavioral problem noted by Dean Rasheed.
2015-03-30 16:13:21 -03:00
Bruce Momjian 0e1f6d8132 PL/pgSQL docs: recommend format() for query construction
Previously only concatenation was recommended.

Report by Pavel Stehule
2015-03-24 21:10:36 -04:00
Stephen Frost bf03889996 GetUserId() changes to has_privs_of_role()
The pg_stat and pg_signal-related functions have been using GetUserId()
instead of has_privs_of_role() for checking if the current user should
be able to see details in pg_stat_activity or signal other processes,
requiring a user to do 'SET ROLE' for inherited roles for a permissions
check, unlike other permissions checks.

This patch changes that behavior to, instead, act like most other
permission checks and use has_privs_of_role(), removing the 'SET ROLE'
need.  Documentation and error messages updated accordingly.

Per discussion with Alvaro, Peter, Adam (though not using Adam's patch),
and Robert.

Reviewed by Jeevan Chalke.
2015-03-19 15:02:33 -04:00
Alvaro Herrera 13dbc7a824 array_offset() and array_offsets()
These functions return the offset position or positions of a value in an
array.

Author: Pavel Stěhule
Reviewed by: Jim Nasby
2015-03-18 16:01:34 -03:00
Robert Haas 865f14a2d3 Allow named parameters to be specified using => in addition to :=
SQL has standardized on => as the way to specify named parameters,
and we've wanted for many years to support the same syntax ourselves,
but this has been complicated by the possible use of => as an operator
name.  In PostgreSQL 9.0, we began emitting a warning when an operator
named => was defined, and in PostgreSQL 9.2, we stopped shipping a
=>(text, text) operator as part of hstore.  By the time the next major
version of PostgreSQL is released, => will have been deprecated for a
full five years, so hopefully there won't be too many people still
relying on it.  We continue to support := for compatibility with
previous PostgreSQL releases.
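
A rough sketch (the function is hypothetical):

    CREATE FUNCTION make_greeting(name text, greeting text DEFAULT 'Hello')
    RETURNS text LANGUAGE sql
    AS $$ SELECT format('%s, %s!', greeting, name) $$;

    SELECT make_greeting(name => 'world');   -- Hello, world!
    SELECT make_greeting(name := 'world');   -- := still accepted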

Pavel Stehule, reviewed by Petr Jelinek, with a few documentation
tweaks by me.
2015-03-10 11:09:41 -04:00
Andrew Dunstan c5b31e29f0 Minor housekeeping on JSON docs. 2015-02-15 17:08:37 -05:00
Alvaro Herrera 412f604adb doc: Fix typos in make_timestamp{,tz} examples
Pointed out by Alan Mogi (bug #12571)
2015-01-19 12:43:58 -03:00
Tom Lane 2f03ae6987 Docs: improve descriptions of ISO week-numbering date features.
Use the phraseology "ISO 8601 week-numbering year" in place of just
"ISO year", and make related adjustments to other terminology.

The point of this change is that it seems some people see "ISO year"
and think "standard year", whereupon they're surprised when constructs
like to_char(..., "IYYY-MM-DD") produce nonsensical results.  Perhaps
hanging a few more adjectives on it will discourage them from jumping
to false conclusions.  I put in an explicit warning against that
specific usage, too, though the main point is to discourage people
who haven't read this far down the page.

In passing fix some nearby markup and terminology inconsistencies.
2014-12-31 16:42:42 -05:00
Alvaro Herrera 72dd233d3e pg_event_trigger_dropped_objects: Add name/args output columns
These columns can be passed to pg_get_object_address() and used to
reconstruct the dropped objects identities in a remote server containing
similar objects, so that the drop can be replicated.

Reviewed by Stephen Frost, Heikki Linnakangas, Abhijit Menon-Sen, Andres
Freund.
2014-12-30 17:41:46 -03:00
Alvaro Herrera a676201490 Add pg_identify_object_as_address
This function returns object type and objname/objargs arrays, which can
be passed to pg_get_object_address.  This is especially useful because
the textual representation can be copied to a remote server in order to
obtain the corresponding OID-based address.  In essence, this function
is the inverse of recently added pg_get_object_address().

Catalog version bumped due to the addition of the new function.

Also add docs to pg_get_object_address.
2014-12-30 15:41:50 -03:00
Alvaro Herrera a609d96778 Revert "Use a bitmask to represent role attributes"
This reverts commit 1826987a46.

The overall design was deemed unacceptable, in discussion following the
previous commit message; we might find some parts of it still
salvageable, but I don't want to be on the hook for fixing it, so let's
wait until we have a new patch.
2014-12-23 15:35:49 -03:00
Alvaro Herrera 1826987a46 Use a bitmask to represent role attributes
The previous representation using a boolean column for each attribute
would not scale as well as we want to add further attributes.

Extra auxiliary functions are added to go along with this change, to
make up for the lost convenience of access of the old representation.

Catalog version bumped due to change in catalogs and the new functions.

Author: Adam Brightwell, minor tweaks by Álvaro
Reviewed by: Stephen Frost, Andres Freund, Álvaro Herrera
2014-12-23 10:22:09 -03:00
Andrew Dunstan 2a3f2743f2 Further tidy up on json aggregate documentation 2014-12-22 18:30:46 -05:00
Andrew Dunstan b2d145bff0 Fix documentation of argument type of json_agg and jsonb_agg
json_agg was originally designed to aggregate records. However, it soon
became clear that it is useful for aggregating all kinds of values and
that's what we have on 9.3 and 9.4, and in head for it and jsonb_agg.
The documentation suggested otherwise, so this fixes it.
2014-12-22 14:12:06 -05:00
Alvaro Herrera 0ee98d1cbf pg_event_trigger_dropped_objects: add behavior flags
Add "normal" and "original" flags as output columns to the
pg_event_trigger_dropped_objects() function.  With this it's possible to
distinguish which objects, among those listed, need to be explicitly
referenced when trying to replicate a deletion.

This is necessary so that the list of objects can be pruned to the
minimum necessary to replicate the DROP command in a remote server that
might have slightly different schema (for instance, TOAST tables and
constraints with different names and such.)

Catalog version bumped due to change of function definition.

Reviewed by: Abhijit Menon-Sen, Stephen Frost, Heikki Linnakangas,
Robert Haas.
2014-12-19 15:00:45 -03:00
Tom Lane 5b51683589 Improve documentation about CASE and constant subexpressions.
The possibility that constant subexpressions of a CASE might be evaluated
at planning time was touched on in 9.17.1 (CASE expressions), but it really
ought to be explained in 4.2.14 (Expression Evaluation Rules) which is the
primary discussion of such topics.  Add text and an example there, and
revise the <note> under CASE to link there.

Back-patch to all supported branches, since it's acted like this for a
long time (though 9.2+ is probably worse because of its more aggressive
use of constant-folding via replanning of nominally-prepared statements).
Pre-9.4, also back-patch text added in commit 0ce627d4 about CASE versus
aggregate functions.

Tom Lane and David Johnston, per discussion of bug #12273.
2014-12-18 16:39:20 -05:00
Andrew Dunstan 7e354ab9fe Add several generator functions for jsonb that exist for json.
The functions are:
    to_jsonb()
    jsonb_object()
    jsonb_build_object()
    jsonb_build_array()
    jsonb_agg()
    jsonb_object_agg()
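
For example:

    SELECT jsonb_build_object('a', 1, 'b', 'two');   -- {"a": 1, "b": "two"}
    SELECT to_jsonb('some text'::text);              -- "some text"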

Also along the way some better logic is implemented in
json_categorize_type() to match that in the newly implemented
jsonb_categorize_type().

Andrew Dunstan, reviewed by Pavel Stehule and Alvaro Herrera.
2014-12-12 15:31:14 -05:00
Andrew Dunstan 237a882443 Add json_strip_nulls and jsonb_strip_nulls functions.
The functions remove object fields, including in nested objects, that
have null as a value. In certain cases this can lead to considerably
smaller datums, with no loss of semantic information.
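
For example (output whitespace may differ slightly):

    SELECT json_strip_nulls('{"a": 1, "b": null, "c": {"d": null}}');
     -- {"a":1,"c":{}}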

Andrew Dunstan, reviewed by Pavel Stehule.
2014-12-12 09:00:43 -05:00
Simon Riggs 618c9430a8 Event Trigger for table_rewrite
Generate a table_rewrite event when ALTER TABLE
attempts to rewrite a table. Provide helper
functions to identify table and reason.

Intended use case is to help assess or to react
to schema changes that might hold exclusive locks
for long periods.

Dimitri Fontaine, triggering an edit by Simon Riggs

Reviewed in detail by Michael Paquier
2014-12-08 00:55:28 +09:00
Alvaro Herrera 73c986adde Keep track of transaction commit timestamps
Transactions can now set their commit timestamp directly as they commit,
or an external transaction commit timestamp can be fed from an outside
system using the new function TransactionTreeSetCommitTsData().  This
data is crash-safe, and truncated at Xid freeze point, same as pg_clog.

This module is disabled by default because it causes a performance hit,
but can be enabled in postgresql.conf requiring only a server restart.
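
Rough usage of the new SQL functions (some_table is hypothetical, and
track_commit_timestamp must be on):

    SELECT * FROM pg_last_committed_xact();
    SELECT pg_xact_commit_timestamp(xmin) FROM some_table LIMIT 1;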

A new test in src/test/modules is included.

Catalog version bumped due to the new subdirectory within PGDATA and a
couple of new SQL functions.

Authors: Álvaro Herrera and Petr Jelínek

Reviewed to varying degrees by Michael Paquier, Andres Freund, Robert
Haas, Amit Kapila, Fujii Masao, Jaime Casanova, Simon Riggs, Steven
Singer, Peter Eisentraut
2014-12-03 11:53:02 -03:00
Tom Lane bac27394a1 Support arrays as input to array_agg() and ARRAY(SELECT ...).
These cases formerly failed with errors about "could not find array type
for data type".  Now they yield arrays of the same element type and one
higher dimension.
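
For example:

    SELECT array_agg(a) FROM (VALUES (ARRAY[1,2]), (ARRAY[3,4])) AS t(a);
     -- {{1,2},{3,4}}
    SELECT ARRAY(SELECT ARRAY[i, i*2] FROM generate_series(1,3) AS i);
     -- {{1,2},{2,4},{3,6}}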

The implementation involves creating functions with API similar to the
existing accumArrayResult() family.  I (tgl) also extended the base family
by adding an initArrayResult() function, which allows callers to avoid
special-casing the zero-inputs case if they just want an empty array as
result.  (Not all do, so the previous calling convention remains valid.)
This allowed simplifying some existing code in xml.c and plperl.c.

Ali Akbar, reviewed by Pavel Stehule, significantly modified by me
2014-11-25 12:21:28 -05:00
Peter Eisentraut 1d69ae419d doc: Add index entry for "hypothetical-set aggregate" 2014-11-13 11:57:50 -05:00
Fujii Masao 1871c89202 Add generate_series(numeric, numeric).
Платон Малюгин
Reviewed by Michael Paquier, Ali Akbar and Marti Raudsepp
2014-11-11 21:44:46 +09:00
Bruce Momjian 8660b40d0c doc: mention TM is ignored for to_date/to_timestamp()
Report by Goulven Guillard
2014-10-13 17:36:50 -04:00
Stephen Frost c8a026e4f1 Revert 95d737ff to add 'ignore_nulls'
Per discussion, revert the commit which added 'ignore_nulls' to
row_to_json.  This capability would be better added as an independent
function rather than being bolted on to row_to_json.  Additionally,
the implementation didn't address complex JSON objects, and so was
incomplete anyway.

Pointed out by Tom and discussed with Andrew and Robert.
2014-09-29 13:32:22 -04:00
Stephen Frost 95d737ff45 Add 'ignore_nulls' option to row_to_json
Provide an option to skip NULL values in a row when generating a JSON
object from that row with row_to_json.  This can reduce the size of the
JSON object in cases where columns are NULL without really reducing the
information in the JSON object.

This also makes row_to_json into a single function with default values,
rather than having multiple functions.  In passing, change array_to_json
to also be a single function with default values (we don't add an
'ignore_nulls' option yet- it's not clear that there is a sensible
use-case there, and it hasn't been asked for in any case).

Pavel Stehule
2014-09-11 21:23:51 -04:00
Tom Lane e80252d424 Add width_bucket(anyelement, anyarray).
This provides a convenient method of classifying input values into buckets
that are not necessarily equal-width.  It works on any sortable data type.
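
For example:

    SELECT width_bucket(5, ARRAY[1, 3, 4, 10]);   -- 3
    SELECT width_bucket(now(),
                        ARRAY['yesterday', 'today', 'tomorrow']::timestamptz[]);   -- 2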

The choice of function name is a bit debatable, perhaps, but showing that
there's a relationship to the SQL standard's width_bucket() function seems
more attractive than the other proposals.

Petr Jelinek, reviewed by Pavel Stehule
2014-09-09 15:34:14 -04:00
Tom Lane 0640c1bfc0 Documentation fix: sum(float4) returns float4, not float8.
The old claim is from my commit d06ebdb8d3 of
2000-07-17, but it seems to have been a plain old thinko; sum(float4) has
been distinct from sum(float8) since Berkeley days.  Noted by KaiGai Kohei.

While at it, mention the existence of sum(money), which is also of
embarrassingly ancient vintage.
2014-09-07 22:40:41 -04:00
Bruce Momjian 1f4d1074c5 Clarify documentation about "peer" rows in window functions
Peer rows are matching rows when ORDER BY is specified.

Report by arnaud.mouronval@gmail.com, David G Johnston
2014-09-05 18:59:41 -04:00
Kevin Grittner 05258761bf doc: Various typo/grammar fixes
Errors detected using Topy (https://github.com/intgr/topy), all
changes verified by hand and some manual tweaks added.

Marti Raudsepp

Individual changes backpatched, where applicable, as far as 9.0.
2014-08-30 10:52:36 -05:00
Bruce Momjian 01363beae5 pg_is_xlog_replay_paused(): remove super-user-only restriction
Also update docs to mention which function are super-user-only.

Report by sys-milan@statpro.com

Backpatch through 9.4
2014-08-29 09:06:05 -04:00
Tom Lane 6c40f8316e Add min and max aggregates for inet/cidr data types.
Haribabu Kommi, reviewed by Muhammad Asif Naeem
2014-08-28 22:37:58 -04:00
Tom Lane 41dd50e84d Fix corner-case behaviors in JSON/JSONB field extraction operators.
Cause the path extraction operators to return their lefthand input,
not NULL, if the path array has no elements.  This seems more consistent
since the case ought to correspond to applying the simple extraction
operator (->) zero times.

Cause other corner cases in field/element/path extraction to return NULL
rather than failing.  This behavior is arguably more useful than throwing
an error, since it allows an expression index using these operators to be
built even when not all values in the column are suitable for the
extraction being indexed.  Moreover, we already had multiple
inconsistencies between the path extraction operators and the simple
extraction operators, as well as inconsistencies between the JSON and
JSONB code paths.  Adopt a uniform rule of returning NULL rather than
throwing an error when the JSON input does not have a structure that
permits the request to be satisfied.

Back-patch to 9.4.  Update the release notes to list this as a behavior
change since 9.3.
2014-08-22 13:17:58 -04:00
Tom Lane 495cadda5e Further cleanup of JSON-specific error messages.
Fix an obvious typo in json_build_object()'s complaint about invalid
number of arguments, and make the errhint a bit more sensible too.

Per discussion about how to word the improved hint, change the few places
in the documentation that refer to JSON object field names as "names" to
say "keys" instead, since that's what we've said in the vast majority of
places in the docs.  Arguably "name" is more correct, since that's the
terminology used in RFC 7159; but we're stuck with "key" in view of the
naming of json_object_keys() so let's at least be self-consistent.

I adjusted a few code comments to match this as well, and failed to
resist the temptation to clean up some odd whitespace choices in the
same area, as well as a useless duplicate PG_ARGISNULL() check.  There's
still quite a bit of code that uses the phrase "field name" in non-user-
visible ways, so I left those usages alone.
2014-08-09 16:35:29 -04:00
Tom Lane 4fd9e6ffdd Rearrange documentation paragraph describing pg_relation_size().
Break the list of available options into an <itemizedlist> instead of
inline sentences.  This is mostly motivated by wanting to ensure that the
cross-references to the FSM and VM docs don't cross page boundaries in PDF
format; but it seems to me to read more easily this way anyway.  I took the
liberty of editorializing a bit further while at it.

Per complaint from Magnus about 9.0.18 docs not building in A4 format.
Patch all active branches so we don't get blind-sided by this particular
issue again in future.
2014-07-23 15:20:31 -04:00
Peter Eisentraut aa68872561 doc: Spell checking 2014-07-16 22:48:11 -04:00
Peter Eisentraut 8522f21400 Fix whitespace 2014-07-08 23:29:25 -04:00