Functions and Operators

PostgreSQL provides a large number of
functions and operators for the built-in data types. Users can also
define their own functions and operators, as described in
. The
psql commands \df and
\do can be used to list all
available functions and operators, respectively.
If you are concerned about portability then note that most of
the functions and operators described in this chapter, with the
exception of the most trivial arithmetic and comparison operators
and some explicitly marked functions, are not specified by the
SQL standard. Some of this extended functionality
is present in other SQL database management
systems, and in many cases this functionality is compatible and
consistent between the various implementations. This chapter is also
not exhaustive; additional functions appear in relevant sections of
the manual.
Logical Operators
The usual logical operators are available:
AND, OR, NOT
SQL uses a three-valued logic system with true,
false, and null, which represents unknown.
Observe the following truth tables:
 a     | b     | a AND b | a OR b
-------+-------+---------+--------
 TRUE  | TRUE  | TRUE    | TRUE
 TRUE  | FALSE | FALSE   | TRUE
 TRUE  | NULL  | NULL    | TRUE
 FALSE | FALSE | FALSE   | FALSE
 FALSE | NULL  | FALSE   | NULL
 NULL  | NULL  | NULL    | NULL

 a     | NOT a
-------+-------
 TRUE  | FALSE
 FALSE | TRUE
 NULL  | NULL
The operators AND and OR are
commutative, that is, you can switch the left and right operand
without affecting the result. But see for more information about the
order of evaluation of subexpressions.
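The three-valued behavior in the truth tables above can be observed directly; a minimal illustration:

```sql
-- NULL propagates through AND/OR only when the other operand
-- does not already determine the result:
SELECT (NULL AND FALSE) AS a,   -- false: FALSE AND anything is false
       (NULL AND TRUE)  AS b,   -- null:  the result depends on the unknown
       (NULL OR TRUE)   AS c,   -- true:  TRUE OR anything is true
       (NULL OR FALSE)  AS d;   -- null
```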
Comparison Operators
The usual comparison operators are available, shown in .
Comparison Operators

 Operator  | Description
-----------+--------------------------
 <         | less than
 >         | greater than
 <=        | less than or equal to
 >=        | greater than or equal to
 =         | equal
 <> or !=  | not equal
The != operator is converted to
<> in the parser stage. It is not
possible to implement != and
<> operators that do different things.
Comparison operators are available for all relevant data types.
All comparison operators are binary operators that
return values of type boolean; expressions like
1 < 2 < 3 are not valid (because there is
no < operator to compare a Boolean value with
3).
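To compare a value against two bounds, write both comparisons explicitly and combine them with AND; for example:

```sql
-- 1 < 2 < 3 is invalid; spell out both comparisons instead:
SELECT 1 < 2 AND 2 < 3;   -- true
```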
BETWEEN
In addition to the comparison operators, the special
BETWEEN construct is available:
a BETWEEN x AND y
is equivalent to
a >= x AND a <= y
Notice that BETWEEN treats the endpoint values as included
in the range.
NOT BETWEEN does the opposite comparison:
a NOT BETWEEN x AND y
is equivalent to
a < x OR a > y

BETWEEN SYMMETRIC

BETWEEN SYMMETRIC is the same as BETWEEN
except there is no requirement that the argument to the left of
AND be less than or equal to the argument on the right.
If it is not, those two arguments are automatically swapped, so that
a nonempty range is always implied.
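A brief sketch of the difference between BETWEEN and BETWEEN SYMMETRIC:

```sql
SELECT 2 BETWEEN 3 AND 1;            -- false: 3 > 1, so the range is empty
SELECT 2 BETWEEN SYMMETRIC 3 AND 1;  -- true: the endpoints are swapped to 1 AND 3
```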
IS NULL, IS NOT NULL
To check whether a value is or is not null, use the constructs:
expression IS NULL
expression IS NOT NULL
or the equivalent, but nonstandard, constructs:
expression ISNULL
expression NOTNULL
Do not write
expression = NULL
because NULL is not equal to NULL. (The null value represents an unknown value,
and it is not known whether two unknown values are equal.) This
behavior conforms to the SQL standard.
Some applications might expect that
expression = NULL
returns true if expression evaluates to
the null value. It is highly recommended that these applications
be modified to comply with the SQL standard. However, if that
cannot be done, the transform_null_equals
configuration variable is available. If it is enabled,
PostgreSQL will convert x =
NULL clauses to x IS NULL.
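With the standard-conforming behavior (transform_null_equals disabled, the default), the difference looks like this:

```sql
SELECT 1 = NULL;           -- null, not false: the comparison result is unknown
SELECT NULL = NULL;        -- null, for the same reason
SELECT 1 IS NULL;          -- false
SELECT NULL::int IS NULL;  -- true
```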
If the expression is row-valued, then
IS NULL is true when the row expression itself is null
or when all the row's fields are null, while
IS NOT NULL is true when the row expression itself is non-null
and all the row's fields are non-null. Because of this behavior,
IS NULL and IS NOT NULL do not always return
inverse results for row-valued expressions, i.e., a row-valued
expression that contains both NULL and non-null values will return false
for both tests.
This definition conforms to the SQL standard, and is a change from the
inconsistent behavior exhibited by PostgreSQL
versions prior to 8.2.
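For example, a row containing both null and non-null fields fails both tests:

```sql
SELECT ROW(1, NULL) IS NULL     AS all_null,   -- false: one field is non-null
       ROW(1, NULL) IS NOT NULL AS none_null;  -- false: one field is null
```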
IS DISTINCT FROM, IS NOT DISTINCT FROM
Ordinary comparison operators yield null (signifying unknown),
not true or false, when either input is null. For example,
7 = NULL yields null, as does 7 <> NULL. When
this behavior is not suitable, use the
IS [NOT] DISTINCT FROM constructs:

expression IS DISTINCT FROM expression
expression IS NOT DISTINCT FROM expression
For non-null inputs, IS DISTINCT FROM is
the same as the <> operator. However, if both
inputs are null it returns false, and if only one input is
null it returns true. Similarly, IS NOT DISTINCT
FROM is identical to = for non-null
inputs, but it returns true when both inputs are null, and false when only
one input is null. Thus, these constructs effectively act as though null
were a normal data value, rather than unknown.
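A short comparison of the two behaviors:

```sql
SELECT 7 <> NULL;                       -- null: ordinary comparison yields unknown
SELECT 7 IS DISTINCT FROM NULL;         -- true: null is treated as an ordinary value
SELECT NULL IS NOT DISTINCT FROM NULL;  -- true
```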
IS TRUE, IS NOT TRUE, IS FALSE, IS NOT FALSE, IS UNKNOWN, IS NOT UNKNOWN
Boolean values can also be tested using the constructs
expression IS TRUE
expression IS NOT TRUE
expression IS FALSE
expression IS NOT FALSE
expression IS UNKNOWN
expression IS NOT UNKNOWN
These will always return true or false, never a null value, even when the
operand is null.
A null input is treated as the logical value unknown.
Notice that IS UNKNOWN and IS NOT UNKNOWN are
effectively the same as IS NULL and
IS NOT NULL, respectively, except that the input
expression must be of Boolean type.
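For example:

```sql
SELECT (NULL::boolean) IS UNKNOWN;  -- true, and never returns null
SELECT TRUE IS NOT TRUE;            -- false
SELECT NULL::boolean IS NOT TRUE;   -- true: null is not the value true
```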
Mathematical Functions and Operators
Mathematical operators are provided for many
PostgreSQL types. For types without
standard mathematical conventions
(e.g., date/time types) we
describe the actual behavior in subsequent sections.
shows the available mathematical operators.
The bitwise operators work only on integral data types, whereas
the others are available for all numeric data types. The bitwise
operators are also available for the bit
string types bit and bit varying, as
shown in .
shows the available
mathematical functions. In the table, dp
indicates double precision. Many of these functions
are provided in multiple forms with different argument types.
Except where noted, any given form of a function returns the same
data type as its argument.
The functions working with double precision data are mostly
implemented on top of the host system's C library; accuracy and behavior in
boundary cases can therefore vary depending on the host system.
Mathematical Functions

abs(x) → (same as input): absolute value. abs(-17.4) → 17.4
cbrt(dp) → dp: cube root. cbrt(27.0) → 3
ceil(dp or numeric) → (same as input): smallest integer not less than argument. ceil(-42.8) → -42
ceiling(dp or numeric) → (same as input): smallest integer not less than argument (alias for ceil). ceiling(-95.3) → -95
degrees(dp) → dp: radians to degrees. degrees(0.5) → 28.6478897565412
div(y numeric, x numeric) → numeric: integer quotient of y/x. div(9,4) → 2
exp(dp or numeric) → (same as input): exponential. exp(1.0) → 2.71828182845905
floor(dp or numeric) → (same as input): largest integer not greater than argument. floor(-42.8) → -43
ln(dp or numeric) → (same as input): natural logarithm. ln(2.0) → 0.693147180559945
log(dp or numeric) → (same as input): base 10 logarithm. log(100.0) → 2
log(b numeric, x numeric) → numeric: logarithm to base b. log(2.0, 64.0) → 6.0000000000
mod(y, x) → (same as argument types): remainder of y/x. mod(9,4) → 1
pi() → dp: π constant. pi() → 3.14159265358979
power(a dp, b dp) → dp: a raised to the power of b. power(9.0, 3.0) → 729
power(a numeric, b numeric) → numeric: a raised to the power of b. power(9.0, 3.0) → 729
radians(dp) → dp: degrees to radians. radians(45.0) → 0.785398163397448
round(dp or numeric) → (same as input): round to nearest integer. round(42.4) → 42
round(v numeric, s int) → numeric: round to s decimal places. round(42.4382, 2) → 42.44
sign(dp or numeric) → (same as input): sign of the argument (-1, 0, +1). sign(-8.4) → -1
sqrt(dp or numeric) → (same as input): square root. sqrt(2.0) → 1.4142135623731
trunc(dp or numeric) → (same as input): truncate toward zero. trunc(42.8) → 42
trunc(v numeric, s int) → numeric: truncate to s decimal places. trunc(42.4382, 2) → 42.43
width_bucket(operand numeric, b1 numeric, b2 numeric, count int) → int: return the bucket to which operand would be assigned in an equidepth histogram with count buckets, in the range b1 to b2. width_bucket(5.35, 0.024, 10.06, 5) → 3
width_bucket(operand dp, b1 dp, b2 dp, count int) → int: same, for double precision. width_bucket(5.35, 0.024, 10.06, 5) → 3
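As an illustration of how div, mod, and width_bucket relate (values taken from the table above):

```sql
-- div and mod satisfy div(y, x) * x + mod(y, x) = y:
SELECT div(9, 4), mod(9, 4);                 -- 2 and 1; 2*4 + 1 = 9
-- width_bucket divides the range 0.024 to 10.06 into 5 equal-width buckets:
SELECT width_bucket(5.35, 0.024, 10.06, 5);  -- 3
```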
shows functions for
generating random numbers.
Random Functions

random() → dp: random value in the range 0.0 <= x < 1.0
setseed(dp) → void: set seed for subsequent random() calls (value between -1.0 and 1.0, inclusive)
The characteristics of the values returned by
random() depend
on the system implementation. It is not suitable for cryptographic
applications; see the pgcrypto module for an alternative.
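setseed makes subsequent random() values reproducible within a session; because the exact values are implementation-dependent, no outputs are shown:

```sql
SELECT setseed(0.5);   -- seed the generator; argument must be between -1.0 and 1.0
SELECT random();       -- some value v with 0.0 <= v < 1.0
SELECT setseed(0.5);   -- re-seed with the same value
SELECT random();       -- the same value v again, on the same system
```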
Finally, shows the
available trigonometric functions. All trigonometric functions
take arguments and return values of type double
precision. Trigonometric function arguments are expressed
in radians. Inverse functions return values expressed in
radians. See the unit conversion functions
radians() and
degrees() above.
Trigonometric Functions

acos(x): inverse cosine
asin(x): inverse sine
atan(x): inverse tangent
atan2(y, x): inverse tangent of y/x
cos(x): cosine
cot(x): cotangent
sin(x): sine
tan(x): tangent
String Functions and Operators
This section describes functions and operators for examining and
manipulating string values. Strings in this context include values
of the types character, character varying,
and text. Unless otherwise noted, all
of the functions listed below work on all of these types, but be
wary of potential effects of automatic space-padding when using the
character type. Some functions also exist
natively for the bit-string types.
SQL defines some string functions that use
key words, rather than commas, to separate
arguments. Details are in
.
PostgreSQL also provides versions of these functions
that use the regular function invocation syntax
(see ).
Before PostgreSQL 8.3, these functions would
silently accept values of several non-string data types as well, due to
the presence of implicit coercions from those data types to
text. Those coercions have been removed because they frequently
caused surprising behaviors. However, the string concatenation operator
(||) still accepts non-string input, so long as at least one
input is of a string type, as shown in . For other cases, insert an explicit
coercion to text if you need to duplicate the previous behavior.
SQL String Functions and Operators

string || string → text
  String concatenation.
  'Post' || 'greSQL' → PostgreSQL

string || non-string or non-string || string → text
  String concatenation with one non-string input.
  'Value: ' || 42 → Value: 42

bit_length(string) → int
  Number of bits in string.
  bit_length('jose') → 32

char_length(string) or character_length(string) → int
  Number of characters in string.
  char_length('jose') → 4

lower(string) → text
  Convert string to lower case.
  lower('TOM') → tom

octet_length(string) → int
  Number of bytes in string.
  octet_length('jose') → 4

overlay(string placing string from int [for int]) → text
  Replace substring.
  overlay('Txxxxas' placing 'hom' from 2 for 4) → Thomas

position(substring in string) → int
  Location of specified substring.
  position('om' in 'Thomas') → 3

substring(string [from int] [for int]) → text
  Extract substring.
  substring('Thomas' from 2 for 3) → hom

substring(string from pattern) → text
  Extract substring matching POSIX regular expression. See for more
  information on pattern matching.
  substring('Thomas' from '...$') → mas

substring(string from pattern for escape) → text
  Extract substring matching SQL regular expression. See for more
  information on pattern matching.
  substring('Thomas' from '%#"o_a#"_' for '#') → oma

trim([leading | trailing | both] [characters] from string) → text
  Remove the longest string containing only the characters (a space by
  default) from the start/end/both ends of string.
  trim(both 'x' from 'xTomxx') → Tom

trim([leading | trailing | both] [from] string [, characters]) → text
  Non-standard version of trim().
  trim(both from 'xTomxx', 'x') → Tom

upper(string) → text
  Convert string to upper case.
  upper('tom') → TOM
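A few of the keyword-separated functions above, shown together (examples taken from the table):

```sql
SELECT overlay('Txxxxas' placing 'hom' from 2 for 4);  -- Thomas
SELECT position('om' in 'Thomas');                     -- 3
SELECT substring('Thomas' from 2 for 3);               -- hom
SELECT trim(both 'x' from 'xTomxx');                   -- Tom
```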
Additional string manipulation functions are available and are
listed in . Some of them are used internally to implement the
SQL-standard string functions listed in .
Other String Functions

ascii(string) → int
  ASCII code of the first character of the argument. For UTF8 returns the
  Unicode code point of the character. For other multibyte encodings, the
  argument must be an ASCII character.
  ascii('x') → 120

btrim(string text [, characters text]) → text
  Remove the longest string consisting only of characters in characters
  (a space by default) from the start and end of string.
  btrim('xyxtrimyyx', 'xy') → trim

chr(int) → text
  Character with the given code. For UTF8 the argument is treated as a
  Unicode code point. For other multibyte encodings the argument must
  designate an ASCII character. The NULL (0) character is not allowed
  because text data types cannot store such bytes.
  chr(65) → A

concat(str "any" [, str "any" [, ...]]) → text
  Concatenate the text representations of all the arguments. NULL
  arguments are ignored.
  concat('abcde', 2, NULL, 22) → abcde222

concat_ws(sep text, str "any" [, str "any" [, ...]]) → text
  Concatenate all but the first argument with separators. The first
  argument is used as the separator string. NULL arguments are ignored.
  concat_ws(',', 'abcde', 2, NULL, 22) → abcde,2,22

convert(string bytea, src_encoding name, dest_encoding name) → bytea
  Convert string to dest_encoding. The original encoding is specified by
  src_encoding. The string must be valid in this encoding. Conversions can
  be defined by CREATE CONVERSION. Also there are some predefined
  conversions. See for available conversions.
  convert('text_in_utf8', 'UTF8', 'LATIN1') → text_in_utf8 represented in
  Latin-1 encoding (ISO 8859-1)

convert_from(string bytea, src_encoding name) → text
  Convert string to the database encoding. The original encoding is
  specified by src_encoding. The string must be valid in this encoding.
  convert_from('text_in_utf8', 'UTF8') → text_in_utf8 represented in the
  current database encoding

convert_to(string text, dest_encoding name) → bytea
  Convert string to dest_encoding.
  convert_to('some text', 'UTF8') → some text represented in the UTF8
  encoding

decode(string text, format text) → bytea
  Decode binary data from textual representation in string. Options for
  format are the same as in encode.
  decode('MTIzAAE=', 'base64') → \x3132330001

encode(data bytea, format text) → text
  Encode binary data into a textual representation. Supported formats are:
  base64, hex, escape. escape converts zero bytes and high-bit-set bytes
  to octal sequences (\nnn) and doubles backslashes.
  encode(E'123\\000\\001', 'base64') → MTIzAAE=

format(formatstr text [, formatarg "any" [, ...]]) → text
  Format arguments according to a format string. This function is similar
  to the C function sprintf. See .
  format('Hello %s, %1$s', 'World') → Hello World, World

initcap(string) → text
  Convert the first letter of each word to upper case and the rest to
  lower case. Words are sequences of alphanumeric characters separated by
  non-alphanumeric characters.
  initcap('hi THOMAS') → Hi Thomas

left(str text, n int) → text
  Return first n characters in the string. When n is negative, return all
  but last |n| characters.
  left('abcde', 2) → ab

length(string) → int
  Number of characters in string.
  length('jose') → 4

length(string bytea, encoding name) → int
  Number of characters in string in the given encoding. The string must be
  valid in this encoding.
  length('jose', 'UTF8') → 4

lpad(string text, length int [, fill text]) → text
  Fill up the string to length length by prepending the characters fill
  (a space by default). If the string is already longer than length then
  it is truncated (on the right).
  lpad('hi', 5, 'xy') → xyxhi

ltrim(string text [, characters text]) → text
  Remove the longest string containing only characters from characters
  (a space by default) from the start of string.
  ltrim('zzzytrim', 'xyz') → trim

md5(string) → text
  Calculates the MD5 hash of string, returning the result in hexadecimal.
  md5('abc') → 900150983cd24fb0d6963f7d28e17f72

pg_client_encoding() → name
  Current client encoding name.
  pg_client_encoding() → SQL_ASCII

quote_ident(string text) → text
  Return the given string suitably quoted to be used as an identifier in
  an SQL statement string. Quotes are added only if necessary (i.e., if
  the string contains non-identifier characters or would be case-folded).
  Embedded quotes are properly doubled. See also .
  quote_ident('Foo bar') → "Foo bar"

quote_literal(string text) → text
  Return the given string suitably quoted to be used as a string literal
  in an SQL statement string. Embedded single-quotes and backslashes are
  properly doubled. Note that quote_literal returns null on null input; if
  the argument might be null, quote_nullable is often more suitable.
  See also .
  quote_literal(E'O\'Reilly') → 'O''Reilly'

quote_literal(value anyelement) → text
  Coerce the given value to text and then quote it as a literal. Embedded
  single-quotes and backslashes are properly doubled.
  quote_literal(42.5) → '42.5'

quote_nullable(string text) → text
  Return the given string suitably quoted to be used as a string literal
  in an SQL statement string; or, if the argument is null, return NULL.
  Embedded single-quotes and backslashes are properly doubled. See also .
  quote_nullable(NULL) → NULL

quote_nullable(value anyelement) → text
  Coerce the given value to text and then quote it as a literal; or, if
  the argument is null, return NULL. Embedded single-quotes and
  backslashes are properly doubled.
  quote_nullable(42.5) → '42.5'

regexp_matches(string text, pattern text [, flags text]) → setof text[]
  Return all captured substrings resulting from matching a POSIX regular
  expression against the string. See for more information.
  regexp_matches('foobarbequebaz', '(bar)(beque)') → {bar,beque}

regexp_replace(string text, pattern text, replacement text [, flags text]) → text
  Replace substring(s) matching a POSIX regular expression. See for more
  information.
  regexp_replace('Thomas', '.[mN]a.', 'M') → ThM

regexp_split_to_array(string text, pattern text [, flags text]) → text[]
  Split string using a POSIX regular expression as the delimiter. See for
  more information.
  regexp_split_to_array('hello world', E'\\s+') → {hello,world}

regexp_split_to_table(string text, pattern text [, flags text]) → setof text
  Split string using a POSIX regular expression as the delimiter. See for
  more information.
  regexp_split_to_table('hello world', E'\\s+') → hello, world (2 rows)

repeat(string text, number int) → text
  Repeat string the specified number of times.
  repeat('Pg', 4) → PgPgPgPg

replace(string text, from text, to text) → text
  Replace all occurrences in string of substring from with substring to.
  replace('abcdefabcdef', 'cd', 'XX') → abXXefabXXef

reverse(str) → text
  Return reversed string.
  reverse('abcde') → edcba

right(str text, n int) → text
  Return last n characters in the string. When n is negative, return all
  but first |n| characters.
  right('abcde', 2) → de

rpad(string text, length int [, fill text]) → text
  Fill up the string to length length by appending the characters fill
  (a space by default). If the string is already longer than length then
  it is truncated.
  rpad('hi', 5, 'xy') → hixyx

rtrim(string text [, characters text]) → text
  Remove the longest string containing only characters from characters
  (a space by default) from the end of string.
  rtrim('trimxxxx', 'x') → trim

split_part(string text, delimiter text, field int) → text
  Split string on delimiter and return the given field (counting from
  one).
  split_part('abc~@~def~@~ghi', '~@~', 2) → def

strpos(string, substring) → int
  Location of specified substring (same as position(substring in string),
  but note the reversed argument order).
  strpos('high', 'ig') → 2

substr(string, from [, count]) → text
  Extract substring (same as substring(string from from for count)).
  substr('alphabet', 3, 2) → ph

to_ascii(string text [, encoding text]) → text
  Convert string to ASCII from another encoding (only supports conversion
  from LATIN1, LATIN2, LATIN9, and WIN1250 encodings).
  to_ascii('Karel') → Karel

to_hex(number int or bigint) → text
  Convert number to its equivalent hexadecimal representation.
  to_hex(2147483647) → 7fffffff

translate(string text, from text, to text) → text
  Any character in string that matches a character in the from set is
  replaced by the corresponding character in the to set. If from is longer
  than to, occurrences of the extra characters in from are removed.
  translate('12345', '143', 'ax') → a2x5
The concat, concat_ws and
format functions are variadic, so it is possible to
pass the values to be concatenated or formatted as an array marked with
the VARIADIC keyword (see ). The array's elements are
treated as if they were separate ordinary arguments to the function.
If the variadic array argument is NULL, concat
and concat_ws return NULL, but
format treats a NULL as a zero-element array.
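For example, passing a pre-built array with the VARIADIC keyword:

```sql
SELECT concat_ws(',', VARIADIC ARRAY['a', 'b', 'c']);  -- a,b,c
SELECT concat(VARIADIC ARRAY['x', 'y']);               -- xy
SELECT concat(VARIADIC NULL::text[]);                  -- null
SELECT format('%s-%s', VARIADIC ARRAY['a', 'b']);      -- a-b
```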
See also the aggregate function string_agg in
.
Built-in Conversions

The conversion names follow a standard naming scheme: The
official name of the source encoding with all
non-alphanumeric characters replaced by underscores, followed
by _to_, followed by the similarly processed
destination encoding name. Therefore, the names might deviate
from the customary encoding names.
Source EncodingDestination Encodingascii_to_micSQL_ASCIIMULE_INTERNALascii_to_utf8SQL_ASCIIUTF8big5_to_euc_twBIG5EUC_TWbig5_to_micBIG5MULE_INTERNALbig5_to_utf8BIG5UTF8euc_cn_to_micEUC_CNMULE_INTERNALeuc_cn_to_utf8EUC_CNUTF8euc_jp_to_micEUC_JPMULE_INTERNALeuc_jp_to_sjisEUC_JPSJISeuc_jp_to_utf8EUC_JPUTF8euc_kr_to_micEUC_KRMULE_INTERNALeuc_kr_to_utf8EUC_KRUTF8euc_tw_to_big5EUC_TWBIG5euc_tw_to_micEUC_TWMULE_INTERNALeuc_tw_to_utf8EUC_TWUTF8gb18030_to_utf8GB18030UTF8gbk_to_utf8GBKUTF8iso_8859_10_to_utf8LATIN6UTF8iso_8859_13_to_utf8LATIN7UTF8iso_8859_14_to_utf8LATIN8UTF8iso_8859_15_to_utf8LATIN9UTF8iso_8859_16_to_utf8LATIN10UTF8iso_8859_1_to_micLATIN1MULE_INTERNALiso_8859_1_to_utf8LATIN1UTF8iso_8859_2_to_micLATIN2MULE_INTERNALiso_8859_2_to_utf8LATIN2UTF8iso_8859_2_to_windows_1250LATIN2WIN1250iso_8859_3_to_micLATIN3MULE_INTERNALiso_8859_3_to_utf8LATIN3UTF8iso_8859_4_to_micLATIN4MULE_INTERNALiso_8859_4_to_utf8LATIN4UTF8iso_8859_5_to_koi8_rISO_8859_5KOI8Riso_8859_5_to_micISO_8859_5MULE_INTERNALiso_8859_5_to_utf8ISO_8859_5UTF8iso_8859_5_to_windows_1251ISO_8859_5WIN1251iso_8859_5_to_windows_866ISO_8859_5WIN866iso_8859_6_to_utf8ISO_8859_6UTF8iso_8859_7_to_utf8ISO_8859_7UTF8iso_8859_8_to_utf8ISO_8859_8UTF8iso_8859_9_to_utf8LATIN5UTF8johab_to_utf8JOHABUTF8koi8_r_to_iso_8859_5KOI8RISO_8859_5koi8_r_to_micKOI8RMULE_INTERNALkoi8_r_to_utf8KOI8RUTF8koi8_r_to_windows_1251KOI8RWIN1251koi8_r_to_windows_866KOI8RWIN866koi8_u_to_utf8KOI8UUTF8mic_to_asciiMULE_INTERNALSQL_ASCIImic_to_big5MULE_INTERNALBIG5mic_to_euc_cnMULE_INTERNALEUC_CNmic_to_euc_jpMULE_INTERNALEUC_JPmic_to_euc_krMULE_INTERNALEUC_KRmic_to_euc_twMULE_INTERNALEUC_TWmic_to_iso_8859_1MULE_INTERNALLATIN1mic_to_iso_8859_2MULE_INTERNALLATIN2mic_to_iso_8859_3MULE_INTERNALLATIN3mic_to_iso_8859_4MULE_INTERNALLATIN4mic_to_iso_8859_5MULE_INTERNALISO_8859_5mic_to_koi8_rMULE_INTERNALKOI8Rmic_to_sjisMULE_INTERNALSJISmic_to_windows_1250MULE_INTERNALWIN1250mic_to_windows_1251MULE_INTERNALWIN1251mic_to_windows_866MULE_INTERNALWIN866sjis_to_euc_j
pSJISEUC_JPsjis_to_micSJISMULE_INTERNALsjis_to_utf8SJISUTF8tcvn_to_utf8WIN1258UTF8uhc_to_utf8UHCUTF8utf8_to_asciiUTF8SQL_ASCIIutf8_to_big5UTF8BIG5utf8_to_euc_cnUTF8EUC_CNutf8_to_euc_jpUTF8EUC_JPutf8_to_euc_krUTF8EUC_KRutf8_to_euc_twUTF8EUC_TWutf8_to_gb18030UTF8GB18030utf8_to_gbkUTF8GBKutf8_to_iso_8859_1UTF8LATIN1utf8_to_iso_8859_10UTF8LATIN6utf8_to_iso_8859_13UTF8LATIN7utf8_to_iso_8859_14UTF8LATIN8utf8_to_iso_8859_15UTF8LATIN9utf8_to_iso_8859_16UTF8LATIN10utf8_to_iso_8859_2UTF8LATIN2utf8_to_iso_8859_3UTF8LATIN3utf8_to_iso_8859_4UTF8LATIN4utf8_to_iso_8859_5UTF8ISO_8859_5utf8_to_iso_8859_6UTF8ISO_8859_6utf8_to_iso_8859_7UTF8ISO_8859_7utf8_to_iso_8859_8UTF8ISO_8859_8utf8_to_iso_8859_9UTF8LATIN5utf8_to_johabUTF8JOHAButf8_to_koi8_rUTF8KOI8Rutf8_to_koi8_uUTF8KOI8Uutf8_to_sjisUTF8SJISutf8_to_tcvnUTF8WIN1258utf8_to_uhcUTF8UHCutf8_to_windows_1250UTF8WIN1250utf8_to_windows_1251UTF8WIN1251utf8_to_windows_1252UTF8WIN1252utf8_to_windows_1253UTF8WIN1253utf8_to_windows_1254UTF8WIN1254utf8_to_windows_1255UTF8WIN1255utf8_to_windows_1256UTF8WIN1256utf8_to_windows_1257UTF8WIN1257utf8_to_windows_866UTF8WIN866utf8_to_windows_874UTF8WIN874windows_1250_to_iso_8859_2WIN1250LATIN2windows_1250_to_micWIN1250MULE_INTERNALwindows_1250_to_utf8WIN1250UTF8windows_1251_to_iso_8859_5WIN1251ISO_8859_5windows_1251_to_koi8_rWIN1251KOI8Rwindows_1251_to_micWIN1251MULE_INTERNALwindows_1251_to_utf8WIN1251UTF8windows_1251_to_windows_866WIN1251WIN866windows_1252_to_utf8WIN1252UTF8windows_1256_to_utf8WIN1256UTF8windows_866_to_iso_8859_5WIN866ISO_8859_5windows_866_to_koi8_rWIN866KOI8Rwindows_866_to_micWIN866MULE_INTERNALwindows_866_to_utf8WIN866UTF8windows_866_to_windows_1251WIN866WINwindows_874_to_utf8WIN874UTF8euc_jis_2004_to_utf8EUC_JIS_2004UTF8utf8_to_euc_jis_2004UTF8EUC_JIS_2004shift_jis_2004_to_utf8SHIFT_JIS_2004UTF8utf8_to_shift_jis_2004UTF8SHIFT_JIS_2004euc_jis_2004_to_shift_jis_2004EUC_JIS_2004SHIFT_JIS_2004shift_jis_2004_to_euc_jis_2004SHIFT_JIS_2004EUC_JIS_2004
format

The function format produces output formatted according to
a format string, in a style similar to the C function
sprintf.

format(formatstr text [, formatarg "any" [, ...]])
formatstr is a format string that specifies how the
result should be formatted. Text in the format string is copied
directly to the result, except where format specifiers are
used. Format specifiers act as placeholders in the string, defining how
subsequent function arguments should be formatted and inserted into the
result. Each formatarg argument is converted to text
according to the usual output rules for its data type, and then formatted
and inserted into the result string according to the format specifier(s).
Format specifiers are introduced by a % character and have
the form

%[position][flags][width]type
where the component fields are:
position (optional)
A string of the form n$ where
n is the index of the argument to print.
Index 1 means the first argument after
formatstr. If the position is
omitted, the default is to use the next argument in sequence.
flags (optional)
Additional options controlling how the format specifier's output is
formatted. Currently the only supported flag is a minus sign
(-) which will cause the format specifier's output to be
left-justified. This has no effect unless the width
field is also specified.
width (optional)
Specifies the minimum number of characters to use to
display the format specifier's output. The output is padded on the
left or right (depending on the - flag) with spaces as
needed to fill the width. A too-small width does not cause
truncation of the output, but is simply ignored. The width may be
specified using any of the following: a positive integer; an
asterisk (*) to use the next function argument as the
width; or a string of the form *n$ to
use the nth function argument as the width.
If the width comes from a function argument, that argument is
consumed before the argument that is used for the format specifier's
value. If the width argument is negative, the result is left
aligned (as if the - flag had been specified) within a
field of length abs(width).
type (required)
The type of format conversion to use to produce the format
specifier's output. The following types are supported:
s formats the argument value as a simple
string. A null value is treated as an empty string.
I treats the argument value as an SQL
identifier, double-quoting it if necessary.
It is an error for the value to be null.
L quotes the argument value as an SQL literal.
A null value is displayed as the string NULL, without
quotes.
In addition to the format specifiers described above, the special sequence
%% may be used to output a literal % character.
Here are some examples of the basic format conversions:
SELECT format('Hello %s', 'World');
Result: Hello World
SELECT format('Testing %s, %s, %s, %%', 'one', 'two', 'three');
Result: Testing one, two, three, %
SELECT format('INSERT INTO %I VALUES(%L)', 'Foo bar', E'O\'Reilly');
Result: INSERT INTO "Foo bar" VALUES('O''Reilly')
SELECT format('INSERT INTO %I VALUES(%L)', 'locations', E'C:\\Program Files');
Result: INSERT INTO locations VALUES(E'C:\\Program Files')
Here are examples using width fields
and the - flag:

SELECT format('|%10s|', 'foo');
Result: |       foo|
SELECT format('|%-10s|', 'foo');
Result: |foo       |
SELECT format('|%*s|', 10, 'foo');
Result: |       foo|
SELECT format('|%*s|', -10, 'foo');
Result: |foo       |
SELECT format('|%-*s|', 10, 'foo');
Result: |foo       |
SELECT format('|%-*s|', -10, 'foo');
Result: |foo       |

These examples show use of position fields:

SELECT format('Testing %3$s, %2$s, %1$s', 'one', 'two', 'three');
Result: Testing three, two, one
SELECT format('|%*2$s|', 'foo', 10, 'bar');
Result: |       bar|
SELECT format('|%1$*2$s|', 'foo', 10, 'bar');
Result: |       foo|
Unlike the standard C function sprintf,
PostgreSQL's format function allows format
specifiers with and without position fields to be mixed
in the same format string. A format specifier without a
position field always uses the next argument after the
last argument consumed.
In addition, the format function does not require all
function arguments to be used in the format string.
For example:

SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three');
Result: Testing three, two, three
The %I and %L format specifiers are particularly
useful for safely constructing dynamic SQL statements. See
.
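For instance, %I and %L can be combined inside a PL/pgSQL EXECUTE to build a statement safely. A sketch; the table name here is hypothetical:

```sql
DO $$
DECLARE
  tabname text := 'Foo bar';  -- hypothetical identifier that needs quoting
BEGIN
  -- %I double-quotes the identifier, %L quotes the literal:
  EXECUTE format('CREATE TEMP TABLE %I (val text)', tabname);
  EXECUTE format('INSERT INTO %I VALUES (%L)', tabname, E'O\'Reilly');
END
$$;
```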
Binary String Functions and Operators
This section describes functions and operators for examining and
manipulating values of type bytea.
SQL defines some string functions that use
key words, rather than commas, to separate
arguments. Details are in
.
PostgreSQL also provides versions of these functions
that use the regular function invocation syntax
(see ).
The sample results shown on this page assume that the server parameter
bytea_output is set
to escape (the traditional PostgreSQL format).
SQL Binary String Functions and Operators

string || string → bytea
  String concatenation.
  E'\\\\Post'::bytea || E'\\047gres\\000'::bytea → \\Post'gres\000

octet_length(string) → int
  Number of bytes in binary string.
  octet_length(E'jo\\000se'::bytea) → 5

overlay(string placing string from int [for int]) → bytea
  Replace substring.
  overlay(E'Th\\000omas'::bytea placing E'\\002\\003'::bytea from 2 for 3) → T\\002\\003mas

position(substring in string) → int
  Location of specified substring.
  position(E'\\000om'::bytea in E'Th\\000omas'::bytea) → 3

substring(string [from int] [for int]) → bytea
  Extract substring.
  substring(E'Th\\000omas'::bytea from 2 for 3) → h\000o

trim([both] bytes from string) → bytea
  Remove the longest string containing only the bytes in bytes from the
  start and end of string.
  trim(E'\\000'::bytea from E'\\000Tom\\000'::bytea) → Tom
Additional binary string manipulation functions are available and
are listed in . Some
of them are used internally to implement the
SQL-standard string functions listed in .
Other Binary String Functions

btrim(string bytea, bytes bytea) → bytea
  Remove the longest string consisting only of bytes in bytes from the
  start and end of string.
  btrim(E'\\000trim\\000'::bytea, E'\\000'::bytea) → trim

decode(string text, format text) → bytea
  Decode binary data from textual representation in string. Options for
  format are the same as in encode.
  decode(E'123\\000456', 'escape') → 123\000456

encode(data bytea, format text) → text
  Encode binary data into a textual representation. Supported formats are:
  base64, hex, escape. escape converts zero bytes and high-bit-set bytes
  to octal sequences (\nnn) and doubles backslashes.
  encode(E'123\\000456'::bytea, 'escape') → 123\000456

get_bit(string, offset) → int
  Extract bit from string.
  get_bit(E'Th\\000omas'::bytea, 45) → 1

get_byte(string, offset) → int
  Extract byte from string.
  get_byte(E'Th\\000omas'::bytea, 4) → 109

length(string) → int
  Length of binary string.
  length(E'jo\\000se'::bytea) → 5

md5(string) → text
  Calculates the MD5 hash of string, returning the result in hexadecimal.
  md5(E'Th\\000omas'::bytea) → 8ab2d3c9689aaf18b4958c334c82d8b1

set_bit(string, offset, newvalue) → bytea
  Set bit in string.
  set_bit(E'Th\\000omas'::bytea, 45, 0) → Th\000omAs

set_byte(string, offset, newvalue) → bytea
  Set byte in string.
  set_byte(E'Th\\000omas'::bytea, 4, 64) → Th\000o@as
get_byte and set_byte number the first byte
of a binary string as byte 0.
get_bit and set_bit number bits from the
right within each byte; for example bit 0 is the least significant bit of
the first byte, and bit 15 is the most significant bit of the second byte.
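For instance, given that 'a' is byte value 0x61 (binary 01100001), the numbering rules work out as follows:

```sql
-- Bit 0 is the least significant bit of the first byte: 0x61 is odd, so bit 0 is 1.
SELECT get_bit('abc'::bytea, 0);   -- 1
-- Byte 0 is the first byte.
SELECT get_byte('abc'::bytea, 0);  -- 97 ('a')
SELECT get_byte('abc'::bytea, 2);  -- 99 ('c')
```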
See also the aggregate function string_agg in
and the large object functions
in .
Bit String Functions and Operators

This section describes functions and operators for examining and
manipulating bit strings, that is, values of the types
bit and bit varying. Aside from the
usual comparison operators, the operators
shown in can be used.
Bit string operands of &, |,
and # must be of equal length. When bit
shifting, the original length of the string is preserved, as shown
in the examples.
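Some examples (results shown as comments; note that shifting preserves the original string length):

```sql
SELECT B'10001' & B'01101';   -- 00001  (AND)
SELECT B'10001' | B'01101';   -- 11101  (OR)
SELECT B'10001' # B'01101';   -- 11100  (XOR)
SELECT ~ B'10001';            -- 01110  (NOT)
SELECT B'10001' << 3;         -- 01000  (shift left; length preserved)
SELECT B'10001' >> 2;         -- 00100  (shift right; length preserved)
```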
The following SQL-standard functions work on bit
strings as well as character strings:
length,
bit_length,
octet_length,
position,
substring,
overlay.
The following functions work on bit strings as well as binary
strings:
get_bit,
set_bit.
When working with a bit string, these functions number the first
(leftmost) bit of the string as bit 0.
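A sketch of this numbering convention (assuming a server version in which get_bit and set_bit accept bit-string arguments):

```sql
-- For bit strings, bit 0 is the leftmost bit, unlike the bytea case above.
SELECT get_bit(B'101', 0);     -- 1
SELECT get_bit(B'101', 1);     -- 0
SELECT set_bit(B'101', 2, 0);  -- 100
```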
In addition, it is possible to cast integral values to and from type
bit.
Some examples:
44::bit(10) 0000101100
44::bit(3) 100
cast(-44 as bit(12)) 111111010100
'1110'::bit(4)::integer 14
Note that casting to just bit means casting to
bit(1), and so will deliver only the least significant
bit of the integer.
Casting an integer to bit(n) copies the rightmost
n bits. Casting an integer to a bit string width wider
than the integer itself will sign-extend on the left.
Pattern Matching
There are three separate approaches to pattern matching provided
by PostgreSQL: the traditional
SQL LIKE operator, the
more recent SIMILAR TO operator (added in
SQL:1999), and POSIX-style regular
expressions. Aside from the basic does this string match
this pattern? operators, functions are available to extract
or replace matching substrings and to split a string at matching
locations.
If you have pattern matching needs that go beyond this,
consider writing a user-defined function in Perl or Tcl.
LIKE

string LIKE pattern [ESCAPE escape-character]
string NOT LIKE pattern [ESCAPE escape-character]
The LIKE expression returns true if the
string matches the supplied
pattern. (As
expected, the NOT LIKE expression returns
false if LIKE returns true, and vice versa.
An equivalent expression is
NOT (string LIKE
pattern).)
If pattern does not contain percent
signs or underscores, then the pattern only represents the string
itself; in that case LIKE acts like the
equals operator. An underscore (_) in
pattern stands for (matches) any single
character; a percent sign (%) matches any sequence
of zero or more characters.
Some examples:
'abc' LIKE 'abc' true
'abc' LIKE 'a%' true
'abc' LIKE '_b_' true
'abc' LIKE 'c' false

LIKE pattern matching always covers the entire
string. Therefore, if it's desired to match a sequence anywhere within
a string, the pattern must start and end with a percent sign.
To match a literal underscore or percent sign without matching
other characters, the respective character in
pattern must be
preceded by the escape character. The default escape
character is the backslash but a different one can be selected by
using the ESCAPE clause. To match the escape
character itself, write two escape characters.
If you have turned off standard_conforming_strings,
any backslashes you write in literal string constants will need to be
doubled. See for more information.
It's also possible to select no escape character by writing
ESCAPE ''. This effectively disables the
escape mechanism, which makes it impossible to turn off the
special meaning of underscore and percent signs in the pattern.
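Some examples (the first assumes the default backslash escape character and standard_conforming_strings turned on):

```sql
SELECT '10%' LIKE '10\%';              -- true: backslash escapes the %
SELECT 'a_c' LIKE 'a#_c' ESCAPE '#';   -- true: # is the escape character here
SELECT 'abc' LIKE 'a#_c' ESCAPE '#';   -- false: #_ matches only a literal _
```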
The key word ILIKE can be used instead of
LIKE to make the match case-insensitive according
to the active locale. This is not in the SQL standard but is a
PostgreSQL extension.
The operator ~~ is equivalent to
LIKE, and ~~* corresponds to
ILIKE. There are also
!~~ and !~~* operators that
represent NOT LIKE and NOT
ILIKE, respectively. All of these operators are
PostgreSQL-specific.
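These operator spellings behave identically to their keyword forms, for example:

```sql
SELECT 'abc' ~~ 'a%';    -- true, same as 'abc' LIKE 'a%'
SELECT 'ABC' ~~* 'a%';   -- true, same as 'ABC' ILIKE 'a%'
SELECT 'abc' !~~ 'b%';   -- true, same as 'abc' NOT LIKE 'b%'
```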
SIMILAR TO Regular Expressions

string SIMILAR TO pattern [ESCAPE escape-character]
string NOT SIMILAR TO pattern [ESCAPE escape-character]
The SIMILAR TO operator returns true or
false depending on whether its pattern matches the given string.
It is similar to LIKE, except that it
interprets the pattern using the SQL standard's definition of a
regular expression. SQL regular expressions are a curious cross
between LIKE notation and common regular
expression notation.
Like LIKE, the SIMILAR TO
operator succeeds only if its pattern matches the entire string;
this is unlike common regular expression behavior where the pattern
can match any part of the string.
Also like
LIKE, SIMILAR TO uses
_ and % as wildcard characters denoting
any single character and any string, respectively (these are
comparable to . and .* in POSIX regular
expressions).
In addition to these facilities borrowed from LIKE,
SIMILAR TO supports these pattern-matching
metacharacters borrowed from POSIX regular expressions:
| denotes alternation (either of two alternatives).
* denotes repetition of the previous item zero
or more times.
+ denotes repetition of the previous item one
or more times.
? denotes repetition of the previous item zero
or one time.
{m} denotes repetition
of the previous item exactly m times.
{m,} denotes repetition
of the previous item m or more times.
{m,n}
denotes repetition of the previous item at least m and
not more than n times.
Parentheses () can be used to group items into
a single logical item.
A bracket expression [...] specifies a character
class, just as in POSIX regular expressions.
Notice that the period (.) is not a metacharacter
for SIMILAR TO.
As with LIKE, a backslash disables the special meaning
of any of these metacharacters; or a different escape character can
be specified with ESCAPE.
Some examples:
'abc' SIMILAR TO 'abc' true
'abc' SIMILAR TO 'a' false
'abc' SIMILAR TO '%(b|d)%' true
'abc' SIMILAR TO '(b|c)%' false
The substring function with three parameters,
substring(string from
pattern for
escape-character), provides
extraction of a substring that matches an SQL
regular expression pattern. As with SIMILAR TO, the
specified pattern must match the entire data string, or else the
function fails and returns null. To indicate the part of the
pattern that should be returned on success, the pattern must contain
two occurrences of the escape character followed by a double quote
(").
The text matching the portion of the pattern
between these markers is returned.
">
Some examples, with #" delimiting the return string:
substring('foobar' from '%#"o_b#"%' for '#') oob
substring('foobar' from '#"o_b#"%' for '#') NULL

POSIX Regular Expressions

 lists the available
operators for pattern matching using POSIX regular expressions.
Regular Expression Match Operators

~     Matches regular expression, case sensitive
          'thomas' ~ '.*thomas.*'
~*    Matches regular expression, case insensitive
          'thomas' ~* '.*Thomas.*'
!~    Does not match regular expression, case sensitive
          'thomas' !~ '.*Thomas.*'
!~*   Does not match regular expression, case insensitive
          'thomas' !~* '.*vadim.*'
POSIX regular expressions provide a more
powerful means for pattern matching than the LIKE and
SIMILAR TO operators.
Many Unix tools such as egrep,
sed, or awk use a pattern
matching language that is similar to the one described here.
A regular expression is a character sequence that is an
abbreviated definition of a set of strings (a regular
set). A string is said to match a regular expression
if it is a member of the regular set described by the regular
expression. As with LIKE, pattern characters
match string characters exactly unless they are special characters
in the regular expression language — but regular expressions use
different special characters than LIKE does.
Unlike LIKE patterns, a
regular expression is allowed to match anywhere within a string, unless
the regular expression is explicitly anchored to the beginning or
end of the string.
Some examples:
'abc' ~ 'abc' true
'abc' ~ '^a' true
'abc' ~ '(b|d)' true
'abc' ~ '^(b|c)' false
The POSIX pattern language is described in much
greater detail below.
The substring function with two parameters,
substring(string from
pattern), provides extraction of a
substring
that matches a POSIX regular expression pattern. It returns null if
there is no match, otherwise the portion of the text that matched the
pattern. But if the pattern contains any parentheses, the portion
of the text that matched the first parenthesized subexpression (the
one whose left parenthesis comes first) is
returned. You can put parentheses around the whole expression
if you want to use parentheses within it without triggering this
exception. If you need parentheses in the pattern before the
subexpression you want to extract, see the non-capturing parentheses
described below.
Some examples:
substring('foobar' from 'o.b') oob
substring('foobar' from 'o(.)b') o
The regexp_replace function provides substitution of
new text for substrings that match POSIX regular expression patterns.
It has the syntax
regexp_replace(source,
pattern, replacement
[, flags ]).
The source string is returned unchanged if
there is no match to the pattern. If there is a
match, the source string is returned with the
replacement string substituted for the matching
substring. The replacement string can contain
\n, where n is 1
through 9, to indicate that the source substring matching the
n'th parenthesized subexpression of the pattern should be
inserted, and it can contain \& to indicate that the
substring matching the entire pattern should be inserted. Write
\\ if you need to put a literal backslash in the replacement
text.
The flags parameter is an optional text
string containing zero or more single-letter flags that change the
function's behavior. Flag i specifies case-insensitive
matching, while flag g specifies replacement of each matching
substring rather than only the first one. Supported flags (though
not g) are
described in .
Some examples:
regexp_replace('foobarbaz', 'b..', 'X')
fooXbaz
regexp_replace('foobarbaz', 'b..', 'X', 'g')
fooXX
regexp_replace('foobarbaz', 'b(..)', E'X\\1Y', 'g')
fooXarYXazY
The regexp_matches function returns a text array of
all of the captured substrings resulting from matching a POSIX
regular expression pattern. It has the syntax
regexp_matches(string, pattern
[, flags ]).
The function can return no rows, one row, or multiple rows (see
the g flag below). If the pattern
does not match, the function returns no rows. If the pattern
contains no parenthesized subexpressions, then each row
returned is a single-element text array containing the substring
matching the whole pattern. If the pattern contains parenthesized
subexpressions, the function returns a text array whose
n'th element is the substring matching the
n'th parenthesized subexpression of the pattern
(not counting non-capturing parentheses; see below for
details).
The flags parameter is an optional text
string containing zero or more single-letter flags that change the
function's behavior. Flag g causes the function to find
each match in the string, not only the first one, and return a row for
each such match. Supported flags (though
not g)
are described in .
Some examples:
SELECT regexp_matches('foobarbequebaz', '(bar)(beque)');
regexp_matches
----------------
{bar,beque}
(1 row)
SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g');
regexp_matches
----------------
{bar,beque}
{bazil,barf}
(2 rows)
SELECT regexp_matches('foobarbequebaz', 'barbeque');
regexp_matches
----------------
{barbeque}
(1 row)
It is possible to force regexp_matches() to always
return one row by using a sub-select; this is particularly useful
in a SELECT target list when you want all rows
returned, even non-matching ones:
SELECT col1, (SELECT regexp_matches(col2, '(bar)(beque)')) FROM tab;
The regexp_split_to_table function splits a string using a POSIX
regular expression pattern as a delimiter. It has the syntax
regexp_split_to_table(string, pattern
[, flags ]).
If there is no match to the pattern, the function returns the
string. If there is at least one match, for each match it returns
the text from the end of the last match (or the beginning of the string)
to the beginning of the match. When there are no more matches, it
returns the text from the end of the last match to the end of the string.
The flags parameter is an optional text string containing
zero or more single-letter flags that change the function's behavior.
regexp_split_to_table supports the flags described in
.
The regexp_split_to_array function behaves the same as
regexp_split_to_table, except that regexp_split_to_array
returns its result as an array of text. It has the syntax
regexp_split_to_array(string, pattern
[, flags ]).
The parameters are the same as for regexp_split_to_table.
Some examples:
SELECT foo FROM regexp_split_to_table('the quick brown fox jumps over the lazy dog', E'\\s+') AS foo;
foo
-------
the
quick
brown
fox
jumps
over
the
lazy
dog
(9 rows)
SELECT regexp_split_to_array('the quick brown fox jumps over the lazy dog', E'\\s+');
regexp_split_to_array
-----------------------------------------------
{the,quick,brown,fox,jumps,over,the,lazy,dog}
(1 row)
SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo;
foo
-----
t
h
e
q
u
i
c
k
b
r
o
w
n
f
o
x
(16 rows)
As the last example demonstrates, the regexp split functions ignore
zero-length matches that occur at the start or end of the string
or immediately after a previous match. This is contrary to the strict
definition of regexp matching that is implemented by
regexp_matches, but is usually the most convenient behavior
in practice. Other software systems such as Perl use similar definitions.

Regular Expression Details

PostgreSQL's regular expressions are implemented
using a software package written by Henry Spencer. Much of
the description of regular expressions below is copied verbatim from his
manual.
Regular expressions (REs), as defined in
POSIX 1003.2, come in two forms:
extended REs or EREs
(roughly those of egrep), and
basic REs or BREs
(roughly those of ed).
PostgreSQL supports both forms, and
also implements some extensions
that are not in the POSIX standard, but have become widely used
due to their availability in programming languages such as Perl and Tcl.
REs using these non-POSIX extensions are called
advanced REs or AREs
in this documentation. AREs are almost an exact superset of EREs,
but BREs have several notational incompatibilities (as well as being
much more limited).
We first describe the ARE and ERE forms, noting features that apply
only to AREs, and then describe how BREs differ.
PostgreSQL always initially presumes that a regular
expression follows the ARE rules. However, the more limited ERE or
BRE rules can be chosen by prepending an embedded option
to the RE pattern, as described in .
This can be useful for compatibility with applications that expect
exactly the POSIX 1003.2 rules.
A regular expression is defined as one or more
branches, separated by
|. It matches anything that matches one of the
branches.
A branch is zero or more quantified atoms or
constraints, concatenated.
It matches a match for the first, followed by a match for the second, etc.;
an empty branch matches the empty string.
A quantified atom is an atom possibly followed
by a single quantifier.
Without a quantifier, it matches a match for the atom.
With a quantifier, it can match some number of matches of the atom.
An atom can be any of the possibilities
shown in .
The possible quantifiers and their meanings are shown in
.
A constraint matches an empty string, but matches only when
specific conditions are met. A constraint can be used where an atom
could be used, except it cannot be followed by a quantifier.
The simple constraints are shown in
;
some more constraints are described later.
Regular Expression Atoms

(re)      (where re is any regular expression) matches a match for re,
          with the match noted for possible reporting
(?:re)    as above, but the match is not noted for reporting
          (a non-capturing set of parentheses) (AREs only)
.         matches any single character
[chars]   a bracket expression, matching any one of the chars
          (see  for more detail)
\k        (where k is a non-alphanumeric character) matches that character
          taken as an ordinary character, e.g., \\ matches a backslash
          character
\c        where c is alphanumeric (possibly followed by other characters)
          is an escape, see
          (AREs only; in EREs and BREs, this matches c)
{         when followed by a character other than a digit, matches the
          left-brace character {; when followed by a digit, it is the
          beginning of a bound (see below)
x         where x is a single character with no other significance,
          matches that character
An RE cannot end with a backslash (\).
If you have turned off standard_conforming_strings,
any backslashes you write in literal string constants will need to be
doubled. See for more information.
Regular Expression Quantifiers

*         a sequence of 0 or more matches of the atom
+         a sequence of 1 or more matches of the atom
?         a sequence of 0 or 1 matches of the atom
{m}       a sequence of exactly m matches of the atom
{m,}      a sequence of m or more matches of the atom
{m,n}     a sequence of m through n (inclusive) matches of the atom;
          m cannot exceed n
*?        non-greedy version of *
+?        non-greedy version of +
??        non-greedy version of ?
{m}?      non-greedy version of {m}
{m,}?     non-greedy version of {m,}
{m,n}?    non-greedy version of {m,n}
The forms using {...}
are known as bounds.
The numbers m and n within a bound are
unsigned decimal integers with permissible values from 0 to 255 inclusive.
Non-greedy quantifiers (available in AREs only) match the
same possibilities as their corresponding normal (greedy)
counterparts, but prefer the smallest number rather than the largest
number of matches.
See for more detail.
A quantifier cannot immediately follow another quantifier, e.g.,
** is invalid.
A quantifier cannot
begin an expression or subexpression or follow
^ or |.
Regular Expression Constraints

^        matches at the beginning of the string
$        matches at the end of the string
(?=re)   positive lookahead matches at any point where a substring
         matching re begins (AREs only)
(?!re)   negative lookahead matches at any point where no substring
         matching re begins (AREs only)

Lookahead constraints cannot contain back references
(see ),
and all parentheses within them are considered non-capturing.
Bracket Expressions
A bracket expression is a list of
characters enclosed in []. It normally matches
any single character from the list (but see below). If the list
begins with ^, it matches any single character
not> from the rest of the list.
If two characters
in the list are separated by -, this is
shorthand for the full range of characters between those two
(inclusive) in the collating sequence,
e.g., [0-9] in ASCII matches
any decimal digit. It is illegal for two ranges to share an
endpoint, e.g., a-c-e. Ranges are very
collating-sequence-dependent, so portable programs should avoid
relying on them.
To include a literal ] in the list, make it the
first character (after ^, if that is used). To
include a literal -, make it the first or last
character, or the second endpoint of a range. To use a literal
- as the first endpoint of a range, enclose it
in [. and .] to make it a
collating element (see below). With the exception of these characters,
some combinations using [
(see next paragraphs), and escapes (AREs only), all other special
characters lose their special significance within a bracket expression.
In particular, \ is not special when following
ERE or BRE rules, though it is special (as introducing an escape)
in AREs.
Within a bracket expression, a collating element (a character, a
multiple-character sequence that collates as if it were a single
character, or a collating-sequence name for either) enclosed in
[. and .] stands for the
sequence of characters of that collating element. The sequence is
treated as a single element of the bracket expression's list. This
allows a bracket
expression containing a multiple-character collating element to
match more than one character, e.g., if the collating sequence
includes a ch collating element, then the RE
[[.ch.]]*c matches the first five characters of
chchcc.
PostgreSQL currently does not support multi-character collating
elements. This information describes possible future behavior.
Within a bracket expression, a collating element enclosed in
[= and =] is an equivalence
class, standing for the sequences of characters of all collating
elements equivalent to that one, including itself. (If there are
no other equivalent collating elements, the treatment is as if the
enclosing delimiters were [. and
.].) For example, if o and
^ are the members of an equivalence class, then
[[=o=]], [[=^=]], and
[o^] are all synonymous. An equivalence class
cannot be an endpoint of a range.
Within a bracket expression, the name of a character class
enclosed in [: and :] stands
for the list of all characters belonging to that class. Standard
character class names are: alnum,
alpha, blank,
cntrl, digit,
graph, lower,
print, punct,
space, upper,
xdigit. These stand for the character classes
defined in
ctype(3).
A locale can provide others. A character class cannot be used as
an endpoint of a range.
There are two special cases of bracket expressions: the bracket
expressions [[:<:]] and
[[:>:]] are constraints,
matching empty strings at the beginning
and end of a word respectively. A word is defined as a sequence
of word characters that is neither preceded nor followed by word
characters. A word character is an alnum character (as
defined by
ctype(3))
or an underscore. This is an extension, compatible with but not
specified by POSIX 1003.2, and should be used with
caution in software intended to be portable to other systems.
The constraint escapes described below are usually preferable; they
are no more standard, but are easier to type.
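For example, to match cat only where it appears as a whole word:

```sql
SELECT 'the cat sat' ~ '[[:<:]]cat[[:>:]]';   -- true
SELECT 'concatenate' ~ '[[:<:]]cat[[:>:]]';   -- false: cat is not a whole word
```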
Regular Expression Escapes

Escapes are special sequences beginning with \
followed by an alphanumeric character. Escapes come in several varieties:
character entry, class shorthands, constraint escapes, and back references.
A \ followed by an alphanumeric character but not constituting
a valid escape is illegal in AREs.
In EREs, there are no escapes: outside a bracket expression,
a \ followed by an alphanumeric character merely stands for
that character as an ordinary character, and inside a bracket expression,
\ is an ordinary character.
(The latter is the one actual incompatibility between EREs and AREs.)
Character-entry escapes exist to make it easier to specify
non-printing and other inconvenient characters in REs. They are
shown in .
Class-shorthand escapes provide shorthands for certain
commonly-used character classes. They are
shown in .
A constraint escape is a constraint,
matching the empty string if specific conditions are met,
written as an escape. They are
shown in .
A back reference (\n) matches the
same string matched by the previous parenthesized subexpression specified
by the number n
(see ). For example,
([bc])\1 matches bb or cc
but not bc or cb.
The subexpression must entirely precede the back reference in the RE.
Subexpressions are numbered in the order of their leading parentheses.
Non-capturing parentheses do not define subexpressions.
Keep in mind that an escape's leading \ will need to be
doubled when entering the pattern as an SQL string constant. For example:
'123' ~ E'^\\d{3}' true
Regular Expression Character-entry Escapes

\a          alert (bell) character, as in C
\b          backspace, as in C
\B          synonym for backslash (\) to help reduce the need for
            backslash doubling
\cX         (where X is any character) the character whose low-order
            5 bits are the same as those of X, and whose other bits
            are all zero
\e          the character whose collating-sequence name is ESC,
            or failing that, the character with octal value 033
\f          form feed, as in C
\n          newline, as in C
\r          carriage return, as in C
\t          horizontal tab, as in C
\uwxyz      (where wxyz is exactly four hexadecimal digits) the UTF16
            (Unicode, 16-bit) character U+wxyz in the local byte ordering
\Ustuvwxyz  (where stuvwxyz is exactly eight hexadecimal digits)
            reserved for a hypothetical Unicode extension to 32 bits
\v          vertical tab, as in C
\xhhh       (where hhh is any sequence of hexadecimal digits) the
            character whose hexadecimal value is 0xhhh (a single
            character no matter how many hexadecimal digits are used)
\0          the character whose value is 0 (the null byte)
\xy         (where xy is exactly two octal digits, and is not a back
            reference) the character whose octal value is 0xy
\xyz        (where xyz is exactly three octal digits, and is not a back
            reference) the character whose octal value is 0xyz

Hexadecimal digits are 0-9,
a-f, and A-F.
Octal digits are 0-7.
The character-entry escapes are always taken as ordinary characters.
For example, \135 is ] in ASCII, but
\135 does not terminate a bracket expression.
Within bracket expressions, \d, \s,
and \w lose their outer brackets,
and \D, \S, and \W are illegal.
(So, for example, [a-c\d] is equivalent to
[a-c[:digit:]].
Also, [a-c\D], which is equivalent to
[a-c^[:digit:]], is illegal.)
Regular Expression Constraint Escapes

\A    matches only at the beginning of the string
      (see  for how this differs from ^)
\m    matches only at the beginning of a word
\M    matches only at the end of a word
\y    matches only at the beginning or end of a word
\Y    matches only at a point that is not the beginning or end of a word
\Z    matches only at the end of the string
      (see  for how this differs from $)

A word is defined as in the specification of
[[:<:]] and [[:>:]] above.
Constraint escapes are illegal within bracket expressions.
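For example, using the word-boundary constraint escapes (note the doubled backslashes in the E'' string syntax):

```sql
SELECT 'the cat sat' ~ E'\\mcat\\M';   -- true: cat begins and ends a word
SELECT 'concatenate' ~ E'\\mcat';      -- false: cat does not begin a word here
SELECT 'hello' ~ E'\\Ahel';            -- true: matches only at string start
```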
Regular Expression Back References

\m     (where m is a nonzero digit) a back reference to the
       m'th subexpression
\mnn   (where m is a nonzero digit, and nn is some more digits, and the
       decimal value mnn is not greater than the number of closing
       capturing parentheses seen so far) a back reference to the
       mnn'th subexpression
There is an inherent ambiguity between octal character-entry
escapes and back references, which is resolved by the following heuristics,
as hinted at above.
A leading zero always indicates an octal escape.
A single non-zero digit, not followed by another digit,
is always taken as a back reference.
A multi-digit sequence not starting with a zero is taken as a back
reference if it comes after a suitable subexpression
(i.e., the number is in the legal range for a back reference),
and otherwise is taken as octal.
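A couple of illustrations of these heuristics:

```sql
-- \1 follows one capturing subexpression, so it is a back reference:
SELECT 'abcabc' ~ E'(abc)\\1';    -- true
-- \101 has no preceding subexpressions to refer to, so it is taken as
-- the octal escape for character 0101 = 'A':
SELECT 'AA' ~ E'\\101\\101';      -- true
```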
Regular Expression Metasyntax
In addition to the main syntax described above, there are some special
forms and miscellaneous syntactic facilities available.
An RE can begin with one of two special director prefixes.
If an RE begins with ***:,
the rest of the RE is taken as an ARE. (This normally has no effect in
PostgreSQL, since REs are assumed to be AREs;
but it does have an effect if ERE or BRE mode had been specified by
the flags parameter to a regex function.)
If an RE begins with ***=,
the rest of the RE is taken to be a literal string,
with all characters considered ordinary characters.
An ARE can begin with embedded options:
a sequence (?xyz)
(where xyz is one or more alphabetic characters)
specifies options affecting the rest of the RE.
These options override any previously determined options —
in particular, they can override the case-sensitivity behavior implied by
a regex operator, or the flags parameter to a regex
function.
The available option letters are
shown in .
Note that these same option letters are used in the flags
parameters of regex functions.
ARE Embedded-option Letters

b    rest of RE is a BRE
c    case-sensitive matching (overrides operator type)
e    rest of RE is an ERE
i    case-insensitive matching (see
     ) (overrides operator type)
m    historical synonym for n
n    newline-sensitive matching (see
     )
p    partial newline-sensitive matching (see
     )
q    rest of RE is a literal (quoted) string, all ordinary characters
s    non-newline-sensitive matching (default)
t    tight syntax (default; see below)
w    inverse partial newline-sensitive (weird) matching
     (see )
x    expanded syntax (see below)
Embedded options take effect at the ) terminating the sequence.
They can appear only at the start of an ARE (after the
***: director if any).
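For example, an embedded option can override the case sensitivity implied by the operator:

```sql
SELECT 'ABC' ~ 'abc';        -- false: ~ is case sensitive
SELECT 'ABC' ~ '(?i)abc';    -- true: i forces case-insensitive matching
SELECT 'ABC' ~* '(?c)abc';   -- false: c overrides the ~* operator
```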
In addition to the usual (tight) RE syntax, in which all
characters are significant, there is an expanded syntax,
available by specifying the embedded x option.
In the expanded syntax,
white-space characters in the RE are ignored, as are
all characters between a #
and the following newline (or the end of the RE). This
permits paragraphing and commenting a complex RE.
There are three exceptions to that basic rule:
a white-space character or # preceded by \ is
retained
white space or # within a bracket expression is retained
white space and comments cannot appear within multi-character symbols,
such as (?:
For this purpose, white-space characters are blank, tab, newline, and
any character that belongs to the space character class.
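A small sketch of the expanded syntax, with the pattern spread over several commented lines:

```sql
SELECT 'abc123' ~ '(?x)
    abc        # the literal prefix
    [0-9]+     # followed by one or more digits
';             -- true: the whitespace and comments are ignored
```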
Finally, in an ARE, outside bracket expressions, the sequence
(?#ttt)
(where ttt is any text not containing a ))
is a comment, completely ignored.
Again, this is not allowed between the characters of
multi-character symbols, like (?:.
Such comments are more a historical artifact than a useful facility,
and their use is deprecated; use the expanded syntax instead.
None of these metasyntax extensions is available if
an initial ***= director
has specified that the user's input be treated as a literal string
rather than as an RE.
rather than as an RE.
Regular Expression Matching Rules
In the event that an RE could match more than one substring of a given
string, the RE matches the one starting earliest in the string.
If the RE could match more than one substring starting at that point,
either the longest possible match or the shortest possible match will
be taken, depending on whether the RE is greedy or
non-greedy.
Whether an RE is greedy or not is determined by the following rules:
Most atoms, and all constraints, have no greediness attribute (because
they cannot match variable amounts of text anyway).
Adding parentheses around an RE does not change its greediness.
A quantified atom with a fixed-repetition quantifier
({>m>}>
or
{>m>}?>)
has the same greediness (possibly none) as the atom itself.
A quantified atom with other normal quantifiers (including
{>m>,>n>}>
with m> equal to n>)
is greedy (prefers longest match).
A quantified atom with a non-greedy quantifier (including
{>m>,>n>}?>
with m> equal to n>)
is non-greedy (prefers shortest match).
A branch — that is, an RE that has no top-level
|> operator — has the same greediness as the first
quantified atom in it that has a greediness attribute.
An RE consisting of two or more branches connected by the
|> operator is always greedy.
The above rules associate greediness attributes not only with individual
quantified atoms, but with branches and entire REs that contain quantified
atoms. What that means is that the matching is done in such a way that
the branch, or whole RE, matches the longest or shortest possible
substring as a whole>. Once the length of the entire match
is determined, the part of it that matches any particular subexpression
is determined on the basis of the greediness attribute of that
subexpression, with subexpressions starting earlier in the RE taking
priority over ones starting later.
An example of what this means:
SELECT SUBSTRING('XY1234Z', 'Y*([0-9]{1,3})');
Result: 123
SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})');
Result: 1
In the first case, the RE as a whole is greedy because Y*>
is greedy. It can match beginning at the Y>, and it matches
the longest possible string starting there, i.e., Y123>.
The output is the parenthesized part of that, or 123>.
In the second case, the RE as a whole is non-greedy because Y*?>
is non-greedy. It can match beginning at the Y>, and it matches
the shortest possible string starting there, i.e., Y1>.
The subexpression [0-9]{1,3}> is greedy but it cannot change
the decision as to the overall match length; so it is forced to match
just 1>.
In short, when an RE contains both greedy and non-greedy subexpressions,
the total match length is either as long as possible or as short as
possible, according to the attribute assigned to the whole RE. The
attributes assigned to the subexpressions only affect how much of that
match they are allowed to eat> relative to each other.
The quantifiers {1,1}> and {1,1}?>
can be used to force greediness or non-greediness, respectively,
on a subexpression or a whole RE.
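Reusing the example above, wrapping the greedy RE in a {1,1}?> quantifier forces the whole RE to prefer the shortest overall match, reproducing the behavior of Y*?>:

```sql
-- {1,1}? makes the whole RE non-greedy: the overall match is the
-- shortest possible (Y1), so the subexpression captures just 1.
SELECT SUBSTRING('XY1234Z', '(?:Y*([0-9]{1,3})){1,1}?');
-- Result: 1
```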
Match lengths are measured in characters, not collating elements.
An empty string is considered longer than no match at all.
For example:
bb*>
matches the three middle characters of abbbc>;
(week|wee)(night|knights)>
matches all ten characters of weeknights>;
when (.*).*>
is matched against abc> the parenthesized subexpression
matches all three characters; and when
(a*)*> is matched against bc>
both the whole RE and the parenthesized
subexpression match an empty string.
If case-independent matching is specified,
the effect is much as if all case distinctions had vanished from the
alphabet.
When an alphabetic that exists in multiple cases appears as an
ordinary character outside a bracket expression, it is effectively
transformed into a bracket expression containing both cases,
e.g., x> becomes [xX]>.
When it appears inside a bracket expression, all case counterparts
of it are added to the bracket expression, e.g.,
[x]> becomes [xX]>
and [^x]> becomes [^xX]>.
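In PostgreSQL>, case-independent matching is typically requested with the ~*> operator (or the i> embedded option); for example:

```sql
-- ~* matches case-insensitively, as if each letter x were written [xX]
SELECT 'QUICK brown Fox' ~* 'quick BROWN fox';  -- true
SELECT 'QUICK brown Fox' ~  'quick BROWN fox';  -- false (case-sensitive)
```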
If newline-sensitive matching is specified, .>
and bracket expressions using ^>
will never match the newline character
(so that matches will never cross newlines unless the RE
explicitly arranges it)
and ^> and $>
will match the empty string after and before a newline
respectively, in addition to matching at beginning and end of string
respectively.
But the ARE escapes \A> and \Z>
continue to match beginning or end of string only>.
If partial newline-sensitive matching is specified,
this affects .> and bracket expressions
as with newline-sensitive matching, but not ^>
and $>.
If inverse partial newline-sensitive matching is specified,
this affects ^> and $>
as with newline-sensitive matching, but not .>
and bracket expressions.
This isn't very useful but is provided for symmetry.
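These matching modes are selected with embedded options or function flags; for example, the n> flag to regexp_matches> requests newline-sensitive matching:

```sql
-- Without the n flag, ^ matches only at the very start of the string:
SELECT regexp_matches(E'foo\nbar', '^bar$');       -- no rows
-- With newline-sensitive matching, ^ and $ also match around newlines:
SELECT regexp_matches(E'foo\nbar', '^bar$', 'n');  -- {bar}
```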
Limits and Compatibility
No particular limit is imposed on the length of REs in this
implementation. However,
programs intended to be highly portable should not employ REs longer
than 256 bytes,
as a POSIX-compliant implementation can refuse to accept such REs.
The only feature of AREs that is actually incompatible with
POSIX EREs is that \> does not lose its special
significance inside bracket expressions.
All other ARE features use syntax which is illegal or has
undefined or unspecified effects in POSIX EREs;
the ***> syntax of directors likewise is outside the POSIX
syntax for both BREs and EREs.
Many of the ARE extensions are borrowed from Perl, but some have
been changed to clean them up, and a few Perl extensions are not present.
Incompatibilities of note include \b>, \B>,
the lack of special treatment for a trailing newline,
the addition of complemented bracket expressions to the things
affected by newline-sensitive matching,
the restrictions on parentheses and back references in lookahead
constraints, and the longest/shortest-match (rather than first-match)
matching semantics.
Two significant incompatibilities exist between AREs and the ERE syntax
recognized by pre-7.4 releases of PostgreSQL>:
In AREs, \> followed by an alphanumeric character is either
an escape or an error, while in previous releases, it was just another
way of writing the alphanumeric.
This should not be much of a problem because there was no reason to
write such a sequence in earlier releases.
In AREs, \> remains a special character within
[]>, so a literal \> within a bracket
expression must be written \\>.
Basic Regular Expressions
BREs differ from EREs in several respects.
In BREs, |>, +>, and ?>
are ordinary characters and there is no equivalent
for their functionality.
The delimiters for bounds are
\{> and \}>,
with {> and }>
by themselves ordinary characters.
The parentheses for nested subexpressions are
\(> and \)>,
with (> and )> by themselves ordinary characters.
^> is an ordinary character except at the beginning of the
RE or the beginning of a parenthesized subexpression,
$> is an ordinary character except at the end of the
RE or the end of a parenthesized subexpression,
and *> is an ordinary character if it appears at the beginning
of the RE or the beginning of a parenthesized subexpression
(after a possible leading ^>).
Finally, single-digit back references are available, and
\<> and \>>
are synonyms for
[[:<:]]> and [[:>:]]>
respectively; no other escapes are available in BREs.
Data Type Formatting Functionsformatting
The PostgreSQL formatting functions
provide a powerful set of tools for converting various data types
(date/time, integer, floating point, numeric) to formatted strings
and for converting from formatted strings to specific data types.
lists them.
These functions all follow a common calling convention: the first
argument is the value to be formatted and the second argument is a
template that defines the output or input format.
A single-argument to_timestamp function is also
available; it accepts a
double precision argument and converts from Unix epoch
(seconds since 1970-01-01 00:00:00+00) to
timestamp with time zone.
(Integer Unix epochs are implicitly cast to
double precision.)
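For example, using the epoch value that appears in the extract> examples later in this section (the displayed result depends on the session TimeZone> setting; UTC> is assumed here):

```sql
-- Convert a Unix epoch value directly to timestamp with time zone
SELECT to_timestamp(982384720.12);
-- Result (with TimeZone = 'UTC'): 2001-02-17 04:38:40.12+00
```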
Formatting FunctionsFunctionReturn TypeDescriptionExampleto_charto_char(timestamp, text)textconvert time stamp to stringto_char(current_timestamp, 'HH12:MI:SS')to_char(interval, text)textconvert interval to stringto_char(interval '15h 2m 12s', 'HH24:MI:SS')to_char(int, text)textconvert integer to stringto_char(125, '999')to_char(double precision,
text)textconvert real/double precision to stringto_char(125.8::real, '999D9')to_char(numeric, text)textconvert numeric to stringto_char(-125.8, '999D99S')to_dateto_date(text, text)dateconvert string to dateto_date('05 Dec 2000', 'DD Mon YYYY')to_numberto_number(text, text)numericconvert string to numericto_number('12,454.8-', '99G999D9S')to_timestampto_timestamp(text, text)timestamp with time zoneconvert string to time stampto_timestamp('05 Dec 2000', 'DD Mon YYYY')to_timestamp(double precision)timestamp with time zoneconvert Unix epoch to time stampto_timestamp(1284352323)
In a to_char> output template string, there are certain
patterns that are recognized and replaced with appropriately-formatted
data based on the given value. Any text that is not a template pattern is
simply copied verbatim. Similarly, in an input template string (for the
other functions), template patterns identify the values to be supplied by
the input data string.
shows the
template patterns available for formatting date and time values.
Template Patterns for Date/Time FormattingPatternDescriptionHHhour of day (01-12)HH12hour of day (01-12)HH24hour of day (00-23)MIminute (00-59)SSsecond (00-59)MSmillisecond (000-999)USmicrosecond (000000-999999)SSSSseconds past midnight (0-86399)AM, am,
PM or pmmeridiem indicator (without periods)A.M., a.m.,
P.M. or p.m.meridiem indicator (with periods)Y,YYYyear (4 and more digits) with commaYYYYyear (4 and more digits)YYYlast 3 digits of yearYYlast 2 digits of yearYlast digit of yearIYYYISO year (4 and more digits)IYYlast 3 digits of ISO yearIYlast 2 digits of ISO yearIlast digit of ISO yearBC, bc,
AD or adera indicator (without periods)B.C., b.c.,
A.D. or a.d.era indicator (with periods)MONTHfull upper case month name (blank-padded to 9 chars)Monthfull capitalized month name (blank-padded to 9 chars)monthfull lower case month name (blank-padded to 9 chars)MONabbreviated upper case month name (3 chars in English, localized lengths vary)Monabbreviated capitalized month name (3 chars in English, localized lengths vary)monabbreviated lower case month name (3 chars in English, localized lengths vary)MMmonth number (01-12)DAYfull upper case day name (blank-padded to 9 chars)Dayfull capitalized day name (blank-padded to 9 chars)dayfull lower case day name (blank-padded to 9 chars)DYabbreviated upper case day name (3 chars in English, localized lengths vary)Dyabbreviated capitalized day name (3 chars in English, localized lengths vary)dyabbreviated lower case day name (3 chars in English, localized lengths vary)DDDday of year (001-366)IDDDISO day of year (001-371; day 1 of the year is Monday of the first ISO week.)DDday of month (01-31)Dday of the week, Sunday(1>) to Saturday(7>)IDISO day of the week, Monday(1>) to Sunday(7>)Wweek of month (1-5) (The first week starts on the first day of the month.)WWweek number of year (1-53) (The first week starts on the first day of the year.)IWISO week number of year (01 - 53; the first Thursday of the new year is in week 1.)CCcentury (2 digits) (The twenty-first century starts on 2001-01-01.)JJulian Day (integer days since November 24, 4714 BC at midnight UTC)Qquarter (ignored by to_date> and to_timestamp>)RMmonth in upper case Roman numerals (I-XII; I=January)rmmonth in lower case Roman numerals (i-xii; i=January)TZupper case time-zone nametzlower case time-zone nameOFtime-zone offset
Modifiers can be applied to any template pattern to alter its
behavior. For example, FMMonth
is the Month pattern with the
FM modifier.
shows the
modifier patterns for date/time formatting.
Template Pattern Modifiers for Date/Time FormattingModifierDescriptionExampleFM prefixfill mode (suppress padding blanks and trailing zeroes)FMMonthTH suffixupper case ordinal number suffixDDTH, e.g., 12TH>th suffixlower case ordinal number suffixDDth, e.g., 12th>FX prefixfixed format global option (see usage notes)FX Month DD DayTM prefixtranslation mode (print localized day and month names based on
)TMMonthSP suffixspell mode (not implemented)DDSP
Usage notes for date/time formatting:
FM suppresses leading zeroes and trailing blanks
that would otherwise be added to make the output of a pattern be
fixed-width. In PostgreSQL,
FM modifies only the next specification, while in
Oracle FM affects all subsequent
specifications, and repeated FM modifiers
toggle fill mode on and off.
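For example, FM> strips the leading zero that the DD> pattern would otherwise emit:

```sql
SELECT to_char(date '2001-02-06', 'DD');    -- '06' (fixed-width)
SELECT to_char(date '2001-02-06', 'FMDD');  -- '6'  (fill mode)
```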
TM does not include trailing blanks.
to_timestamp and to_date
skip multiple blank spaces in the input string unless the
FX option is used. For example,
to_timestamp('2000 JUN', 'YYYY MON') works, but
to_timestamp('2000 JUN', 'FXYYYY MON') returns an error
because to_timestamp expects one space only.
FX must be specified as the first item in
the template.
to_timestamp and to_date
exist to handle input formats that cannot be converted by
simple casting. These functions interpret input liberally,
with minimal error checking. While they produce valid output,
the conversion can yield unexpected results. For example,
input to these functions is not restricted by normal ranges,
thus to_date('20096040','YYYYMMDD') returns
2014-01-17 rather than causing an error.
Casting does not have this behavior.
Ordinary text is allowed in to_char
templates and will be output literally. You can put a substring
in double quotes to force it to be interpreted as literal text
even if it contains pattern key words. For example, in
'"Hello Year "YYYY', the YYYY
will be replaced by the year data, but the single Y in Year
will not be. In to_date>, to_number>,
and to_timestamp>, double-quoted strings skip the number of
input characters contained in the string, e.g. "XX">
skips two input characters.
If you want to have a double quote in the output you must
precede it with a backslash, for example '\"YYYY
Month\"'.
If the year format specification is less than four digits, e.g.
YYY>, and the supplied year is less than four digits,
the year will be adjusted to be nearest to the year 2020, e.g.
95> becomes 1995.
The YYYY conversion from string to timestamp or
date has a restriction when processing years with more than 4 digits. You must
use some non-digit character or template after YYYY,
otherwise the year is always interpreted as 4 digits. For example
(with the year 20000):
to_date('200001131', 'YYYYMMDD') will be
interpreted as a 4-digit year; instead use a non-digit
separator after the year, like
to_date('20000-1131', 'YYYY-MMDD') or
to_date('20000Nov31', 'YYYYMonDD').
In conversions from string to timestamp or
date, the CC (century) field is ignored
if there is a YYY, YYYY or
Y,YYY field. If CC is used with
YY or Y then the year is computed
as the year in the specified century. If the century is
specified but the year is not, the first year of the century
is assumed.
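A sketch of these century rules (assuming the usual convention that century 21 spans the years 2001-2100, and that unspecified month and day fields default to January 1):

```sql
-- CC combined with YY: year 12 within century 21
SELECT to_date('21 12', 'CC YY');  -- 2012-01-01
-- CC alone: the first year of the century is assumed
SELECT to_date('21', 'CC');        -- 2001-01-01
```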
An ISO week date (as distinct from a Gregorian date) can be
specified to to_timestamp and
to_date in one of two ways:
Year, week, and weekday: for example to_date('2006-42-4',
'IYYY-IW-ID') returns the date
2006-10-19. If you omit the weekday it
is assumed to be 1 (Monday).
Year and day of year: for example to_date('2006-291',
'IYYY-IDDD') also returns 2006-10-19.
Attempting to construct a date using a mixture of ISO week and
Gregorian date fields is nonsensical, and will cause an error. In the
context of an ISO year, the concept of a month> or day
of month> has no meaning. In the context of a Gregorian year, the
ISO week has no meaning. Users should avoid mixing Gregorian and
ISO date specifications.
In a conversion from string to timestamp, millisecond
(MS) or microsecond (US)
values are used as the
seconds digits after the decimal point. For example
to_timestamp('12:3', 'SS:MS') is not 3 milliseconds,
but 300, because the conversion counts it as 12 + 0.3 seconds.
This means for the format SS:MS, the input values
12:3, 12:30, and 12:300 specify the
same number of milliseconds. To get three milliseconds, one must use
12:003, which the conversion counts as
12 + 0.003 = 12.003 seconds.
Here is a more
complex example:
to_timestamp('15:12:02.020.001230', 'HH:MI:SS.MS.US')
is 15 hours, 12 minutes, and 2 seconds + 20 milliseconds +
1230 microseconds = 2.021230 seconds.
to_char(..., 'ID')'s day of the week numbering
matches the extract(isodow from ...) function, but
to_char(..., 'D')'s does not match
extract(dow from ...)'s day numbering.
to_char(interval) formats HH> and
HH12> as shown on a 12-hour clock, i.e. zero hours
and 36 hours output as 12>, while HH24>
outputs the full hour value, which can exceed 23 for intervals.
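For example:

```sql
SELECT to_char(interval '36 hours', 'HH12');  -- '12' (12-hour clock)
SELECT to_char(interval '36 hours', 'HH24');  -- '36' (full hour value)
```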
shows the
template patterns available for formatting numeric values.
Template Patterns for Numeric FormattingPatternDescription9value with the specified number of digits0value with leading zeros. (period)decimal point, (comma)group (thousand) separatorPRnegative value in angle bracketsSsign anchored to number (uses locale)Lcurrency symbol (uses locale)Ddecimal point (uses locale)Ggroup separator (uses locale)MIminus sign in specified position (if number < 0)PLplus sign in specified position (if number > 0)SGplus/minus sign in specified positionRNRoman numeral (input between 1 and 3999)TH or thordinal number suffixVshift specified number of digits (see notes)EEEEexponent for scientific notation
Usage notes for numeric formatting:
A sign formatted using SG, PL, or
MI is not anchored to
the number; for example,
to_char(-12, 'MI9999') produces '- 12'
but to_char(-12, 'S9999') produces ' -12'.
The Oracle implementation does not allow the use of
MI before 9, but rather
requires that 9 precede
MI.
9 results in a value with the same number of
digits as there are 9s. If a digit is
not available it outputs a space.
TH does not convert values less than zero
and does not convert fractional numbers.
PL, SG, and
TH are PostgreSQL
extensions.
V effectively
multiplies the input values by
10^n, where
n is the number of digits following
V.
to_char does not support the use of
V combined with a decimal point
(e.g., 99.9V99 is not allowed).
EEEE (scientific notation) cannot be used in
combination with any of the other formatting patterns or
modifiers other than digit and decimal point patterns, and must be at the end of the format string
(e.g., 9.99EEEE is a valid pattern).
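For example, V> shifts the digits and EEEE> produces scientific notation (the leading space is the position reserved for a sign):

```sql
SELECT to_char(12.4, '99V999');        -- ' 12400' (12.4 shifted by 10^3)
SELECT to_char(0.0004859, '9.99EEEE'); -- ' 4.86e-04'
```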
Certain modifiers can be applied to any template pattern to alter its
behavior. For example, FM9999
is the 9999 pattern with the
FM modifier.
shows the
modifier patterns for numeric formatting.
Template Pattern Modifiers for Numeric FormattingModifierDescriptionExampleFM prefixfill mode (suppress padding blanks and trailing zeroes)FM9999TH suffixupper case ordinal number suffix999THth suffixlower case ordinal number suffix999th
shows some
examples of the use of the to_char function.
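A few representative to_char> examples (patterns such as D> and G> are locale-dependent, so only locale-independent patterns are shown here):

```sql
SELECT to_char(485, '999');     -- ' 485' (leading space reserved for sign)
SELECT to_char(-485, '999');    -- '-485'
SELECT to_char(1485, '9,999');  -- ' 1,485'
```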
Date/Time Functions and Operators shows the available
functions for date/time value processing, with details appearing in
the following subsections. illustrates the behaviors of
the basic arithmetic operators (+,
*, etc.). For formatting functions, refer to
. You should be familiar with
the background information on date/time data types from .
All the functions and operators described below that take time or timestamp
inputs actually come in two variants: one that takes time with time zone or timestamp
with time zone, and one that takes time without time zone or timestamp without time zone.
For brevity, these variants are not shown separately. Also, the
+> and *> operators come in commutative pairs (for
example both date + integer and integer + date); we show only one of each
such pair.
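For example, the commutative pair date + integer and integer + date, which adds a number of days to a date:

```sql
SELECT date '2001-09-28' + 7;  -- 2001-10-05
SELECT 7 + date '2001-09-28';  -- 2001-10-05 (same result)
```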
Date/Time FunctionsFunctionReturn TypeDescriptionExampleResultageage(timestamp, timestamp)intervalSubtract arguments, producing a symbolic> result that
uses years and months, rather than just daysage(timestamp '2001-04-10', timestamp '1957-06-13')43 years 9 mons 27 daysage(timestamp)intervalSubtract from current_date (at midnight)age(timestamp '1957-06-13')43 years 8 mons 3 daysclock_timestampclock_timestamp()timestamp with time zoneCurrent date and time (changes during statement execution);
see current_datecurrent_datedateCurrent date;
see current_timecurrent_timetime with time zoneCurrent time of day;
see current_timestampcurrent_timestamptimestamp with time zoneCurrent date and time (start of current transaction);
see date_partdate_part(text, timestamp)double precisionGet subfield (equivalent to extract);
see date_part('hour', timestamp '2001-02-16 20:38:40')20date_part(text, interval)double precisionGet subfield (equivalent to
extract); see date_part('month', interval '2 years 3 months')3date_truncdate_trunc(text, timestamp)timestampTruncate to specified precision; see also date_trunc('hour', timestamp '2001-02-16 20:38:40')2001-02-16 20:00:00date_trunc(text, interval)intervalTruncate to specified precision; see also date_trunc('hour', interval '2 days 3 hours 40 minutes')2 days 03:00:00extractextract(field from
timestamp)double precisionGet subfield; see extract(hour from timestamp '2001-02-16 20:38:40')20extract(field from
interval)double precisionGet subfield; see extract(month from interval '2 years 3 months')3isfiniteisfinite(date)booleanTest for finite date (not +/-infinity)isfinite(date '2001-02-16')trueisfinite(timestamp)booleanTest for finite time stamp (not +/-infinity)isfinite(timestamp '2001-02-16 21:28:30')trueisfinite(interval)booleanTest for finite intervalisfinite(interval '4 hours')truejustify_daysjustify_days(interval)intervalAdjust interval so 30-day time periods are represented as monthsjustify_days(interval '35 days')1 mon 5 daysjustify_hoursjustify_hours(interval)intervalAdjust interval so 24-hour time periods are represented as daysjustify_hours(interval '27 hours')1 day 03:00:00justify_intervaljustify_interval(interval)intervalAdjust interval using justify_days> and justify_hours>, with additional sign adjustmentsjustify_interval(interval '1 mon -1 hour')29 days 23:00:00localtimelocaltimetimeCurrent time of day;
see localtimestamplocaltimestamptimestampCurrent date and time (start of current transaction);
see make_date
make_date(yearint,
monthint,
dayint)
date
Create date from year, month and day fields
make_date(2013, 7, 15)2013-07-15make_interval
make_interval(yearsint DEFAULT 0,
monthsint DEFAULT 0,
weeksint DEFAULT 0,
daysint DEFAULT 0,
hoursint DEFAULT 0,
minsint DEFAULT 0,
secsdouble precision DEFAULT 0.0)
interval
Create interval from years, months, weeks, days, hours, minutes and
seconds fields
make_interval(days := 10)10 daysmake_time
make_time(hourint,
minint,
secdouble precision)
time
Create time from hour, minute and seconds fields
make_time(8, 15, 23.5)08:15:23.5make_timestamp
make_timestamp(yearint,
monthint,
dayint,
hourint,
minint,
secdouble precision)
timestamp
Create timestamp from year, month, day, hour, minute and seconds fields
make_timestamp(2013, 7, 15, 8, 15, 23.5)2013-07-15 08:15:23.5make_timestamptz
make_timestamptz(yearint,
monthint,
dayint,
hourint,
minint,
secdouble precision,
timezonetext)
timestamp with time zone
Create timestamp with time zone from year, month, day, hour, minute
and seconds fields. If timezone is not specified,
the current time zone is used.
make_timestamptz(2013, 7, 15, 8, 15, 23.5)2013-07-15 08:15:23.5+01nownow()timestamp with time zoneCurrent date and time (start of current transaction);
see statement_timestampstatement_timestamp()timestamp with time zoneCurrent date and time (start of current statement);
see timeofdaytimeofday()textCurrent date and time
(like clock_timestamp>, but as a text> string);
see transaction_timestamptransaction_timestamp()timestamp with time zoneCurrent date and time (start of current transaction);
see
OVERLAPS
In addition to these functions, the SQL OVERLAPS> operator is
supported:
(start1, end1) OVERLAPS (start2, end2)
(start1, length1) OVERLAPS (start2, length2)
This expression yields true when two time periods (defined by their
endpoints) overlap, false when they do not overlap. The endpoints
can be specified as pairs of dates, times, or time stamps; or as
a date, time, or time stamp followed by an interval. When a pair
of values is provided, either the start or the end can be written
first; OVERLAPS> automatically takes the earlier value
of the pair as the start. Each time period is considered to
represent the half-open interval start> <=>
time> <> end>, unless
start> and end> are equal in which case it
represents that single time instant. This means for instance that two
time periods with only an endpoint in common do not overlap.
SELECT (DATE '2001-02-16', DATE '2001-12-21') OVERLAPS
(DATE '2001-10-30', DATE '2002-10-30');
Result: true
SELECT (DATE '2001-02-16', INTERVAL '100 days') OVERLAPS
(DATE '2001-10-30', DATE '2002-10-30');
Result: false
SELECT (DATE '2001-10-29', DATE '2001-10-30') OVERLAPS
(DATE '2001-10-30', DATE '2001-10-31');
Result: false
SELECT (DATE '2001-10-30', DATE '2001-10-30') OVERLAPS
(DATE '2001-10-30', DATE '2001-10-31');
Result: true
When adding an interval value to (or subtracting an
interval value from) a timestamp with time zone
value, the days component advances or decrements the date of the
timestamp with time zone by the indicated number of days.
Across daylight saving time changes (when the session time zone is set to a
time zone that recognizes DST), this means interval '1 day'
does not necessarily equal interval '24 hours'.
For example, with the session time zone set to CST7CDT,
timestamp with time zone '2005-04-02 12:00-07' + interval '1 day'
will produce timestamp with time zone '2005-04-03 12:00-06',
while adding interval '24 hours' to the same initial
timestamp with time zone produces
timestamp with time zone '2005-04-03 13:00-06', as there is
a change in daylight saving time at 2005-04-03 02:00 in time zone
CST7CDT.
Note there can be ambiguity in the months> field returned by
age> because different months have different numbers of
days. PostgreSQL>'s approach uses the month from the
earlier of the two dates when calculating partial months. For example,
age('2004-06-01', '2004-04-30')> uses April to yield
1 mon 1 day>, while using May would yield 1 mon 2
days> because May has 31 days, while April has only 30.
Subtraction of dates and timestamps can also be complex. One conceptually
simple way to perform subtraction is to convert each value to a number
of seconds using EXTRACT(EPOCH FROM ...)>, then subtract the
results; this produces the
number of seconds> between the two values. This will adjust
for the number of days in each month, timezone changes, and daylight
saving time adjustments. Subtraction of date or timestamp
values with the -> operator
returns the number of days (as 24-hour periods) and hours/minutes/seconds
between the values, making the same adjustments. The age>
function returns years, months, days, and hours/minutes/seconds,
performing field-by-field subtraction and then adjusting for negative
field values. The following queries illustrate the differences in these
approaches. The sample results were produced with timezone
= 'US/Eastern'>; there is a daylight saving time change between the
two dates used:
SELECT EXTRACT(EPOCH FROM timestamptz '2013-07-01 12:00:00') -
EXTRACT(EPOCH FROM timestamptz '2013-03-01 12:00:00');
Result: 10537200
SELECT (EXTRACT(EPOCH FROM timestamptz '2013-07-01 12:00:00') -
EXTRACT(EPOCH FROM timestamptz '2013-03-01 12:00:00'))
/ 60 / 60 / 24;
Result: 121.958333333333
SELECT timestamptz '2013-07-01 12:00:00' - timestamptz '2013-03-01 12:00:00';
Result: 121 days 23:00:00
SELECT age(timestamptz '2013-07-01 12:00:00', timestamptz '2013-03-01 12:00:00');
Result: 4 monsEXTRACT, date_partdate_partextract
EXTRACT(field FROM source)
The extract function retrieves subfields
such as year or hour from date/time values.
source must be a value expression of
type timestamp, time, or interval.
(Expressions of type date are
cast to timestamp and can therefore be used as
well.) field is an identifier or
string that selects what field to extract from the source value.
The extract function returns values of type
double precision.
The following are valid field names:
century
The century
SELECT EXTRACT(CENTURY FROM TIMESTAMP '2000-12-16 12:21:13');
Result: 20
SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 21
The first century starts at 0001-01-01 00:00:00 AD, although
they did not know it at the time. This definition applies to all
Gregorian calendar countries. There is no century number 0,
you go from -1 century to 1 century.
If you disagree with this, please write your complaint to:
Pope, Cathedral Saint-Peter of Roma, Vatican.
day
For timestamp values, the day (of the month) field
(1 - 31); for interval values, the number of days
SELECT EXTRACT(DAY FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 16
SELECT EXTRACT(DAY FROM INTERVAL '40 days 1 minute');
Result: 40decade
The year field divided by 10
SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 200dow
The day of the week as Sunday(0>) to
Saturday(6>)
SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 5
Note that extract's day of the week numbering
differs from that of the to_char(...,
'D') function.
doy
The day of the year (1 - 365/366)
SELECT EXTRACT(DOY FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 47epoch
For timestamp with time zone values, the
number of seconds since 1970-01-01 00:00:00 UTC (can be negative);
for date and timestamp values, the
number of seconds since 1970-01-01 00:00:00 local time;
for interval values, the total number
of seconds in the interval
SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40.12-08');
Result: 982384720.12
SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours');
Result: 442800
Here is how you can convert an epoch value back to a time
stamp:
SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720.12 * INTERVAL '1 second';
(The to_timestamp> function encapsulates the above
conversion.)
hour
The hour field (0 - 23)
SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 20isodow
The day of the week as Monday(1>) to
Sunday(7>)
SELECT EXTRACT(ISODOW FROM TIMESTAMP '2001-02-18 20:38:40');
Result: 7
This is identical to dow> except for Sunday. This
matches the ISO> 8601 day of the week numbering.
isoyear
The ISO 8601 year that the date falls in (not applicable to intervals)
SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-01');
Result: 2005
SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-02');
Result: 2006
Each ISO year begins with the Monday of the week containing the 4th of January, so in early January or late December the ISO year may be different from the Gregorian year. See the week field for more information.
This field is not available in PostgreSQL releases prior to 8.3.
microseconds
The seconds field, including fractional parts, multiplied by
1,000,000; note that this includes full seconds
SELECT EXTRACT(MICROSECONDS FROM TIME '17:12:28.5');
Result: 28500000millennium
The millennium
SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 3
Years in the 1900s are in the second millennium.
The third millennium started January 1, 2001.
milliseconds
The seconds field, including fractional parts, multiplied by
1000. Note that this includes full seconds.
SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5');
Result: 28500minute
The minutes field (0 - 59)
SELECT EXTRACT(MINUTE FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 38month
For timestamp values, the number of the month
within the year (1 - 12); for interval values,
the number of months, modulo 12 (0 - 11)
SELECT EXTRACT(MONTH FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 2
SELECT EXTRACT(MONTH FROM INTERVAL '2 years 3 months');
Result: 3
SELECT EXTRACT(MONTH FROM INTERVAL '2 years 13 months');
Result: 1quarter
The quarter of the year (1 - 4) that the date is in
SELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 1second
The seconds field, including fractional parts (0 - 59, or 60 if
leap seconds are implemented by the operating system)
SELECT EXTRACT(SECOND FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 40
SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');
Result: 28.5timezone
The time zone offset from UTC, measured in seconds. Positive values
correspond to time zones east of UTC, negative values to
zones west of UTC. (Technically,
PostgreSQL uses UT1> because
leap seconds are not handled.)
timezone_hour
The hour component of the time zone offset
timezone_minute
The minute component of the time zone offset
week
The number of the week of the year that the day is in. By definition
(ISO 8601), weeks start on Mondays and the first
week of a year contains January 4 of that year. In other words, the
first Thursday of a year is in week 1 of that year.
In the ISO definition, it is possible for early-January dates to be
part of the 52nd or 53rd week of the previous year, and for
late-December dates to be part of the first week of the next year.
For example, 2005-01-01 is part of the 53rd week of year
2004, and 2006-01-01 is part of the 52nd week of year
2005, while 2012-12-31 is part of the first week of 2013.
It is recommended to use the isoyear field together with
week to get consistent results.
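Python's datetime.date.isocalendar() implements the same ISO 8601 rule, so the boundary cases above can be double-checked outside the database:

```python
from datetime import date

# ISO 8601: weeks start on Monday; week 1 contains January 4.
print(date(2005, 1, 1).isocalendar()[:2])    # (2004, 53)
print(date(2006, 1, 1).isocalendar()[:2])    # (2005, 52)
print(date(2012, 12, 31).isocalendar()[:2])  # (2013, 1)
```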
SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 7
year
The year field. Keep in mind there is no 0 AD, so subtracting
BC years from AD years should be done with care.
SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40');
Result: 2001
The extract function is primarily intended
for computational processing. For formatting date/time values for
display, see .
The date_part function is modeled on the traditional
Ingres equivalent to the
SQL-standard function extract:
date_part('field', source)
Note that here the field parameter needs to
be a string value, not a name. The valid field names for
date_part are the same as for
extract.
SELECT date_part('day', TIMESTAMP '2001-02-16 20:38:40');
Result: 16
SELECT date_part('hour', INTERVAL '4 hours 3 minutes');
Result: 4
date_trunc
The function date_trunc is conceptually
similar to the trunc function for numbers.
date_trunc('field', source)
source is a value expression of type
timestamp or interval.
(Values of type date and
time are cast automatically to timestamp or
interval, respectively.)
field selects to which precision to
truncate the input value. The return value is of type
timestamp or interval
with all fields that are less significant than the
selected one set to zero (or one, for day and month).
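For timestamp inputs the effect is the same as resetting the smaller fields; a rough Python sketch of truncation to the hour and to the year (illustrative, not PostgreSQL's implementation):

```python
from datetime import datetime

ts = datetime(2001, 2, 16, 20, 38, 40)

# date_trunc('hour', ...): zero everything below the hour
print(ts.replace(minute=0, second=0, microsecond=0))
# date_trunc('year', ...): month and day reset to 1, the rest to zero
print(ts.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0))
```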
Valid values for field are:
microseconds, milliseconds, second, minute, hour, day, week, month, quarter, year, decade, century, millennium
Examples:
SELECT date_trunc('hour', TIMESTAMP '2001-02-16 20:38:40');
Result: 2001-02-16 20:00:00
SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40');
Result: 2001-01-01 00:00:00
AT TIME ZONE
The AT TIME ZONE construct allows conversions
of time stamps to different time zones. shows its
variants.
AT TIME ZONE Variants (Expression | Return Type | Description):
timestamp without time zone AT TIME ZONE zone | timestamp with time zone | Treat given time stamp without time zone as located in the specified time zone
timestamp with time zone AT TIME ZONE zone | timestamp without time zone | Convert given time stamp with time zone to the new time zone, with no time zone designation
time with time zone AT TIME ZONE zone | time with time zone | Convert given time with time zone to the new time zone
In these expressions, the desired time zone zone can be
specified either as a text string (e.g., 'PST')
or as an interval (e.g., INTERVAL '-08:00').
In the text case, a time zone name can be specified in any of the ways
described in .
Examples (assuming the local time zone is PST8PDT):
SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST';
Result: 2001-02-16 19:38:40-08
SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST';
Result: 2001-02-16 18:38:40
The first example takes a time stamp without time zone and interprets it as MST time
(UTC-7), which is then converted to PST (UTC-8) for display. The second example takes
a time stamp specified in EST (UTC-5) and converts it to local time in MST (UTC-7).
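The second conversion can be replayed with Python's fixed-offset zones (illustrative only; PostgreSQL resolves named zones such as MST itself):

```python
from datetime import datetime, timezone, timedelta

est = timezone(timedelta(hours=-5))  # UTC-5
mst = timezone(timedelta(hours=-7))  # UTC-7

ts = datetime(2001, 2, 16, 20, 38, 40, tzinfo=est)
# AT TIME ZONE 'MST' on a timestamptz: same instant, wall clock in UTC-7
print(ts.astimezone(mst).replace(tzinfo=None))  # 2001-02-16 18:38:40
```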
The function timezone(zone,
timestamp) is equivalent to the SQL-conforming construct
timestamp AT TIME ZONE
zone.
Current Date/Time
PostgreSQL provides a number of functions
that return values related to the current date and time. These
SQL-standard functions all return values based on the start time of
the current transaction:
CURRENT_DATE
CURRENT_TIME
CURRENT_TIMESTAMP
CURRENT_TIME(precision)
CURRENT_TIMESTAMP(precision)
LOCALTIME
LOCALTIMESTAMP
LOCALTIME(precision)
LOCALTIMESTAMP(precision)
CURRENT_TIME and
CURRENT_TIMESTAMP deliver values with time zone;
LOCALTIME and
LOCALTIMESTAMP deliver values without time zone.
CURRENT_TIME,
CURRENT_TIMESTAMP,
LOCALTIME, and
LOCALTIMESTAMP
can optionally take
a precision parameter, which causes the result to be rounded
to that many fractional digits in the seconds field. Without a precision parameter,
the result is given to the full available precision.
Some examples:
SELECT CURRENT_TIME;
Result: 14:39:53.662522-05
SELECT CURRENT_DATE;
Result: 2001-12-23
SELECT CURRENT_TIMESTAMP;
Result: 2001-12-23 14:39:53.662522-05
SELECT CURRENT_TIMESTAMP(2);
Result: 2001-12-23 14:39:53.66-05
SELECT LOCALTIMESTAMP;
Result: 2001-12-23 14:39:53.662522
Since these functions return
the start time of the current transaction, their values do not
change during the transaction. This is considered a feature:
the intent is to allow a single transaction to have a consistent
notion of the current time, so that multiple
modifications within the same transaction bear the same
time stamp.
Other database systems might advance these values more
frequently.
PostgreSQL also provides functions that
return the start time of the current statement, as well as the actual
current time at the instant the function is called. The complete list
of non-SQL-standard time functions is:
transaction_timestamp()
statement_timestamp()
clock_timestamp()
timeofday()
now()
transaction_timestamp() is equivalent to
CURRENT_TIMESTAMP, but is named to clearly reflect
what it returns.
statement_timestamp() returns the start time of the current
statement (more specifically, the time of receipt of the latest command
message from the client).
statement_timestamp() and transaction_timestamp()
return the same value during the first command of a transaction, but might
differ during subsequent commands.
clock_timestamp() returns the actual current time, and
therefore its value changes even within a single SQL command.
timeofday() is a historical
PostgreSQL function. Like
clock_timestamp(), it returns the actual current time,
but as a formatted text string rather than a timestamp
with time zone value.
now() is a traditional PostgreSQL
equivalent to transaction_timestamp().
All the date/time data types also accept the special literal value
now to specify the current date and time (again,
interpreted as the transaction start time). Thus,
the following three all return the same result:
SELECT CURRENT_TIMESTAMP;
SELECT now();
SELECT TIMESTAMP 'now'; -- incorrect for use with DEFAULT
You do not want to use the third form when specifying a DEFAULT
clause while creating a table. The system will convert now
to a timestamp as soon as the constant is parsed, so that when
the default value is needed,
the time of the table creation would be used! The first two
forms will not be evaluated until the default value is used,
because they are function calls. Thus they will give the desired
behavior of defaulting to the time of row insertion.
Delaying Execution
The following functions are available to delay execution of the server
process:
pg_sleep(seconds)
pg_sleep_for(interval)
pg_sleep_until(timestamp with time zone)
pg_sleep makes the current session's process
sleep until seconds seconds have
elapsed. seconds is a value of type
double precision, so fractional-second delays can be specified.
pg_sleep_for is a convenience function for larger
sleep times specified as an interval.
pg_sleep_until is a convenience function for when
a specific wake-up time is desired.
For example:
SELECT pg_sleep(1.5);
SELECT pg_sleep_for('5 minutes');
SELECT pg_sleep_until('tomorrow 03:00');
The effective resolution of the sleep interval is platform-specific;
0.01 seconds is a common value. The sleep delay will be at least as long
as specified. It might be longer depending on factors such as server load.
In particular, pg_sleep_until is not guaranteed to
wake up exactly at the specified time, but it will not wake up any earlier.
Make sure that your session does not hold more locks than necessary
when calling pg_sleep or its variants. Otherwise
other sessions might have to wait for your sleeping process, slowing down
the entire system.
Enum Support Functions
For enum types (described in ),
there are several functions that allow cleaner programming without
hard-coding particular values of an enum type.
These are listed in . The examples
assume an enum type created as:
CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple');
Enum Support Functions (Function | Description | Example | Example Result):
enum_first(anyenum) | Returns the first value of the input enum type | enum_first(null::rainbow) | red
enum_last(anyenum) | Returns the last value of the input enum type | enum_last(null::rainbow) | purple
enum_range(anyenum) | Returns all values of the input enum type in an ordered array | enum_range(null::rainbow) | {red,orange,yellow,green,blue,purple}
enum_range(anyenum, anyenum) | Returns the range between the two given enum values, as an ordered array. The values must be from the same enum type. If the first parameter is null, the result will start with the first value of the enum type. If the second parameter is null, the result will end with the last value of the enum type. | enum_range('orange'::rainbow, 'green'::rainbow) | {orange,yellow,green}
enum_range(NULL, 'green'::rainbow) | {red,orange,yellow,green}
enum_range('orange'::rainbow, NULL) | {orange,yellow,green,blue,purple}
Notice that except for the two-argument form of enum_range,
these functions disregard the specific value passed to them; they care
only about its declared data type. Either null or a specific value of
the type can be passed, with the same result. It is more common to
apply these functions to a table column or function argument than to
a hardwired type name as suggested by the examples.
Geometric Functions and Operators
The geometric types point, box,
lseg, line, path,
polygon, and circle have a large set of
native support functions and operators, shown in , , and .
Note that the same as operator, ~=, represents
the usual notion of equality for the point,
box, polygon, and circle types.
Some of these types also have an = operator, but
= compares
for equal areas only. The other scalar comparison operators
(<= and so on) likewise compare areas for these types.
Geometric Operators (Operator | Description | Example):
+ | Translation | box '((0,0),(1,1))' + point '(2.0,0)'
- | Translation | box '((0,0),(1,1))' - point '(2.0,0)'
* | Scaling/rotation | box '((0,0),(1,1))' * point '(2.0,0)'
/ | Scaling/rotation | box '((0,0),(2,2))' / point '(2.0,0)'
# | Point or box of intersection | '((1,-1),(-1,1))' # '((1,1),(-1,-1))'
# | Number of points in path or polygon | # '((1,0),(0,1),(-1,0))'
@-@ | Length or circumference | @-@ path '((0,0),(1,0))'
@@ | Center | @@ circle '((0,0),10)'
## | Closest point to first operand on second operand | point '(0,0)' ## lseg '((2,0),(0,2))'
<-> | Distance between | circle '((0,0),1)' <-> circle '((5,0),1)'
&& | Overlaps? (One point in common makes this true.) | box '((0,0),(1,1))' && box '((0,0),(2,2))'
<< | Is strictly left of? | circle '((0,0),1)' << circle '((5,0),1)'
>> | Is strictly right of? | circle '((5,0),1)' >> circle '((0,0),1)'
&< | Does not extend to the right of? | box '((0,0),(1,1))' &< box '((0,0),(2,2))'
&> | Does not extend to the left of? | box '((0,0),(3,3))' &> box '((0,0),(2,2))'
<<| | Is strictly below? | box '((0,0),(3,3))' <<| box '((3,4),(5,5))'
|>> | Is strictly above? | box '((3,4),(5,5))' |>> box '((0,0),(3,3))'
&<| | Does not extend above? | box '((0,0),(1,1))' &<| box '((0,0),(2,2))'
|&> | Does not extend below? | box '((0,0),(3,3))' |&> box '((0,0),(2,2))'
<^ | Is below (allows touching)? | circle '((0,0),1)' <^ circle '((0,5),1)'
>^ | Is above (allows touching)? | circle '((0,5),1)' >^ circle '((0,0),1)'
?# | Intersects? | lseg '((-1,0),(1,0))' ?# box '((-2,-2),(2,2))'
?- | Is horizontal? | ?- lseg '((-1,0),(1,0))'
?- | Are horizontally aligned? | point '(1,0)' ?- point '(0,0)'
?| | Is vertical? | ?| lseg '((-1,0),(1,0))'
?| | Are vertically aligned? | point '(0,1)' ?| point '(0,0)'
?-| | Is perpendicular? | lseg '((0,0),(0,1))' ?-| lseg '((0,0),(1,0))'
?|| | Are parallel? | lseg '((-1,0),(1,0))' ?|| lseg '((-1,2),(1,2))'
@> | Contains? | circle '((0,0),2)' @> point '(1,1)'
<@ | Contained in or on? | point '(1,1)' <@ circle '((0,0),2)'
~= | Same as? | polygon '((0,0),(1,1))' ~= polygon '((1,1),(0,0))'
Before PostgreSQL 8.2, the containment
operators @> and <@ were respectively
called ~ and @. These names are still
available, but are deprecated and will eventually be removed.
Geometric Functions (Function | Return Type | Description | Example):
area(object) | double precision | area | area(box '((0,0),(1,1))')
center(object) | point | center | center(box '((0,0),(1,2))')
diameter(circle) | double precision | diameter of circle | diameter(circle '((0,0),2.0)')
height(box) | double precision | vertical size of box | height(box '((0,0),(1,1))')
isclosed(path) | boolean | a closed path? | isclosed(path '((0,0),(1,1),(2,0))')
isopen(path) | boolean | an open path? | isopen(path '[(0,0),(1,1),(2,0)]')
length(object) | double precision | length | length(path '((-1,0),(1,0))')
npoints(path) | int | number of points | npoints(path '[(0,0),(1,1),(2,0)]')
npoints(polygon) | int | number of points | npoints(polygon '((1,1),(0,0))')
pclose(path) | path | convert path to closed | pclose(path '[(0,0),(1,1),(2,0)]')
point(lseg, lseg) | point | intersection | point(lseg '((-1,0),(1,0))', lseg '((-2,-2),(2,2))')
popen(path) | path | convert path to open | popen(path '((0,0),(1,1),(2,0))')
radius(circle) | double precision | radius of circle | radius(circle '((0,0),2.0)')
width(box) | double precision | horizontal size of box | width(box '((0,0),(1,1))')
Geometric Type Conversion Functions (Function | Return Type | Description | Example):
box(circle) | box | circle to box | box(circle '((0,0),2.0)')
box(point, point) | box | points to box | box(point '(0,0)', point '(1,1)')
box(polygon) | box | polygon to box | box(polygon '((0,0),(1,1),(2,0))')
circle(box) | circle | box to circle | circle(box '((0,0),(1,1))')
circle(point, double precision) | circle | center and radius to circle | circle(point '(0,0)', 2.0)
circle(polygon) | circle | polygon to circle | circle(polygon '((0,0),(1,1),(2,0))')
line(point, point) | line | points to line | line(point '(-1,0)', point '(1,0)')
lseg(box) | lseg | box diagonal to line segment | lseg(box '((-1,0),(1,0))')
lseg(point, point) | lseg | points to line segment | lseg(point '(-1,0)', point '(1,0)')
path(polygon) | path | polygon to path | path(polygon '((0,0),(1,1),(2,0))')
point(double precision, double precision) | point | construct point | point(23.4, -44.5)
point(box) | point | center of box | point(box '((-1,0),(1,0))')
point(circle) | point | center of circle | point(circle '((0,0),2.0)')
point(lseg) | point | center of line segment | point(lseg '((-1,0),(1,0))')
point(polygon) | point | center of polygon | point(polygon '((0,0),(1,1),(2,0))')
polygon(box) | polygon | box to 4-point polygon | polygon(box '((0,0),(1,1))')
polygon(circle) | polygon | circle to 12-point polygon | polygon(circle '((0,0),2.0)')
polygon(npts, circle) | polygon | circle to npts-point polygon | polygon(12, circle '((0,0),2.0)')
polygon(path) | polygon | path to polygon | polygon(path '((0,0),(1,1),(2,0))')
It is possible to access the two component numbers of a point
as though the point were an array with indexes 0 and 1. For example, if
t.p is a point column then
SELECT p[0] FROM t retrieves the X coordinate and
UPDATE t SET p[1] = ... changes the Y coordinate.
In the same way, a value of type box or lseg can be treated
as an array of two point values.
The area function works for the types
box, circle, and path.
The area function only works on the
path data type if the points in the
path are non-intersecting. For example, the
path '((0,0),(0,1),(2,1),(2,2),(1,2),(1,0),(0,0))'::PATH
will not work; however, the following visually identical
path '((0,0),(0,1),(1,1),(1,2),(2,2),(2,1),(1,1),(1,0),(0,0))'::PATH
will work. If the concept of an intersecting versus
non-intersecting path is confusing, draw both of the
above paths side by side on a piece of graph paper.
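The failure mode can be seen with a signed (shoelace-style) area sum, in which the lobes of a self-intersecting path cancel; a small Python illustration (not PostgreSQL's actual code):

```python
def shoelace_area(pts):
    """Signed-sum area of a closed path given as (x, y) tuples."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

bad = [(0, 0), (0, 1), (2, 1), (2, 2), (1, 2), (1, 0)]              # self-intersecting
good = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 1), (1, 1), (1, 0)]  # same outline
print(shoelace_area(bad))   # 0.0 -- the crossing makes the signed terms cancel
print(shoelace_area(good))  # 2.0 -- the true area of the two unit squares
```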
Network Address Functions and Operators
shows the operators
available for the cidr and inet types.
The operators <<,
<<=, >>, and
>>= test for subnet inclusion. They
consider only the network parts of the two addresses (ignoring any
host part) and determine whether one network is identical to
or a subnet of the other.
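The semantics of << correspond to ordinary subnet containment, the same test Python's ipaddress module performs:

```python
import ipaddress

# inet '192.168.1.5' << inet '192.168.1/24' is true because the
# address falls inside the /24 network.
addr = ipaddress.ip_address('192.168.1.5')
net = ipaddress.ip_network('192.168.1.0/24')
print(addr in net)  # True

# A /24 is also a subnet of a /16 (network-vs-network containment).
print(net.subnet_of(ipaddress.ip_network('192.168.0.0/16')))  # True
```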
cidr and inet Operators (Operator | Description | Example):
< | is less than | inet '192.168.1.5' < inet '192.168.1.6'
<= | is less than or equal | inet '192.168.1.5' <= inet '192.168.1.5'
= | equals | inet '192.168.1.5' = inet '192.168.1.5'
>= | is greater or equal | inet '192.168.1.5' >= inet '192.168.1.5'
> | is greater than | inet '192.168.1.5' > inet '192.168.1.4'
<> | is not equal | inet '192.168.1.5' <> inet '192.168.1.4'
<< | is contained within | inet '192.168.1.5' << inet '192.168.1/24'
<<= | is contained within or equals | inet '192.168.1/24' <<= inet '192.168.1/24'
>> | contains | inet '192.168.1/24' >> inet '192.168.1.5'
>>= | contains or equals | inet '192.168.1/24' >>= inet '192.168.1/24'
~ | bitwise NOT | ~ inet '192.168.1.6'
& | bitwise AND | inet '192.168.1.6' & inet '0.0.0.255'
| | bitwise OR | inet '192.168.1.6' | inet '0.0.0.255'
+ | addition | inet '192.168.1.6' + 25
- | subtraction | inet '192.168.1.43' - 36
- | subtraction | inet '192.168.1.43' - inet '192.168.1.19'
shows the functions
available for use with the cidr and inet
types. The abbrev, host,
and text
functions are primarily intended to offer alternative display
formats.
cidr and inet Functions (Function | Return Type | Description | Example | Result):
abbrev(inet) | text | abbreviated display format as text | abbrev(inet '10.1.0.0/16') | 10.1.0.0/16
abbrev(cidr) | text | abbreviated display format as text | abbrev(cidr '10.1.0.0/16') | 10.1/16
broadcast(inet) | inet | broadcast address for network | broadcast('192.168.1.5/24') | 192.168.1.255/24
family(inet) | int | extract family of address; 4 for IPv4, 6 for IPv6 | family('::1') | 6
host(inet) | text | extract IP address as text | host('192.168.1.5/24') | 192.168.1.5
hostmask(inet) | inet | construct host mask for network | hostmask('192.168.23.20/30') | 0.0.0.3
masklen(inet) | int | extract netmask length | masklen('192.168.1.5/24') | 24
netmask(inet) | inet | construct netmask for network | netmask('192.168.1.5/24') | 255.255.255.0
network(inet) | cidr | extract network part of address | network('192.168.1.5/24') | 192.168.1.0/24
set_masklen(inet, int) | inet | set netmask length for inet value | set_masklen('192.168.1.5/24', 16) | 192.168.1.5/16
set_masklen(cidr, int) | cidr | set netmask length for cidr value | set_masklen('192.168.1.0/24'::cidr, 16) | 192.168.0.0/16
text(inet) | text | extract IP address and netmask length as text | text(inet '192.168.1.5') | 192.168.1.5/32
Any cidr value can be cast to inet implicitly
or explicitly; therefore, the functions shown above as operating on
inet also work on cidr values. (Where there are
separate functions for inet and cidr, it is because
the behavior should be different for the two cases.)
Also, it is permitted to cast an inet value to cidr.
When this is done, any bits to the right of the netmask are silently zeroed
to create a valid cidr value.
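That zeroing is the usual normalization of host bits; Python's ipaddress module does the same when asked to treat an address-with-prefix as a network:

```python
import ipaddress

# Casting inet '192.168.1.5/24' to cidr zeroes the bits to the right of
# the netmask; strict=False tells ipaddress to do the same normalization.
net = ipaddress.ip_network('192.168.1.5/24', strict=False)
print(net)  # 192.168.1.0/24
```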
In addition,
you can cast a text value to inet or cidr
using normal casting syntax: for example,
inet(expression) or
colname::cidr.
shows the functions
available for use with the macaddr type. The function
trunc(macaddr) returns a MAC
address with the last 3 bytes set to zero. This can be used to
associate the remaining prefix with a manufacturer.
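Keeping the first three bytes (the OUI prefix) and zeroing the rest is simple string surgery; a Python sketch of what trunc(macaddr) produces:

```python
def trunc_mac(mac: str) -> str:
    """Zero the last 3 bytes of a colon-separated MAC address."""
    parts = mac.split(':')
    return ':'.join(parts[:3] + ['00'] * 3)

print(trunc_mac('12:34:56:78:90:ab'))  # 12:34:56:00:00:00
```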
macaddr Functions (Function | Return Type | Description | Example | Result):
trunc(macaddr) | macaddr | set last 3 bytes to zero | trunc(macaddr '12:34:56:78:90:ab') | 12:34:56:00:00:00
The macaddr type also supports the standard relational
operators (>, <=, etc.) for
lexicographical ordering, and the bitwise arithmetic operators
(~, & and |)
for NOT, AND and OR.
Text Search Functions and Operators
and
summarize the functions and operators that are provided
for full text searching. See for a detailed
explanation of PostgreSQL's text search
facility.
Text Search Operators (Operator | Description | Example | Result):
@@ | tsvector matches tsquery? | to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat') | t
@@@ | deprecated synonym for @@ | to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat') | t
|| | concatenate tsvectors | 'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector | 'a':1 'b':2,5 'c':3 'd':4
&& | AND tsqueries together | 'fat | rat'::tsquery && 'cat'::tsquery | ( 'fat' | 'rat' ) & 'cat'
|| | OR tsqueries together | 'fat | rat'::tsquery || 'cat'::tsquery | ( 'fat' | 'rat' ) | 'cat'
!! | negate a tsquery | !! 'cat'::tsquery | !'cat'
@> | tsquery contains another? | 'cat'::tsquery @> 'cat & rat'::tsquery | f
<@ | tsquery is contained in? | 'cat'::tsquery <@ 'cat & rat'::tsquery | t
The tsquery containment operators consider only the lexemes
listed in the two queries, ignoring the combining operators.
In addition to the operators shown in the table, the ordinary B-tree
comparison operators (=, <, etc) are defined
for types tsvector and tsquery. These are not very
useful for text searching but allow, for example, unique indexes to be
built on columns of these types.
Text Search Functions (Function | Return Type | Description | Example | Result):
get_current_ts_config() | regconfig | get default text search configuration | get_current_ts_config() | english
length(tsvector) | integer | number of lexemes in tsvector | length('fat:2,4 cat:3 rat:5A'::tsvector) | 3
numnode(tsquery) | integer | number of lexemes plus operators in tsquery | numnode('(fat & rat) | cat'::tsquery) | 5
plainto_tsquery([ config regconfig, ] query text) | tsquery | produce tsquery ignoring punctuation | plainto_tsquery('english', 'The Fat Rats') | 'fat' & 'rat'
querytree(query tsquery) | text | get indexable part of a tsquery | querytree('foo & ! bar'::tsquery) | 'foo'
setweight(tsvector, "char") | tsvector | assign weight to each element of tsvector | setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A') | 'cat':3A 'fat':2A,4A 'rat':5A
strip(tsvector) | tsvector | remove positions and weights from tsvector | strip('fat:2,4 cat:3 rat:5A'::tsvector) | 'cat' 'fat' 'rat'
to_tsquery([ config regconfig, ] query text) | tsquery | normalize words and convert to tsquery | to_tsquery('english', 'The & Fat & Rats') | 'fat' & 'rat'
to_tsvector([ config regconfig, ] document text) | tsvector | reduce document text to tsvector | to_tsvector('english', 'The Fat Rats') | 'fat':2 'rat':3
ts_headline([ config regconfig, ] document text, query tsquery [, options text ]) | text | display a query match | ts_headline('x y z', 'z'::tsquery) | x y <b>z</b>
ts_rank([ weights float4[], ] vector tsvector, query tsquery [, normalization integer ]) | float4 | rank document for query | ts_rank(textsearch, query) | 0.818
ts_rank_cd([ weights float4[], ] vector tsvector, query tsquery [, normalization integer ]) | float4 | rank document for query using cover density | ts_rank_cd('{0.1, 0.2, 0.4, 1.0}', textsearch, query) | 2.01317
ts_rewrite(query tsquery, target tsquery, substitute tsquery) | tsquery | replace target with substitute within query | ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo|bar'::tsquery) | 'b' & ( 'foo' | 'bar' )
ts_rewrite(query tsquery, select text) | tsquery | replace using targets and substitutes from a SELECT command | SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases') | 'b' & ( 'foo' | 'bar' )
tsvector_update_trigger() | trigger | trigger function for automatic tsvector column update | CREATE TRIGGER ... tsvector_update_trigger(tsvcol, 'pg_catalog.swedish', title, body) |
tsvector_update_trigger_column() | trigger | trigger function for automatic tsvector column update | CREATE TRIGGER ... tsvector_update_trigger_column(tsvcol, configcol, title, body) |
All the text search functions that accept an optional regconfig
argument will use the configuration specified by
when that argument is omitted.
The functions in
are listed separately because they are not usually used in everyday text
searching operations. They are helpful for development and debugging
of new text search configurations.
Text Search Debugging Functions (Function | Return Type | Description | Example | Result):
ts_debug([ config regconfig, ] document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[]) | setof record | test a configuration | ts_debug('english', 'The Brightest supernovaes') | (asciiword,"Word, all ASCII",The,{english_stem},english_stem,{}) ...
ts_lexize(dict regdictionary, token text) | text[] | test a dictionary | ts_lexize('english_stem', 'stars') | {star}
ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) | setof record | test a parser | ts_parse('default', 'foo - bar') | (1,foo) ...
ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) | setof record | test a parser | ts_parse(3722, 'foo - bar') | (1,foo) ...
ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) | setof record | get token types defined by parser | ts_token_type('default') | (1,asciiword,"Word, all ASCII") ...
ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) | setof record | get token types defined by parser | ts_token_type(3722) | (1,asciiword,"Word, all ASCII") ...
ts_stat(sqlquery text, [ weights text, ] OUT word text, OUT ndoc integer, OUT nentry integer) | setof record | get statistics of a tsvector column | ts_stat('SELECT vector from apod') | (foo,10,15) ...
XML Functions
The functions and function-like expressions described in this
section operate on values of type xml. Check for information about the xml
type. The function-like expressions xmlparse
and xmlserialize for converting to and from
type xml are not repeated here. Use of most of these
functions requires the installation to have been built
with configure --with-libxml.
Producing XML Content
A set of functions and function-like expressions are available for
producing XML content from SQL data. As such, they are
particularly suitable for formatting query results into XML
documents for processing in client applications.
xmlcomment
xmlcomment(text)
The function xmlcomment creates an XML value
containing an XML comment with the specified text as content.
The text cannot contain -- or end with a
- so that the resulting construct is a valid
XML comment. If the argument is null, the result is null.
Example:
SELECT xmlcomment('hello');

  xmlcomment
--------------
 <!--hello-->

xmlconcat
xmlconcat(xml, ...)
The function xmlconcat concatenates a list
of individual XML values to create a single value containing an
XML content fragment. Null values are omitted; the result is
only null if there are no nonnull arguments.
Example:
SELECT xmlconcat('<abc/>', '<bar>foo</bar>');

      xmlconcat
----------------------
 <abc/><bar>foo</bar>
XML declarations, if present, are combined as follows. If all
argument values have the same XML version declaration, that
version is used in the result, else no version is used. If all
argument values have the standalone declaration value
yes, then that value is used in the result. If
all argument values have a standalone declaration value and at
least one is no, then that is used in the result.
Else the result will have no standalone declaration. If the
result is determined to require a standalone declaration but no
version declaration, a version declaration with version 1.0 will
be used because XML requires an XML declaration to contain a
version declaration. Encoding declarations are ignored and
removed in all cases.
Example:
SELECT xmlconcat('<?xml version="1.1"?><foo/>', '<?xml version="1.1" standalone="no"?><bar/>');

             xmlconcat
-----------------------------------
 <?xml version="1.1"?><foo/><bar/>

xmlelement
xmlelement(name name, xmlattributes(value AS attname, ...), content, ...)
The xmlelement expression produces an XML
element with the given name, attributes, and content.
Examples:
SELECT xmlelement(name foo, xmlattributes('xyz' as bar));

    xmlelement
------------------
 <foo bar="xyz"/>

SELECT xmlelement(name foo, xmlattributes(current_date as bar), 'cont', 'ent');

             xmlelement
-------------------------------------
 <foo bar="2007-01-26">content</foo>
]]>
Element and attribute names that are not valid XML names are
escaped by replacing the offending characters by the sequence
_xHHHH_, where
HHHH is the character's Unicode
codepoint in hexadecimal notation. For example:
SELECT xmlelement(name "foo$bar", xmlattributes('xyz' as "a&b"));

            xmlelement
----------------------------------
 <foo_x0024_bar a_x0026_b="xyz"/>
An explicit attribute name need not be specified if the attribute
value is a column reference, in which case the column's name will
be used as the attribute name by default. In other cases, the
attribute must be given an explicit name. So this example is
valid:
CREATE TABLE test (a xml, b xml);
SELECT xmlelement(name test, xmlattributes(a, b)) FROM test;
But these are not:
SELECT xmlelement(name test, xmlattributes('constant'), a, b) FROM test;
SELECT xmlelement(name test, xmlattributes(func(a, b))) FROM test;
Element content, if specified, will be formatted according to
its data type. If the content is itself of type xml,
complex XML documents can be constructed. For example:
SELECT xmlelement(name foo, xmlattributes('xyz' as bar),
                            xmlelement(name abc),
                            xmlcomment('test'),
                            xmlelement(name xyz));

                  xmlelement
----------------------------------------------
 <foo bar="xyz"><abc/><!--test--><xyz/></foo>
Content of other types will be formatted into valid XML character
data. This means in particular that the characters <, >,
and & will be converted to entities. Binary data (data type
bytea) will be represented in base64 or hex
encoding, depending on the setting of the configuration parameter
. The particular behavior for
individual data types is expected to evolve in order to align the
SQL and PostgreSQL data types with the XML Schema specification,
at which point a more precise description will appear.
xmlforest
xmlforest(content AS name, ...)
The xmlforest expression produces an XML
forest (sequence) of elements using the given names and content.
Examples:
SELECT xmlforest('abc' AS foo, 123 AS bar);

          xmlforest
------------------------------
 <foo>abc</foo><bar>123</bar>
SELECT xmlforest(table_name, column_name)
FROM information_schema.columns
WHERE table_schema = 'pg_catalog';
xmlforest
-------------------------------------------------------------------------------------------
 <table_name>pg_authid</table_name><column_name>rolname</column_name>
 <table_name>pg_authid</table_name><column_name>rolsuper</column_name>
...
As seen in the second example, the element name can be omitted if
the content value is a column reference, in which case the column
name is used by default. Otherwise, a name must be specified.
Element names that are not valid XML names are escaped as shown
for xmlelement above. Similarly, content
data is escaped to make valid XML content, unless it is already
of type xml.
Note that XML forests are not valid XML documents if they consist
of more than one element, so it might be useful to wrap
xmlforest expressions in
xmlelement.
xmlpi
xmlpi(name target, content)
The xmlpi expression creates an XML
processing instruction. The content, if present, must not
contain the character sequence ?>.
Example:
SELECT xmlpi(name php, 'echo "hello world";');

            xmlpi
-----------------------------
 <?php echo "hello world";?>

xmlroot
xmlroot(xml, version text | no value, standalone yes|no|no value)
The xmlroot expression alters the properties
of the root node of an XML value. If a version is specified,
it replaces the value in the root node's version declaration; if a
standalone setting is specified, it replaces the value in the
root node's standalone declaration.
SELECT xmlroot(xmlparse(document '<?xml version="1.1"?><content>abc</content>'),
               version '1.0', standalone yes);

                xmlroot
----------------------------------------
 <?xml version="1.0" standalone="yes"?><content>abc</content>

xmlagg
xmlagg(xml)
The function xmlagg is, unlike the other
functions described here, an aggregate function. It concatenates the
input values to the aggregate function call,
much like xmlconcat does, except that concatenation
occurs across rows rather than across expressions in a single row.
See for additional information
about aggregate functions.
Example:
CREATE TABLE test (y int, x xml);
INSERT INTO test VALUES (1, '<foo>abc</foo>');
INSERT INTO test VALUES (2, '<bar/>');
SELECT xmlagg(x) FROM test;
        xmlagg
----------------------
 <foo>abc</foo><bar/>
To determine the order of the concatenation, an ORDER BY
clause may be added to the aggregate call as described in
. For example:
SELECT xmlagg(x ORDER BY y DESC) FROM test;
        xmlagg
----------------------
 <bar/><foo>abc</foo>
The following non-standard approach used to be recommended
in previous versions, and may still be useful in specific
cases:
SELECT xmlagg(x) FROM (SELECT * FROM test ORDER BY y DESC) tab;
        xmlagg
----------------------
 <bar/><foo>abc</foo>

XML Predicates
The expressions described in this section check properties
of xml values.
IS DOCUMENT
xml IS DOCUMENT
The expression IS DOCUMENT returns true if the
argument XML value is a proper XML document, false if it is not
(that is, it is a content fragment), or null if the argument is
null. See about the difference
between documents and content fragments.
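A brief sketch of both cases:

```sql
-- A value with a single root element is a document:
SELECT xml '<a><b/></a>' IS DOCUMENT;                  -- t

-- A fragment with two root elements is content, not a document:
SELECT xmlconcat(xml '<a/>', xml '<b/>') IS DOCUMENT;  -- f
```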
XMLEXISTS(text PASSING [BY REF] xml [BY REF])
The function xmlexists returns true if the
XPath expression in the first argument returns any nodes, and
false otherwise. (If either argument is null, the result is
null.)
Example:
SELECT xmlexists('//town[text() = ''Toronto'']'
                 PASSING BY REF '<towns><town>Toronto</town><town>Ottawa</town></towns>');

 xmlexists
------------
 t
(1 row)
The BY REF clauses have no effect in
PostgreSQL, but are allowed for SQL conformance and compatibility
with other implementations. Per SQL standard, the
first BY REF is required, the second is
optional. Also note that the SQL standard specifies
the xmlexists construct to take an XQuery
expression as first argument, but PostgreSQL currently only
supports XPath, which is a subset of XQuery.
xml_is_well_formed(text)
xml_is_well_formed_document(text)
xml_is_well_formed_content(text)
These functions check whether a text string is well-formed XML,
returning a Boolean result.
xml_is_well_formed_document checks for a well-formed
document, while xml_is_well_formed_content checks
for well-formed content. xml_is_well_formed does
the former if the xmloption configuration
parameter is set to DOCUMENT, or the latter if it is set to
CONTENT. This means that
xml_is_well_formed is useful for seeing whether
a simple cast to type xml will succeed, whereas the other two
functions are useful for seeing whether the corresponding variants of
XMLPARSE will succeed.
Examples:
SELECT xml_is_well_formed('<>');

 xml_is_well_formed
--------------------
 f
(1 row)

SELECT xml_is_well_formed('<abc/>');

 xml_is_well_formed
--------------------
 t
(1 row)

SET xmloption TO CONTENT;
SELECT xml_is_well_formed('abc');

 xml_is_well_formed
--------------------
 t
(1 row)

SELECT xml_is_well_formed_document('<pg:foo xmlns:pg="http://postgresql.org/stuff">bar</pg:foo>');

 xml_is_well_formed_document
-----------------------------
 t
(1 row)

SELECT xml_is_well_formed_document('<pg:foo xmlns:my="http://postgresql.org/stuff">bar</pg:foo>');

 xml_is_well_formed_document
-----------------------------
 f
(1 row)
The last example shows that the checks include whether
namespaces are correctly matched.
Processing XMLXPath
To process values of data type xml, PostgreSQL offers
the functions xpath and
xpath_exists, which evaluate XPath 1.0
expressions.
xpath(xpath, xml [, nsarray])
The function xpath evaluates the XPath
expression xpath (a text value)
against the XML value
xml. It returns an array of XML values
corresponding to the node set produced by the XPath expression.
If the XPath expression returns a scalar value rather than a node set,
a single-element array is returned.
The second argument must be a well-formed XML document. In particular,
it must have a single root node element.
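As a sketch of the scalar case mentioned above, an XPath function such as count() yields a scalar, which is returned wrapped in a single-element array:

```sql
-- count() returns a scalar, so the result is a one-element array:
SELECT xpath('count(//b)', '<a><b/><b/></a>');
-- → {2}
```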
The optional third argument of the function is an array of namespace
mappings. This array should be a two-dimensional text array with
the length of the second axis being equal to 2 (i.e., it should be an
array of arrays, each of which consists of exactly 2 elements).
The first element of each array entry is the namespace name (alias), the
second the namespace URI. It is not required that aliases provided in
this array be the same as those being used in the XML document itself (in
other words, both in the XML document and in the xpath
function context, aliases are local).
Example:
SELECT xpath('/my:a/text()', '<my:a xmlns:my="http://example.com">test</my:a>',
             ARRAY[ARRAY['my', 'http://example.com']]);

 xpath
--------
 {test}
(1 row)
To deal with default (anonymous) namespaces, do something like this:
SELECT xpath('//mydefns:b/text()', '<a xmlns="http://example.com"><b>test</b></a>',
             ARRAY[ARRAY['mydefns', 'http://example.com']]);

 xpath
--------
 {test}
(1 row)

xpath_exists(xpath, xml [, nsarray])
The function xpath_exists is a specialized form
of the xpath function. Instead of returning the
individual XML values that satisfy the XPath, this function returns a
Boolean indicating whether the query was satisfied or not. This
function is equivalent to the standard XMLEXISTS> predicate,
except that it also offers support for a namespace mapping argument.
Example:
SELECT xpath_exists('/my:a/text()', '<my:a xmlns:my="http://example.com">test</my:a>',
                    ARRAY[ARRAY['my', 'http://example.com']]);

 xpath_exists
--------------
 t
(1 row)

Mapping Tables to XML
The following functions map the contents of relational tables to
XML values. They can be thought of as XML export functionality:
table_to_xml(tbl regclass, nulls boolean, tableforest boolean, targetns text)
query_to_xml(query text, nulls boolean, tableforest boolean, targetns text)
cursor_to_xml(cursor refcursor, count int, nulls boolean,
tableforest boolean, targetns text)
The return type of each function is xml.
table_to_xml maps the content of the named
table, passed as parameter tbl. The
regclass type accepts strings identifying tables using the
usual notation, including optional schema qualifications and
double quotes. query_to_xml executes the
query whose text is passed as parameter
query and maps the result set.
cursor_to_xml fetches the indicated number of
rows from the cursor specified by the parameter
cursor. This variant is recommended if
large tables have to be mapped, because the result value is built
up in memory by each function.
If tableforest is false, then the resulting
XML document looks like this:
<tablename xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <row>
    <columnname1>data</columnname1>
    <columnname2>data</columnname2>
  </row>

  <row>
    ...
  </row>

  ...
</tablename>
If tableforest is true, the result is an
XML content fragment that looks like this:
<tablename>
  <columnname1>data</columnname1>
  <columnname2>data</columnname2>
</tablename>

<tablename>
  ...
</tablename>

...
If no table name is available, that is, when mapping a query or a
cursor, the string table is used in the first
format, row in the second format.
The choice between these formats is up to the user. The first
format is a proper XML document, which will be important in many
applications. The second format tends to be more useful in the
cursor_to_xml function if the result values are to be
reassembled into one document later on. The functions for
producing XML content discussed above, in particular
xmlelement, can be used to alter the results
to taste.
The data values are mapped in the same way as described for the
function xmlelement above.
The parameter nulls determines whether null
values should be included in the output. If true, null values in
columns are represented as:
<columnname xsi:nil="true"/>
where xsi is the XML namespace prefix for XML
Schema Instance. An appropriate namespace declaration will be
added to the result value. If false, columns containing null
values are simply omitted from the output.
The parameter targetns specifies the
desired XML namespace of the result. If no particular namespace
is wanted, an empty string should be passed.
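Putting the parameters together, a typical call might look like this (a sketch; the table name emp is hypothetical):

```sql
-- Export the hypothetical table "emp" as one XML document,
-- omitting null columns, with no target namespace:
SELECT table_to_xml('emp', false, false, '');
```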
The following functions return XML Schema documents describing the
mappings performed by the corresponding functions above:
table_to_xmlschema(tbl regclass, nulls boolean, tableforest boolean, targetns text)
query_to_xmlschema(query text, nulls boolean, tableforest boolean, targetns text)
cursor_to_xmlschema(cursor refcursor, nulls boolean, tableforest boolean, targetns text)
It is essential that the same parameters are passed in order to
obtain matching XML data mappings and XML Schema documents.
The following functions produce XML data mappings and the
corresponding XML Schema in one document (or forest), linked
together. They can be useful where self-contained and
self-describing results are wanted:
table_to_xml_and_xmlschema(tbl regclass, nulls boolean, tableforest boolean, targetns text)
query_to_xml_and_xmlschema(query text, nulls boolean, tableforest boolean, targetns text)
In addition, the following functions are available to produce
analogous mappings of entire schemas or the entire current
database:
schema_to_xml(schema name, nulls boolean, tableforest boolean, targetns text)
schema_to_xmlschema(schema name, nulls boolean, tableforest boolean, targetns text)
schema_to_xml_and_xmlschema(schema name, nulls boolean, tableforest boolean, targetns text)
database_to_xml(nulls boolean, tableforest boolean, targetns text)
database_to_xmlschema(nulls boolean, tableforest boolean, targetns text)
database_to_xml_and_xmlschema(nulls boolean, tableforest boolean, targetns text)
Note that these potentially produce a lot of data, which needs to
be built up in memory. When requesting content mappings of large
schemas or databases, it might be worthwhile to consider mapping the
tables separately instead, possibly even through a cursor.
The result of a schema content mapping looks like this:
<schemaname>

table1-mapping

table2-mapping

...

</schemaname>
where the format of a table mapping depends on the
tableforest parameter as explained above.
The result of a database content mapping looks like this:
<dbname>

<schema1name>
  ...
</schema1name>

<schema2name>
  ...
</schema2name>

...

</dbname>
where the schema mapping is as above.
As an example of using the output produced by these functions,
shows an XSLT stylesheet that
converts the output of
table_to_xml_and_xmlschema to an HTML
document containing a tabular rendition of the table data. In a
similar manner, the results from these functions can be
converted into other XML-based formats.
XSLT Stylesheet for Converting SQL/XML Output to HTML
JSON Functions and Operators
 shows the operators that
are available for use with the two JSON data types (see ).
json and jsonb Operators

 Operator | Right Operand Type | Description                               | Example
----------+--------------------+-------------------------------------------+---------------------------------------------
 ->       | int                | Get JSON array element                    | '[1,2,3]'::json->2
 ->       | text               | Get JSON object field                     | '{"a":1,"b":2}'::json->'b'
 ->>      | int                | Get JSON array element as text            | '[1,2,3]'::json->>2
 ->>      | text               | Get JSON object field as text             | '{"a":1,"b":2}'::json->>'b'
 #>       | text[]             | Get JSON object at specified path         | '{"a":[1,2,3],"b":[4,5,6]}'::json#>'{a,2}'
 #>>      | text[]             | Get JSON object at specified path as text | '{"a":[1,2,3],"b":[4,5,6]}'::json#>>'{a,2}'
There are parallel variants of these operators for both the
json and jsonb types. In addition to
those operators common to both types, a further set of operators
exists for jsonb (which comprise the default
GIN operator class).
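The field and element operators above can be chained to walk into a nested structure; using the text-returning variant only at the last step avoids repeated casts. A short sketch:

```sql
-- Walk into a nested object, then extract the final value as text:
SELECT '{"a": {"b": ["x", "y"]}}'::json -> 'a' -> 'b' ->> 1;
-- → y
```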
The following are jsonb-only operators, used by
jsonb operator classes. For a full description of
jsonb containment semantics and nesting, see .
 describes how these operators can be used to effectively index
jsonb.
Additional jsonb Operators

 Operator | Right Operand Type | Description                                  | Example
----------+--------------------+----------------------------------------------+---------------------------------------------------
 =        | jsonb              | Is the jsonb equal to this jsonb?            | '[1,2,3]'::jsonb = '[1,2,3]'::jsonb
 @>       | jsonb              | Does the jsonb contain within it this jsonb? | '{"a":1, "b":2}'::jsonb @> '{"b":2}'::jsonb
 <@       | jsonb              | Is the jsonb contained within this jsonb?    | '{"b":2}'::jsonb <@ '{"a":1, "b":2}'::jsonb
 ?        | text               | Does this key/element string exist?          | '{"a":1, "b":2}'::jsonb ? 'b'
 ?|       | text[]             | Do any of these key/element strings exist?   | '{"a":1, "b":2, "c":3}'::jsonb ?| array['b', 'c']
 ?&       | text[]             | Do all of these key/element strings exist?   | '["a", "b"]'::jsonb ?& array['a', 'b']
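A GIN index using the default jsonb operator class supports the containment and existence operators above. A sketch, with illustrative table and column names:

```sql
-- Index a jsonb column for containment/existence queries
-- ("docs" and "body" are hypothetical names):
CREATE INDEX idx_docs_body ON docs USING GIN (body);

-- A containment query such an index can accelerate:
SELECT * FROM docs WHERE body @> '{"status": "active"}';
```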
shows the functions that are
available for creating json values.
(see )
JSON Creation Functions

array_to_json(anyarray [, pretty_bool]) returns json
Returns the array as JSON. A PostgreSQL multidimensional array
becomes a JSON array of arrays. Line feeds will be added between
dimension 1 elements if pretty_bool is true.
Example: array_to_json('{{1,5},{99,100}}'::int[]) → [[1,5],[99,100]]

row_to_json(record [, pretty_bool]) returns json
Returns the row as JSON. Line feeds will be added between level
1 elements if pretty_bool is true.
Example: row_to_json(row(1,'foo')) → {"f1":1,"f2":"foo"}

to_json(anyelement) returns json
Returns the value as JSON. If the data type is not built in, and there
is a cast from the type to json, the cast function will be used to
perform the conversion. Otherwise, for any value other than a number,
a Boolean, or a null value, the text representation will be used, escaped and
quoted so that it is legal JSON.
Example: to_json('Fred said "Hi."'::text) → "Fred said \"Hi.\""

json_build_array(VARIADIC "any") returns json
Builds a heterogeneously-typed json array out of a variadic argument list.
SELECT json_build_array(1,2,'3',4,5);
json_build_array
-------------------
[1, 2, "3", 4, 5]
json_build_object(VARIADIC "any") returns json
Builds a JSON object out of a variadic argument list. By
convention, the object is constructed out of alternating
name/value arguments.
SELECT json_build_object('foo',1,'bar',2);
json_build_object
------------------------
{"foo" : 1, "bar" : 2}
json_object(text[]) returns json
Builds a JSON object out of a text array. The array must have either
exactly one dimension with an even number of members, in which case
they are taken as alternating name/value pairs, or two dimensions
such that each inner array has exactly two elements, which
are taken as a name/value pair.
select * from json_object('{a, 1, b, "def", c, 3.5}') or select json_object('{{a, 1},{b, "def"},{c, 3.5}}')
json_object
---------------------------------------
{"a" : "1", "b" : "def", "c" : "3.5"}
json_object(keys text[], values text[]) returns json
The two-argument form of JSON object takes keys and values pairwise from two separate
arrays. In all other respects it is identical to the one-argument form.
select json_object('{a, b}', '{1,2}');
json_object
------------------------
{"a" : "1", "b" : "2"}
shows the functions that
are available for processing json and jsonb values.
(see )
JSON Processing Functions

json_array_length(json), jsonb_array_length(jsonb) returns int
Returns the number of elements in the outermost JSON array.
Example: json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]') → 5

json_each(json), jsonb_each(jsonb) returns SETOF key text, value json (or jsonb)
Expands the outermost JSON object into a set of key/value pairs.
select * from json_each('{"a":"foo", "b":"bar"}')
key | value
-----+-------
a | "foo"
b | "bar"
json_each_text(from_json json), jsonb_each_text(from_json jsonb) returns SETOF key text, value text
Expands the outermost JSON object into a set of key/value pairs. The
returned value will be of type text.
select * from json_each_text('{"a":"foo", "b":"bar"}')
key | value
-----+-------
a | foo
b | bar
json_extract_path(from_json json, VARIADIC path_elems text[]), jsonb_extract_path(from_jsonb jsonb, VARIADIC path_elems text[]) returns json (or jsonb)
Returns JSON value pointed to by path_elems.
Example: json_extract_path('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4') → {"f5":99,"f6":"foo"}

json_extract_path_text(from_json json, VARIADIC path_elems text[]), jsonb_extract_path_text(from_jsonb jsonb, VARIADIC path_elems text[]) returns text
Returns JSON value pointed to by path_elems as text.
Example: json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6') → foo

json_object_keys(json), jsonb_object_keys(jsonb) returns SETOF text
Returns set of keys in the JSON object. Only the outer object will be displayed.
json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')
json_object_keys
------------------
f1
f2
json_populate_record(base anyelement, from_json json [, use_json_as_text bool=false]), jsonb_populate_record(base anyelement, from_json jsonb [, use_json_as_text bool=false]) returns anyelement
Expands the object in from_json to a row whose columns match
the record type defined by base. Conversion will be best
effort; columns in base with no corresponding key in from_json
will be left null. When processing json, if a
column is specified more than once, the last value is used.
select * from json_populate_record(null::x, '{"a":1,"b":2}')
a | b
---+---
1 | 2
json_populate_recordset(base anyelement, from_json json [, use_json_as_text bool=false]), jsonb_populate_recordset(base anyelement, from_json jsonb [, use_json_as_text bool=false]) returns SETOF anyelement
Expands the outermost set of objects in from_json to a set
whose columns match the record type defined by base.
Conversion will be best effort; columns in base with no
corresponding key in from_json will be left null.
When processing json, if a column is specified more
than once, the last value is used.
select * from json_populate_recordset(null::x, '[{"a":1,"b":2},{"a":3,"b":4}]')
a | b
---+---
1 | 2
3 | 4
json_array_elements(json), jsonb_array_elements(jsonb) returns SETOF json (or jsonb)
Expands a JSON array to a set of JSON values.
SELECT * FROM json_array_elements('[1,true, [2,false]]')
value
-----------
1
true
[2,false]
json_array_elements_text(json), jsonb_array_elements_text(jsonb) returns SETOF text
Expands a JSON array to a set of text values.
SELECT * FROM json_array_elements_text('["foo", "bar"]')
value
-----------
foo
bar
json_typeof(json), jsonb_typeof(jsonb) returns text
Returns the type of the outermost JSON value as a text string. The types are
object, array, string, number,
boolean, and null. (See note below regarding the
distinction between a JSON null and a SQL NULL.)
Example: json_typeof('-123.4') → number

json_to_record(json [, nested_as_text bool=false]), jsonb_to_record(jsonb [, nested_as_text bool=false]) returns record
Returns an arbitrary record from a JSON object. As with all functions
returning 'record', the caller must explicitly define the structure of the record
when making the call. The input JSON must be an object, not a scalar or an array.
If nested_as_text is true, the function coerces nested complex elements to text.
Also, see notes below on columns and types.
select * from json_to_record('{"a":1,"b":[1,2,3],"c":"bar"}',true) as x(a int, b text, d text)
a | b | d
---+---------+---
1 | [1,2,3] |
json_to_recordset(json [, nested_as_text bool=false]), jsonb_to_recordset(jsonb [, nested_as_text bool=false]) returns SETOF record
Returns an arbitrary set of records from a JSON object. As with
json_to_record, the structure of the record must be explicitly defined when making the
call. However, with json_to_recordset the input JSON must be an array containing
objects. nested_as_text works as with json_to_record.
select * from json_to_recordset('[{"a":1,"b":"foo"},{"a":"2","c":"bar"}]',true) as x(a int, b text);
a | b
---+-----
1 | foo
2 |
The json functions and operators can impose stricter validity requirements
than the type's input functions. In particular, they check much more closely that any use
of Unicode surrogate pairs to designate characters outside the Unicode Basic Multilingual
Plane is correct.
Many of these functions and operators will convert Unicode escapes
in the JSON text to the appropriate UTF8 character when the database encoding is UTF8. In
other encodings the escape sequence must be for an ASCII character, and any other code point
in a Unicode escape sequence will result in an error.
In general, it is best to avoid mixing Unicode escapes in JSON with a non-UTF8 database
encoding, if possible.
In json_to_record and json_to_recordset, type-coercion from the JSON is
"best effort" and may not result in desired values for some types. JSON
elements are matched to identical field names in the record definition,
and elements which do not exist in the JSON will simply be NULL. JSON
elements which are not defined in the record template will
be omitted from the output.
The hstore extension has a cast from hstore to
json, so that converted hstore values are represented as JSON objects,
not as string values.
The json_typeof function's null return value should not be confused
with a SQL NULL. While calling json_typeof('null'::json) will return null,
calling json_typeof(NULL::json) will return a SQL NULL.
See also about the aggregate
function json_agg which aggregates record
values as JSON efficiently, and the aggregate function
json_object_agg, which aggregates pairs of values
into a JSON object.
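A short sketch of both aggregates, against a hypothetical table kv(k text, v int):

```sql
-- json_agg collects rows into a JSON array of row objects;
-- json_object_agg collects key/value pairs into one object
-- ("kv", "k", and "v" are hypothetical names):
SELECT json_agg(t)           FROM kv AS t;
SELECT json_object_agg(k, v) FROM kv;
```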
Sequence Manipulation Functions
This section describes functions for operating on sequence
objects, also called sequence generators or just sequences.
Sequence objects are special single-row tables created with .
Sequence objects are commonly used to generate unique identifiers
for rows of a table. The sequence functions, listed in , provide simple, multiuser-safe
methods for obtaining successive sequence values from sequence
objects.
Sequence Functions

 Function                          | Return Type | Description
-----------------------------------+-------------+-----------------------------------------------------------------
 currval(regclass)                 | bigint      | Return value most recently obtained with nextval for specified sequence
 lastval()                         | bigint      | Return value most recently obtained with nextval for any sequence
 nextval(regclass)                 | bigint      | Advance sequence and return new value
 setval(regclass, bigint)          | bigint      | Set sequence's current value
 setval(regclass, bigint, boolean) | bigint      | Set sequence's current value and is_called flag
The sequence to be operated on by a sequence function is specified by
a regclass argument, which is simply the OID of the sequence in the
pg_class system catalog. You do not have to look up the
OID by hand, however, since the regclass data type's input
converter will do the work for you. Just write the sequence name enclosed
in single quotes so that it looks like a literal constant. For
compatibility with the handling of ordinary
SQL names, the string will be converted to lower case
unless it contains double quotes around the sequence name. Thus:

nextval('foo')            operates on sequence foo
nextval('FOO')            operates on sequence foo
nextval('"Foo"')          operates on sequence Foo

The sequence name can be schema-qualified if necessary:

nextval('myschema.foo')   operates on myschema.foo
nextval('"myschema".foo') same as above
nextval('foo')            searches search path for foo

See for more information about
regclass.
Before PostgreSQL 8.1, the arguments of the
sequence functions were of type text, not regclass, and
the above-described conversion from a text string to an OID value would
happen at run time during each call. For backward compatibility, this
facility still exists, but internally it is now handled as an implicit
coercion from text to regclass before the function is
invoked.
When you write the argument of a sequence function as an unadorned
literal string, it becomes a constant of type regclass.
Since this is really just an OID, it will track the originally
identified sequence despite later renaming, schema reassignment,
etc. This early binding behavior is usually desirable for
sequence references in column defaults and views. But sometimes you might
want late binding where the sequence reference is resolved
at run time. To get late-binding behavior, force the constant to be
stored as a text constant instead of regclass:

nextval('foo'::text)      foo is looked up at run time

Note that late binding was the only behavior supported in
PostgreSQL releases before 8.1, so you
might need to do this to preserve the semantics of old applications.
Of course, the argument of a sequence function can be an expression
as well as a constant. If it is a text expression then the implicit
coercion will result in a run-time lookup.
The available sequence functions are:
nextval
Advance the sequence object to its next value and return that
value. This is done atomically: even if multiple sessions
execute nextval concurrently, each will safely receive
a distinct sequence value.
If a sequence object has been created with default parameters,
successive nextval calls will return successive
values beginning with 1. Other behaviors can be obtained by using
special parameters in the command;
see its command reference page for more information.
To avoid blocking concurrent transactions that obtain numbers from the
same sequence, a nextval operation is never
rolled back; that is, once a value has been fetched it is considered
used, even if the transaction that did the
nextval later aborts. This means that aborted
transactions might leave unused holes in the sequence
of assigned values.
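The hole-after-rollback behavior described above can be seen in a short session sketch (the sequence name is illustrative):

```sql
-- A default sequence starts at 1 and never reuses an aborted value:
CREATE SEQUENCE serial_seq;
SELECT nextval('serial_seq');   -- 1
BEGIN;
SELECT nextval('serial_seq');   -- 2
ROLLBACK;
SELECT nextval('serial_seq');   -- 3 (the rolled-back 2 is not reused)
```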
currval
Return the value most recently obtained by nextval
for this sequence in the current session. (An error is
reported if nextval has never been called for this
sequence in this session.) Because this is returning
a session-local value, it gives a predictable answer whether or not
other sessions have executed nextval since the
current session did.
lastval
Return the value most recently returned by
nextval in the current session. This function is
identical to currval, except that instead
of taking the sequence name as an argument it fetches the
value of the last sequence used by nextval
in the current session. It is an error to call
lastval if nextval
has not yet been called in the current session.
setval
Reset the sequence object's counter value. The two-parameter
form sets the sequence's last_value field to the
specified value and sets its is_called field to
true, meaning that the next
nextval will advance the sequence before
returning a value. The value reported by currval is
also set to the specified value. In the three-parameter form,
is_called can be set to either true
or false. true has the same effect as
the two-parameter form. If it is set to false, the
next nextval will return exactly the specified
value, and sequence advancement commences with the following
nextval. Furthermore, the value reported by
currval is not changed in this case. For example:

SELECT setval('foo', 42);           Next nextval will return 43
SELECT setval('foo', 42, true);     Same as above
SELECT setval('foo', 42, false);    Next nextval will return 42
The result returned by setval is just the value of its
second argument.
Because sequences are non-transactional, changes made by
setval are not undone if the transaction rolls
back.
Conditional Expressions
This section describes the SQL-compliant conditional expressions
available in PostgreSQL.
If your needs go beyond the capabilities of these conditional
expressions, you might want to consider writing a stored procedure
in a more expressive programming language.
CASE
The SQL CASE expression is a
generic conditional expression, similar to if/else statements in
other programming languages:
CASE WHEN condition THEN result
     [WHEN ...]
     [ELSE result]
END
CASE clauses can be used wherever
an expression is valid. Each condition is an
expression that returns a boolean result. If the condition's
result is true, the value of the CASE expression is the
result that follows the condition, and the
remainder of the CASE expression is not processed. If the
condition's result is not true, any subsequent WHEN clauses
are examined in the same manner. If no WHEN
condition yields true, the value of the
CASE expression is the result of the
ELSE clause. If the ELSE clause is
omitted and no condition is true, the result is null.
An example:
SELECT * FROM test;
a
---
1
2
3
SELECT a,
CASE WHEN a=1 THEN 'one'
WHEN a=2 THEN 'two'
ELSE 'other'
END
FROM test;
a | case
---+-------
1 | one
2 | two
3 | other
The data types of all the result
expressions must be convertible to a single output type.
See for more details.
There is a simple form of CASE expression
that is a variant of the general form above:

CASE expression
    WHEN value THEN result
    [WHEN ...]
    [ELSE result]
END
The first
expression is computed, then compared to
each of the value expressions in the
WHEN clauses until one is found that is equal to it. If
no match is found, the result of the
ELSE clause (or a null value) is returned. This is similar
to the switch statement in C.
The example above can be written using the simple
CASE syntax:
SELECT a,
CASE a WHEN 1 THEN 'one'
WHEN 2 THEN 'two'
ELSE 'other'
END
FROM test;
a | case
---+-------
1 | one
2 | two
3 | other
A CASE expression does not evaluate any subexpressions
that are not needed to determine the result. For example, this is a
possible way of avoiding a division-by-zero failure:
SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END;
As described in , functions and
operators marked IMMUTABLE can be evaluated when
the query is planned rather than when it is executed. This means
that constant parts of a subexpression that is not evaluated during
query execution might still be evaluated during query planning.
COALESCE

COALESCE(value, ...)
The COALESCE function returns the first of its
arguments that is not null. Null is returned only if all arguments
are null. It is often used to substitute a default value for
null values when data is retrieved for display, for example:
SELECT COALESCE(description, short_description, '(none)') ...
This returns description if it is not null, otherwise
short_description if it is not null, otherwise (none).
Like a CASE expression, COALESCE only
evaluates the arguments that are needed to determine the result;
that is, arguments to the right of the first non-null argument are
not evaluated. This SQL-standard function provides capabilities similar
to NVL and IFNULL, which are used in some other
database systems.
NULLIF

NULLIF(value1, value2)
The NULLIF function returns a null value if
value1 equals value2;
otherwise it returns value1.
This can be used to perform the inverse operation of the
COALESCE example given above:
SELECT NULLIF(value, '(none)') ...
In this example, if value is (none),
null is returned, otherwise the value of value
is returned.
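Another common use of NULLIF is to guard a division against a zero divisor by turning it into NULL, which makes the whole expression NULL instead of raising an error. A sketch:

```sql
-- 10/2 yields 5; 10/NULLIF(0, 0) becomes 10/NULL, i.e. NULL:
SELECT 10 / NULLIF(divisor, 0)
FROM (VALUES (2), (0)) AS t(divisor);
-- → 5, then NULL
```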
GREATEST and LEAST

GREATEST(value, ...)
LEAST(value, ...)
The GREATEST and LEAST functions select the
largest or smallest value from a list of any number of expressions.
The expressions must all be convertible to a common data type, which
will be the type of the result
(see for details). NULL values
in the list are ignored. The result will be NULL only if all the
expressions evaluate to NULL.
Note that GREATEST and LEAST are not in
the SQL standard, but are a common extension. Some other databases
make them return NULL if any argument is NULL, rather than only when
all are NULL.
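A brief sketch of the NULL-handling rules just described:

```sql
SELECT GREATEST(1, 7, 3);           -- 7
SELECT LEAST(1, NULL, 3);           -- 1    (the NULL is ignored)
SELECT GREATEST(NULL, NULL);        -- NULL (all arguments are null)
```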
Array Functions and Operators
 shows the operators
available for array types.
Array Operators

 Operator | Description                       | Example                                  | Result
----------+-----------------------------------+------------------------------------------+---------------------------
 =        | equal                             | ARRAY[1.1,2.1,3.1]::int[] = ARRAY[1,2,3] | t
 <>       | not equal                         | ARRAY[1,2,3] <> ARRAY[1,2,4]             | t
 <        | less than                         | ARRAY[1,2,3] < ARRAY[1,2,4]              | t
 >        | greater than                      | ARRAY[1,4,3] > ARRAY[1,2,4]              | t
 <=       | less than or equal                | ARRAY[1,2,3] <= ARRAY[1,2,3]             | t
 >=       | greater than or equal             | ARRAY[1,4,3] >= ARRAY[1,4,3]             | t
 @>       | contains                          | ARRAY[1,4,3] @> ARRAY[3,1]               | t
 <@       | is contained by                   | ARRAY[2,7] <@ ARRAY[1,7,4,2,6]           | t
 &&       | overlap (have elements in common) | ARRAY[1,4,3] && ARRAY[2,1]               | t
 ||       | array-to-array concatenation      | ARRAY[1,2,3] || ARRAY[4,5,6]             | {1,2,3,4,5,6}
 ||       | array-to-array concatenation      | ARRAY[1,2,3] || ARRAY[[4,5,6],[7,8,9]]   | {{1,2,3},{4,5,6},{7,8,9}}
 ||       | element-to-array concatenation    | 3 || ARRAY[4,5,6]                        | {3,4,5,6}
 ||       | array-to-element concatenation    | ARRAY[4,5,6] || 7                        | {4,5,6,7}
Array comparisons compare the array contents element-by-element,
using the default B-tree comparison function for the element data type.
In multidimensional arrays the elements are visited in row-major order
(last subscript varies most rapidly).
If the contents of two arrays are equal but the dimensionality is
different, the first difference in the dimensionality information
determines the sort order. (This is a change from versions of
PostgreSQL prior to 8.2: older versions would claim
that two arrays with the same contents were equal, even if the
number of dimensions or subscript ranges were different.)
See for more details about array operator
behavior. See for more details about
which operators support indexed operations.
shows the functions
available for use with array types. See
for more information and examples of the use of these functions.
Array Functions

array_append(anyarray, anyelement) returns anyarray
    append an element to the end of an array
    array_append(ARRAY[1,2], 3) → {1,2,3}

array_cat(anyarray, anyarray) returns anyarray
    concatenate two arrays
    array_cat(ARRAY[1,2,3], ARRAY[4,5]) → {1,2,3,4,5}

array_ndims(anyarray) returns int
    returns the number of dimensions of the array
    array_ndims(ARRAY[[1,2,3], [4,5,6]]) → 2

array_dims(anyarray) returns text
    returns a text representation of array's dimensions
    array_dims(ARRAY[[1,2,3], [4,5,6]]) → [1:2][1:3]

array_fill(anyelement, int[] [, int[]]) returns anyarray
    returns an array initialized with supplied value and
    dimensions, optionally with lower bounds other than 1
    array_fill(7, ARRAY[3], ARRAY[2]) → [2:4]={7,7,7}

array_length(anyarray, int) returns int
    returns the length of the requested array dimension
    array_length(array[1,2,3], 1) → 3

array_lower(anyarray, int) returns int
    returns lower bound of the requested array dimension
    array_lower('[0:2]={1,2,3}'::int[], 1) → 0

array_prepend(anyelement, anyarray) returns anyarray
    append an element to the beginning of an array
    array_prepend(1, ARRAY[2,3]) → {1,2,3}

array_remove(anyarray, anyelement) returns anyarray
    remove all elements equal to the given value from the array
    (array must be one-dimensional)
    array_remove(ARRAY[1,2,3,2], 2) → {1,3}

array_replace(anyarray, anyelement, anyelement) returns anyarray
    replace each array element equal to the given value with a new value
    array_replace(ARRAY[1,2,5,4], 5, 3) → {1,2,3,4}

array_to_string(anyarray, text [, text]) returns text
    concatenates array elements using supplied delimiter and
    optional null string
    array_to_string(ARRAY[1, 2, 3, NULL, 5], ',', '*') → 1,2,3,*,5

array_upper(anyarray, int) returns int
    returns upper bound of the requested array dimension
    array_upper(ARRAY[1,8,3,7], 1) → 4

cardinality(anyarray) returns int
    returns the total number of elements in the array, or 0 if the array is empty
    cardinality(ARRAY[[1,2],[3,4]]) → 4

string_to_array(text, text [, text]) returns text[]
    splits string into array elements using supplied delimiter and
    optional null string
    string_to_array('xx~^~yy~^~zz', '~^~', 'yy') → {xx,NULL,zz}

unnest(anyarray) returns setof anyelement
    expand an array to a set of rows
    unnest(ARRAY[1,2]) → 1, 2 (2 rows)

unnest(anyarray, anyarray [, ...]) returns setof anyelement, anyelement [, ...]
    expand multiple arrays (possibly of different types) to a set
    of rows. This is only allowed in the FROM clause; see
    unnest(ARRAY[1,2], ARRAY['foo','bar','baz']) → (1,foo), (2,bar), (NULL,baz) (3 rows)
In string_to_array, if the delimiter parameter is
NULL, each character in the input string will become a separate element in
the resulting array. If the delimiter is an empty string, then the entire
input string is returned as a one-element array. Otherwise the input
string is split at each occurrence of the delimiter string.
In string_to_array, if the null-string parameter
is omitted or NULL, none of the substrings of the input will be replaced
by NULL.
In array_to_string, if the null-string parameter
is omitted or NULL, any null elements in the array are simply skipped
and not represented in the output string.
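To illustrate these null-handling rules:

SELECT string_to_array('abc', NULL);              -- {a,b,c}: each character becomes an element
SELECT string_to_array('xx,yy', ',', 'yy');       -- {xx,NULL}: 'yy' is replaced by NULL
SELECT array_to_string(ARRAY[1, NULL, 2], ',');   -- 1,2: the null element is skipped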
There are two differences in the behavior of string_to_array
from pre-9.1 versions of PostgreSQL.
First, it will return an empty (zero-element) array rather than NULL when
the input string is of zero length. Second, if the delimiter string is
NULL, the function splits the input into individual characters, rather
than returning NULL as before.
See also about the aggregate
function array_agg for use with arrays.
Range Functions and Operators
See for an overview of range types.
shows the operators
available for range types.
Range Operators

Operator  Description                        Example                                                         Result
=         equal                              int4range(1,5) = '[1,4]'::int4range                             t
<>        not equal                          numrange(1.1,2.2) <> numrange(1.1,2.3)                          t
<         less than                          int4range(1,10) < int4range(2,3)                                t
>         greater than                       int4range(1,10) > int4range(1,5)                                t
<=        less than or equal                 numrange(1.1,2.2) <= numrange(1.1,2.2)                          t
>=        greater than or equal              numrange(1.1,2.2) >= numrange(1.1,2.0)                          t
@>        contains range                     int4range(2,4) @> int4range(2,3)                                t
@>        contains element                   '[2011-01-01,2011-03-01)'::tsrange @> '2011-01-10'::timestamp   t
<@        range is contained by              int4range(2,4) <@ int4range(1,7)                                t
<@        element is contained by            42 <@ int4range(1,7)                                            f
&&        overlap (have points in common)    int8range(3,7) && int8range(4,12)                               t
<<        strictly left of                   int8range(1,10) << int8range(100,110)                           t
>>        strictly right of                  int8range(50,60) >> int8range(20,30)                            t
&<        does not extend to the right of    int8range(1,20) &< int8range(18,20)                             t
&>        does not extend to the left of     int8range(7,20) &> int8range(5,10)                              t
-|-       is adjacent to                     numrange(1.1,2.2) -|- numrange(2.2,3.3)                         t
+         union                              numrange(5,15) + numrange(10,20)                                [5,20)
*         intersection                       int8range(5,15) * int8range(10,20)                              [10,15)
-         difference                         int8range(5,15) - int8range(10,20)                              [5,10)
The simple comparison operators <,
>, <=, and
>= compare the lower bounds first, and only if those
are equal, compare the upper bounds. These comparisons are not usually
very useful for ranges, but are provided to allow B-tree indexes to be
constructed on ranges.
The left-of/right-of/adjacent operators always return false when an empty
range is involved; that is, an empty range is not considered to be either
before or after any other range.
The union and difference operators will fail if the resulting range would
need to contain two disjoint sub-ranges, as such a range cannot be
represented.
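For example, a union of overlapping ranges succeeds, while a union of disjoint, non-adjacent ranges does not:

SELECT int8range(5,15) + int8range(10,20);   -- [5,20): the inputs overlap, so the union is contiguous
SELECT int8range(1,3) + int8range(7,9);      -- error: the result would be two disjoint sub-ranges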
shows the functions
available for use with range types.
Range Functions

lower(anyrange) returns range's element type
    lower bound of range
    lower(numrange(1.1,2.2)) yields 1.1

upper(anyrange) returns range's element type
    upper bound of range
    upper(numrange(1.1,2.2)) yields 2.2

isempty(anyrange) returns boolean
    is the range empty?
    isempty(numrange(1.1,2.2)) yields false

lower_inc(anyrange) returns boolean
    is the lower bound inclusive?
    lower_inc(numrange(1.1,2.2)) yields true

upper_inc(anyrange) returns boolean
    is the upper bound inclusive?
    upper_inc(numrange(1.1,2.2)) yields false

lower_inf(anyrange) returns boolean
    is the lower bound infinite?
    lower_inf('(,)'::daterange) yields true

upper_inf(anyrange) returns boolean
    is the upper bound infinite?
    upper_inf('(,)'::daterange) yields true
The lower and upper functions return null
if the range is empty or the requested bound is infinite.
The lower_inc, upper_inc,
lower_inf, and upper_inf
functions all return false for an empty range.
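For example:

SELECT lower('empty'::int4range);       -- null: an empty range has no bounds
SELECT lower_inc('empty'::int4range);   -- false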
Aggregate Functions
Aggregate functions compute a single result
from a set of input values. The built-in normal aggregate functions
are listed in
and
.
The built-in ordered-set aggregate functions
are listed in and
.
The special syntax considerations for aggregate
functions are explained in .
Consult for additional introductory
information.
General-Purpose Aggregate Functions

array_agg(expression)
    argument: any; returns: array of the argument type
    input values, including nulls, concatenated into an array

avg(expression)
    argument: smallint, int, bigint, real, double precision, numeric, or interval;
    returns: numeric for any integer-type argument, double precision for a
    floating-point argument, otherwise the same as the argument data type
    the average (arithmetic mean) of all input values

bit_and(expression)
    argument: smallint, int, bigint, or bit; returns: same as argument data type
    the bitwise AND of all non-null input values, or null if none

bit_or(expression)
    argument: smallint, int, bigint, or bit; returns: same as argument data type
    the bitwise OR of all non-null input values, or null if none

bool_and(expression)
    argument: bool; returns: bool
    true if all input values are true, otherwise false

bool_or(expression)
    argument: bool; returns: bool
    true if at least one input value is true, otherwise false

count(*)
    returns: bigint
    number of input rows

count(expression)
    argument: any; returns: bigint
    number of input rows for which the value of expression is not null

every(expression)
    argument: bool; returns: bool
    equivalent to bool_and

json_agg(record)
    argument: record; returns: json
    aggregates records as a JSON array of objects

json_object_agg(name, value)
    arguments: ("any", "any"); returns: json
    aggregates name/value pairs as a JSON object

max(expression)
    argument: any array, numeric, string, or date/time type; returns: same as argument type
    maximum value of expression across all input values

min(expression)
    argument: any array, numeric, string, or date/time type; returns: same as argument type
    minimum value of expression across all input values

string_agg(expression, delimiter)
    arguments: (text, text) or (bytea, bytea); returns: same as argument types
    input values concatenated into a string, separated by delimiter

sum(expression)
    argument: smallint, int, bigint, real, double precision, numeric, or interval;
    returns: bigint for smallint or int arguments, numeric for bigint arguments,
    double precision for floating-point arguments, otherwise the same as the
    argument data type
    sum of expression across all input values

xmlagg(expression)
    argument: xml; returns: xml
    concatenation of XML values (see also )
It should be noted that except for count,
these functions return a null value when no rows are selected. In
particular, sum of no rows returns null, not
zero as one might expect, and array_agg
returns null rather than an empty array when there are no input
rows. The coalesce function can be used to
substitute zero or an empty array for null when necessary.
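For example, the following self-contained query aggregates over zero rows, so sum returns null and coalesce supplies a zero instead:

SELECT coalesce(sum(n), 0) AS total
FROM generate_series(1, 0) AS g(n);   -- generates no rows; total is 0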
Boolean aggregates bool_and and
bool_or correspond to standard SQL aggregates
every and any or
some.
As for any and some,
it seems that there is an ambiguity built into the standard syntax:
SELECT b1 = ANY((SELECT b2 FROM t2 ...)) FROM t1 ...;
Here ANY can be considered either as introducing
a subquery, or as being an aggregate function, if the subquery
returns one row with a Boolean value.
Thus the standard name cannot be given to these aggregates.
Users accustomed to working with other SQL database management
systems might be disappointed by the performance of the
count aggregate when it is applied to the
entire table. A query like:
SELECT count(*) FROM sometable;
will require effort proportional to the size of the table:
PostgreSQL will need to scan either the
entire table or the entirety of an index which includes all rows in
the table.
The aggregate functions array_agg,
json_agg,
json_object_agg,
string_agg,
and xmlagg, as well as similar user-defined
aggregate functions, produce meaningfully different result values
depending on the order of the input values. This ordering is
unspecified by default, but can be controlled by writing an
ORDER BY clause within the aggregate call, as shown in
.
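For instance, using generate_series so the query is self-contained:

SELECT array_agg(n ORDER BY n DESC)
FROM generate_series(1, 3) AS g(n);   -- {3,2,1}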
Alternatively, supplying the input values from a sorted subquery
will usually work. For example:
SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
But this syntax is not allowed in the SQL standard, and is
not portable to other database systems.
shows
aggregate functions typically used in statistical analysis.
(These are separated out merely to avoid cluttering the listing
of more-commonly-used aggregates.) Where the description mentions
N, it means the
number of input rows for which all the input expressions are non-null.
In all cases, null is returned if the computation is meaningless,
for example when N is zero.
Aggregate Functions for Statistics

Each of the following two-argument functions takes double precision arguments
and returns double precision, except as noted:

corr(Y, X)
    correlation coefficient
covar_pop(Y, X)
    population covariance
covar_samp(Y, X)
    sample covariance
regr_avgx(Y, X)
    average of the independent variable (sum(X)/N)
regr_avgy(Y, X)
    average of the dependent variable (sum(Y)/N)
regr_count(Y, X)
    returns bigint: number of input rows in which both expressions are nonnull
regr_intercept(Y, X)
    y-intercept of the least-squares-fit linear equation determined
    by the (X, Y) pairs
regr_r2(Y, X)
    square of the correlation coefficient
regr_slope(Y, X)
    slope of the least-squares-fit linear equation determined by the (X, Y) pairs
regr_sxx(Y, X)
    sum(X^2) - sum(X)^2/N (sum of squares of the independent variable)
regr_sxy(Y, X)
    sum(X*Y) - sum(X) * sum(Y)/N (sum of products of independent times
    dependent variable)
regr_syy(Y, X)
    sum(Y^2) - sum(Y)^2/N (sum of squares of the dependent variable)

Each of the following functions accepts smallint, int, bigint, real,
double precision, or numeric, and returns double precision for
floating-point arguments, otherwise numeric:

stddev(expression)
    historical alias for stddev_samp
stddev_pop(expression)
    population standard deviation of the input values
stddev_samp(expression)
    sample standard deviation of the input values
variance(expression)
    historical alias for var_samp
var_pop(expression)
    population variance of the input values (square of the population
    standard deviation)
var_samp(expression)
    sample variance of the input values (square of the sample standard deviation)
shows some
aggregate functions that use the ordered-set aggregate
syntax. These functions are sometimes referred to as inverse
distribution functions.
Ordered-Set Aggregate Functions

mode() WITHIN GROUP (ORDER BY sort_expression)
    aggregated argument: any sortable type; returns: same as sort expression
    returns the most frequent input value (arbitrarily choosing the first
    one if there are multiple equally-frequent results)

percentile_cont(fraction) WITHIN GROUP (ORDER BY sort_expression)
    direct argument: double precision; aggregated argument: double precision
    or interval; returns: same as sort expression
    continuous percentile: returns a value corresponding to the specified
    fraction in the ordering, interpolating between adjacent input items if
    needed

percentile_cont(fractions) WITHIN GROUP (ORDER BY sort_expression)
    direct argument: double precision[]; aggregated argument: double precision
    or interval; returns: array of sort expression's type
    multiple continuous percentile: returns an array of results matching
    the shape of the fractions parameter, with each
    non-null element replaced by the value corresponding to that percentile

percentile_disc(fraction) WITHIN GROUP (ORDER BY sort_expression)
    direct argument: double precision; aggregated argument: any sortable type;
    returns: same as sort expression
    discrete percentile: returns the first input value whose position in
    the ordering equals or exceeds the specified fraction

percentile_disc(fractions) WITHIN GROUP (ORDER BY sort_expression)
    direct argument: double precision[]; aggregated argument: any sortable type;
    returns: array of sort expression's type
    multiple discrete percentile: returns an array of results matching the
    shape of the fractions parameter, with each non-null
    element replaced by the input value corresponding to that percentile
All the aggregates listed in
ignore null values in their sorted input. For those that take
a fraction parameter, the fraction value must be
between 0 and 1; an error is thrown if not. However, a null fraction value
simply produces a null result.
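For example, taking the median of the four values produced by generate_series:

SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY n),   -- 2.5 (interpolated)
       percentile_disc(0.5) WITHIN GROUP (ORDER BY n)    -- 2 (an actual input value)
FROM generate_series(1, 4) AS g(n);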
Each of the aggregates listed in
is associated with a
window function of the same name defined in
. In each case, the aggregate result
is the value that the associated window function would have
returned for the hypothetical row constructed from
args, if such a row had been added to the sorted
group of rows computed from the sorted_args.
Hypothetical-Set Aggregate Functions

Each of these functions takes VARIADIC "any" direct arguments and
VARIADIC "any" aggregated arguments:

rank(args) WITHIN GROUP (ORDER BY sorted_args) returns bigint
    rank of the hypothetical row, with gaps for duplicate rows
dense_rank(args) WITHIN GROUP (ORDER BY sorted_args) returns bigint
    rank of the hypothetical row, without gaps
percent_rank(args) WITHIN GROUP (ORDER BY sorted_args) returns double precision
    relative rank of the hypothetical row, ranging from 0 to 1
cume_dist(args) WITHIN GROUP (ORDER BY sorted_args) returns double precision
    relative rank of the hypothetical row, ranging from 1/N to 1
For each of these hypothetical-set aggregates, the list of direct arguments
given in args must match the number and types of
the aggregated arguments given in sorted_args.
Unlike most built-in aggregates, these aggregates are not strict, that is
they do not drop input rows containing nulls. Null values sort according
to the rule specified in the ORDER BY clause.
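For example, this computes the rank a hypothetical value 2.5 would have among the values 1 through 4:

SELECT rank(2.5) WITHIN GROUP (ORDER BY n)
FROM generate_series(1, 4) AS g(n);   -- 3: two rows (1 and 2) sort before the hypothetical row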
Window Functions
Window functions provide the ability to perform
calculations across sets of rows that are related to the current query
row. See for an introduction to this
feature.
The built-in window functions are listed in
. Note that these functions
must be invoked using window function syntax; that is, an
OVER clause is required.
In addition to these functions, any built-in or user-defined normal
aggregate function (but not ordered-set or hypothetical-set aggregates)
can be used as a window function; see
for a list of the built-in aggregates.
Aggregate functions act as window functions only when an OVER
clause follows the call; otherwise they act as regular aggregates.
General-Purpose Window Functions

row_number() returns bigint
    number of the current row within its partition, counting from 1

rank() returns bigint
    rank of the current row with gaps; same as row_number of its first peer

dense_rank() returns bigint
    rank of the current row without gaps; this function counts peer groups

percent_rank() returns double precision
    relative rank of the current row: (rank - 1) / (total rows - 1)

cume_dist() returns double precision
    relative rank of the current row: (number of rows preceding or peer with
    current row) / (total rows)

ntile(num_buckets integer) returns integer
    integer ranging from 1 to the argument value, dividing the
    partition as equally as possible

lag(value any [, offset integer [, default any ]]) returns same type as value
    returns value evaluated at the row that is offset rows before the current
    row within the partition; if there is no such row, instead returns default.
    Both offset and default are evaluated with respect to the current row.
    If omitted, offset defaults to 1 and default to null

lead(value any [, offset integer [, default any ]]) returns same type as value
    returns value evaluated at the row that is offset rows after the current
    row within the partition; if there is no such row, instead returns default.
    Both offset and default are evaluated with respect to the current row.
    If omitted, offset defaults to 1 and default to null

first_value(value any) returns same type as value
    returns value evaluated at the row that is the first row of the window frame

last_value(value any) returns same type as value
    returns value evaluated at the row that is the last row of the window frame

nth_value(value any, nth integer) returns same type as value
    returns value evaluated at the row that is the nth row of the window frame
    (counting from 1); null if no such row
All of the functions listed in
depend on the sort ordering
specified by the ORDER BY clause of the associated window
definition. Rows that are not distinct in the ORDER BY
ordering are said to be peers; the four ranking functions
are defined so that they give the same answer for any two peer rows.
Note that first_value, last_value, and
nth_value consider only the rows within the window
frame, which by default contains the rows from the start of the
partition through the last peer of the current row. This is
likely to give unhelpful results for last_value and
sometimes also nth_value. You can redefine the frame by
adding a suitable frame specification (RANGE or
ROWS) to the OVER clause.
See for more information
about frame specifications.
When an aggregate function is used as a window function, it aggregates
over the rows within the current row's window frame.
An aggregate used with ORDER BY and the default window frame
definition produces a running sum type of behavior, which may or
may not be what's wanted. To obtain
aggregation over the whole partition, omit ORDER BY or use
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING.
Other frame specifications can be used to obtain other effects.
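For example, compare a running sum with a whole-partition sum (self-contained via generate_series):

SELECT n,
       sum(n) OVER (ORDER BY n) AS running_sum,   -- 1, 3, 6
       sum(n) OVER ()           AS partition_sum  -- 6 on every row
FROM generate_series(1, 3) AS g(n);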
The SQL standard defines a RESPECT NULLS or
IGNORE NULLS option for lead, lag,
first_value, last_value, and
nth_value. This is not implemented in
PostgreSQL: the behavior is always the
same as the standard's default, namely RESPECT NULLS.
Likewise, the standard's FROM FIRST or FROM LAST
option for nth_value is not implemented: only the
default FROM FIRST behavior is supported. (You can achieve
the result of FROM LAST by reversing the ORDER BY
ordering.)
Subquery Expressions
This section describes the SQL-compliant subquery
expressions available in PostgreSQL.
All of the expression forms documented in this section return
Boolean (true/false) results.
EXISTS
EXISTS (subquery)
The argument of EXISTS is an arbitrary SELECT statement,
or subquery. The
subquery is evaluated to determine whether it returns any rows.
If it returns at least one row, the result of EXISTS is
true; if the subquery returns no rows, the result of EXISTS
is false.
The subquery can refer to variables from the surrounding query,
which will act as constants during any one evaluation of the subquery.
The subquery will generally only be executed long enough to determine
whether at least one row is returned, not all the way to completion.
It is unwise to write a subquery that has side effects (such as
calling sequence functions); whether the side effects occur
might be unpredictable.
Since the result depends only on whether any rows are returned,
and not on the contents of those rows, the output list of the
subquery is normally unimportant. A common coding convention is
to write all EXISTS tests in the form
EXISTS(SELECT 1 WHERE ...). There are exceptions to
this rule however, such as subqueries that use INTERSECT.
This simple example is like an inner join on col2, but
it produces at most one output row for each tab1 row,
even if there are several matching tab2 rows:
SELECT col1
FROM tab1
WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
IN
expression IN (subquery)
The right-hand side is a parenthesized
subquery, which must return exactly one column. The left-hand expression
is evaluated and compared to each row of the subquery result.
The result of IN is true if any equal subquery row is found.
The result is false if no equal row is found (including the
case where the subquery returns no rows).
Note that if the left-hand expression yields null, or if there are
no equal right-hand values and at least one right-hand row yields
null, the result of the IN construct will be null, not false.
This is in accordance with SQL's normal rules for Boolean combinations
of null values.
As with EXISTS, it's unwise to assume that the subquery will
be evaluated completely.
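For example, using a VALUES list as the subquery:

SELECT 2 IN (VALUES (2), (3));      -- true
SELECT 2 IN (VALUES (1), (NULL));   -- null, not false: 2 = NULL is unknown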
row_constructor IN (subquery)
The left-hand side of this form of IN is a row constructor,
as described in .
The right-hand side is a parenthesized
subquery, which must return exactly as many columns as there are
expressions in the left-hand row. The left-hand expressions are
evaluated and compared row-wise to each row of the subquery result.
The result of IN is true if any equal subquery row is found.
The result is false if no equal row is found (including the
case where the subquery returns no rows).
As usual, null values in the rows are combined per
the normal rules of SQL Boolean expressions. Two rows are considered
equal if all their corresponding members are non-null and equal; the rows
are unequal if any corresponding members are non-null and unequal;
otherwise the result of that row comparison is unknown (null).
If all the per-row results are either unequal or null, with at least one
null, then the result of IN is null.
NOT IN
expression NOT IN (subquery)
The right-hand side is a parenthesized
subquery, which must return exactly one column. The left-hand expression
is evaluated and compared to each row of the subquery result.
The result of NOT IN is true if only unequal subquery rows
are found (including the case where the subquery returns no rows).
The result is false if any equal row is found.
Note that if the left-hand expression yields null, or if there are
no equal right-hand values and at least one right-hand row yields
null, the result of the NOT IN construct will be null, not true.
This is in accordance with SQL's normal rules for Boolean combinations
of null values.
As with EXISTS, it's unwise to assume that the subquery will
be evaluated completely.
row_constructor NOT IN (subquery)
The left-hand side of this form of NOT IN is a row constructor,
as described in .
The right-hand side is a parenthesized
subquery, which must return exactly as many columns as there are
expressions in the left-hand row. The left-hand expressions are
evaluated and compared row-wise to each row of the subquery result.
The result of NOT IN is true if only unequal subquery rows
are found (including the case where the subquery returns no rows).
The result is false if any equal row is found.
As usual, null values in the rows are combined per
the normal rules of SQL Boolean expressions. Two rows are considered
equal if all their corresponding members are non-null and equal; the rows
are unequal if any corresponding members are non-null and unequal;
otherwise the result of that row comparison is unknown (null).
If all the per-row results are either unequal or null, with at least one
null, then the result of NOT IN is null.
ANY/SOME
expression operator ANY (subquery)
expression operator SOME (subquery)
The right-hand side is a parenthesized
subquery, which must return exactly one column. The left-hand expression
is evaluated and compared to each row of the subquery result using the
given operator, which must yield a Boolean
result.
The result of ANY is true if any true result is obtained.
The result is false if no true result is found (including the
case where the subquery returns no rows).
SOME is a synonym for ANY.
IN is equivalent to = ANY.
Note that if there are no successes and at least one right-hand row yields
null for the operator's result, the result of the ANY construct
will be null, not false.
This is in accordance with SQL's normal rules for Boolean combinations
of null values.
As with EXISTS, it's unwise to assume that the subquery will
be evaluated completely.
row_constructor operator ANY (subquery)
row_constructor operator SOME (subquery)
The left-hand side of this form of ANY is a row constructor,
as described in .
The right-hand side is a parenthesized
subquery, which must return exactly as many columns as there are
expressions in the left-hand row. The left-hand expressions are
evaluated and compared row-wise to each row of the subquery result,
using the given operator.
The result of ANY is true if the comparison
returns true for any subquery row.
The result is false if the comparison returns false for every
subquery row (including the case where the subquery returns no
rows).
The result is NULL if the comparison does not return true for any row,
and it returns NULL for at least one row.
See for details about the meaning
of a row constructor comparison.
ALL
expression operator ALL (subquery)
The right-hand side is a parenthesized
subquery, which must return exactly one column. The left-hand expression
is evaluated and compared to each row of the subquery result using the
given operator, which must yield a Boolean
result.
The result of ALL is true if all rows yield true
(including the case where the subquery returns no rows).
The result is false if any false result is found.
The result is NULL if the comparison does not return false for any row,
and it returns NULL for at least one row.
NOT IN is equivalent to <> ALL.
As with EXISTS, it's unwise to assume that the subquery will
be evaluated completely.
row_constructor operator ALL (subquery)
The left-hand side of this form of ALL is a row constructor,
as described in .
The right-hand side is a parenthesized
subquery, which must return exactly as many columns as there are
expressions in the left-hand row. The left-hand expressions are
evaluated and compared row-wise to each row of the subquery result,
using the given operator.
The result of ALL is true if the comparison
returns true for all subquery rows (including the
case where the subquery returns no rows).
The result is false if the comparison returns false for any
subquery row.
The result is NULL if the comparison does not return false for any
subquery row, and it returns NULL for at least one row.
See for details about the meaning
of a row constructor comparison.
Single-row Comparison
row_constructor operator (subquery)
The left-hand side is a row constructor,
as described in .
The right-hand side is a parenthesized subquery, which must return exactly
as many columns as there are expressions in the left-hand row. Furthermore,
the subquery cannot return more than one row. (If it returns zero rows,
the result is taken to be null.) The left-hand side is evaluated and
compared row-wise to the single subquery result row.
See for details about the meaning
of a row constructor comparison.
Row and Array Comparisons
This section describes several specialized constructs for making
multiple comparisons between groups of values. These forms are
syntactically related to the subquery forms of the previous section,
but do not involve subqueries.
The forms involving array subexpressions are
PostgreSQL extensions; the rest are
SQL-compliant.
All of the expression forms documented in this section return
Boolean (true/false) results.
IN
expression IN (value, ...)
The right-hand side is a parenthesized list
of scalar expressions. The result is true if the left-hand expression's
result is equal to any of the right-hand expressions. This is a shorthand
notation for
expression = value1
OR
expression = value2
OR
...
Note that if the left-hand expression yields null, or if there are
no equal right-hand values and at least one right-hand expression yields
null, the result of the IN construct will be null, not false.
This is in accordance with SQL's normal rules for Boolean combinations
of null values.
NOT IN
expression NOT IN (value, ...)
The right-hand side is a parenthesized list
of scalar expressions. The result is true if the left-hand expression's
result is unequal to all of the right-hand expressions. This is a shorthand
notation for
expression <> value1
AND
expression <> value2
AND
...
Note that if the left-hand expression yields null, or if there are
no equal right-hand values and at least one right-hand expression yields
null, the result of the NOT IN construct will be null, not true
as one might naively expect.
This is in accordance with SQL's normal rules for Boolean combinations
of null values.
x NOT IN y is equivalent to NOT (x IN y) in all
cases. However, null values are much more likely to trip up the novice when
working with NOT IN than when working with IN.
It is best to express your condition positively if possible.
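For example:

SELECT 1 NOT IN (2, 3);      -- true
SELECT 1 NOT IN (2, NULL);   -- null, not true: 1 <> NULL is unknown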
ANY/SOME (array)
expression operator ANY (array expression)
expression operator SOME (array expression)
The right-hand side is a parenthesized expression, which must yield an
array value.
The left-hand expression
is evaluated and compared to each element of the array using the
given operator, which must yield a Boolean
result.
The result of ANY is true if any true result is obtained.
The result is false if no true result is found (including the
case where the array has zero elements).
If the array expression yields a null array, the result of
ANY will be null. If the left-hand expression yields null,
the result of ANY is ordinarily null (though a non-strict
comparison operator could possibly yield a different result).
Also, if the right-hand array contains any null elements and no true
comparison result is obtained, the result of ANY
will be null, not false (again, assuming a strict comparison operator).
This is in accordance with SQL's normal rules for Boolean combinations
of null values.
SOME is a synonym for ANY.
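Some examples (the array values are arbitrary illustrations):

```sql
SELECT 3 = ANY (ARRAY[1, 2, 3]);    -- true
SELECT 5 > ANY (ARRAY[1, 2, 3]);    -- true (5 > 1)
SELECT 1 = ANY (ARRAY[2, NULL]);    -- null: no true result, one null comparison
SELECT 1 = ANY ('{}'::int[]);       -- false: zero elements
```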
ALL (array)
expression operator ALL (array expression)
The right-hand side is a parenthesized expression, which must yield an
array value.
The left-hand expression
is evaluated and compared to each element of the array using the
given operator, which must yield a Boolean
result.
The result of ALL is true if all comparisons yield true
(including the case where the array has zero elements).
The result is false if any false result is found.
If the array expression yields a null array, the result of
ALL will be null. If the left-hand expression yields null,
the result of ALL is ordinarily null (though a non-strict
comparison operator could possibly yield a different result).
Also, if the right-hand array contains any null elements and no false
comparison result is obtained, the result of ALL
will be null, not true (again, assuming a strict comparison operator).
This is in accordance with SQL's normal rules for Boolean combinations
of null values.
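Some examples, mirroring the ANY examples above:

```sql
SELECT 5 > ALL (ARRAY[1, 2, 3]);       -- true
SELECT 5 > ALL (ARRAY[1, 2, NULL]);    -- null: no false result, one null comparison
SELECT 5 > ALL ('{}'::int[]);          -- true: zero elements
```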
Row Constructor Comparison
row_constructor operator row_constructor
Each side is a row constructor,
as described in .
The two row values must have the same number of fields.
Each side is evaluated and they are compared row-wise. Row constructor
comparisons are allowed when the operator is
=, <>, <, <=, > or >=.
Every row element must be of a type which has a default B-tree operator
class or the attempted comparison may generate an error.
Errors related to the number or types of elements might not occur if
the comparison is resolved using earlier columns.
The = and <> cases work slightly differently
from the others. Two rows are considered
equal if all their corresponding members are non-null and equal; the rows
are unequal if any corresponding members are non-null and unequal;
otherwise the result of the row comparison is unknown (null).
For the <, <=, > and >= cases, the row elements are compared left-to-right,
stopping as soon as an unequal or null pair of elements is found.
If either of this pair of elements is null, the result of the
row comparison is unknown (null); otherwise comparison of this pair
of elements determines the result. For example,
ROW(1,2,NULL) < ROW(1,3,0)
yields true, not null, because the third pair of elements is not
considered.
Prior to PostgreSQL 8.2, the
<, <=, > and >=
cases were not handled per the SQL specification. A comparison like
ROW(a,b) < ROW(c,d)
was implemented as
a < c AND b < d
whereas the correct behavior is equivalent to
a < c OR (a = c AND b < d).
row_constructor IS DISTINCT FROM row_constructor
This construct is similar to a <> row comparison,
but it does not yield null for null inputs. Instead, any null value is
considered unequal to (distinct from) any non-null value, and any two
nulls are considered equal (not distinct). Thus the result will
either be true or false, never null.
row_constructor IS NOT DISTINCT FROM row_constructor
This construct is similar to a = row comparison,
but it does not yield null for null inputs. Instead, any null value is
considered unequal to (distinct from) any non-null value, and any two
nulls are considered equal (not distinct). Thus the result will always
be either true or false, never null.
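For example, contrasting an ordinary = row comparison with the DISTINCT forms:

```sql
SELECT ROW(1, NULL) = ROW(1, NULL);                     -- null
SELECT ROW(1, NULL) IS DISTINCT FROM ROW(1, NULL);      -- false (not distinct)
SELECT ROW(1, NULL) IS NOT DISTINCT FROM ROW(1, NULL);  -- true
```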
Composite Type Comparison
record operator record
The SQL specification requires row-wise comparison to return NULL if the
result depends on comparing two NULL values or a NULL and a non-NULL.
PostgreSQL does this only when comparing the
results of two row constructors (as in
) or comparing a row constructor
to the output of a subquery (as in ).
In other contexts where two composite-type values are compared, two
NULL field values are considered equal, and a NULL is considered larger
than a non-NULL. This is necessary in order to have consistent sorting
and indexing behavior for composite types.
Each side is evaluated and they are compared row-wise. Composite type
comparisons are allowed when the operator is
=, <>, <, <=, > or >=,
or has semantics similar to one of these. (To be specific, an operator
can be a row comparison operator if it is a member of a B-tree operator
class, or is the negator of the = member of a B-tree operator
class.) The default behavior of the above operators is the same as for
IS [ NOT ] DISTINCT FROM for row constructors (see
).
To support matching of rows which include elements without a default
B-tree operator class, the following operators are defined for composite
type comparison:
*=, *<>, *<, *<=, *>, and *>=.
These operators compare the internal binary representation of the two
rows. Two rows might have a different binary representation even
though comparison of the two rows with the equality operator yields true.
The ordering of rows under these comparison operators is deterministic
but not otherwise meaningful. These operators are used internally for
materialized views and might be useful for other specialized purposes
such as replication but are not intended to be generally useful for
writing queries.
Set Returning Functions
This section describes functions that possibly return more than one row.
The most widely used functions in this class are series generating
functions, as detailed in and
. Other, more specialized
set-returning functions are described elsewhere in this manual.
See for ways to combine multiple
set-returning functions.
Series Generating FunctionsFunctionArgument TypeReturn TypeDescriptiongenerate_series(start, stop)int or bigintsetof int or setof bigint (same as argument type)
Generate a series of values, from start to stop
with a step size of one
generate_series(start, stop, step)int or bigintsetof int or setof bigint (same as argument type)
Generate a series of values, from start to stop
with a step size of step
generate_series(start, stop, step interval)timestamp or timestamp with time zonesetof timestamp or setof timestamp with time zone (same as argument type)
Generate a series of values, from start to stop
with a step size of step
When step is positive, zero rows are returned if
start is greater than stop.
Conversely, when step is negative, zero rows are
returned if start is less than stop.
Zero rows are also returned for NULL inputs. It is an error
for step to be zero. Some examples follow:
SELECT * FROM generate_series(2,4);
generate_series
-----------------
2
3
4
(3 rows)
SELECT * FROM generate_series(5,1,-2);
generate_series
-----------------
5
3
1
(3 rows)
SELECT * FROM generate_series(4,3);
generate_series
-----------------
(0 rows)
-- this example relies on the date-plus-integer operator
SELECT current_date + s.a AS dates FROM generate_series(0,14,7) AS s(a);
dates
------------
2004-02-05
2004-02-12
2004-02-19
(3 rows)
SELECT * FROM generate_series('2008-03-01 00:00'::timestamp,
'2008-03-04 12:00', '10 hours');
generate_series
---------------------
2008-03-01 00:00:00
2008-03-01 10:00:00
2008-03-01 20:00:00
2008-03-02 06:00:00
2008-03-02 16:00:00
2008-03-03 02:00:00
2008-03-03 12:00:00
2008-03-03 22:00:00
2008-03-04 08:00:00
(9 rows)
Subscript Generating FunctionsFunctionReturn TypeDescriptiongenerate_subscripts(array anyarray, dim int)setof int
Generate a series comprising the given array's subscripts.
generate_subscripts(array anyarray, dim int, reverse boolean)setof int
Generate a series comprising the given array's subscripts. When
reverse is true, the series is returned in
reverse order.
generate_subscripts is a convenience function that generates
the set of valid subscripts for the specified dimension of the given
array.
Zero rows are returned for arrays that do not have the requested dimension,
or for NULL arrays (but valid subscripts are returned for NULL array
elements). Some examples follow:
-- basic usage
SELECT generate_subscripts('{NULL,1,NULL,2}'::int[], 1) AS s;
s
---
1
2
3
4
(4 rows)
-- presenting an array, the subscript and the subscripted
-- value requires a subquery
SELECT * FROM arrays;
a
--------------------
{-1,-2}
{100,200,300}
(2 rows)
SELECT a AS array, s AS subscript, a[s] AS value
FROM (SELECT generate_subscripts(a, 1) AS s, a FROM arrays) foo;
array | subscript | value
---------------+-----------+-------
{-1,-2} | 1 | -1
{-1,-2} | 2 | -2
{100,200,300} | 1 | 100
{100,200,300} | 2 | 200
{100,200,300} | 3 | 300
(5 rows)
-- unnest a 2D array
CREATE OR REPLACE FUNCTION unnest2(anyarray)
RETURNS SETOF anyelement AS $$
select $1[i][j]
from generate_subscripts($1,1) g1(i),
generate_subscripts($1,2) g2(j);
$$ LANGUAGE sql IMMUTABLE;
CREATE FUNCTION
SELECT * FROM unnest2(ARRAY[[1,2],[3,4]]);
unnest2
---------
1
2
3
4
(4 rows)
When a function in the FROM clause is suffixed
by WITH ORDINALITY, a bigint column is
appended to the output which starts from 1 and increments by 1 for each row
of the function's output. This is most useful in the case of set returning
functions such as unnest().
-- set returning function WITH ORDINALITY
SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
ls | n
-----------------+----
pg_serial | 1
pg_twophase | 2
postmaster.opts | 3
pg_notify | 4
postgresql.conf | 5
pg_tblspc | 6
logfile | 7
base | 8
postmaster.pid | 9
pg_ident.conf | 10
global | 11
pg_clog | 12
pg_snapshots | 13
pg_multixact | 14
PG_VERSION | 15
pg_xlog | 16
pg_hba.conf | 17
pg_stat_tmp | 18
pg_subtrans | 19
(19 rows)
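A simpler example with unnest():

```sql
SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS t(item, n);
 item | n
------+---
 a    | 1
 b    | 2
 c    | 3
(3 rows)
```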
System Information Functions
This section describes several
functions that extract session and system information.
In addition to the functions listed in this section, there are a number of
functions related to the statistics system that also provide system
information. See for more
information.
Session Information FunctionsNameReturn TypeDescriptioncurrent_catalognamename of current database (called catalog in the SQL standard)current_database()namename of current databasecurrent_query()texttext of the currently executing query, as submitted
by the client (might contain more than one statement)current_schema[()]namename of current schemacurrent_schemas(boolean)name[]names of schemas in search path, optionally including implicit schemascurrent_usernameuser name of current execution contextinet_client_addr()inetaddress of the remote connectioninet_client_port()intport of the remote connectioninet_server_addr()inetaddress of the local connectioninet_server_port()intport of the local connectionpg_backend_pid()int
Process ID of the server process attached to the current session
pg_conf_load_time()timestamp with time zoneconfiguration load timepg_is_other_temp_schema(oid)booleanis schema another session's temporary schema?pg_listening_channels()setof textchannel names that the session is currently listening onpg_my_temp_schema()oidOID of session's temporary schema, or 0 if nonepg_postmaster_start_time()timestamp with time zoneserver start timepg_trigger_depth()intcurrent nesting level of PostgreSQL triggers
(0 if not called, directly or indirectly, from inside a trigger)session_usernamesession user nameusernameequivalent to current_userversion()textPostgreSQL version information
current_catalog, current_schema,
current_user, session_user,
and user have special syntactic status
in SQL: they must be called without trailing
parentheses. (In PostgreSQL, parentheses can optionally be used with
current_schema, but not with the others.)
The session_user is normally the user who initiated
the current database connection; but superusers can change this setting
with .
The current_user is the user identifier
that is applicable for permission checking. Normally it is equal
to the session user, but it can be changed with
.
It also changes during the execution of
functions with the attribute SECURITY DEFINER.
In Unix parlance, the session user is the real user and
the current user is the effective user.
current_schema returns the name of the schema that is
first in the search path (or a null value if the search path is
empty). This is the schema that will be used for any tables or
other named objects that are created without specifying a target schema.
current_schemas(boolean) returns an array of the names of all
schemas presently in the search path. The Boolean option determines whether
implicitly included system schemas such as pg_catalog are included in the
returned search path.
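For example, in a stock installation where search_path has its default value of "$user",public and no schema matching the session user name exists, one would typically see:

```sql
SELECT current_schemas(false);   -- {public}
SELECT current_schemas(true);    -- {pg_catalog,public}
```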
The search path can be altered at run time. The command is:
SET search_path TO schema [, schema, ...]
pg_listening_channels returns a set of names of
channels that the current session is listening to. See for more information.
inet_client_addr returns the IP address of the
current client, and inet_client_port returns the
port number.
inet_server_addr returns the IP address on which
the server accepted the current connection, and
inet_server_port returns the port number.
All these functions return NULL if the current connection is via a
Unix-domain socket.
pg_my_temp_schema returns the OID of the current
session's temporary schema, or zero if it has none (because it has not
created any temporary tables).
pg_is_other_temp_schema returns true if the
given OID is the OID of another session's temporary schema.
(This can be useful, for example, to exclude other sessions' temporary
tables from a catalog display.)
pg_postmaster_start_time returns the
timestamp with time zone when the
server started.
pg_conf_load_time returns the
timestamp with time zone when the
server configuration files were last loaded.
(If the current session was alive at the time, this will be the time
when the session itself re-read the configuration files, so the
reading will vary a little in different sessions. Otherwise it is
the time when the postmaster process re-read the configuration files.)
version returns a string describing the
PostgreSQL server's version.
The following table lists functions that
allow the user to query object access privileges programmatically.
See for more information about
privileges.
Access Privilege Inquiry FunctionsNameReturn TypeDescriptionhas_any_column_privilege(user,
table,
privilege)booleandoes user have privilege for any column of tablehas_any_column_privilege(table,
privilege)booleandoes current user have privilege for any column of tablehas_column_privilege(user,
table,
column,
privilege)booleandoes user have privilege for columnhas_column_privilege(table,
column,
privilege)booleandoes current user have privilege for columnhas_database_privilege(user,
database,
privilege)booleandoes user have privilege for databasehas_database_privilege(database,
privilege)booleandoes current user have privilege for databasehas_foreign_data_wrapper_privilege(user,
fdw,
privilege)booleandoes user have privilege for foreign-data wrapperhas_foreign_data_wrapper_privilege(fdw,
privilege)booleandoes current user have privilege for foreign-data wrapperhas_function_privilege(user,
function,
privilege)booleandoes user have privilege for functionhas_function_privilege(function,
privilege)booleandoes current user have privilege for functionhas_language_privilege(user,
language,
privilege)booleandoes user have privilege for languagehas_language_privilege(language,
privilege)booleandoes current user have privilege for languagehas_schema_privilege(user,
schema,
privilege)booleandoes user have privilege for schemahas_schema_privilege(schema,
privilege)booleandoes current user have privilege for schemahas_sequence_privilege(user,
sequence,
privilege)booleandoes user have privilege for sequencehas_sequence_privilege(sequence,
privilege)booleandoes current user have privilege for sequencehas_server_privilege(user,
server,
privilege)booleandoes user have privilege for foreign serverhas_server_privilege(server,
privilege)booleandoes current user have privilege for foreign serverhas_table_privilege(user,
table,
privilege)booleandoes user have privilege for tablehas_table_privilege(table,
privilege)booleandoes current user have privilege for tablehas_tablespace_privilege(user,
tablespace,
privilege)booleandoes user have privilege for tablespacehas_tablespace_privilege(tablespace,
privilege)booleandoes current user have privilege for tablespacepg_has_role(user,
role,
privilege)booleandoes user have privilege for rolepg_has_role(role,
privilege)booleandoes current user have privilege for role
has_table_privilege checks whether a user
can access a table in a particular way. The user can be
specified by name, by OID (pg_authid.oid),
public to indicate the PUBLIC pseudo-role, or if the argument is
omitted
current_user is assumed. The table can be specified
by name or by OID. (Thus, there are actually six variants of
has_table_privilege, which can be distinguished by
the number and types of their arguments.) When specifying by name,
the name can be schema-qualified if necessary.
The desired access privilege type
is specified by a text string, which must evaluate to one of the
values SELECT, INSERT,
UPDATE, DELETE, TRUNCATE,
REFERENCES, or TRIGGER. Optionally,
WITH GRANT OPTION can be added to a privilege type to test
whether the privilege is held with grant option. Also, multiple privilege
types can be listed separated by commas, in which case the result will
be true if any of the listed privileges is held.
(Case of the privilege string is not significant, and extra whitespace
is allowed between but not within privilege names.)
Some examples:
SELECT has_table_privilege('myschema.mytable', 'select');
SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION');
has_sequence_privilege checks whether a user
can access a sequence in a particular way. The possibilities for its
arguments are analogous to has_table_privilege.
The desired access privilege type must evaluate to one of
USAGE,
SELECT, or
UPDATE.
has_any_column_privilege checks whether a user can
access any column of a table in a particular way.
Its argument possibilities
are analogous to has_table_privilege,
except that the desired access privilege type must evaluate to some
combination of
SELECT,
INSERT,
UPDATE, or
REFERENCES. Note that having any of these privileges
at the table level implicitly grants it for each column of the table,
so has_any_column_privilege will always return
true if has_table_privilege does for the same
arguments. But has_any_column_privilege also succeeds if
there is a column-level grant of the privilege for at least one column.
has_column_privilege checks whether a user
can access a column in a particular way.
Its argument possibilities
are analogous to has_table_privilege,
with the addition that the column can be specified either by name
or attribute number.
The desired access privilege type must evaluate to some combination of
SELECT,
INSERT,
UPDATE, or
REFERENCES. Note that having any of these privileges
at the table level implicitly grants it for each column of the table.
has_database_privilege checks whether a user
can access a database in a particular way.
Its argument possibilities
are analogous to has_table_privilege.
The desired access privilege type must evaluate to some combination of
CREATE,
CONNECT,
TEMPORARY, or
TEMP (which is equivalent to
TEMPORARY).
has_function_privilege checks whether a user
can access a function in a particular way.
Its argument possibilities
are analogous to has_table_privilege.
When specifying a function by a text string rather than by OID,
the allowed input is the same as for the regprocedure data type
(see ).
The desired access privilege type must evaluate to
EXECUTE.
An example is:
SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute');
has_foreign_data_wrapper_privilege checks whether a user
can access a foreign-data wrapper in a particular way.
Its argument possibilities
are analogous to has_table_privilege.
The desired access privilege type must evaluate to
USAGE.
has_language_privilege checks whether a user
can access a procedural language in a particular way.
Its argument possibilities
are analogous to has_table_privilege.
The desired access privilege type must evaluate to
USAGE.
has_schema_privilege checks whether a user
can access a schema in a particular way.
Its argument possibilities
are analogous to has_table_privilege.
The desired access privilege type must evaluate to some combination of
CREATE or
USAGE.
has_server_privilege checks whether a user
can access a foreign server in a particular way.
Its argument possibilities
are analogous to has_table_privilege.
The desired access privilege type must evaluate to
USAGE.
has_tablespace_privilege checks whether a user
can access a tablespace in a particular way.
Its argument possibilities
are analogous to has_table_privilege.
The desired access privilege type must evaluate to
CREATE.
pg_has_role checks whether a user
can access a role in a particular way.
Its argument possibilities
are analogous to has_table_privilege,
except that public is not allowed as a user name.
The desired access privilege type must evaluate to some combination of
MEMBER or
USAGE.
MEMBER denotes direct or indirect membership in
the role (that is, the right to do SET ROLE), while
USAGE denotes whether the privileges of the role
are immediately available without doing SET ROLE.
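For example (joe and admins are hypothetical role names used for illustration):

```sql
-- does role joe hold membership in role admins?
SELECT pg_has_role('joe', 'admins', 'member');
-- are the privileges of admins available to the current user without SET ROLE?
SELECT pg_has_role('admins', 'usage');
```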
shows functions that
determine whether a certain object is visible in the
current schema search path.
For example, a table is said to be visible if its
containing schema is in the search path and no table of the same
name appears earlier in the search path. This is equivalent to the
statement that the table can be referenced by name without explicit
schema qualification. To list the names of all visible tables:
SELECT relname FROM pg_class WHERE pg_table_is_visible(oid);
Schema Visibility Inquiry FunctionsNameReturn TypeDescriptionpg_collation_is_visible(collation_oid)booleanis collation visible in search pathpg_conversion_is_visible(conversion_oid)booleanis conversion visible in search pathpg_function_is_visible(function_oid)booleanis function visible in search pathpg_opclass_is_visible(opclass_oid)booleanis operator class visible in search pathpg_operator_is_visible(operator_oid)booleanis operator visible in search pathpg_opfamily_is_visible(opclass_oid)booleanis operator family visible in search pathpg_table_is_visible(table_oid)booleanis table visible in search pathpg_ts_config_is_visible(config_oid)booleanis text search configuration visible in search pathpg_ts_dict_is_visible(dict_oid)booleanis text search dictionary visible in search pathpg_ts_parser_is_visible(parser_oid)booleanis text search parser visible in search pathpg_ts_template_is_visible(template_oid)booleanis text search template visible in search pathpg_type_is_visible(type_oid)booleanis type (or domain) visible in search path
Each function performs the visibility check for one type of database
object. Note that pg_table_is_visible can also be used
with views, indexes and sequences; pg_type_is_visible
can also be used with domains. For functions and operators, an object in
the search path is visible if there is no object of the same name
and argument data type(s) earlier in the path. For operator
classes, both name and associated index access method are considered.
All these functions require object OIDs to identify the object to be
checked. If you want to test an object by name, it is convenient to use
the OID alias types (regclass, regtype,
regprocedure, regoperator, regconfig,
or regdictionary),
for example:
SELECT pg_type_is_visible('myschema.widget'::regtype);
Note that it would not make much sense to test a non-schema-qualified
type name in this way — if the name can be recognized at all, it must be visible.
The following table lists functions that
extract information from the system catalogs.
System Catalog Information FunctionsNameReturn TypeDescriptionformat_type(type_oid, typemod)textget SQL name of a data typepg_describe_object(catalog_id, object_id, object_sub_id)textget description of a database objectpg_identify_object(catalog_id oid, object_id oid, object_sub_id integer)type text, schema text, name text, identity textget identity of a database objectpg_get_constraintdef(constraint_oid)textget definition of a constraintpg_get_constraintdef(constraint_oid, pretty_bool)textget definition of a constraintpg_get_expr(pg_node_tree, relation_oid)textdecompile internal form of an expression, assuming that any Vars
in it refer to the relation indicated by the second parameterpg_get_expr(pg_node_tree, relation_oid, pretty_bool)textdecompile internal form of an expression, assuming that any Vars
in it refer to the relation indicated by the second parameterpg_get_functiondef(func_oid)textget definition of a functionpg_get_function_arguments(func_oid)textget argument list of function's definition (with default values)pg_get_function_identity_arguments(func_oid)textget argument list to identify a function (without default values)pg_get_function_result(func_oid)textget RETURNS clause for functionpg_get_indexdef(index_oid)textget CREATE INDEX command for indexpg_get_indexdef(index_oid, column_no, pretty_bool)textget CREATE INDEX command for index,
or definition of just one index column when
column_no is not zeropg_get_keywords()setof recordget list of SQL keywords and their categoriespg_get_ruledef(rule_oid)textget CREATE RULE command for rulepg_get_ruledef(rule_oid, pretty_bool)textget CREATE RULE command for rulepg_get_serial_sequence(table_name, column_name)textget name of the sequence that a serial, smallserial or bigserial column
usespg_get_triggerdef(trigger_oid)textget CREATE [ CONSTRAINT ] TRIGGER command for triggerpg_get_triggerdef(trigger_oid, pretty_bool)textget CREATE [ CONSTRAINT ] TRIGGER command for triggerpg_get_userbyid(role_oid)nameget role name with given OIDpg_get_viewdef(view_name)textget underlying SELECT command for view or materialized view (deprecated)pg_get_viewdef(view_name, pretty_bool)textget underlying SELECT command for view or materialized view (deprecated)pg_get_viewdef(view_oid)textget underlying SELECT command for view or materialized viewpg_get_viewdef(view_oid, pretty_bool)textget underlying SELECT command for view or materialized viewpg_get_viewdef(view_oid, wrap_column int)textget underlying SELECT command for view or
materialized view; lines with fields are wrapped to specified
number of columns, pretty-printing is impliedpg_options_to_table(reloptions)setof recordget the set of storage option name/value pairspg_tablespace_databases(tablespace_oid)setof oidget the set of database OIDs that have objects in the tablespacepg_tablespace_location(tablespace_oid)textget the path in the file system that this tablespace is located inpg_typeof(any)regtypeget the data type of any valuecollation for (any)textget the collation of the argument
format_type returns the SQL name of a data type that
is identified by its type OID and possibly a type modifier. Pass NULL
for the type modifier if no specific modifier is known.
pg_get_keywords returns a set of records describing
the SQL keywords recognized by the server. The word column
contains the keyword. The catcode column contains a
category code: U for unreserved, C for column name,
T for type or function name, or R for reserved.
The catdesc column contains a possibly-localized string
describing the category.
pg_get_constraintdef,
pg_get_indexdef, pg_get_ruledef,
and pg_get_triggerdef, respectively reconstruct the
creating command for a constraint, index, rule, or trigger. (Note that this
is a decompiled reconstruction, not the original text of the command.)
pg_get_expr decompiles the internal form of an
individual expression, such as the default value for a column. It can be
useful when examining the contents of system catalogs. If the expression
might contain Vars, specify the OID of the relation they refer to as the
second parameter; if no Vars are expected, zero is sufficient.
pg_get_viewdef reconstructs the SELECT
query that defines a view. Most of these functions come in two variants,
one of which can optionally pretty-print the result. The
pretty-printed format is more readable, but the default format is more
likely to be interpreted the same way by future versions of
PostgreSQL; avoid using pretty-printed output for dump
purposes. Passing false for the pretty-print parameter yields
the same result as the variant that does not have the parameter at all.
pg_get_functiondef returns a complete
CREATE OR REPLACE FUNCTION statement for a function.
pg_get_function_arguments returns the argument list
of a function, in the form it would need to appear in within
CREATE FUNCTION.
pg_get_function_result similarly returns the
appropriate RETURNS clause for the function.
pg_get_function_identity_arguments returns the
argument list necessary to identify a function, in the form it
would need to appear in within ALTER FUNCTION, for
instance. This form omits default values.
pg_get_serial_sequence returns the name of the
sequence associated with a column, or NULL if no sequence is associated
with the column. The first input parameter is a table name with
optional schema, and the second parameter is a column name. Because
the first parameter is potentially a schema and table, it is not treated
as a double-quoted identifier, meaning it is lower cased by default,
while the second parameter, being just a column name, is treated as
double-quoted and has its case preserved. The function returns a value
suitably formatted for passing to sequence functions (see ). This association can be modified or
removed with ALTER SEQUENCE OWNED BY. (The function
probably should have been called
pg_get_owned_sequence; its current name reflects the fact
that it's typically used with serial or bigserial
columns.)
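For example, assuming a table created in the public schema (mytable is a hypothetical name):

```sql
CREATE TABLE mytable (id serial, val text);
SELECT pg_get_serial_sequence('mytable', 'id');
 pg_get_serial_sequence
------------------------
 public.mytable_id_seq
(1 row)
```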
pg_get_userbyid extracts a role's name given
its OID.
pg_options_to_table returns the set of storage
option name/value pairs
(option_name/option_value) when passed
pg_class.reloptions or
pg_attribute.attoptions.
pg_tablespace_databases allows a tablespace to be
examined. It returns the set of OIDs of databases that have objects stored
in the tablespace. If this function returns any rows, the tablespace is not
empty and cannot be dropped. To display the specific objects populating the
tablespace, you will need to connect to the databases identified by
pg_tablespace_databases and query their
pg_class catalogs.
pg_describe_object returns a textual description of a database
object specified by catalog OID, object OID and a (possibly zero) sub-object ID.
This description is intended to be human-readable, and might be translated,
depending on server configuration.
This is useful to determine the identity of an object as stored in the
pg_depend catalog.
pg_identify_object returns a row containing enough information
to uniquely identify the database object specified by catalog OID, object OID and a
(possibly zero) sub-object ID. This information is intended to be machine-readable,
and is never translated.
type> identifies the type of database object;
schema> is the schema name that the object belongs in, or
NULL> for object types that do not belong to schemas;
name> is the name of the object, quoted if necessary, only
present if it can be used (alongside schema name, if pertinent) as a unique
identifier of the object, otherwise NULL>;
identity> is the complete object identity, with the precise format
depending on object type, and each part within the format being
schema-qualified and quoted as necessary.
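For example, applied to the pg_class> entry for the
pg_class> catalog itself (so the catalog OID and object OID
happen to coincide):
SELECT * FROM pg_identify_object('pg_class'::regclass, 'pg_class'::regclass, 0);
 type  |   schema   |   name   |      identity
-------+------------+----------+---------------------
 table | pg_catalog | pg_class | pg_catalog.pg_class
(1 row)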
pg_typeof returns the OID of the data type of the
value that is passed to it. This can be helpful for troubleshooting or
dynamically constructing SQL queries. The function is declared as
returning regtype>, which is an OID alias type (see
); this means that it is the same as an
OID for comparison purposes but displays as a type name. For example:
SELECT pg_typeof(33);
pg_typeof
-----------
integer
(1 row)
SELECT typlen FROM pg_type WHERE oid = pg_typeof(33);
typlen
--------
4
(1 row)
The expression collation for returns the collation of the
value that is passed to it. Example:
SELECT collation for (description) FROM pg_description LIMIT 1;
pg_collation_for
------------------
"default"
(1 row)
SELECT collation for ('foo' COLLATE "de_DE");
pg_collation_for
------------------
"de_DE"
(1 row)
The value might be quoted and schema-qualified. If no collation is derived
for the argument expression, then a null value is returned. If the argument
is not of a collatable data type, then an error is raised.
col_descriptionobj_descriptionshobj_descriptioncommentabout database objects
The functions shown in
extract comments previously stored with the
command. A null value is returned if no
comment could be found for the specified parameters.
Comment Information FunctionsNameReturn TypeDescriptioncol_description(table_oid, column_number)textget comment for a table columnobj_description(object_oid, catalog_name)textget comment for a database objectobj_description(object_oid)textget comment for a database object (deprecated)shobj_description(object_oid, catalog_name)textget comment for a shared database object
col_description returns the comment for a table
column, which is specified by the OID of its table and its column number.
(obj_description cannot be used for table columns
since columns do not have OIDs of their own.)
The two-parameter form of obj_description returns the
comment for a database object specified by its OID and the name of the
containing system catalog. For example,
obj_description(123456,'pg_class')
would retrieve the comment for the table with OID 123456.
The one-parameter form of obj_description requires only
the object OID. It is deprecated since there is no guarantee that
OIDs are unique across different system catalogs; therefore, the wrong
comment might be returned.
shobj_description is used just like
obj_description except it is used for retrieving
comments on shared objects. Some system catalogs are global to all
databases within each cluster, and the descriptions for objects in them
are stored globally as well.
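For example, to retrieve the comment on a hypothetical table
mytab> and on its first column (either call returns null if no
comment has been stored):
SELECT obj_description('mytab'::regclass, 'pg_class');
SELECT col_description('mytab'::regclass, 1);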
txid_currenttxid_current_snapshottxid_snapshot_xiptxid_snapshot_xmaxtxid_snapshot_xmintxid_visible_in_snapshot
The functions shown in
provide server transaction information in an exportable form. The main
use of these functions is to determine which transactions were committed
between two snapshots.
Transaction IDs and SnapshotsNameReturn TypeDescriptiontxid_current()bigintget current transaction IDtxid_current_snapshot()txid_snapshotget current snapshottxid_snapshot_xip(txid_snapshot)setof bigintget in-progress transaction IDs in snapshottxid_snapshot_xmax(txid_snapshot)bigintget xmax of snapshottxid_snapshot_xmin(txid_snapshot)bigintget xmin of snapshottxid_visible_in_snapshot(bigint, txid_snapshot)booleanis transaction ID visible in snapshot? (do not use with subtransaction ids)
The internal transaction ID type (xid>) is 32 bits wide and
wraps around every 4 billion transactions. However, these functions
export a 64-bit format that is extended with an epoch> counter
so it will not wrap around during the life of an installation.
The data type used by these functions, txid_snapshot,
stores information about transaction ID
visibility at a particular moment in time. Its components are
described in .
Snapshot ComponentsNameDescriptionxmin
Earliest transaction ID (txid) that is still active. All earlier
transactions will either be committed and visible, or rolled
back and dead.
xmax
First as-yet-unassigned txid. All txids greater than or equal to this
are not yet started as of the time of the snapshot, and thus invisible.
xip_list
Active txids at the time of the snapshot. The list
includes only those active txids between xmin>
and xmax>; there might be active txids higher
than xmax>. A txid that is xmin <= txid <
xmax and not in this list was already completed
at the time of the snapshot, and thus either visible or
dead according to its commit status. The list does not
include txids of subtransactions.
txid_snapshot>'s textual representation is
xmin>:xmax>:xip_list>.
For example 10:20:10,14,15 means
xmin=10, xmax=20, xip_list=10, 14, 15.
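For example, the current transaction ID and snapshot can be inspected like
this (the values returned depend on transaction activity on the server):
SELECT txid_current(), txid_current_snapshot();
SELECT txid_snapshot_xip(txid_current_snapshot());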
System Administration Functions
The functions described in this section are used to control and
monitor a PostgreSQL> installation.
Configuration Settings Functions shows the functions
available to query and alter run-time configuration parameters.
Configuration Settings FunctionsNameReturn TypeDescriptioncurrent_settingcurrent_setting(setting_name)textget current value of settingset_configset_config(setting_name,
new_value,
is_local)textset parameter and return new value
SETSHOWconfigurationof the serverfunctions
The function current_setting yields the
current value of the setting setting_name.
It corresponds to the SQL command
SHOW. An example:
SELECT current_setting('datestyle');
current_setting
-----------------
ISO, MDY
(1 row)
set_config sets the parameter
setting_name to
new_value. If
is_local is true, the
new value will only apply to the current transaction. If you want
the new value to apply for the current session, use
false instead. The function corresponds to the
SQL command SET. An example:
SELECT set_config('log_statement_stats', 'off', false);
set_config
------------
off
(1 row)
Server Signaling Functionspg_cancel_backendpg_reload_confpg_rotate_logfilepg_terminate_backendsignalbackend processes
The functions shown in send control signals to
other server processes. Use of these functions is usually restricted
to superusers, with noted exceptions.
Server Signaling FunctionsNameReturn TypeDescriptionpg_cancel_backend(pidint>)booleanCancel a backend's current query. You can execute this against
another backend that has exactly the same role as the user calling the
function. In all other cases, you must be a superuser.
pg_reload_conf()booleanCause server processes to reload their configuration filespg_rotate_logfile()booleanRotate server's log filepg_terminate_backend(pidint>)booleanTerminate a backend. You can execute this against
another backend that has exactly the same role as the user
calling the function. In all other cases, you must be a
superuser.
Each of these functions returns true if
successful and false otherwise.
pg_cancel_backend> and pg_terminate_backend>
send signals (SIGINT> or SIGTERM>
respectively) to backend processes identified by process ID.
The process ID of an active backend can be found from
the pid column of the
pg_stat_activity view, or by listing the
postgres processes on the server (using
ps> on Unix or the Task
Manager> on Windows>).
The role of an active backend can be found from the
usename column of the
pg_stat_activity view.
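For example, the following cancels the current query of every backend
running under the hypothetical role webuser>, subject to the
privilege rules described above:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE usename = 'webuser';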
pg_reload_conf> sends a SIGHUP> signal
to the server, causing configuration files
to be reloaded by all server processes.
pg_rotate_logfile> signals the log-file manager to switch
to a new output file immediately. This works only when the built-in
log collector is running, since otherwise there is no log-file manager
subprocess.
Backup Control Functionsbackuppg_create_restore_pointpg_current_xlog_insert_locationpg_current_xlog_locationpg_start_backuppg_stop_backuppg_is_in_backuppg_backup_start_timepg_switch_xlogpg_xlogfile_namepg_xlogfile_name_offsetpg_xlog_location_diff
The functions shown in assist in making on-line backups.
These functions cannot be executed during recovery (except
pg_is_in_backup, pg_backup_start_time
and pg_xlog_location_diff).
Backup Control FunctionsNameReturn TypeDescriptionpg_create_restore_point(name> text>)pg_lsnCreate a named point for performing restore (restricted to superusers)pg_current_xlog_insert_location()pg_lsnGet current transaction log insert locationpg_current_xlog_location()pg_lsnGet current transaction log write locationpg_start_backup(label> text> , fast> boolean> )pg_lsnPrepare for performing on-line backup (restricted to superusers or replication roles)pg_stop_backup()pg_lsnFinish performing on-line backup (restricted to superusers or replication roles)pg_is_in_backup()boolTrue if an on-line exclusive backup is still in progress.pg_backup_start_time()timestamp with time zoneGet start time of an on-line exclusive backup in progress.pg_switch_xlog()pg_lsnForce switch to a new transaction log file (restricted to superusers)pg_xlogfile_name(location> pg_lsn>)textConvert transaction log location string to file namepg_xlogfile_name_offset(location> pg_lsn>)text>, integer>Convert transaction log location string to file name and decimal byte offset within filepg_xlog_location_diff(location> pg_lsn>, location> pg_lsn>)numeric>Calculate the difference between two transaction log locations
pg_start_backup> accepts an
arbitrary user-defined label for the backup. (Typically this would be
the name under which the backup dump file will be stored.) The function
writes a backup label file (backup_label>) into the
database cluster's data directory, performs a checkpoint,
and then returns the backup's starting transaction log location.
The user can ignore this result value, but it is
provided in case it is useful.
postgres=# select pg_start_backup('label_goes_here');
pg_start_backup
-----------------
0/D4445B8
(1 row)
There is an optional second parameter of type boolean. If true>,
it specifies executing pg_start_backup> as quickly as
possible. This forces an immediate checkpoint which will cause a
spike in I/O operations, slowing any concurrently executing queries.
pg_stop_backup> removes the label file created by
pg_start_backup>, and creates a backup history file in
the transaction log archive area. The history file includes the label given to
pg_start_backup>, the starting and ending transaction log locations for
the backup, and the starting and ending times of the backup. The return
value is the backup's ending transaction log location (which again
can be ignored). After recording the ending location, the current
transaction log insertion
point is automatically advanced to the next transaction log file, so that the
ending transaction log file can be archived immediately to complete the backup.
pg_switch_xlog> moves to the next transaction log file, allowing the
current file to be archived (assuming you are using continuous archiving).
The return value is the ending transaction log location + 1 within the just-completed transaction log file.
If there has been no transaction log activity since the last transaction log switch,
pg_switch_xlog> does nothing and returns the start location
of the transaction log file currently in use.
pg_create_restore_point> creates a named transaction log
record that can be used as recovery target, and returns the corresponding
transaction log location. The given name can then be used with
to specify the point up to which
recovery will proceed. Avoid creating multiple restore points with the
same name, since recovery will stop at the first one whose name matches
the recovery target.
pg_current_xlog_location> displays the current transaction log write
location in the same format used by the above functions. Similarly,
pg_current_xlog_insert_location> displays the current transaction log
insertion point. The insertion point is the logical> end
of the transaction log
at any instant, while the write location is the end of what has actually
been written out from the server's internal buffers. The write location
is the end of what can be examined from outside the server, and is usually
what you want if you are interested in archiving partially-complete transaction log
files. The insertion point is made available primarily for server
debugging purposes. These are both read-only operations and do not
require superuser permissions.
You can use pg_xlogfile_name_offset> to extract the
corresponding transaction log file name and byte offset from the results of any of the
above functions. For example:
postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
file_name | file_offset
--------------------------+-------------
00000001000000000000000D | 4039624
(1 row)
Similarly, pg_xlogfile_name> extracts just the transaction log file name.
When the given transaction log location is exactly at a transaction log file boundary, both
these functions return the name of the preceding transaction log file.
This is usually the desired behavior for managing transaction log archiving
behavior, since the preceding file is the last one that currently
needs to be archived.
pg_xlog_location_diff> calculates the difference in bytes
between two transaction log locations. It can be used with
pg_stat_replication or some functions shown in
to get the replication lag.
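For example, the per-standby replication lag in bytes can be computed on the
primary like this (replay_location> is a column of the
pg_stat_replication view):
SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS byte_lag
FROM pg_stat_replication;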
For details about proper usage of these functions, see
.
Recovery Control Functionspg_is_in_recoverypg_last_xlog_receive_locationpg_last_xlog_replay_locationpg_last_xact_replay_timestamp
The functions shown in provide information
about the current status of the standby.
These functions may be executed both during recovery and in normal running.
Recovery Information FunctionsNameReturn TypeDescriptionpg_is_in_recovery()boolTrue if recovery is still in progress.
pg_last_xlog_receive_location()pg_lsnGet last transaction log location received and synced to disk by
streaming replication. While streaming replication is in progress
this will increase monotonically. If recovery has completed this will
remain static at
the value of the last WAL record received and synced to disk during
recovery. If streaming replication is disabled, or if it has not yet
started, the function returns NULL.
pg_last_xlog_replay_location()pg_lsnGet last transaction log location replayed during recovery.
If recovery is still in progress this will increase monotonically.
If recovery has completed then this value will remain static at
the value of the last WAL record applied during that recovery.
When the server has been started normally without recovery
the function returns NULL.
pg_last_xact_replay_timestamp()timestamp with time zoneGet time stamp of last transaction replayed during recovery.
This is the time at which the commit or abort WAL record for that
transaction was generated on the primary.
If no transactions have been replayed during recovery, this function
returns NULL. Otherwise, if recovery is still in progress this will
increase monotonically. If recovery has completed then this value will
remain static at the value of the last transaction applied during that
recovery. When the server has been started normally without recovery
the function returns NULL.
pg_is_xlog_replay_pausedpg_xlog_replay_pausepg_xlog_replay_resume
The functions shown in control the progress of recovery.
These functions may be executed only during recovery.
Recovery Control FunctionsNameReturn TypeDescriptionpg_is_xlog_replay_paused()boolTrue if recovery is paused.
pg_xlog_replay_pause()voidPauses recovery immediately.
pg_xlog_replay_resume()voidRestarts recovery if it was paused.
While recovery is paused no further database changes are applied.
If in hot standby, all new queries will see the same consistent snapshot
of the database, and no further query conflicts will be generated until
recovery is resumed.
If streaming replication is disabled, the paused state may continue
indefinitely without problem. While streaming replication is in
progress WAL records will continue to be received, which will
eventually fill available disk space, depending upon the duration of
the pause, the rate of WAL generation and available disk space.
Snapshot Synchronization Functionspg_export_snapshotPostgreSQL> allows database sessions to synchronize their
snapshots. A snapshot> determines which data is visible to the
transaction that is using the snapshot. Synchronized snapshots are
necessary when two or more sessions need to see identical content in the
database. If two sessions just start their transactions independently,
there is always a possibility that some third transaction commits
between the executions of the two START TRANSACTION> commands,
so that one session sees the effects of that transaction and the other
does not.
To solve this problem, PostgreSQL> allows a transaction to
export> the snapshot it is using. As long as the exporting
transaction remains open, other transactions can import> its
snapshot, and thereby be guaranteed that they see exactly the same view
of the database that the first transaction sees. But note that any
database changes made by any one of these transactions remain invisible
to the other transactions, as is usual for changes made by uncommitted
transactions. So the transactions are synchronized with respect to
pre-existing data, but act normally for changes they make themselves.
Snapshots are exported with the pg_export_snapshot> function,
shown in , and
imported with the command.
Snapshot Synchronization FunctionsNameReturn TypeDescriptionpg_export_snapshot()textSave the current snapshot and return its identifier
The function pg_export_snapshot> saves the current snapshot
and returns a text> string identifying the snapshot. This string
must be passed (outside the database) to clients that want to import the
snapshot. The snapshot is available for import only until the end of the
transaction that exported it. A transaction can export more than one
snapshot, if needed. Note that doing so is only useful in READ
COMMITTED> transactions, since in REPEATABLE READ> and
higher isolation levels, transactions use the same snapshot throughout
their lifetime. Once a transaction has exported any snapshots, it cannot
be prepared with .
See for details of how to use an
exported snapshot.
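A typical export, run in the session whose view of the data is to be shared,
looks like this (the exact format of the returned identifier varies across
server versions):
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();
 pg_export_snapshot
--------------------
 000003A1-1
(1 row)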
Replication Functions
PostgreSQL exposes a number of functions for controlling and interacting
with replication features. See
and .
Many of these functions have equivalent commands in the replication
protocol; see .
The sections , and are also relevant for replication.
Replication SQL FunctionsFunctionReturn TypeDescriptionpg_create_physical_replication_slotpg_create_physical_replication_slot(slotnamename)
(slotnamename, xlog_positionpg_lsn)
Creates a new physical replication slot named
slotname. Streaming changes from a physical slot
is only possible with the walsender protocol - see . Corresponds to the walsender protocol
command CREATE_REPLICATION_SLOT ... PHYSICAL.
pg_drop_replication_slotpg_drop_replication_slot(slotnamename)void
Drops the physical or logical replication slot
named slotname. Same as walsender protocol
command DROP_REPLICATION_SLOT>.
pg_create_logical_replication_slotpg_create_logical_replication_slot(slotnamename, pluginname)
(slotnamename, xlog_positionpg_lsn)
Creates a new logical (decoding) replication slot named
slotname using the output plugin
plugin. A call to this function has the same
effect as the replication protocol command
CREATE_REPLICATION_SLOT ... LOGICAL.
pg_logical_slot_get_changespg_logical_slot_get_changes(slotnamename, upto_lsnpg_lsn, upto_nchangesint, VARIADIC optionstext[])
(locationpg_lsn, xidxid, datatext)
Returns changes in the slot slotname, starting
from the point at which changes were last consumed. If
upto_lsn> and upto_nchanges> are NULL,
logical decoding will continue until end of WAL. If
upto_lsn> is non-NULL, decoding will include only
those transactions which commit prior to the specified LSN. If
upto_nchanges is non-NULL, decoding will
stop when the number of rows produced by decoding exceeds
the specified value. Note, however, that the actual number of
rows returned may be larger, since this limit is only checked after
adding the rows produced when decoding each new transaction commit.
pg_logical_slot_peek_changespg_logical_slot_peek_changes(slotnamename, upto_lsnpg_lsn, upto_nchangesint, VARIADIC optionstext[])
(locationtext, xidxid, datatext)
Behaves just like
the pg_logical_slot_get_changes() function,
except that changes are not consumed; that is, they will be returned
again on future calls.
pg_logical_slot_get_binary_changespg_logical_slot_get_binary_changes(slotnamename, upto_lsnpg_lsn, upto_nchangesint, VARIADIC optionstext[])
(locationpg_lsn, xidxid, databytea)
Behaves just like
the pg_logical_slot_get_changes() function,
except that changes are returned as bytea.
pg_logical_slot_peek_binary_changespg_logical_slot_peek_binary_changes(slotnamename, upto_lsnpg_lsn, upto_nchangesint, VARIADIC optionstext[])
(locationpg_lsn, xidxid, databytea)
Behaves just like
the pg_logical_slot_get_changes() function,
except that changes are returned as bytea and that
changes are not consumed; that is, they will be returned again
on future calls.
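For example, a logical slot can be exercised with the
test_decoding> example output plugin shipped in
contrib>:
SELECT * FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
SELECT pg_drop_replication_slot('regression_slot');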
Database Object Management Functions
The functions shown in calculate
the disk space usage of database objects.
pg_column_sizepg_database_sizepg_indexes_sizepg_relation_sizepg_size_prettypg_table_sizepg_tablespace_sizepg_total_relation_size
Database Object Size FunctionsNameReturn TypeDescriptionpg_column_size(any)intNumber of bytes used to store a particular value (possibly compressed)pg_database_size(oid)bigintDisk space used by the database with the specified OIDpg_database_size(name)bigintDisk space used by the database with the specified namepg_indexes_size(regclass)bigint
Total disk space used by indexes attached to the specified table
pg_relation_size(relationregclass, forktext)bigint
Disk space used by the specified fork ('main',
'fsm' or 'vm'>)
of the specified table or index
pg_relation_size(relationregclass)bigint
Shorthand for pg_relation_size(..., 'main')pg_size_pretty(bigint)text
Converts a size in bytes expressed as a 64-bit integer into a
human-readable format with size units
pg_size_pretty(numeric)text
Converts a size in bytes expressed as a numeric value into a
human-readable format with size units
pg_table_size(regclass)bigint
Disk space used by the specified table, excluding indexes
(but including TOAST, free space map, and visibility map)
pg_tablespace_size(oid)bigintDisk space used by the tablespace with the specified OIDpg_tablespace_size(name)bigintDisk space used by the tablespace with the specified namepg_total_relation_size(regclass)bigint
Total disk space used by the specified table,
including all indexes and TOAST> data
pg_column_size> shows the space used to store any individual
data value.
pg_total_relation_size> accepts the OID or name of a
table or toast table, and returns the total on-disk space used for
that table, including all associated indexes. This function is
equivalent to pg_table_size> + pg_indexes_size>.
pg_table_size> accepts the OID or name of a table and
returns the disk space needed for that table, exclusive of indexes.
(TOAST space, free space map, and visibility map are included.)
pg_indexes_size> accepts the OID or name of a table and
returns the total disk space used by all the indexes attached to that
table.
pg_database_size and pg_tablespace_size>
accept the OID or name of a database or tablespace, and return the total
disk space used therein.
pg_relation_size> accepts the OID or name of a table, index or
toast table, and returns the on-disk size in bytes. Specifying
'main' or leaving out the second argument returns the
size of the main data fork of the relation. Specifying
'fsm' returns the size of the
Free Space Map (see ) associated with the
relation. Specifying 'vm' returns the size of the
Visibility Map (see ) associated with the
relation. Note that this function shows the size of only one fork;
for most purposes it is more convenient to use the higher-level
functions pg_total_relation_size> or
pg_table_size>.
pg_size_pretty> can be used to format the result of one of
the other functions in a human-readable way, using kB, MB, GB or TB as
appropriate.
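For example, assuming a table named mytab>:
SELECT pg_size_pretty(pg_total_relation_size('mytab'));
SELECT pg_size_pretty(pg_relation_size('mytab', 'fsm'));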
The functions above that operate on tables or indexes accept a
regclass> argument, which is simply the OID of the table or index
in the pg_class> system catalog. You do not have to look up
the OID by hand, however, since the regclass> data type's input
converter will do the work for you. Just write the table name enclosed in
single quotes so that it looks like a literal constant. For compatibility
with the handling of ordinary SQL names, the string
will be converted to lower case unless it contains double quotes around
the table name.
If an OID that does not represent an existing object is passed as
argument to one of the above functions, NULL is returned.
The functions shown in assist
in identifying the specific disk files associated with database objects.
pg_relation_filenodepg_relation_filepathpg_filenode_relation
Database Object Location FunctionsNameReturn TypeDescriptionpg_relation_filenode(relationregclass)oid
Filenode number of the specified relation
pg_relation_filepath(relationregclass)text
File path name of the specified relation
pg_filenode_relation(tablespaceoid, filenodeoid)regclass
Find the relation associated with a given tablespace and filenode
pg_relation_filenode> accepts the OID or name of a table,
index, sequence, or toast table, and returns the filenode> number
currently assigned to it. The filenode is the base component of the file
name(s) used for the relation (see
for more information). For most tables the result is the same as
pg_class>.relfilenode>, but for certain
system catalogs relfilenode> is zero and this function must
be used to get the correct value. The function returns NULL if passed
a relation that does not have storage, such as a view.
pg_relation_filepath> is similar to
pg_relation_filenode>, but it returns the entire file path name
(relative to the database cluster's data directory PGDATA>) of
the relation.
pg_filenode_relation> is the reverse of
pg_relation_filenode>. Given a tablespace> OID and
a filenode>, it returns the associated relation's OID. For a table
in the database's default tablespace, the tablespace can be specified as 0.
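For example, assuming a table named mytab> in the database's
default tablespace:
SELECT pg_relation_filenode('mytab');
SELECT pg_relation_filepath('mytab');
SELECT pg_filenode_relation(0, pg_relation_filenode('mytab'));  -- yields mytab again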
Generic File Access Functions
The functions shown in provide native access to
files on the machine hosting the server. Only files within the
database cluster directory and the log_directory> can be
accessed. Use a relative path for files in the cluster directory,
and a path matching the log_directory> configuration setting
for log files. Use of these functions is restricted to superusers.
Generic File Access FunctionsNameReturn TypeDescriptionpg_ls_dir(dirname> text>)setof textList the contents of a directorypg_read_file(filename> text> [, offset> bigint>, length> bigint>])textReturn the contents of a text filepg_read_binary_file(filename> text> [, offset> bigint>, length> bigint>])byteaReturn the contents of a filepg_stat_file(filename> text>)recordReturn information about a file
pg_ls_dirpg_ls_dir> returns all the names in the specified
directory, except the special entries .> and
..>.
pg_read_filepg_read_file> returns part of a text file, starting
at the given offset>, returning at most length>
bytes (less if the end of file is reached first). If offset>
is negative, it is relative to the end of the file.
If offset> and length> are omitted, the entire
file is returned. The bytes read from the file are interpreted as a string
in the server encoding; an error is thrown if they are not valid in that
encoding.
pg_read_binary_filepg_read_binary_file> is similar to
pg_read_file>, except that the result is a bytea value;
accordingly, no encoding checks are performed.
In combination with the convert_from> function, this function
can be used to read a file in a specified encoding:
SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8');
pg_stat_filepg_stat_file> returns a record containing the file
size, last accessed time stamp, last modified time stamp,
last file status change time stamp (Unix platforms only),
file creation time stamp (Windows only), and a boolean
indicating if it is a directory. Typical usages include:
SELECT * FROM pg_stat_file('filename');
SELECT (pg_stat_file('filename')).modification;
Advisory Lock Functions
The functions shown in
manage advisory locks. For details about proper use of these functions,
see .
Advisory Lock FunctionsNameReturn TypeDescriptionpg_advisory_lock(key> bigint>)voidObtain exclusive session level advisory lockpg_advisory_lock(key1> int>, key2> int>)voidObtain exclusive session level advisory lockpg_advisory_lock_shared(key> bigint>)voidObtain shared session level advisory lockpg_advisory_lock_shared(key1> int>, key2> int>)voidObtain shared session level advisory lockpg_advisory_unlock(key> bigint>)booleanRelease an exclusive session level advisory lockpg_advisory_unlock(key1> int>, key2> int>)booleanRelease an exclusive session level advisory lockpg_advisory_unlock_all()voidRelease all session level advisory locks held by the current sessionpg_advisory_unlock_shared(key> bigint>)booleanRelease a shared session level advisory lockpg_advisory_unlock_shared(key1> int>, key2> int>)booleanRelease a shared session level advisory lockpg_advisory_xact_lock(key> bigint>)voidObtain exclusive transaction level advisory lockpg_advisory_xact_lock(key1> int>, key2> int>)voidObtain exclusive transaction level advisory lockpg_advisory_xact_lock_shared(key> bigint>)voidObtain shared transaction level advisory lockpg_advisory_xact_lock_shared(key1> int>, key2> int>)voidObtain shared transaction level advisory lockpg_try_advisory_lock(key> bigint>)booleanObtain exclusive session level advisory lock if availablepg_try_advisory_lock(key1> int>, key2> int>)booleanObtain exclusive session level advisory lock if availablepg_try_advisory_lock_shared(key> bigint>)booleanObtain shared session level advisory lock if availablepg_try_advisory_lock_shared(key1> int>, key2> int>)booleanObtain shared session level advisory lock if availablepg_try_advisory_xact_lock(key> bigint>)booleanObtain exclusive transaction level advisory lock if availablepg_try_advisory_xact_lock(key1> int>, key2> int>)booleanObtain exclusive transaction level advisory lock if availablepg_try_advisory_xact_lock_shared(key> bigint>)booleanObtain shared transaction level advisory lock if 
availablepg_try_advisory_xact_lock_shared(key1> int>, key2> int>)booleanObtain shared transaction level advisory lock if available
pg_advisory_lockpg_advisory_lock> locks an application-defined resource,
which can be identified either by a single 64-bit key value or two
32-bit key values (note that these two key spaces do not overlap).
If another session already holds a lock on the same resource identifier,
this function will wait until the resource becomes available. The lock
is exclusive. Multiple lock requests stack, so that if the same resource
is locked three times it must then be unlocked three times to be
released for other sessions' use.
pg_advisory_lock_sharedpg_advisory_lock_shared> works the same as
pg_advisory_lock>,
except the lock can be shared with other sessions requesting shared locks.
Only would-be exclusive lockers are locked out.
pg_try_advisory_lock is similar to
pg_advisory_lock, except the function will not wait for the
lock to become available. It will either obtain the lock immediately and
return true, or return false if the lock cannot be
acquired immediately.
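A common pattern is to attempt the lock and skip the work if it is
unavailable. In this sketch, the key value 42 is arbitrary:
DO $$
BEGIN
    IF pg_try_advisory_lock(42) THEN
        -- ... perform work that must not run concurrently ...
        PERFORM pg_advisory_unlock(42);
    ELSE
        RAISE NOTICE 'resource is busy, skipping';
    END IF;
END
$$;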
pg_try_advisory_lock_shared works the same as
pg_try_advisory_lock, except it attempts to acquire
a shared rather than an exclusive lock.
pg_advisory_unlock will release a previously-acquired
exclusive session level advisory lock. It
returns true if the lock is successfully released.
If the lock was not held, it will return false,
and in addition, an SQL warning will be reported by the server.
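For example (the key value here is arbitrary):
SELECT pg_advisory_lock(42);
SELECT pg_advisory_unlock(42);  -- returns true: the lock was held and is released
SELECT pg_advisory_unlock(42);  -- returns false, and the server issues a warning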
pg_advisory_unlock_shared works the same as
pg_advisory_unlock,
except it releases a shared session level advisory lock.
pg_advisory_unlock_all will release all session level advisory
locks held by the current session. (This function is implicitly invoked
at session end, even if the client disconnects ungracefully.)
pg_advisory_xact_lock works the same as
pg_advisory_lock, except the lock is automatically released
at the end of the current transaction and cannot be released explicitly.
pg_advisory_xact_lock_shared works the same as
pg_advisory_lock_shared, except the lock is automatically released
at the end of the current transaction and cannot be released explicitly.
pg_try_advisory_xact_lock works the same as
pg_try_advisory_lock, except the lock, if acquired,
is automatically released at the end of the current transaction and
cannot be released explicitly.
pg_try_advisory_xact_lock_shared works the same as
pg_try_advisory_lock_shared, except the lock, if acquired,
is automatically released at the end of the current transaction and
cannot be released explicitly.
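For example, a transaction level lock taken on an arbitrary key value
of 42 is simply held until the transaction ends:
BEGIN;
SELECT pg_advisory_xact_lock(42);  -- held for the remainder of this transaction
-- ... work serialized on key 42 ...
COMMIT;                            -- lock is released automatically here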
Trigger Functions
Currently PostgreSQL provides one built-in trigger
function, suppress_redundant_updates_trigger,
which will prevent any update
that does not actually change the data in the row from taking place, in
contrast to the normal behavior which always performs the update
regardless of whether or not the data has changed. (This normal behavior
makes updates run faster, since no checking is required, and is also
useful in certain cases.)
Ideally, you should avoid running updates that don't actually
change the data in the record. Redundant updates can cost considerable
unnecessary time, especially if there are lots of indexes to alter,
and space in dead rows that will eventually have to be vacuumed.
However, detecting such situations in client code is not
always easy, or even possible, and writing expressions to detect
them can be error-prone. An alternative is to use
suppress_redundant_updates_trigger, which will skip
updates that don't change the data. You should use this with care,
however. The trigger takes a small but non-trivial amount of time for each
record, so if most of the records affected by an update are actually
changed, use of this trigger will actually make the update run slower.
The suppress_redundant_updates_trigger function can be
added to a table like this:
CREATE TRIGGER z_min_update
BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();
In most cases, you would want to fire this trigger last for each row.
Bearing in mind that triggers fire in name order, you would then
choose a trigger name that comes after the name of any other trigger
you might have on the table.
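With the trigger in place, an update that changes nothing is suppressed,
which is visible in the command tag (the table and column names here are
only illustrative):
UPDATE tablename SET value = value;
-- reports UPDATE 0 when no row's data actually changed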
For more information about creating triggers, see
.
Event Trigger Functions
Currently PostgreSQL provides one built-in event trigger
helper function, pg_event_trigger_dropped_objects.
pg_event_trigger_dropped_objects returns a list of all objects
dropped by the command in whose sql_drop event it is called.
If called in any other context,
pg_event_trigger_dropped_objects raises an error.
pg_event_trigger_dropped_objects returns the following columns:
Name             Type   Description
classid          Oid    OID of the catalog the object belonged in
objid            Oid    OID the object had within the catalog
objsubid         int32  Object sub-id (e.g. attribute number for columns)
object_type      text   Type of the object
schema_name      text   Name of the schema the object belonged in, if any;
                        otherwise NULL. No quoting is applied.
object_name      text   Name of the object, if the combination of schema and
                        name can be used as a unique identifier for the
                        object; otherwise NULL. No quoting is applied, and
                        name is never schema-qualified.
object_identity  text   Text rendering of the object identity,
                        schema-qualified. Each and every identifier present
                        in the identity is quoted if necessary.
The pg_event_trigger_dropped_objects function can be used
in an event trigger like this:
CREATE FUNCTION test_event_trigger_for_drops()
        RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        RAISE NOTICE '% dropped object: % %.% %',
                     tg_tag,
                     obj.object_type,
                     obj.schema_name,
                     obj.object_name,
                     obj.object_identity;
    END LOOP;
END
$$;
CREATE EVENT TRIGGER test_event_trigger_for_drops
   ON sql_drop
   EXECUTE PROCEDURE test_event_trigger_for_drops();
For more information about event triggers,
see .