Proofreading adjustments for first two parts of documentation (Tutorial and SQL).
Bruce Momjian 2009-04-27 16:27:36 +00:00
parent 23a9ac618e
commit ba36c48e39
39 changed files with 1352 additions and 1271 deletions


@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/advanced.sgml,v 1.57 2009/02/04 21:30:41 alvherre Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/advanced.sgml,v 1.58 2009/04/27 16:27:35 momjian Exp $ -->
 <chapter id="tutorial-advanced">
 <title>Advanced Features</title>
@@ -19,10 +19,10 @@
 <para>
 This chapter will on occasion refer to examples found in <xref
 linkend="tutorial-sql"> to change or improve them, so it will be
-of advantage if you have read that chapter. Some examples from
+good if you have read that chapter. Some examples from
 this chapter can also be found in
 <filename>advanced.sql</filename> in the tutorial directory. This
-file also contains some example data to load, which is not
+file also contains some sample data to load, which is not
 repeated here. (Refer to <xref linkend="tutorial-sql-intro"> for
 how to use the file.)
 </para>
@@ -173,7 +173,7 @@ UPDATE branches SET balance = balance + 100.00
 </para>
 <para>
-The details of these commands are not important here; the important
+The details of these commands are not important; the important
 point is that there are several separate updates involved to accomplish
 this rather simple operation. Our bank's officers will want to be
 assured that either all these updates happen, or none of them happen.
@@ -307,7 +307,7 @@ COMMIT;
 <para>
 This example is, of course, oversimplified, but there's a lot of control
-to be had over a transaction block through the use of savepoints.
+possible in a transaction block through the use of savepoints.
 Moreover, <command>ROLLBACK TO</> is the only way to regain control of a
 transaction block that was put in aborted state by the
 system due to an error, short of rolling it back completely and starting


@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/arch-dev.sgml,v 2.31 2007/12/12 06:23:27 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/arch-dev.sgml,v 2.32 2009/04/27 16:27:35 momjian Exp $ -->
 <chapter id="overview">
 <title>Overview of PostgreSQL Internals</title>
@@ -67,7 +67,7 @@
 One application of the rewrite system is in the realization of
 <firstterm>views</firstterm>.
 Whenever a query against a view
-(i.e. a <firstterm>virtual table</firstterm>) is made,
+(i.e., a <firstterm>virtual table</firstterm>) is made,
 the rewrite system rewrites the user's query to
 a query that accesses the <firstterm>base tables</firstterm> given in
 the <firstterm>view definition</firstterm> instead.
@@ -145,7 +145,7 @@
 <para>
 Once a connection is established the client process can send a query
 to the <firstterm>backend</firstterm> (server). The query is transmitted using plain text,
-i.e. there is no parsing done in the <firstterm>frontend</firstterm> (client). The
+i.e., there is no parsing done in the <firstterm>frontend</firstterm> (client). The
 server parses the query, creates an <firstterm>execution plan</firstterm>,
 executes the plan and returns the retrieved rows to the client
 by transmitting them over the established connection.
@@ -442,7 +442,7 @@
 relations, a near-exhaustive search is conducted to find the best
 join sequence. The planner preferentially considers joins between any
 two relations for which there exist a corresponding join clause in the
-<literal>WHERE</literal> qualification (i.e. for
+<literal>WHERE</literal> qualification (i.e., for
 which a restriction like <literal>where rel1.attr1=rel2.attr2</literal>
 exists). Join pairs with no join clause are considered only when there
 is no other choice, that is, a particular relation has no available


@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/array.sgml,v 1.68 2008/11/12 13:09:27 petere Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/array.sgml,v 1.69 2009/04/27 16:27:35 momjian Exp $ -->
 <sect1 id="arrays">
 <title>Arrays</title>
@@ -54,23 +54,24 @@ CREATE TABLE tictactoe (
 );
 </programlisting>
-However, the current implementation does not enforce the array size
-limits &mdash; the behavior is the same as for arrays of unspecified
+However, the current implementation ignores any supplied array size
+limits, i.e., the behavior is the same as for arrays of unspecified
 length.
 </para>
 <para>
-Actually, the current implementation does not enforce the declared
+In addition, the current implementation does not enforce the declared
 number of dimensions either. Arrays of a particular element type are
 all considered to be of the same type, regardless of size or number
-of dimensions. So, declaring number of dimensions or sizes in
+of dimensions. So, declaring the number of dimensions or sizes in
 <command>CREATE TABLE</command> is simply documentation, it does not
 affect run-time behavior.
 </para>
 <para>
-An alternative syntax, which conforms to the SQL standard, can
-be used for one-dimensional arrays.
+An alternative syntax, which conforms to the SQL standard by using
+the keyword <literal>ARRAY</>, can
+be used for one-dimensional arrays;
 <structfield>pay_by_quarter</structfield> could have been defined
 as:
 <programlisting>
@@ -107,9 +108,9 @@ CREATE TABLE tictactoe (
 where <replaceable>delim</replaceable> is the delimiter character
 for the type, as recorded in its <literal>pg_type</literal> entry.
 Among the standard data types provided in the
-<productname>PostgreSQL</productname> distribution, type
-<literal>box</> uses a semicolon (<literal>;</>) but all the others
-use comma (<literal>,</>). Each <replaceable>val</replaceable> is
+<productname>PostgreSQL</productname> distribution, all use a comma
+(<literal>,</>), except for the type <literal>box</> which uses a semicolon
+(<literal>;</>). Each <replaceable>val</replaceable> is
 either a constant of the array element type, or a subarray. An example
 of an array constant is:
 <programlisting>
@@ -120,7 +121,7 @@ CREATE TABLE tictactoe (
 </para>
 <para>
-To set an element of an array constant to NULL, write <literal>NULL</>
+To set an element of an array to NULL, write <literal>NULL</>
 for the element value. (Any upper- or lower-case variant of
 <literal>NULL</> will do.) If you want an actual string value
 <quote>NULL</>, you must put double quotes around it.
@@ -163,6 +164,19 @@ SELECT * FROM sal_emp;
 </programlisting>
 </para>
+<para>
+Multidimensional arrays must have matching extents for each
+dimension. A mismatch causes an error, for example:
+<programlisting>
+INSERT INTO sal_emp
+VALUES ('Bill',
+'{10000, 10000, 10000, 10000}',
+'{{"meeting", "lunch"}, {"meeting"}}');
+ERROR: multidimensional arrays must have array expressions with matching dimensions
+</programlisting>
+</para>
 <para>
 The <literal>ARRAY</> constructor syntax can also be used:
 <programlisting>
@@ -182,19 +196,6 @@ INSERT INTO sal_emp
 constructor syntax is discussed in more detail in
 <xref linkend="sql-syntax-array-constructors">.
 </para>
-<para>
-Multidimensional arrays must have matching extents for each
-dimension. A mismatch causes an error report, for example:
-<programlisting>
-INSERT INTO sal_emp
-VALUES ('Bill',
-'{10000, 10000, 10000, 10000}',
-'{{"meeting", "lunch"}, {"meeting"}}');
-ERROR: multidimensional arrays must have array expressions with matching dimensions
-</programlisting>
-</para>
 </sect2>
 <sect2 id="arrays-accessing">
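The relocated paragraph above shows a mismatched-extents failure; for contrast, here is a sketch of a multidimensional insert whose extents do match. This is an illustration only, not part of the commit; the `sal_emp` columns are assumed from the documentation's running example.

```sql
INSERT INTO sal_emp
    VALUES ('Bill',
            '{10000, 10000, 10000, 10000}',
            '{{"meeting", "lunch"}, {"training", "presentation"}}');
```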
@@ -207,7 +208,7 @@ ERROR: multidimensional arrays must have array expressions with matching dimens
 <para>
 Now, we can run some queries on the table.
-First, we show how to access a single element of an array at a time.
+First, we show how to access a single element of an array.
 This query retrieves the names of the employees whose pay changed in
 the second quarter:
@@ -221,7 +222,7 @@ SELECT name FROM sal_emp WHERE pay_by_quarter[1] &lt;&gt; pay_by_quarter[2];
 </programlisting>
 The array subscript numbers are written within square brackets.
-By default <productname>PostgreSQL</productname> uses the
+By default <productname>PostgreSQL</productname> uses a
 one-based numbering convention for arrays, that is,
 an array of <replaceable>n</> elements starts with <literal>array[1]</literal> and
 ends with <literal>array[<replaceable>n</>]</literal>.
@@ -257,7 +258,7 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill';
 (1 row)
 </programlisting>
-If any dimension is written as a slice, i.e. contains a colon, then all
+If any dimension is written as a slice, i.e., contains a colon, then all
 dimensions are treated as slices. Any dimension that has only a single
 number (no colon) is treated as being from <literal>1</>
 to the number specified. For example, <literal>[2]</> is treated as
@@ -288,13 +289,14 @@ SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill';
 <para>
 An array slice expression likewise yields null if the array itself or
-any of the subscript expressions are null. However, in other corner
+any of the subscript expressions are null. However, in other
 cases such as selecting an array slice that
 is completely outside the current array bounds, a slice expression
 yields an empty (zero-dimensional) array instead of null. (This
 does not match non-slice behavior and is done for historical reasons.)
 If the requested slice partially overlaps the array bounds, then it
-is silently reduced to just the overlapping region.
+is silently reduced to just the overlapping region instead of
+returning null.
 </para>
 <para>
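The slice corner cases described in the hunk above can be checked directly. A hedged sketch (behavior as documented for this era of PostgreSQL, assuming a three-element array):

```sql
SELECT ('{1,2,3}'::int[])[4:6];  -- entirely outside the bounds: yields {}
SELECT ('{1,2,3}'::int[])[2:6];  -- partial overlap: reduced to {2,3}
```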
@@ -311,7 +313,7 @@ SELECT array_dims(schedule) FROM sal_emp WHERE name = 'Carol';
 </programlisting>
 <function>array_dims</function> produces a <type>text</type> result,
-which is convenient for people to read but perhaps not so convenient
+which is convenient for people to read but perhaps inconvenient
 for programs. Dimensions can also be retrieved with
 <function>array_upper</function> and <function>array_lower</function>,
 which return the upper and lower bound of a
@@ -380,12 +382,12 @@ UPDATE sal_emp SET pay_by_quarter[1:2] = '{27000,27000}'
 </para>
 <para>
-A stored array value can be enlarged by assigning to element(s) not already
+A stored array value can be enlarged by assigning to elements not already
 present. Any positions between those previously present and the newly
-assigned element(s) will be filled with nulls. For example, if array
+assigned elements will be filled with nulls. For example, if array
 <literal>myarray</> currently has 4 elements, it will have six
-elements after an update that assigns to <literal>myarray[6]</>,
-and <literal>myarray[5]</> will contain a null.
+elements after an update that assigns to <literal>myarray[6]</>;
+<literal>myarray[5]</> will contain null.
 Currently, enlargement in this fashion is only allowed for one-dimensional
 arrays, not multidimensional arrays.
 </para>
@@ -393,11 +395,11 @@ UPDATE sal_emp SET pay_by_quarter[1:2] = '{27000,27000}'
 <para>
 Subscripted assignment allows creation of arrays that do not use one-based
 subscripts. For example one might assign to <literal>myarray[-2:7]</> to
-create an array with subscript values running from -2 to 7.
+create an array with subscript values from -2 to 7.
 </para>
 <para>
-New array values can also be constructed by using the concatenation operator,
+New array values can also be constructed using the concatenation operator,
 <literal>||</literal>:
 <programlisting>
 SELECT ARRAY[1,2] || ARRAY[3,4];
@@ -415,14 +417,14 @@ SELECT ARRAY[5,6] || ARRAY[[1,2],[3,4]];
 </para>
 <para>
-The concatenation operator allows a single element to be pushed on to the
+The concatenation operator allows a single element to be pushed to the
 beginning or end of a one-dimensional array. It also accepts two
 <replaceable>N</>-dimensional arrays, or an <replaceable>N</>-dimensional
 and an <replaceable>N+1</>-dimensional array.
 </para>
 <para>
-When a single element is pushed on to either the beginning or end of a
+When a single element is pushed to either the beginning or end of a
 one-dimensional array, the result is an array with the same lower bound
 subscript as the array operand. For example:
 <programlisting>
@@ -461,7 +463,7 @@ SELECT array_dims(ARRAY[[1,2],[3,4]] || ARRAY[[5,6],[7,8],[9,0]]);
 </para>
 <para>
-When an <replaceable>N</>-dimensional array is pushed on to the beginning
+When an <replaceable>N</>-dimensional array is pushed to the beginning
 or end of an <replaceable>N+1</>-dimensional array, the result is
 analogous to the element-array case above. Each <replaceable>N</>-dimensional
 sub-array is essentially an element of the <replaceable>N+1</>-dimensional
@@ -482,7 +484,7 @@ SELECT array_dims(ARRAY[1,2] || ARRAY[[3,4],[5,6]]);
 arrays, but <function>array_cat</function> supports multidimensional arrays.
 Note that the concatenation operator discussed above is preferred over
-direct use of these functions. In fact, the functions exist primarily for use
+direct use of these functions. In fact, these functions primarily exist for use
 in implementing the concatenation operator. However, they might be directly
 useful in the creation of user-defined aggregates. Some examples:
@@ -528,8 +530,8 @@ SELECT array_cat(ARRAY[5,6], ARRAY[[1,2],[3,4]]);
 </indexterm>
 <para>
-To search for a value in an array, you must check each value of the
-array. This can be done by hand, if you know the size of the array.
+To search for a value in an array, each value must be checked.
+This can be done manually, if you know the size of the array.
 For example:
 <programlisting>
@@ -540,7 +542,7 @@ SELECT * FROM sal_emp WHERE pay_by_quarter[1] = 10000 OR
 </programlisting>
 However, this quickly becomes tedious for large arrays, and is not
-helpful if the size of the array is uncertain. An alternative method is
+helpful if the size of the array is unknown. An alternative method is
 described in <xref linkend="functions-comparisons">. The above
 query could be replaced by:
@@ -548,7 +550,7 @@ SELECT * FROM sal_emp WHERE pay_by_quarter[1] = 10000 OR
 SELECT * FROM sal_emp WHERE 10000 = ANY (pay_by_quarter);
 </programlisting>
-In addition, you could find rows where the array had all values
+In addition, you can find rows where the array has all values
 equal to 10000 with:
 <programlisting>
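The listing is cut off here by the diff; the query the sentence describes would presumably use <literal>ALL</>, along these lines (a sketch, not the commit's own text):

```sql
SELECT * FROM sal_emp WHERE 10000 = ALL (pay_by_quarter);
```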
@@ -578,7 +580,7 @@ SELECT * FROM
 can be a sign of database misdesign. Consider
 using a separate table with a row for each item that would be an
 array element. This will be easier to search, and is likely to
-scale up better to large numbers of elements.
+scale better for a large number of elements.
 </para>
 </tip>
 </sect2>
@@ -600,9 +602,9 @@ SELECT * FROM
 The delimiter character is usually a comma (<literal>,</>) but can be
 something else: it is determined by the <literal>typdelim</> setting
 for the array's element type. (Among the standard data types provided
-in the <productname>PostgreSQL</productname> distribution, type
-<literal>box</> uses a semicolon (<literal>;</>) but all the others
-use comma.) In a multidimensional array, each dimension (row, plane,
+in the <productname>PostgreSQL</productname> distribution, all
+use a comma, except for <literal>box</>, which uses a semicolon (<literal>;</>).)
+In a multidimensional array, each dimension (row, plane,
 cube, etc.) gets its own level of curly braces, and delimiters
 must be written between adjacent curly-braced entities of the same level.
 </para>
@@ -614,7 +616,7 @@ SELECT * FROM
 <literal>NULL</>. Double quotes and backslashes
 embedded in element values will be backslash-escaped. For numeric
 data types it is safe to assume that double quotes will never appear, but
-for textual data types one should be prepared to cope with either presence
+for textual data types one should be prepared to cope with either the presence
 or absence of quotes.
 </para>
@@ -647,27 +649,27 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2
 or backslashes disables this and allows the literal string value
 <quote>NULL</> to be entered. Also, for backwards compatibility with
 pre-8.2 versions of <productname>PostgreSQL</>, the <xref
-linkend="guc-array-nulls"> configuration parameter might be turned
+linkend="guc-array-nulls"> configuration parameter can be turned
 <literal>off</> to suppress recognition of <literal>NULL</> as a NULL.
 </para>
 <para>
-As shown previously, when writing an array value you can write double
+As shown previously, when writing an array value you can use double
 quotes around any individual array element. You <emphasis>must</> do so
 if the element value would otherwise confuse the array-value parser.
-For example, elements containing curly braces, commas (or whatever the
-delimiter character is), double quotes, backslashes, or leading or trailing
+For example, elements containing curly braces, commas (or the matching
+delimiter character), double quotes, backslashes, or leading or trailing
 whitespace must be double-quoted. Empty strings and strings matching the
 word <literal>NULL</> must be quoted, too. To put a double quote or
 backslash in a quoted array element value, use escape string syntax
-and precede it with a backslash. Alternatively, you can use
+and precede it with a backslash. Alternatively, you can avoid quotes and use
 backslash-escaping to protect all data characters that would otherwise
 be taken as array syntax.
 </para>
 <para>
-You can write whitespace before a left brace or after a right
-brace. You can also write whitespace before or after any individual item
+You can use whitespace before a left brace or after a right
+brace. You can also add whitespace before or after any individual item
 string. In all of these cases the whitespace will be ignored. However,
 whitespace within double-quoted elements, or surrounded on both sides by
 non-whitespace characters of an element, is not ignored.
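The quoting rules in the hunk above can be illustrated with a small sketch (hypothetical values; assumes the default comma delimiter):

```sql
-- Elements containing the delimiter, braces, or embedded whitespace
-- must be double-quoted; plain elements need no quotes.
SELECT '{plain, "with, comma", "{braced}"}'::text[];
```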


@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.124 2009/04/07 00:31:25 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.125 2009/04/27 16:27:35 momjian Exp $ -->
 <chapter id="backup">
 <title>Backup and Restore</title>
@@ -1523,7 +1523,7 @@ archive_command = 'local_backup_script.sh'
 </para>
 <para>
-It should be noted that the log shipping is asynchronous, i.e. the WAL
+It should be noted that the log shipping is asynchronous, i.e., the WAL
 records are shipped after transaction commit. As a result there is a
 window for data loss should the primary server suffer a catastrophic
 failure: transactions not yet shipped will be lost. The length of the


@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.215 2009/04/23 00:23:45 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.216 2009/04/27 16:27:35 momjian Exp $ -->
 <chapter Id="runtime-config">
 <title>Server Configuration</title>
@@ -1253,7 +1253,7 @@ SET ENABLE_SEQSCAN TO OFF;
 function, which some operating systems lack. If the function is not
 present then setting this parameter to anything but zero will result
 in an error. On some operating systems the function is present but
-does not actually do anything (e.g. Solaris).
+does not actually do anything (e.g., Solaris).
 </para>
 </listitem>
 </varlistentry>
@@ -4333,7 +4333,7 @@ SET XML OPTION { DOCUMENT | CONTENT };
 If a dynamically loadable module needs to be opened and the
 file name specified in the <command>CREATE FUNCTION</command> or
 <command>LOAD</command> command
-does not have a directory component (i.e. the
+does not have a directory component (i.e., the
 name does not contain a slash), the system will search this
 path for the required file.
 </para>
@@ -4503,7 +4503,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
 The shared lock table is created to track locks on
 <varname>max_locks_per_transaction</varname> * (<xref
 linkend="guc-max-connections"> + <xref
-linkend="guc-max-prepared-transactions">) objects (e.g. tables);
+linkend="guc-max-prepared-transactions">) objects (e.g., tables);
 hence, no more than this many distinct objects can be locked at
 any one time. This parameter controls the average number of object
 locks allocated for each transaction; individual transactions
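The lock-table sizing formula in the last hunk works out as follows under hypothetical settings (illustrative numbers only, not recommendations):

```
# postgresql.conf (hypothetical values)
max_locks_per_transaction = 64
max_connections = 100
max_prepared_transactions = 0
# Lock table capacity: 64 * (100 + 0) = 6400 distinct lockable objects.
```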


@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/contrib.sgml,v 1.12 2009/03/25 23:20:01 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/contrib.sgml,v 1.13 2009/04/27 16:27:35 momjian Exp $ -->
 <appendix id="contrib">
 <title>Additional Supplied Modules</title>
@@ -16,7 +16,7 @@
 <para>
 When building from the source distribution, these modules are not built
-automatically. You can build and install all of them by running
+automatically. You can build and install all of them by running:
 <screen>
 <userinput>gmake</userinput>
 <userinput>gmake install</userinput>
@@ -25,7 +25,7 @@
 or to build and install
 just one selected module, do the same in that module's subdirectory.
 Many of the modules have regression tests, which can be executed by
-running
+running:
 <screen>
 <userinput>gmake installcheck</userinput>
 </screen>

File diff suppressed because it is too large.


@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/ddl.sgml,v 1.85 2009/01/08 12:47:58 petere Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/ddl.sgml,v 1.86 2009/04/27 16:27:35 momjian Exp $ -->
 <chapter id="ddl">
 <title>Data Definition</title>
@@ -153,7 +153,7 @@ DROP TABLE products;
 </para>
 <para>
-If you need to modify a table that already exists look into <xref
+If you need to modify a table that already exists, see <xref
 linkend="ddl-alter"> later in this chapter.
 </para>
@@ -206,7 +206,7 @@ CREATE TABLE products (
 The default value can be an expression, which will be
 evaluated whenever the default value is inserted
 (<emphasis>not</emphasis> when the table is created). A common example
-is that a <type>timestamp</type> column can have a default of <literal>now()</>,
+is for a <type>timestamp</type> column to have a default of <literal>CURRENT_TIMESTAMP</>,
 so that it gets set to the time of row insertion. Another common
 example is generating a <quote>serial number</> for each row.
 In <productname>PostgreSQL</productname> this is typically done by
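The <literal>CURRENT_TIMESTAMP</> default described in the hunk above can be sketched as follows (hypothetical table and column names, not part of the commit):

```sql
CREATE TABLE orders (
    order_id   integer,
    created_at timestamp DEFAULT CURRENT_TIMESTAMP
);
-- created_at is evaluated at insertion time, not at table creation:
INSERT INTO orders (order_id) VALUES (1);
```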
@@ -374,8 +374,8 @@ CREATE TABLE products (
 </para>
 <para>
-Names can be assigned to table constraints in just the same way as
-for column constraints:
+Names can be assigned to table constraints in the same way as
+column constraints:
 <programlisting>
 CREATE TABLE products (
 product_no integer,
@ -550,15 +550,15 @@ CREATE TABLE products (
</indexterm> </indexterm>
<para> <para>
In general, a unique constraint is violated when there are two or In general, a unique constraint is violated when there is more than
more rows in the table where the values of all of the one row in the table where the values of all of the
columns included in the constraint are equal. columns included in the constraint are equal.
However, two null values are not considered equal in this However, two null values are not considered equal in this
comparison. That means even in the presence of a comparison. That means even in the presence of a
unique constraint it is possible to store duplicate unique constraint it is possible to store duplicate
rows that contain a null value in at least one of the constrained rows that contain a null value in at least one of the constrained
columns. This behavior conforms to the SQL standard, but we have columns. This behavior conforms to the SQL standard, but there
heard that other SQL databases might not follow this rule. So be might be other SQL databases that do not follow this rule. So be
careful when developing applications that are intended to be careful when developing applications that are intended to be
portable. portable.
</para> </para>
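For example (a minimal sketch using a simplified version of the products table from this chapter), a unique constraint still admits multiple rows whose constrained column is null:
<programlisting>
CREATE TABLE products (
    product_no integer UNIQUE,
    name text
);

INSERT INTO products VALUES (1, 'Cheese');
INSERT INTO products VALUES (NULL, 'Bread');
INSERT INTO products VALUES (NULL, 'Milk');   -- also accepted: two nulls are not considered equal
</programlisting>
A portable application should therefore add <literal>NOT NULL</> to uniquely constrained columns if duplicate nulls must be ruled out.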
@ -857,7 +857,7 @@ CREATE TABLE order_items (
restrictions are separate from whether the name is a key word or restrictions are separate from whether the name is a key word or
not; quoting a name will not allow you to escape these not; quoting a name will not allow you to escape these
restrictions.) You do not really need to be concerned about these restrictions.) You do not really need to be concerned about these
columns, just know they exist. columns; just know they exist.
</para> </para>
<indexterm> <indexterm>
@ -1037,8 +1037,8 @@ CREATE TABLE order_items (
Command identifiers are also 32-bit quantities. This creates a hard limit Command identifiers are also 32-bit quantities. This creates a hard limit
of 2<superscript>32</> (4 billion) <acronym>SQL</acronym> commands of 2<superscript>32</> (4 billion) <acronym>SQL</acronym> commands
within a single transaction. In practice this limit is not a within a single transaction. In practice this limit is not a
problem &mdash; note that the limit is on number of problem &mdash; note that the limit is on the number of
<acronym>SQL</acronym> commands, not number of rows processed. <acronym>SQL</acronym> commands, not the number of rows processed.
Also, as of <productname>PostgreSQL</productname> 8.3, only commands Also, as of <productname>PostgreSQL</productname> 8.3, only commands
that actually modify the database contents will consume a command that actually modify the database contents will consume a command
identifier. identifier.
@ -1055,7 +1055,7 @@ CREATE TABLE order_items (
<para> <para>
When you create a table and you realize that you made a mistake, or When you create a table and you realize that you made a mistake, or
the requirements of the application change, then you can drop the the requirements of the application change, you can drop the
table and create it again. But this is not a convenient option if table and create it again. But this is not a convenient option if
the table is already filled with data, or if the table is the table is already filled with data, or if the table is
referenced by other database objects (for instance a foreign key referenced by other database objects (for instance a foreign key
@ -1067,31 +1067,31 @@ CREATE TABLE order_items (
</para> </para>
<para> <para>
You can You can:
<itemizedlist spacing="compact"> <itemizedlist spacing="compact">
<listitem> <listitem>
<para>Add columns,</para> <para>Add columns</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Remove columns,</para> <para>Remove columns</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Add constraints,</para> <para>Add constraints</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Remove constraints,</para> <para>Remove constraints</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Change default values,</para> <para>Change default values</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Change column data types,</para> <para>Change column data types</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Rename columns,</para> <para>Rename columns</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Rename tables.</para> <para>Rename tables</para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
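All of these actions are performed with <command>ALTER TABLE</command>; as a quick sketch (using the products table from this chapter, with hypothetical names for the new objects):
<programlisting>
ALTER TABLE products ADD COLUMN description text;          -- add a column
ALTER TABLE products RENAME COLUMN description TO info;    -- rename a column
ALTER TABLE products RENAME TO items;                      -- rename the table
</programlisting>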
@ -1110,7 +1110,7 @@ CREATE TABLE order_items (
</indexterm> </indexterm>
<para> <para>
To add a column, use a command like this: To add a column, use a command like:
<programlisting> <programlisting>
ALTER TABLE products ADD COLUMN description text; ALTER TABLE products ADD COLUMN description text;
</programlisting> </programlisting>
@ -1154,7 +1154,7 @@ ALTER TABLE products ADD COLUMN description text CHECK (description &lt;&gt; '')
</indexterm> </indexterm>
<para> <para>
To remove a column, use a command like this: To remove a column, use a command like:
<programlisting> <programlisting>
ALTER TABLE products DROP COLUMN description; ALTER TABLE products DROP COLUMN description;
</programlisting> </programlisting>
@ -1250,7 +1250,7 @@ ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL;
</indexterm> </indexterm>
<para> <para>
To set a new default for a column, use a command like this: To set a new default for a column, use a command like:
<programlisting> <programlisting>
ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77; ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;
</programlisting> </programlisting>
@ -1279,7 +1279,7 @@ ALTER TABLE products ALTER COLUMN price DROP DEFAULT;
</indexterm> </indexterm>
<para> <para>
To convert a column to a different data type, use a command like this: To convert a column to a different data type, use a command like:
<programlisting> <programlisting>
ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2); ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2);
</programlisting> </programlisting>
@ -1488,7 +1488,7 @@ REVOKE ALL ON accounts FROM PUBLIC;
<listitem> <listitem>
<para> <para>
Third-party applications can be put into separate schemas so Third-party applications can be put into separate schemas so
they cannot collide with the names of other objects. they do not collide with the names of other objects.
</para> </para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
@ -1603,7 +1603,7 @@ CREATE SCHEMA <replaceable>schemaname</replaceable> AUTHORIZATION <replaceable>u
<para> <para>
In the previous sections we created tables without specifying any In the previous sections we created tables without specifying any
schema names. By default, such tables (and other objects) are schema names. By default such tables (and other objects) are
automatically put into a schema named <quote>public</quote>. Every new automatically put into a schema named <quote>public</quote>. Every new
database contains such a schema. Thus, the following are equivalent: database contains such a schema. Thus, the following are equivalent:
<programlisting> <programlisting>
@ -1746,7 +1746,7 @@ SELECT 3 OPERATOR(pg_catalog.+) 4;
<para> <para>
By default, users cannot access any objects in schemas they do not By default, users cannot access any objects in schemas they do not
own. To allow that, the owner of the schema needs to grant the own. To allow that, the owner of the schema must grant the
<literal>USAGE</literal> privilege on the schema. To allow users <literal>USAGE</literal> privilege on the schema. To allow users
to make use of the objects in the schema, additional privileges to make use of the objects in the schema, additional privileges
might need to be granted, as appropriate for the object. might need to be granted, as appropriate for the object.
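For example (the schema, table, and role names here are hypothetical):
<programlisting>
GRANT USAGE ON SCHEMA myschema TO someuser;
-- USAGE alone does not permit reading objects; grant per-object privileges too:
GRANT SELECT ON myschema.mytable TO someuser;
</programlisting>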
@ -1802,7 +1802,7 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC;
such names, to ensure that you won't suffer a conflict if some such names, to ensure that you won't suffer a conflict if some
future version defines a system table named the same as your future version defines a system table named the same as your
table. (With the default search path, an unqualified reference to table. (With the default search path, an unqualified reference to
your table name would be resolved as the system table instead.) your table name would be resolved as a system table instead.)
System tables will continue to follow the convention of having System tables will continue to follow the convention of having
names beginning with <literal>pg_</>, so that they will not names beginning with <literal>pg_</>, so that they will not
conflict with unqualified user-table names so long as users avoid conflict with unqualified user-table names so long as users avoid
@ -2024,7 +2024,7 @@ WHERE c.altitude &gt; 500;
<programlisting> <programlisting>
SELECT p.relname, c.name, c.altitude SELECT p.relname, c.name, c.altitude
FROM cities c, pg_class p FROM cities c, pg_class p
WHERE c.altitude &gt; 500 and c.tableoid = p.oid; WHERE c.altitude &gt; 500 AND c.tableoid = p.oid;
</programlisting> </programlisting>
which returns: which returns:
@ -2130,7 +2130,7 @@ VALUES ('New York', NULL, NULL, 'NY');
<para> <para>
Table access permissions are not automatically inherited. Therefore, Table access permissions are not automatically inherited. Therefore,
a user attempting to access a parent table must either have permissions a user attempting to access a parent table must either have permissions
to do the operation on all its child tables as well, or must use the to do the same operation on all its child tables as well, or must use the
<literal>ONLY</literal> notation. When adding a new child table to <literal>ONLY</literal> notation. When adding a new child table to
an existing inheritance hierarchy, be careful to grant all the needed an existing inheritance hierarchy, be careful to grant all the needed
permissions on it. permissions on it.
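For instance, a user with privileges only on the parent table can still query it directly with the <literal>ONLY</literal> notation (a sketch using the cities inheritance example from this chapter):
<programlisting>
SELECT name, altitude FROM ONLY cities;   -- child tables such as capitals are not scanned
</programlisting>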
@ -2197,7 +2197,7 @@ VALUES ('New York', NULL, NULL, 'NY');
These deficiencies will probably be fixed in some future release, These deficiencies will probably be fixed in some future release,
but in the meantime considerable care is needed in deciding whether but in the meantime considerable care is needed in deciding whether
inheritance is useful for your problem. inheritance is useful for your application.
</para> </para>
<note> <note>
@ -2374,7 +2374,7 @@ CHECK ( outletID &gt;= 100 AND outletID &lt; 200 )
</programlisting> </programlisting>
Ensure that the constraints guarantee that there is no overlap Ensure that the constraints guarantee that there is no overlap
between the key values permitted in different partitions. A common between the key values permitted in different partitions. A common
mistake is to set up range constraints like this: mistake is to set up range constraints like:
<programlisting> <programlisting>
CHECK ( outletID BETWEEN 100 AND 200 ) CHECK ( outletID BETWEEN 100 AND 200 )
CHECK ( outletID BETWEEN 200 AND 300 ) CHECK ( outletID BETWEEN 200 AND 300 )
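The right approach is to make the ranges abut without sharing a boundary value, as in the half-open form shown earlier:
<programlisting>
CHECK ( outletID &gt;= 100 AND outletID &lt; 200 )
CHECK ( outletID &gt;= 200 AND outletID &lt; 300 )
</programlisting>
Here <literal>outletID = 200</> is permitted in exactly one partition, whereas the <literal>BETWEEN</> version above accepts it in both.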
@ -2424,7 +2424,7 @@ CHECK ( outletID BETWEEN 200 AND 300 )
For example, suppose we are constructing a database for a large For example, suppose we are constructing a database for a large
ice cream company. The company measures peak temperatures every ice cream company. The company measures peak temperatures every
day as well as ice cream sales in each region. Conceptually, day as well as ice cream sales in each region. Conceptually,
we want a table like this: we want a table like:
<programlisting> <programlisting>
CREATE TABLE measurement ( CREATE TABLE measurement (
@ -2571,12 +2571,15 @@ CREATE TRIGGER insert_measurement_trigger
CREATE OR REPLACE FUNCTION measurement_insert_trigger() CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$ RETURNS TRIGGER AS $$
BEGIN BEGIN
IF ( NEW.logdate &gt;= DATE '2006-02-01' AND NEW.logdate &lt; DATE '2006-03-01' ) THEN IF ( NEW.logdate &gt;= DATE '2006-02-01' AND
NEW.logdate &lt; DATE '2006-03-01' ) THEN
INSERT INTO measurement_y2006m02 VALUES (NEW.*); INSERT INTO measurement_y2006m02 VALUES (NEW.*);
ELSIF ( NEW.logdate &gt;= DATE '2006-03-01' AND NEW.logdate &lt; DATE '2006-04-01' ) THEN ELSIF ( NEW.logdate &gt;= DATE '2006-03-01' AND
NEW.logdate &lt; DATE '2006-04-01' ) THEN
INSERT INTO measurement_y2006m03 VALUES (NEW.*); INSERT INTO measurement_y2006m03 VALUES (NEW.*);
... ...
ELSIF ( NEW.logdate &gt;= DATE '2008-01-01' AND NEW.logdate &lt; DATE '2008-02-01' ) THEN ELSIF ( NEW.logdate &gt;= DATE '2008-01-01' AND
NEW.logdate &lt; DATE '2008-02-01' ) THEN
INSERT INTO measurement_y2008m01 VALUES (NEW.*); INSERT INTO measurement_y2008m01 VALUES (NEW.*);
ELSE ELSE
RAISE EXCEPTION 'Date out of range. Fix the measurement_insert_trigger() function!'; RAISE EXCEPTION 'Date out of range. Fix the measurement_insert_trigger() function!';
@ -2706,9 +2709,9 @@ SELECT count(*) FROM measurement WHERE logdate &gt;= DATE '2008-01-01';
Without constraint exclusion, the above query would scan each of Without constraint exclusion, the above query would scan each of
the partitions of the <structname>measurement</> table. With constraint the partitions of the <structname>measurement</> table. With constraint
exclusion enabled, the planner will examine the constraints of each exclusion enabled, the planner will examine the constraints of each
partition and try to prove that the partition need not partition and try to determine which partitions need not
be scanned because it could not contain any rows meeting the query's be scanned because they cannot contain any rows meeting the query's
<literal>WHERE</> clause. When the planner can prove this, it <literal>WHERE</> clause. When the planner can determine this, it
excludes the partition from the query plan. excludes the partition from the query plan.
</para> </para>
@ -2875,7 +2878,7 @@ UNION ALL SELECT * FROM measurement_y2008m01;
<para> <para>
If you are using manual <command>VACUUM</command> or If you are using manual <command>VACUUM</command> or
<command>ANALYZE</command> commands, don't forget that <command>ANALYZE</command> commands, don't forget that
you need to run them on each partition individually. A command like you need to run them on each partition individually. A command like:
<programlisting> <programlisting>
ANALYZE measurement; ANALYZE measurement;
</programlisting> </programlisting>
@ -2903,7 +2906,7 @@ ANALYZE measurement;
<listitem> <listitem>
<para> <para>
Keep the partitioning constraints simple, else the planner may not be Keep the partitioning constraints simple or else the planner may not be
able to prove that partitions don't need to be visited. Use simple able to prove that partitions don't need to be visited. Use simple
equality conditions for list partitioning, or simple equality conditions for list partitioning, or simple
range tests for range partitioning, as illustrated in the preceding range tests for range partitioning, as illustrated in the preceding
@ -2937,7 +2940,7 @@ ANALYZE measurement;
that exist in a database. Many other kinds of objects can be that exist in a database. Many other kinds of objects can be
created to make the use and management of the data more efficient created to make the use and management of the data more efficient
or convenient. They are not discussed in this chapter, but we give or convenient. They are not discussed in this chapter, but we give
you a list here so that you are aware of what is possible. you a list here so that you are aware of what is possible:
</para> </para>
<itemizedlist> <itemizedlist>
@ -2988,7 +2991,7 @@ ANALYZE measurement;
<para> <para>
When you create complex database structures involving many tables When you create complex database structures involving many tables
with foreign key constraints, views, triggers, functions, etc. you with foreign key constraints, views, triggers, functions, etc., you
will implicitly create a net of dependencies between the objects. implicitly create a net of dependencies between the objects.
For instance, a table with a foreign key constraint depends on the For instance, a table with a foreign key constraint depends on the
table it references. table it references.
</para> </para>
@ -3008,7 +3011,7 @@ ERROR: cannot drop table products because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too. HINT: Use DROP ... CASCADE to drop the dependent objects too.
</screen> </screen>
The error message contains a useful hint: if you do not want to The error message contains a useful hint: if you do not want to
bother deleting all the dependent objects individually, you can run bother deleting all the dependent objects individually, you can run:
<screen> <screen>
DROP TABLE products CASCADE; DROP TABLE products CASCADE;
</screen> </screen>
@ -3024,7 +3027,7 @@ DROP TABLE products CASCADE;
the possible dependencies varies with the type of the object. You the possible dependencies varies with the type of the object. You
can also write <literal>RESTRICT</literal> instead of can also write <literal>RESTRICT</literal> instead of
<literal>CASCADE</literal> to get the default behavior, which is to <literal>CASCADE</literal> to get the default behavior, which is to
prevent drops of objects that other objects depend on. prevent the dropping of objects that other objects depend on.
</para> </para>
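For example, the following is simply the default behavior spelled out, and fails whenever dependent objects exist:
<programlisting>
DROP TABLE products RESTRICT;   -- same as plain DROP TABLE products;
</programlisting>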
<note> <note>

View File

@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/dml.sgml,v 1.17 2007/12/03 23:49:50 tgl Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/dml.sgml,v 1.18 2009/04/27 16:27:35 momjian Exp $ -->
<chapter id="dml"> <chapter id="dml">
<title>Data Manipulation</title> <title>Data Manipulation</title>
@ -14,7 +14,7 @@
table data. We also introduce ways to effect automatic data changes table data. We also introduce ways to effect automatic data changes
when certain events occur: triggers and rewrite rules. The chapter when certain events occur: triggers and rewrite rules. The chapter
after this will finally explain how to extract your long-lost data after this will finally explain how to extract your long-lost data
back out of the database. from the database.
</para> </para>
<sect1 id="dml-insert"> <sect1 id="dml-insert">
@ -33,14 +33,14 @@
do before a database can be of much use is to insert data. Data is do before a database can be of much use is to insert data. Data is
conceptually inserted one row at a time. Of course you can also conceptually inserted one row at a time. Of course you can also
insert more than one row, but there is no way to insert less than insert more than one row, but there is no way to insert less than
one row at a time. Even if you know only some column values, a one row. Even if you know only some column values, a
complete row must be created. complete row must be created.
</para> </para>
<para> <para>
To create a new row, use the <xref linkend="sql-insert" To create a new row, use the <xref linkend="sql-insert"
endterm="sql-insert-title"> command. The command requires the endterm="sql-insert-title"> command. The command requires the
table name and a value for each of the columns of the table. For table name and column values. For
example, consider the products table from <xref linkend="ddl">: example, consider the products table from <xref linkend="ddl">:
<programlisting> <programlisting>
CREATE TABLE products ( CREATE TABLE products (
@ -60,7 +60,7 @@ INSERT INTO products VALUES (1, 'Cheese', 9.99);
<para> <para>
The above syntax has the drawback that you need to know the order The above syntax has the drawback that you need to know the order
of the columns in the table. To avoid that you can also list the of the columns in the table. To avoid this you can also list the
columns explicitly. For example, both of the following commands columns explicitly. For example, both of the following commands
have the same effect as the one above: have the same effect as the one above:
<programlisting> <programlisting>
@ -137,15 +137,15 @@ INSERT INTO products (product_no, name, price) VALUES
To perform an update, you need three pieces of information: To perform an update, you need three pieces of information:
<orderedlist spacing="compact"> <orderedlist spacing="compact">
<listitem> <listitem>
<para>The name of the table and column to update,</para> <para>The name of the table and column to update</para>
</listitem> </listitem>
<listitem> <listitem>
<para>The new value of the column,</para> <para>The new value of the column</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Which row(s) to update.</para> <para>Which row(s) to update</para>
</listitem> </listitem>
</orderedlist> </orderedlist>
</para> </para>
@ -153,10 +153,10 @@ INSERT INTO products (product_no, name, price) VALUES
<para> <para>
Recall from <xref linkend="ddl"> that SQL does not, in general, Recall from <xref linkend="ddl"> that SQL does not, in general,
provide a unique identifier for rows. Therefore it is not provide a unique identifier for rows. Therefore it is not
necessarily possible to directly specify which row to update. always possible to directly specify which row to update.
Instead, you specify which conditions a row must meet in order to Instead, you specify which conditions a row must meet in order to
be updated. Only if you have a primary key in the table (no matter be updated. Only if you have a primary key in the table (independent of
whether you declared it or not) can you reliably address individual rows, whether you declared it or not) can you reliably address individual rows
by choosing a condition that matches the primary key. by choosing a condition that matches the primary key.
Graphical database access tools rely on this fact to allow you to Graphical database access tools rely on this fact to allow you to
update rows individually. update rows individually.
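For example, with <structfield>product_no</> as the primary key of the products table, a single row can be addressed reliably:
<programlisting>
UPDATE products SET price = 9.99 WHERE product_no = 1;   -- matches at most one row
</programlisting>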
@ -177,7 +177,7 @@ UPDATE products SET price = 10 WHERE price = 5;
<literal>UPDATE</literal> followed by the table name. As usual, <literal>UPDATE</literal> followed by the table name. As usual,
the table name can be schema-qualified, otherwise it is looked up the table name can be schema-qualified, otherwise it is looked up
in the path. Next is the key word <literal>SET</literal> followed in the path. Next is the key word <literal>SET</literal> followed
by the column name, an equals sign and the new column value. The by the column name, an equal sign, and the new column value. The
new column value can be any scalar expression, not just a constant. new column value can be any scalar expression, not just a constant.
For example, if you want to raise the price of all products by 10% For example, if you want to raise the price of all products by 10%
you could use: you could use:
@ -248,7 +248,10 @@ DELETE FROM products WHERE price = 10;
<programlisting> <programlisting>
DELETE FROM products; DELETE FROM products;
</programlisting> </programlisting>
then all rows in the table will be deleted! Caveat programmer. then all rows in the table will be deleted! (<xref
linkend="sql-truncate" endterm="sql-truncate-title"> can also be used
to delete all rows.)
Caveat programmer.
</para> </para>
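A sketch of the <command>TRUNCATE</command> alternative, which removes all rows without scanning the table and is therefore faster on large tables:
<programlisting>
TRUNCATE products;   -- same end state as an unqualified DELETE
</programlisting>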
</sect1> </sect1>
</chapter> </chapter>

View File

@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/docguide.sgml,v 1.74 2008/11/03 15:39:38 alvherre Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/docguide.sgml,v 1.75 2009/04/27 16:27:35 momjian Exp $ -->
<appendix id="docguide"> <appendix id="docguide">
<title>Documentation</title> <title>Documentation</title>
@ -358,7 +358,7 @@ CATALOG "dsssl/catalog"
Create the directory Create the directory
<filename>/usr/local/share/sgml/docbook-4.2</filename> and change <filename>/usr/local/share/sgml/docbook-4.2</filename> and change
to it. (The exact location is irrelevant, but this one is to it. (The exact location is irrelevant, but this one is
reasonable within the layout we are following here.) reasonable within the layout we are following here.):
<screen> <screen>
<prompt>$ </prompt><userinput>mkdir /usr/local/share/sgml/docbook-4.2</userinput> <prompt>$ </prompt><userinput>mkdir /usr/local/share/sgml/docbook-4.2</userinput>
<prompt>$ </prompt><userinput>cd /usr/local/share/sgml/docbook-4.2</userinput> <prompt>$ </prompt><userinput>cd /usr/local/share/sgml/docbook-4.2</userinput>
@ -368,7 +368,7 @@ CATALOG "dsssl/catalog"
<step> <step>
<para> <para>
Unpack the archive. Unpack the archive:
<screen> <screen>
<prompt>$ </prompt><userinput>unzip -a ...../docbook-4.2.zip</userinput> <prompt>$ </prompt><userinput>unzip -a ...../docbook-4.2.zip</userinput>
</screen> </screen>
@ -392,7 +392,7 @@ CATALOG "docbook-4.2/docbook.cat"
<para> <para>
Download the <ulink url="http://www.oasis-open.org/cover/ISOEnts.zip"> Download the <ulink url="http://www.oasis-open.org/cover/ISOEnts.zip">
ISO 8879 character entities archive</ulink>, unpack it, and put the ISO 8879 character entities archive</ulink>, unpack it, and put the
files in the same directory you put the DocBook files in. files in the same directory you put the DocBook files in:
<screen> <screen>
<prompt>$ </prompt><userinput>cd /usr/local/share/sgml/docbook-4.2</userinput> <prompt>$ </prompt><userinput>cd /usr/local/share/sgml/docbook-4.2</userinput>
<prompt>$ </prompt><userinput>unzip ...../ISOEnts.zip</userinput> <prompt>$ </prompt><userinput>unzip ...../ISOEnts.zip</userinput>
@ -421,7 +421,7 @@ perl -pi -e 's/iso-(.*).gml/ISO\1/g' docbook.cat
To install the style sheets, unzip and untar the distribution and To install the style sheets, unzip and untar the distribution and
move it to a suitable place, for example move it to a suitable place, for example
<filename>/usr/local/share/sgml</filename>. (The archive will <filename>/usr/local/share/sgml</filename>. (The archive will
automatically create a subdirectory.) automatically create a subdirectory.):
<screen> <screen>
<prompt>$</prompt> <userinput>gunzip docbook-dsssl-1.<replaceable>xx</>.tar.gz</userinput> <prompt>$</prompt> <userinput>gunzip docbook-dsssl-1.<replaceable>xx</>.tar.gz</userinput>
<prompt>$</prompt> <userinput>tar -C /usr/local/share/sgml -xf docbook-dsssl-1.<replaceable>xx</>.tar</userinput> <prompt>$</prompt> <userinput>tar -C /usr/local/share/sgml -xf docbook-dsssl-1.<replaceable>xx</>.tar</userinput>
@ -652,7 +652,7 @@ gmake man.tar.gz D2MDIR=<replaceable>directory</replaceable>
<screen> <screen>
<prompt>doc/src/sgml$ </prompt><userinput>gmake postgres-A4.pdf</userinput> <prompt>doc/src/sgml$ </prompt><userinput>gmake postgres-A4.pdf</userinput>
</screen> </screen>
or or:
<screen> <screen>
<prompt>doc/src/sgml$ </prompt><userinput>gmake postgres-US.pdf</userinput> <prompt>doc/src/sgml$ </prompt><userinput>gmake postgres-US.pdf</userinput>
</screen> </screen>
@ -738,7 +738,6 @@ save_size.pdfjadetex = 15000
following one. A utility, <command>fixrtf</command>, is following one. A utility, <command>fixrtf</command>, is
available in <filename>doc/src/sgml</filename> to accomplish available in <filename>doc/src/sgml</filename> to accomplish
these repairs: these repairs:
<screen> <screen>
<prompt>doc/src/sgml$ </prompt><userinput>./fixrtf --refentry postgres.rtf</userinput> <prompt>doc/src/sgml$ </prompt><userinput>./fixrtf --refentry postgres.rtf</userinput>
</screen> </screen>

View File

@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/ecpg.sgml,v 1.87 2008/12/07 23:46:39 alvherre Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/ecpg.sgml,v 1.88 2009/04/27 16:27:35 momjian Exp $ -->
<chapter id="ecpg"> <chapter id="ecpg">
<title><application>ECPG</application> - Embedded <acronym>SQL</acronym> in C</title> <title><application>ECPG</application> - Embedded <acronym>SQL</acronym> in C</title>
@ -750,7 +750,7 @@ EXEC SQL DEALLOCATE PREPARE <replaceable>name</replaceable>;
<para> <para>
The pgtypes library maps <productname>PostgreSQL</productname> database The pgtypes library maps <productname>PostgreSQL</productname> database
types to C equivalents that can be used in C programs. It also offers types to C equivalents that can be used in C programs. It also offers
functions to do basic calculations with those types within C, i.e. without functions to do basic calculations with those types within C, i.e., without
the help of the <productname>PostgreSQL</productname> server. See the the help of the <productname>PostgreSQL</productname> server. See the
following example: following example:
<programlisting><![CDATA[ <programlisting><![CDATA[
@ -1232,7 +1232,7 @@ date PGTYPESdate_from_asc(char *str, char **endptr);
char *PGTYPESdate_to_asc(date dDate); char *PGTYPESdate_to_asc(date dDate);
</synopsis> </synopsis>
The function receives the date <literal>dDate</> as its only parameter. The function receives the date <literal>dDate</> as its only parameter.
It will output the date in the form <literal>1999-01-18</>, i.e. in the It will output the date in the form <literal>1999-01-18</>, i.e., in the
<literal>YYYY-MM-DD</> format. <literal>YYYY-MM-DD</> format.
</para> </para>
</listitem> </listitem>

File diff suppressed because it is too large

View File

@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.34 2008/11/19 04:46:37 momjian Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.35 2009/04/27 16:27:35 momjian Exp $ -->
<chapter id="high-availability"> <chapter id="high-availability">
<title>High Availability, Load Balancing, and Replication</title> <title>High Availability, Load Balancing, and Replication</title>
@ -414,7 +414,7 @@ protocol to make nodes agree on a serializable transactional order.
<para> <para>
Data partitioning splits tables into data sets. Each set can Data partitioning splits tables into data sets. Each set can
be modified by only one server. For example, data can be be modified by only one server. For example, data can be
partitioned by offices, e.g. London and Paris, with a server partitioned by offices, e.g., London and Paris, with a server
in each office. If queries combining London and Paris data in each office. If queries combining London and Paris data
are necessary, an application can query both servers, or are necessary, an application can query both servers, or
master/slave replication can be used to keep a read-only copy master/slave replication can be used to keep a read-only copy

View File

@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/history.sgml,v 1.30 2007/10/30 23:06:06 petere Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/history.sgml,v 1.31 2009/04/27 16:27:35 momjian Exp $ -->
 <sect1 id="history">
  <title>A Brief History of <productname>PostgreSQL</productname></title>
@@ -12,7 +12,7 @@
 The object-relational database management system now known as
 <productname>PostgreSQL</productname> is derived from the
 <productname>POSTGRES</productname> package written at the
-University of California at Berkeley. With over a decade of
+University of California at Berkeley. With over two decades of
 development behind it, <productname>PostgreSQL</productname> is now
 the most advanced open-source database available anywhere.
 </para>
@@ -93,7 +93,7 @@
 </indexterm>
 <para>
-In 1994, Andrew Yu and Jolly Chen added a SQL language interpreter
+In 1994, Andrew Yu and Jolly Chen added an SQL language interpreter
 to <productname>POSTGRES</productname>. Under a new name,
 <productname>Postgres95</productname> was subsequently released to
 the web to find its own way in the world as an open-source

@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/indices.sgml,v 1.76 2009/02/07 20:05:44 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/indices.sgml,v 1.77 2009/04/27 16:27:35 momjian Exp $ -->
 <chapter id="indexes">
  <title id="indexes-title">Indexes</title>
@@ -27,35 +27,35 @@ CREATE TABLE test1 (
     content varchar
 );
 </programlisting>
-and the application requires a lot of queries of the form:
+and the application issues many queries of the form:
 <programlisting>
 SELECT content FROM test1 WHERE id = <replaceable>constant</replaceable>;
 </programlisting>
 With no advance preparation, the system would have to scan the entire
 <structname>test1</structname> table, row by row, to find all
-matching entries. If there are a lot of rows in
-<structname>test1</structname> and only a few rows (perhaps only zero
-or one) that would be returned by such a query, then this is clearly an
-inefficient method. But if the system has been instructed to maintain an
-index on the <structfield>id</structfield> column, then it can use a more
+matching entries. If there are many rows in
+<structname>test1</structname> and only a few rows (perhaps zero
+or one) that would be returned by such a query, this is clearly an
+inefficient method. But if the system maintains an
+index on the <structfield>id</structfield> column, it can use a more
 efficient method for locating matching rows. For instance, it
 might only have to walk a few levels deep into a search tree.
 </para>
 <para>
-A similar approach is used in most books of non-fiction: terms and
+A similar approach is used in most non-fiction books: terms and
 concepts that are frequently looked up by readers are collected in
 an alphabetic index at the end of the book. The interested reader
 can scan the index relatively quickly and flip to the appropriate
 page(s), rather than having to read the entire book to find the
 material of interest. Just as it is the task of the author to
-anticipate the items that the readers are likely to look up,
+anticipate the items that readers are likely to look up,
 it is the task of the database programmer to foresee which indexes
-will be of advantage.
+will be useful.
 </para>
 <para>
-The following command would be used to create the index on the
+The following command can be used to create an index on the
 <structfield>id</structfield> column, as discussed:
 <programlisting>
 CREATE INDEX test1_id_index ON test1 (id);
@@ -73,7 +73,7 @@ CREATE INDEX test1_id_index ON test1 (id);
 <para>
 Once an index is created, no further intervention is required: the
 system will update the index when the table is modified, and it will
-use the index in queries when it thinks this would be more efficient
+use the index in queries when it thinks it would be more efficient
 than a sequential table scan. But you might have to run the
 <command>ANALYZE</command> command regularly to update
 statistics to allow the query planner to make educated decisions.
@@ -87,14 +87,14 @@ CREATE INDEX test1_id_index ON test1 (id);
 <command>DELETE</command> commands with search conditions.
 Indexes can moreover be used in join searches. Thus,
 an index defined on a column that is part of a join condition can
-significantly speed up queries with joins.
+also significantly speed up queries with joins.
 </para>
 <para>
 Creating an index on a large table can take a long time. By default,
 <productname>PostgreSQL</productname> allows reads (selects) to occur
-on the table in parallel with creation of an index, but writes (inserts,
-updates, deletes) are blocked until the index build is finished.
+on the table in parallel with index creation, but writes (INSERTs,
+UPDATEs, DELETEs) are blocked until the index build is finished.
 In production environments this is often unacceptable.
 It is possible to allow writes to occur in parallel with index
 creation, but there are several caveats to be aware of &mdash;
@@ -118,8 +118,8 @@ CREATE INDEX test1_id_index ON test1 (id);
 <productname>PostgreSQL</productname> provides several index types:
 B-tree, Hash, GiST and GIN. Each index type uses a different
 algorithm that is best suited to different types of queries.
-By default, the <command>CREATE INDEX</command> command will create a
-B-tree index, which fits the most common situations.
+By default, the <command>CREATE INDEX</command> command creates
+B-tree indexes, which fit the most common situations.
 </para>
 <para>
@@ -159,11 +159,11 @@ CREATE INDEX test1_id_index ON test1 (id);
 'foo%'</literal> or <literal>col ~ '^foo'</literal>, but not
 <literal>col LIKE '%bar'</literal>. However, if your database does not
 use the C locale you will need to create the index with a special
-operator class to support indexing of pattern-matching queries. See
+operator class to support indexing of pattern-matching queries; see
 <xref linkend="indexes-opclass"> below. It is also possible to use
 B-tree indexes for <literal>ILIKE</literal> and
 <literal>~*</literal>, but only if the pattern starts with
-non-alphabetic characters, i.e. characters that are not affected by
+non-alphabetic characters, i.e., characters that are not affected by
 upper/lower case conversion.
 </para>
@@ -180,7 +180,7 @@ CREATE INDEX test1_id_index ON test1 (id);
 Hash indexes can only handle simple equality comparisons.
 The query planner will consider using a hash index whenever an
 indexed column is involved in a comparison using the
-<literal>=</literal> operator. (But hash indexes do not support
+<literal>=</literal> operator. (Hash indexes do not support
 <literal>IS NULL</> searches.)
 The following command is used to create a hash index:
 <synopsis>
@@ -290,11 +290,11 @@ CREATE TABLE test2 (
 );
 </programlisting>
 (say, you keep your <filename class="directory">/dev</filename>
-directory in a database...) and you frequently make queries like:
+directory in a database...) and you frequently issue queries like:
 <programlisting>
 SELECT name FROM test2 WHERE major = <replaceable>constant</replaceable> AND minor = <replaceable>constant</replaceable>;
 </programlisting>
-then it might be appropriate to define an index on the columns
+then it might be appropriate to define an index on columns
 <structfield>major</structfield> and
 <structfield>minor</structfield> together, e.g.:
 <programlisting>
@@ -359,7 +359,7 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor);
 Indexes with more than three columns are unlikely to be helpful
 unless the usage of the table is extremely stylized. See also
 <xref linkend="indexes-bitmap-scans"> for some discussion of the
-merits of different index setups.
+merits of different index configurations.
 </para>
 </sect1>
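The multicolumn-index guidance in the hunks above can be sketched concretely. This is an illustrative sketch built on the `test2` example from the changed text; the constants are made up:

```sql
-- The test2 example from the surrounding text.
CREATE TABLE test2 (
    major int,
    minor int,
    name  varchar
);

-- A two-column index; the leading column matters most.
CREATE INDEX test2_mm_idx ON test2 (major, minor);

-- Can use the index on both columns:
SELECT name FROM test2 WHERE major = 10 AND minor = 3;

-- Can also use the index, through the leading column alone:
SELECT name FROM test2 WHERE major = 10;

-- Unlikely to use it effectively: the leading column is unconstrained.
SELECT name FROM test2 WHERE minor = 3;
```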
@@ -375,7 +375,7 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor);
 <para>
 In addition to simply finding the rows to be returned by a query,
 an index may be able to deliver them in a specific sorted order.
-This allows a query's <literal>ORDER BY</> specification to be met
+This allows a query's <literal>ORDER BY</> specification to be honored
 without a separate sorting step. Of the index types currently
 supported by <productname>PostgreSQL</productname>, only B-tree
 can produce sorted output &mdash; the other index types return
@@ -384,22 +384,23 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor);
 <para>
 The planner will consider satisfying an <literal>ORDER BY</> specification
-either by scanning any available index that matches the specification,
+by either scanning an available index that matches the specification,
 or by scanning the table in physical order and doing an explicit
 sort. For a query that requires scanning a large fraction of the
-table, the explicit sort is likely to be faster because it requires
-less disk I/O due to a better-ordered access pattern. Indexes are
+table, the explicit sort is likely to be faster than using an index
+because it requires
+less disk I/O due to a sequential access pattern. Indexes are
 more useful when only a few rows need be fetched. An important
 special case is <literal>ORDER BY</> in combination with
 <literal>LIMIT</> <replaceable>n</>: an explicit sort will have to process
-all the data to identify the first <replaceable>n</> rows, but if there is
-an index matching the <literal>ORDER BY</> then the first <replaceable>n</>
+all data to identify the first <replaceable>n</> rows, but if there is
+an index matching the <literal>ORDER BY</>, the first <replaceable>n</>
 rows can be retrieved directly, without scanning the remainder at all.
 </para>
 <para>
 By default, B-tree indexes store their entries in ascending order
-with nulls last. This means that a forward scan of an index on a
+with nulls last. This means that a forward scan of an index on
 column <literal>x</> produces output satisfying <literal>ORDER BY x</>
 (or more verbosely, <literal>ORDER BY x ASC NULLS LAST</>). The
 index can also be scanned backward, producing output satisfying
@@ -432,14 +433,14 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST);
 <literal>ORDER BY x DESC, y DESC</> if we scan backward.
 But it might be that the application frequently needs to use
 <literal>ORDER BY x ASC, y DESC</>. There is no way to get that
-ordering from a regular index, but it is possible if the index is defined
+ordering from a simpler index, but it is possible if the index is defined
 as <literal>(x ASC, y DESC)</> or <literal>(x DESC, y ASC)</>.
 </para>
 <para>
 Obviously, indexes with non-default sort orderings are a fairly
 specialized feature, but sometimes they can produce tremendous
-speedups for certain queries. Whether it's worth keeping such an
+speedups for certain queries. Whether it's worth creating such an
 index depends on how often you use queries that require a special
 sort ordering.
 </para>
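The ordering discussion above can be sketched as follows; `test3` and its DESC index name come from the hunk header, while the table definition, the second index, and the LIMIT query are illustrative assumptions:

```sql
CREATE TABLE test3 (id int, payload text);

-- Matches ORDER BY id DESC NULLS LAST without a separate sort step;
-- with LIMIT, only the first rows need be read from the index:
CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST);
SELECT * FROM test3 ORDER BY id DESC NULLS LAST LIMIT 10;

-- A mixed ordering needs an index declared with mixed directions:
CREATE INDEX test3_mixed_idx ON test3 (id ASC, payload DESC);
SELECT * FROM test3 ORDER BY id ASC, payload DESC;
```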
@@ -468,7 +469,7 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST);
 </para>
 <para>
-Beginning in release 8.1,
+Fortunately,
 <productname>PostgreSQL</> has the ability to combine multiple indexes
 (including multiple uses of the same index) to handle cases that cannot
 be implemented by single index scans. The system can form <literal>AND</>
@@ -513,7 +514,7 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST);
 more efficient than index combination for queries involving both
 columns, but as discussed in <xref linkend="indexes-multicolumn">, it
 would be almost useless for queries involving only <literal>y</>, so it
-could not be the only index. A combination of the multicolumn index
+should not be the only index. A combination of the multicolumn index
 and a separate index on <literal>y</> would serve reasonably well. For
 queries involving only <literal>x</>, the multicolumn index could be
 used, though it would be larger and hence slower than an index on
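A minimal sketch of the index-combination idea discussed in this hunk, with hypothetical table and column names (the planner can bitmap-AND/OR separate scans of the two single-column indexes):

```sql
CREATE TABLE tab (x int, y int);
CREATE INDEX tab_x_idx ON tab (x);
CREATE INDEX tab_y_idx ON tab (y);

-- Either query can be served by combining the two index scans:
SELECT * FROM tab WHERE x = 5 AND y = 6;
SELECT * FROM tab WHERE x = 5 OR y = 6;
```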
@@ -547,16 +548,16 @@ CREATE UNIQUE INDEX <replaceable>name</replaceable> ON <replaceable>table</repla
 <para>
 When an index is declared unique, multiple table rows with equal
-indexed values will not be allowed. Null values are not considered
+indexed values are not allowed. Null values are not considered
 equal. A multicolumn unique index will only reject cases where all
-of the indexed columns are equal in two rows.
+indexed columns are equal in multiple rows.
 </para>
 <para>
 <productname>PostgreSQL</productname> automatically creates a unique
-index when a unique constraint or a primary key is defined for a table.
+index when a unique constraint or primary key is defined for a table.
 The index covers the columns that make up the primary key or unique
-columns (a multicolumn index, if appropriate), and is the mechanism
+constraint (a multicolumn index, if appropriate), and is the mechanism
 that enforces the constraint.
 </para>
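A hedged sketch of the unique-index behavior this hunk describes; the table and index names here are invented:

```sql
-- A unique constraint is enforced by an automatically created
-- unique index; the explicit form below is equivalent in effect.
CREATE TABLE accounts (
    account_no int UNIQUE,
    owner      text
);
CREATE UNIQUE INDEX accounts_owner_idx ON accounts (owner);

-- Nulls are not considered equal, so several rows with a NULL
-- owner can coexist despite the unique index.
INSERT INTO accounts VALUES (1, NULL);
INSERT INTO accounts VALUES (2, NULL);
```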
@@ -583,9 +584,9 @@ CREATE UNIQUE INDEX <replaceable>name</replaceable> ON <replaceable>table</repla
 </indexterm>
 <para>
-An index column need not be just a column of the underlying table,
+An index column need not be just a column of an underlying table,
 but can be a function or scalar expression computed from one or
-more columns of the table. This feature is useful to obtain fast
+more columns of a table. This feature is useful to obtain fast
 access to tables based on the results of computations.
 </para>
@@ -595,9 +596,9 @@ CREATE UNIQUE INDEX <replaceable>name</replaceable> ON <replaceable>table</repla
 <programlisting>
 SELECT * FROM test1 WHERE lower(col1) = 'value';
 </programlisting>
-This query can use an index, if one has been
+This query can use an index if one has been
 defined on the result of the <literal>lower(col1)</literal>
-operation:
+function:
 <programlisting>
 CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));
 </programlisting>
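The expression-index hunk above can be condensed into one self-contained sketch; the index, column, and query come from the surrounding text, while the table definition is an assumption (the original CREATE TABLE lies outside this excerpt):

```sql
CREATE TABLE test1 (id int, col1 varchar);

-- Index the computed value, not the raw column:
CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));

-- Usable because the WHERE clause matches the indexed
-- expression exactly:
SELECT * FROM test1 WHERE lower(col1) = 'value';
```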
@@ -612,7 +613,7 @@ CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));
 </para>
 <para>
-As another example, if one often does queries like this:
+As another example, if one often does queries like:
 <programlisting>
 SELECT * FROM people WHERE (first_name || ' ' || last_name) = 'John Smith';
 </programlisting>
@@ -655,7 +656,7 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name));
 A <firstterm>partial index</firstterm> is an index built over a
 subset of a table; the subset is defined by a conditional
 expression (called the <firstterm>predicate</firstterm> of the
-partial index). The index contains entries for only those table
+partial index). The index contains entries only for those table
 rows that satisfy the predicate. Partial indexes are a specialized
 feature, but there are several situations in which they are useful.
 </para>
@@ -665,8 +666,8 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name));
 values. Since a query searching for a common value (one that
 accounts for more than a few percent of all the table rows) will not
 use the index anyway, there is no point in keeping those rows in the
-index at all. This reduces the size of the index, which will speed
-up queries that do use the index. It will also speed up many table
+index. A partial index reduces the size of the index, which speeds
+up queries that use the index. It will also speed up many table
 update operations because the index does not need to be
 updated in all cases. <xref linkend="indexes-partial-ex1"> shows a
 possible application of this idea.
@@ -700,39 +701,43 @@ CREATE TABLE access_log (
 such as this:
 <programlisting>
 CREATE INDEX access_log_client_ip_ix ON access_log (client_ip)
-WHERE NOT (client_ip &gt; inet '192.168.100.0' AND client_ip &lt; inet '192.168.100.255');
+WHERE NOT (client_ip &gt; inet '192.168.100.0' AND
+           client_ip &lt; inet '192.168.100.255');
 </programlisting>
 </para>
 <para>
 A typical query that can use this index would be:
 <programlisting>
-SELECT * FROM access_log WHERE url = '/index.html' AND client_ip = inet '212.78.10.32';
+SELECT *
+FROM access_log
+WHERE url = '/index.html' AND client_ip = inet '212.78.10.32';
 </programlisting>
 A query that cannot use this index is:
 <programlisting>
-SELECT * FROM access_log WHERE client_ip = inet '192.168.100.23';
+SELECT *
+FROM access_log
+WHERE client_ip = inet '192.168.100.23';
 </programlisting>
 </para>
 <para>
 Observe that this kind of partial index requires that the common
-values be predetermined. If the distribution of values is
-inherent (due to the nature of the application) and static (not
-changing over time), this is not difficult, but if the common values are
-merely due to the coincidental data load this can require a lot of
-maintenance work to change the index definition from time to time.
+values be predetermined, so such partial indexes are best used for
+data distributions that do not change. The indexes can be recreated
+occasionally to adjust for new data distributions, but this adds
+maintenance overhead.
 </para>
 </example>
 <para>
-Another possible use for a partial index is to exclude values from the
+Another possible use for partial indexes is to exclude values from the
 index that the
 typical query workload is not interested in; this is shown in <xref
 linkend="indexes-partial-ex2">. This results in the same
 advantages as listed above, but it prevents the
 <quote>uninteresting</quote> values from being accessed via that
-index at all, even if an index scan might be profitable in that
+index, even if an index scan might be profitable in that
 case. Obviously, setting up partial indexes for this kind of
 scenario will require a lot of care and experimentation.
 </para>
@@ -774,7 +779,7 @@ SELECT * FROM orders WHERE billed is not true AND amount &gt; 5000.00;
 <programlisting>
 SELECT * FROM orders WHERE order_nr = 3501;
 </programlisting>
-The order 3501 might be among the billed or among the unbilled
+The order 3501 might be among the billed or unbilled
 orders.
 </para>
 </example>
@@ -799,9 +804,9 @@ SELECT * FROM orders WHERE order_nr = 3501;
 <quote>x &lt; 1</quote> implies <quote>x &lt; 2</quote>; otherwise
 the predicate condition must exactly match part of the query's
 <literal>WHERE</> condition
-or the index will not be recognized to be usable. Matching takes
+or the index will not be recognized as usable. Matching takes
 place at query planning time, not at run time. As a result,
-parameterized query clauses will not work with a partial index. For
+parameterized query clauses do not work with a partial index. For
 example a prepared query with a parameter might specify
 <quote>x &lt; ?</quote> which will never imply
 <quote>x &lt; 2</quote> for all possible values of the parameter.
@@ -835,7 +840,7 @@ CREATE TABLE tests (
 CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target)
     WHERE success;
 </programlisting>
-This is a particularly efficient way of doing it when there are few
+This is a particularly efficient approach when there are few
 successful tests and many unsuccessful ones.
 </para>
 </example>
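The partial-unique-index example in this hunk reads, in full, roughly as follows (the column types are an assumption; the original CREATE TABLE body lies outside this excerpt):

```sql
CREATE TABLE tests (
    subject text,
    target  text,
    success boolean
);

-- Uniqueness is enforced only among successful tests:
CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target)
    WHERE success;

-- Any number of failures for a pair is allowed...
INSERT INTO tests VALUES ('subj', 'tgt', false);
INSERT INTO tests VALUES ('subj', 'tgt', false);
-- ...but only one success; a second one would violate the index:
INSERT INTO tests VALUES ('subj', 'tgt', true);
```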
@@ -859,7 +864,7 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target)
 know when an index might be profitable. Forming this knowledge
 requires experience and understanding of how indexes in
 <productname>PostgreSQL</> work. In most cases, the advantage of a
-partial index over a regular index will not be much.
+partial index over a regular index will be minimal.
 </para>
 <para>
@@ -892,7 +897,7 @@ CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable>
 would use the <literal>int4_ops</literal> class; this operator
 class includes comparison functions for values of type <type>int4</type>.
 In practice the default operator class for the column's data type is
-usually sufficient. The main point of having operator classes is
+usually sufficient. The main reason for having operator classes is
 that for some data types, there could be more than one meaningful
 index behavior. For example, we might want to sort a complex-number data
 type either by absolute value or by real part. We could do this by
@@ -931,7 +936,7 @@ CREATE INDEX test_index ON test_table (col varchar_pattern_ops);
 to use an index. Such queries cannot use the
 <literal><replaceable>xxx</replaceable>_pattern_ops</literal>
 operator classes. (Ordinary equality comparisons can use these
-operator classes, however.) It is allowed to create multiple
+operator classes, however.) It is possible to create multiple
 indexes on the same column with different operator classes.
 If you do use the C locale, you do not need the
 <literal><replaceable>xxx</replaceable>_pattern_ops</literal>
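To illustrate the operator-class discussion; `test_table` and `test_index` come from the hunk header, while the second, default-class index and the sample queries are invented additions:

```sql
CREATE TABLE test_table (col varchar);

-- Supports anchored LIKE patterns even in a non-C locale:
CREATE INDEX test_index ON test_table (col varchar_pattern_ops);

-- A second index on the same column with the default operator
-- class, for ordinary <, <=, >, >= comparisons and ORDER BY:
CREATE INDEX test_index_plain ON test_table (col);

SELECT * FROM test_table WHERE col LIKE 'foo%';  -- pattern_ops index
SELECT * FROM test_table WHERE col >= 'bar';     -- default-class index
```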
@@ -990,7 +995,7 @@ SELECT am.amname AS index_method,
 <para>
 Although indexes in <productname>PostgreSQL</> do not need
-maintenance and tuning, it is still important to check
+maintenance or tuning, it is still important to check
 which indexes are actually used by the real-life query workload.
 Examining index usage for an individual query is done with the
 <xref linkend="sql-explain" endterm="sql-explain-title">
@@ -1002,10 +1007,10 @@ SELECT am.amname AS index_method,
 <para>
 It is difficult to formulate a general procedure for determining
-which indexes to set up. There are a number of typical cases that
+which indexes to create. There are a number of typical cases that
 have been shown in the examples throughout the previous sections.
-A good deal of experimentation will be necessary in most cases.
-The rest of this section gives some tips for that.
+A good deal of experimentation is often necessary.
+The rest of this section gives some tips for that:
 </para>
 <itemizedlist>
@@ -1014,7 +1019,7 @@ SELECT am.amname AS index_method,
 Always run <xref linkend="sql-analyze" endterm="sql-analyze-title">
 first. This command
 collects statistics about the distribution of the values in the
-table. This information is required to guess the number of rows
+table. This information is required to estimate the number of rows
 returned by a query, which is needed by the planner to assign
 realistic costs to each possible query plan. In absence of any
 real statistics, some default values are assumed, which are
@@ -1035,13 +1040,13 @@ SELECT am.amname AS index_method,
 It is especially fatal to use very small test data sets.
 While selecting 1000 out of 100000 rows could be a candidate for
 an index, selecting 1 out of 100 rows will hardly be, because the
-100 rows will probably fit within a single disk page, and there
+100 rows probably fit within a single disk page, and there
 is no plan that can beat sequentially fetching 1 disk page.
 </para>
 <para>
 Also be careful when making up test data, which is often
-unavoidable when the application is not in production use yet.
+unavoidable when the application is not yet in production.
 Values that are very similar, completely random, or inserted in
 sorted order will skew the statistics away from the distribution
 that real data would have.
@ -1058,7 +1063,7 @@ SELECT am.amname AS index_method,
(<varname>enable_nestloop</>), which are the most basic plans, (<varname>enable_nestloop</>), which are the most basic plans,
will force the system to use a different plan. If the system will force the system to use a different plan. If the system
still chooses a sequential scan or nested-loop join then there is still chooses a sequential scan or nested-loop join then there is
probably a more fundamental reason why the index is not probably a more fundamental reason why the index is not being
used; for example, the query condition does not match the index. used; for example, the query condition does not match the index.
(What kind of query can use what kind of index is explained in (What kind of query can use what kind of index is explained in
the previous sections.) the previous sections.)
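The planner experiment described in the tip above can be sketched roughly as follows; the table and column names here are hypothetical illustrations, not taken from the patched documentation:

```sql
-- Hypothetical table with an index on col1; refresh planner statistics first.
ANALYZE test1;

-- See which plan the planner currently picks:
EXPLAIN SELECT * FROM test1 WHERE col1 = 42;

-- Temporarily forbid sequential scans and compare the plans:
SET enable_seqscan = off;
EXPLAIN SELECT * FROM test1 WHERE col1 = 42;
RESET enable_seqscan;
```

If the second plan still avoids the index, there is probably a more fundamental mismatch between the query condition and the index, as the hunk notes.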
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/info.sgml,v 1.26 2008/01/09 02:37:45 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/info.sgml,v 1.27 2009/04/27 16:27:35 momjian Exp $ -->
 <sect1 id="resources">
  <title>Further Information</title>
@@ -8,12 +8,17 @@
   resources about <productname>PostgreSQL</productname>:
   <variablelist>
   <varlistentry>
-   <term>FAQs</term>
+   <term>Wiki</term>
    <listitem>
     <para>
-     The FAQ list <indexterm><primary>FAQ</></> contains
-     continuously updated answers to frequently asked questions.
+     The <productname>PostgreSQL</productname> <ulink
+     url="http://wiki.postgresql.org">wiki</ulink> contains the project's <ulink
+     url="http://wiki.postgresql.org/wiki/Frequently_Asked_Questions">FAQ</>
+     (Frequently Asked Questions) list, <ulink
+     url="http://wiki.postgresql.org/wiki/Todo">TODO</> list, and
+     detailed information about many more topics.
     </para>
    </listitem>
   </varlistentry>
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/install-win32.sgml,v 1.51 2009/01/09 13:37:18 petere Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/install-win32.sgml,v 1.52 2009/04/27 16:27:35 momjian Exp $ -->
 <chapter id="install-win32">
  <title>Installation from Source Code on <productname>Windows</productname></title>
@@ -383,7 +383,7 @@
  <para>
   To build the <application>libpq</application> client library using
   <productname>Visual Studio 7.1 or later</productname>, change into the
-  <filename>src</filename> directory and type the command
+  <filename>src</filename> directory and type the command:
<screen>
<userinput>nmake /f win32.mak</userinput>
</screen>
@@ -392,7 +392,7 @@
   To build a 64-bit version of the <application>libpq</application>
   client library using <productname>Visual Studio 8.0 or
   later</productname>, change into the <filename>src</filename>
-  directory and type in the command
+  directory and type in the command:
<screen>
<userinput>nmake /f win32.mak CPU=AMD64</userinput>
</screen>
@@ -403,7 +403,7 @@
  <para>
   To build the <application>libpq</application> client library using
   <productname>Borland C++</productname>, change into the
-  <filename>src</filename> directory and type the command
+  <filename>src</filename> directory and type the command:
<screen>
<userinput>make -N -DCFG=Release /f bcc32.mak</userinput>
</screen>
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.320 2009/03/23 01:52:38 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.321 2009/04/27 16:27:35 momjian Exp $ -->
 <chapter id="installation">
  <title><![%standalone-include[<productname>PostgreSQL</>]]>
@@ -11,7 +11,7 @@
  <para>
   This <![%standalone-include;[document]]>
   <![%standalone-ignore;[chapter]]> describes the installation of
-  <productname>PostgreSQL</productname> from the source code
+  <productname>PostgreSQL</productname> using the source code
   distribution.  (If you are installing a pre-packaged distribution,
   such as an RPM or Debian package, ignore this
   <![%standalone-include;[document]]>
@@ -75,7 +75,7 @@ su - postgres
    refer to it by that name.  (On some systems
    <acronym>GNU</acronym> <application>make</> is the default tool with the name
    <filename>make</>.)  To test for <acronym>GNU</acronym>
-   <application>make</application> enter
+   <application>make</application> enter:
<screen>
<userinput>gmake --version</userinput>
</screen>
@@ -85,9 +85,10 @@ su - postgres
   <listitem>
    <para>
-    You need an <acronym>ISO</>/<acronym>ANSI</> C compiler.  Recent
+    You need an <acronym>ISO</>/<acronym>ANSI</> C compiler (minimum
+    C89-compliant).  Recent
     versions of <productname>GCC</> are recommendable, but
-    <productname>PostgreSQL</> is known to build with a wide variety
+    <productname>PostgreSQL</> is known to build using a wide variety
     of compilers from different vendors.
    </para>
   </listitem>
@@ -95,7 +96,7 @@ su - postgres
   <listitem>
    <para>
    <application>tar</> is required to unpack the source
-   distribution in the first place, in addition to either
+   distribution, in addition to either
    <application>gzip</> or <application>bzip2</>.  In
    addition, <application>gzip</> is required to install the
    documentation.
@@ -117,7 +118,7 @@ su - postgres
    command you type, and allows you to use arrow keys to recall and
    edit previous commands.  This is very helpful and is strongly
    recommended.  If you don't want to use it then you must specify
-   the <option>--without-readline</option> option for
+   the <option>--without-readline</option> option of
    <filename>configure</>.  As an alternative, you can often use the
    BSD-licensed <filename>libedit</filename> library, originally
    developed on <productname>NetBSD</productname>.  The
@@ -140,7 +141,7 @@ su - postgres
    The <productname>zlib</productname> compression library will be
    used by default.  If you don't want to use it then you must
-   specify the <option>--without-zlib</option> option for
+   specify the <option>--without-zlib</option> option to
    <filename>configure</filename>.  Using this option disables
    support for compressed archives in <application>pg_dump</> and
    <application>pg_restore</>.
@@ -152,7 +153,7 @@ su - postgres
  <para>
   The following packages are optional.  They are not required in the
   default configuration, but they are needed when certain build
-  options are enabled, as explained below.
+  options are enabled, as explained below:

  <itemizedlist>
   <listitem>
@@ -172,7 +173,8 @@ su - postgres
    <para>
     If you don't have the shared library but you need one, a message
-    like this will appear during the build to point out this fact:
+    like this will appear during the <productname>PostgreSQL</>
+    build to point out this fact:
<screen>
*** Cannot build PL/Perl because libperl is not a shared library.
*** You might have to rebuild your Perl installation.  Refer to
@@ -206,7 +208,7 @@ su - postgres
    <filename>libpython</filename> library must be a shared library
    also on most platforms.  This is not the case in a default
    <productname>Python</productname> installation.  If after
-   building and installing you have a file called
+   building and installing <productname>PostgreSQL</> you have a file called
    <filename>plpython.so</filename> (possibly a different
    extension), then everything went well.  Otherwise you should
    have seen a notice like this flying by:
@@ -216,7 +218,7 @@ su - postgres
*** the documentation for details.
</screen>
    That means you have to rebuild (part of) your
-   <productname>Python</productname> installation to supply this
+   <productname>Python</productname> installation to create this
    shared library.
   </para>
@@ -272,7 +274,7 @@ su - postgres
  <para>
   If you are building from a <acronym>CVS</acronym> tree instead of
-  using a released source package, or if you want to do development,
+  using a released source package, or if you want to do server development,
   you also need the following packages:

  <itemizedlist>
@@ -314,7 +316,7 @@ su - postgres
   Also check that you have sufficient disk space.  You will need about
   65 MB for the source tree during compilation and about 15 MB for
   the installation directory.  An empty database cluster takes about
-  25 MB, databases take about five times the amount of space that a
+  25 MB; databases take about five times the amount of space that a
   flat text file with the same data would take.  If you are going to
   run the regression tests you will temporarily need up to an extra
   90 MB.  Use the <command>df</command> command to check free disk
@@ -420,7 +422,7 @@ su - postgres
   On systems that have <productname>PostgreSQL</> started at boot time,
   there is probably a start-up file that will accomplish the same thing.  For
   example, on a <systemitem class="osname">Red Hat Linux</> system one
-  might find that
+  might find that:
<screen>
<userinput>/etc/rc.d/init.d/postgresql stop</userinput>
</screen>
@@ -469,7 +471,7 @@ su - postgres
    <step>
     <para>
-     Start the database server, again from the special database user
+     Start the database server, again the special database user
      account:
<programlisting>
<userinput>/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data</>
@@ -479,7 +481,7 @@ su - postgres
    <step>
     <para>
-     Finally, restore your data from backup with
+     Finally, restore your data from backup with:
<screen>
<userinput>/usr/local/pgsql/bin/psql -d postgres -f <replaceable>outputfile</></userinput>
</screen>
@@ -514,12 +516,12 @@ su - postgres
   The first step of the installation procedure is to configure the
   source tree for your system and choose the options you would like.
   This is done by running the <filename>configure</> script.  For a
-  default installation simply enter
+  default installation simply enter:
<screen>
<userinput>./configure</userinput>
</screen>
-  This script will run a number of tests to guess values for various
-  system dependent variables and detect some quirks of your
+  This script will run a number of tests to determine values for various
+  system dependent variables and detect any quirks of your
   operating system, and finally will create several files in the
   build tree to record what it found.  (You can also run
   <filename>configure</filename> in a directory outside the source
@@ -719,7 +721,7 @@ su - postgres
    internal header files and the server header files are installed
    into private directories under <varname>includedir</varname>.  See
    the documentation of each interface for information about how to
-   get at the its header files.  Finally, a private subdirectory will
+   access its header files.  Finally, a private subdirectory will
    also be created, if appropriate, under <varname>libdir</varname>
    for dynamically loadable modules.
   </para>
@@ -769,7 +771,7 @@ su - postgres
     Enables Native Language Support (<acronym>NLS</acronym>),
     that is, the ability to display a program's messages in a
     language other than English.
-    <replaceable>LANGUAGES</replaceable> is a space-separated
+    <replaceable>LANGUAGES</replaceable> is an optional space-separated
     list of codes of the languages that you want supported, for
     example <literal>--enable-nls='de fr'</>.  (The intersection
     between your list and the set of actually provided
@@ -927,11 +929,11 @@ su - postgres
     and libpq]]><![%standalone-ignore[<xref linkend="libpq-ldap"> and
     <xref linkend="auth-ldap">]]> for more information).  On Unix,
     this requires the <productname>OpenLDAP</> package to be
-    installed.  <filename>configure</> will check for the required
+    installed.  On Windows, the default <productname>WinLDAP</>
+    library is used.  <filename>configure</> will check for the required
     header files and libraries to make sure that your
     <productname>OpenLDAP</> installation is sufficient before
-    proceeding.  On Windows, the default <productname>WinLDAP</>
-    library is used.
+    proceeding.
    </para>
   </listitem>
  </varlistentry>
@@ -1225,7 +1227,7 @@ su - postgres
   <listitem>
    <para>
     Compiles all programs and libraries with debugging symbols.
-    This means that you can run the programs through a debugger
+    This means that you can run the programs in a debugger
     to analyze problems.  This enlarges the size of the installed
     executables considerably, and on non-GCC compilers it usually
     also disables compiler optimization, causing slowdowns.  However,
@@ -1293,7 +1295,7 @@ su - postgres
     be rebuilt when any header file is changed.  This is useful
     if you are doing development work, but is just wasted overhead
     if you intend only to compile once and install.  At present,
-    this option will work only if you use GCC.
+    this option only works with GCC.
    </para>
   </listitem>
  </varlistentry>
@@ -1510,13 +1512,13 @@ su - postgres
  <title>Build</title>

  <para>
-  To start the build, type
+  To start the build, type:
<screen>
<userinput>gmake</userinput>
</screen>
   (Remember to use <acronym>GNU</> <application>make</>.)  The build
   will take a few minutes depending on your
-  hardware.  The last line displayed should be
+  hardware.  The last line displayed should be:
<screen>
All of PostgreSQL is successfully made. Ready to install.
</screen>
@@ -1535,7 +1537,7 @@ All of PostgreSQL is successfully made. Ready to install.
   you can run the regression tests at this point.  The regression
   tests are a test suite to verify that <productname>PostgreSQL</>
   runs on your machine in the way the developers expected it
-  to.  Type
+  to.  Type:
<screen>
<userinput>gmake check</userinput>
</screen>
@@ -1550,7 +1552,7 @@ All of PostgreSQL is successfully made. Ready to install.
  </step>

  <step id="install">
-  <title>Installing The Files</title>
+  <title>Installing the Files</title>

   <note>
    <para>
@@ -1562,14 +1564,14 @@ All of PostgreSQL is successfully made. Ready to install.
   </note>

   <para>
-   To install <productname>PostgreSQL</> enter
+   To install <productname>PostgreSQL</> enter:
<screen>
<userinput>gmake install</userinput>
</screen>
    This will install files into the directories that were specified
    in <xref linkend="configure">.  Make sure that you have appropriate
    permissions to write into that area.  Normally you need to do this
-   step as root.  Alternatively, you could create the target
+   step as root.  Alternatively, you can create the target
    directories in advance and arrange for appropriate permissions to
    be granted.
   </para>
@@ -1639,14 +1641,14 @@ All of PostgreSQL is successfully made. Ready to install.
   <title>Cleaning:</title>

   <para>
-   After the installation you can make room by removing the built
+   After the installation you can free disk space by removing the built
    files from the source tree with the command <command>gmake
    clean</>.  This will preserve the files made by the <command>configure</command>
    program, so that you can rebuild everything with <command>gmake</>
    later on.  To reset the source tree to the state in which it was
    distributed, use <command>gmake distclean</>.  If you are going to
    build for several platforms within the same source tree you must do
-   this and re-configure for each build.  (Alternatively, use
+   this and rebuild for each platform.  (Alternatively, use
    a separate build tree for each platform, so that the source tree
    remains unmodified.)
  </para>
@@ -1673,8 +1675,8 @@ All of PostgreSQL is successfully made. Ready to install.
  </indexterm>

  <para>
-  On some systems that have shared libraries (which most systems do)
-  you need to tell your system how to find the newly installed
+  On several systems with shared libraries
+  you need to tell the system how to find the newly installed
   shared libraries.  The systems on which this is
   <emphasis>not</emphasis> necessary include <systemitem
   class="osname">BSD/OS</>, <systemitem class="osname">FreeBSD</>,
@@ -1688,7 +1690,7 @@ All of PostgreSQL is successfully made. Ready to install.
  <para>
   The method to set the shared library search path varies between
-  platforms, but the most widely usable method is to set the
+  platforms, but the most widely-used method is to set the
   environment variable <envar>LD_LIBRARY_PATH</> like so: In Bourne
   shells (<command>sh</>, <command>ksh</>, <command>bash</>, <command>zsh</>):
<programlisting>
@@ -1724,7 +1726,7 @@ setenv LD_LIBRARY_PATH /usr/local/pgsql/lib
  <para>
   If in doubt, refer to the manual pages of your system (perhaps
   <command>ld.so</command> or <command>rld</command>).  If you later
-  on get a message like
+  get a message like:
<screen>
psql: error in loading shared libraries
libpq.so.2.1: cannot open shared object file: No such file or directory
@@ -1776,7 +1778,7 @@ libpq.so.2.1: cannot open shared object file: No such file or directory
  <para>
   To do this, add the following to your shell start-up file, such as
   <filename>~/.bash_profile</> (or <filename>/etc/profile</>, if you
-  want it to affect every user):
+  want it to affect all users):
<programlisting>
PATH=/usr/local/pgsql/bin:$PATH
export PATH
@@ -1807,7 +1809,7 @@ export MANPATH
   server, overriding the compiled-in defaults.  If you are going to
   run client applications remotely then it is convenient if every
   user that plans to use the database sets <envar>PGHOST</>.  This
-  is not required, however: the settings can be communicated via command
+  is not required, however; the settings can be communicated via command
   line options to most client programs.
  </para>
 </sect2>
@@ -1902,7 +1904,7 @@ kill `cat /usr/local/pgsql/data/postmaster.pid`
<screen>
<userinput>createdb testdb</>
</screen>
-   Then enter
+   Then enter:
<screen>
<userinput>psql testdb</>
</screen>
@@ -2950,7 +2952,7 @@ LIBOBJS = snprintf.o
  <para>
   If you see the linking of the postgres executable abort with an
-  error message like
+  error message like:
<screen>
Undefined                       first referenced
 symbol                             in file
@@ -1,11 +1,11 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/intro.sgml,v 1.34 2009/01/27 12:40:14 petere Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/intro.sgml,v 1.35 2009/04/27 16:27:36 momjian Exp $ -->
 <preface id="preface">
  <title>Preface</title>

  <para>
   This book is the official documentation of
-  <productname>PostgreSQL</productname>.  It is being written by the
+  <productname>PostgreSQL</productname>.  It has been written by the
   <productname>PostgreSQL</productname> developers and other
   volunteers in parallel to the development of the
   <productname>PostgreSQL</productname> software.  It describes all
@@ -58,7 +58,7 @@
    <para>
     <xref linkend="server-programming"> contains information for
     advanced users about the extensibility capabilities of the
-    server.  Topics are, for instance, user-defined data types and
+    server.  Topics include user-defined data types and
     functions.
    </para>
   </listitem>
@@ -148,7 +148,7 @@
  <para>
   And because of the liberal license,
   <productname>PostgreSQL</productname> can be used, modified, and
-  distributed by everyone free of charge for any purpose, be it
+  distributed by anyone free of charge for any purpose, be it
   private, commercial, or academic.
  </para>
 </sect1>
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/libpq.sgml,v 1.287 2009/04/24 14:10:41 mha Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/libpq.sgml,v 1.288 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter id="libpq">
  <title><application>libpq</application> - C Library</title>
@@ -6633,7 +6633,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough)
#include &lt;libpq-fe.h&gt;
</programlisting>
   If you failed to do that then you will normally get error messages
-  from your compiler similar to
+  from your compiler similar to:
<screen>
foo.c: In function `main':
foo.c:34: `PGconn' undeclared (first use in this function)
@@ -6679,7 +6679,7 @@ CPPFLAGS += -I/usr/local/pgsql/include
  <para>
   Failure to specify the correct option to the compiler will
-  result in an error message such as
+  result in an error message such as:
<screen>
testlibpq.c:8:22: libpq-fe.h: No such file or directory
</screen>
@@ -6713,7 +6713,7 @@ cc -o testprog testprog1.o testprog2.o -L/usr/local/pgsql/lib -lpq
  <para>
   Error messages that point to problems in this area could look like
-  the following.
+  the following:
<screen>
testlibpq.o: In function `main':
testlibpq.o(.text+0x60): undefined reference to `PQsetdbLogin'
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/monitoring.sgml,v 1.68 2009/04/10 03:13:36 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/monitoring.sgml,v 1.69 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter id="monitoring">
  <title>Monitoring Database Activity</title>
@@ -929,7 +929,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
    <function>read()</> calls issued for the table, index, or
    database; the number of actual physical reads is usually
    lower due to kernel-level buffering.  The <literal>*_blks_read</>
-   statistics columns uses this subtraction, i.e. fetched minus hit.
+   statistics columns uses this subtraction, i.e., fetched minus hit.
   </para>
  </note>
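The fetched-minus-hit arithmetic described in the note above shows up directly in the statistics views; a rough illustration using the standard pg_statio_user_tables view (the hit-ratio expression is an example, not part of the patched documentation):

```sql
-- heap_blks_read counts block fetches not satisfied from the buffer cache
-- (fetched minus hit); heap_blks_hit counts cache hits.
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_read + heap_blks_hit, 0), 2) AS cache_hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC;
```

Remember that heap_blks_read counts kernel read() calls, not necessarily physical disk reads.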
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/mvcc.sgml,v 2.70 2009/02/04 16:05:50 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/mvcc.sgml,v 2.71 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter id="mvcc">
  <title>Concurrency Control</title>
@@ -43,7 +43,7 @@
  </para>

  <para>
-  The main advantage to using the <acronym>MVCC</acronym> model of
+  The main advantage of using the <acronym>MVCC</acronym> model of
   concurrency control rather than locking is that in
   <acronym>MVCC</acronym> locks acquired for querying (reading) data
   do not conflict with locks acquired for writing data, and so
@@ -246,7 +246,7 @@
   committed before the query began; it never sees either uncommitted
   data or changes committed during query execution by concurrent
   transactions.  In effect, a <command>SELECT</command> query sees
-  a snapshot of the database as of the instant the query begins to
+  a snapshot of the database at the instant the query begins to
   run.  However, <command>SELECT</command> does see the effects
   of previous updates executed within its own transaction, even
   though they are not yet committed.  Also note that two successive
@@ -260,7 +260,7 @@
   FOR UPDATE</command>, and <command>SELECT FOR SHARE</command> commands
   behave the same as <command>SELECT</command>
   in terms of searching for target rows: they will only find target rows
-  that were committed as of the command start time.  However, such a target
+  that were committed before the command start time.  However, such a target
   row might have already been updated (or deleted or locked) by
   another concurrent transaction by the time it is found.  In this case, the
   would-be updater will wait for the first updating transaction to commit or
@ -296,7 +296,7 @@ COMMIT;
</screen> </screen>
If two such transactions concurrently try to change the balance of account If two such transactions concurrently try to change the balance of account
12345, we clearly want the second transaction to start from the updated 12345, we clearly want the second transaction to start with the updated
version of the account's row. Because each command is affecting only a version of the account's row. Because each command is affecting only a
predetermined row, letting it see the updated version of the row does predetermined row, letting it see the updated version of the row does
not create any troublesome inconsistency. not create any troublesome inconsistency.
@ -306,7 +306,7 @@ COMMIT;
More complex usage can produce undesirable results in Read Committed More complex usage can produce undesirable results in Read Committed
mode. For example, consider a <command>DELETE</command> command mode. For example, consider a <command>DELETE</command> command
operating on data that is being both added and removed from its operating on data that is being both added and removed from its
restriction criteria by another command, e.g. assume restriction criteria by another command, e.g., assume
<literal>website</literal> is a two-row table with <literal>website</literal> is a two-row table with
<literal>website.hits</literal> equaling <literal>9</literal> and <literal>website.hits</literal> equaling <literal>9</literal> and
<literal>10</literal>: <literal>10</literal>:
@ -354,7 +354,7 @@ COMMIT;
</indexterm> </indexterm>
<para> <para>
The level <firstterm>Serializable</firstterm> provides the strictest transaction The <firstterm>Serializable</firstterm> isolation level provides the strictest transaction
isolation. This level emulates serial transaction execution, isolation. This level emulates serial transaction execution,
as if transactions had been executed one after another, serially, as if transactions had been executed one after another, serially,
rather than concurrently. However, applications using this level must rather than concurrently. However, applications using this level must
@ -362,19 +362,21 @@ COMMIT;
</para> </para>
<para> <para>
When a transaction is on the serializable level, When a transaction is using the serializable level,
a <command>SELECT</command> query sees only data committed before the a <command>SELECT</command> query only sees data committed before the
transaction began; it never sees either uncommitted data or changes transaction began; it never sees either uncommitted data or changes
committed committed
during transaction execution by concurrent transactions. (However, the during transaction execution by concurrent transactions. (However,
<command>SELECT</command> does see the effects of previous updates <command>SELECT</command> does see the effects of previous updates
executed within its own transaction, even though they are not yet executed within its own transaction, even though they are not yet
committed.) This is different from Read Committed in that the committed.) This is different from Read Committed in that
<command>SELECT</command> <command>SELECT</command> in a serializable transaction
sees a snapshot as of the start of the transaction, not as of the start sees a snapshot as of the start of the <emphasis>transaction</>, not as of the start
of the current query within the transaction. Thus, successive of the current query within the transaction. Thus, successive
<command>SELECT</command> commands within a single transaction always see the same <command>SELECT</command> commands within a <emphasis>single</>
data. transaction see the same data, i.e., they never see changes made by
transactions that committed after their own transaction started. (This
behavior can be ideal for reporting applications.)
</para> </para>
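The snapshot difference described above can be sketched with a toy model. This is purely an illustration of the visibility rule, not PostgreSQL's actual MVCC implementation; the class and method names are invented:

```python
# Toy model: committed rows carry a commit sequence number, and a
# reader's snapshot is just "the commit horizon at snapshot time".

class ToyDB:
    def __init__(self):
        self.committed = []   # list of (commit_seq, row) pairs
        self.seq = 0

    def commit(self, row):
        self.seq += 1
        self.committed.append((self.seq, row))

    def snapshot(self):
        """Capture the current commit horizon."""
        return self.seq

    def select(self, horizon):
        """Return rows committed at or before the given horizon."""
        return [row for seq, row in self.committed if seq <= horizon]

db = ToyDB()
db.commit("a")

# Serializable: the snapshot is taken once, at transaction start.
txn_snapshot = db.snapshot()

# A concurrent transaction commits while ours is still running.
db.commit("b")

# Read Committed: each statement takes a fresh snapshot.
stmt_snapshot = db.snapshot()

print(db.select(txn_snapshot))   # → ['a']        (serializable view)
print(db.select(stmt_snapshot))  # → ['a', 'b']   (read-committed view)
```

A serializable reader keeps seeing only `a` no matter how many concurrent commits happen, which is the property that makes it suitable for reporting.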
<para> <para>
@ -382,7 +384,7 @@ COMMIT;
FOR UPDATE</command>, and <command>SELECT FOR SHARE</command> commands FOR UPDATE</command>, and <command>SELECT FOR SHARE</command> commands
behave the same as <command>SELECT</command> behave the same as <command>SELECT</command>
in terms of searching for target rows: they will only find target rows in terms of searching for target rows: they will only find target rows
that were committed as of the transaction start time. However, such a that were committed before the transaction start time. However, such a
target target
row might have already been updated (or deleted or locked) by row might have already been updated (or deleted or locked) by
another concurrent transaction by the time it is found. In this case, the another concurrent transaction by the time it is found. In this case, the
@ -402,9 +404,9 @@ ERROR: could not serialize access due to concurrent update
</para> </para>
<para> <para>
When the application receives this error message, it should abort When an application receives this error message, it should abort
the current transaction and then retry the whole transaction from the current transaction and retry the whole transaction from
the beginning. The second time through, the transaction sees the the beginning. The second time through, the transaction will see the
previously-committed change as part of its initial view of the database, previously-committed change as part of its initial view of the database,
so there is no logical conflict in using the new version of the row so there is no logical conflict in using the new version of the row
as the starting point for the new transaction's update. as the starting point for the new transaction's update.
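The retry-from-the-beginning rule lends itself to a small wrapper. This is a hedged sketch: `SerializationError` stands in for whatever exception the client driver raises for SQLSTATE 40001, and the transaction body is assumed to run its own `BEGIN`/`COMMIT`:

```python
# Generic retry loop for serialization failures: on error, re-run the
# WHOLE transaction, so it starts from a fresh snapshot.

class SerializationError(Exception):
    pass

def run_with_retry(txn_body, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_body()          # runs BEGIN ... COMMIT internally
        except SerializationError:
            if attempt == max_attempts:
                raise                  # give up after too many retries
            # otherwise: fall through and re-run the transaction

# Demo body: fails once with a serialization error, then succeeds.
attempts = {"n": 0}
def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise SerializationError("could not serialize access")
    return "committed"

print(run_with_retry(flaky_transfer))  # → committed
```

The important point is that the retry restarts the transaction body, never just the failing statement.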
@ -420,8 +422,8 @@ ERROR: could not serialize access due to concurrent update
transaction sees a wholly consistent view of the database. However, transaction sees a wholly consistent view of the database. However,
the application has to be prepared to retry transactions when concurrent the application has to be prepared to retry transactions when concurrent
updates make it impossible to sustain the illusion of serial execution. updates make it impossible to sustain the illusion of serial execution.
Since the cost of redoing complex transactions might be significant, Since the cost of redoing complex transactions can be significant,
this mode is recommended only when updating transactions contain logic serializable mode is recommended only when updating transactions contain logic
sufficiently complex that they might give wrong answers in Read sufficiently complex that they might give wrong answers in Read
Committed mode. Most commonly, Serializable mode is necessary when Committed mode. Most commonly, Serializable mode is necessary when
a transaction executes several successive commands that must see a transaction executes several successive commands that must see
@ -449,7 +451,7 @@ ERROR: could not serialize access due to concurrent update
is not sufficient to guarantee true serializability, and in fact is not sufficient to guarantee true serializability, and in fact
<productname>PostgreSQL</productname>'s Serializable mode <emphasis>does <productname>PostgreSQL</productname>'s Serializable mode <emphasis>does
not guarantee serializable execution in this sense</>. As an example, not guarantee serializable execution in this sense</>. As an example,
consider a table <structname>mytab</>, initially containing consider a table <structname>mytab</>, initially containing:
<screen> <screen>
class | value class | value
-------+------- -------+-------
@ -458,18 +460,18 @@ ERROR: could not serialize access due to concurrent update
2 | 100 2 | 100
2 | 200 2 | 200
</screen> </screen>
Suppose that serializable transaction A computes Suppose that serializable transaction A computes:
<screen> <screen>
SELECT SUM(value) FROM mytab WHERE class = 1; SELECT SUM(value) FROM mytab WHERE class = 1;
</screen> </screen>
and then inserts the result (30) as the <structfield>value</> in a and then inserts the result (30) as the <structfield>value</> in a
new row with <structfield>class</> = 2. Concurrently, serializable new row with <structfield>class</><literal> = 2</>. Concurrently, serializable
transaction B computes transaction B computes:
<screen> <screen>
SELECT SUM(value) FROM mytab WHERE class = 2; SELECT SUM(value) FROM mytab WHERE class = 2;
</screen> </screen>
and obtains the result 300, which it inserts in a new row with and obtains the result 300, which it inserts in a new row with
<structfield>class</> = 1. Then both transactions commit. None of <structfield>class</><literal> = 1</>. Then both transactions commit. None of
the listed undesirable behaviors have occurred, yet we have a result the listed undesirable behaviors have occurred, yet we have a result
that could not have occurred in either order serially. If A had that could not have occurred in either order serially. If A had
executed before B, B would have computed the sum 330, not 300, and executed before B, B would have computed the sum 330, not 300, and
@ -505,7 +507,7 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
</para> </para>
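The arithmetic of the anomaly can be replayed outside the database: under concurrent snapshots B computes 300, while a truly serial A-then-B ordering would have given 330.

```python
# Replaying the mytab example in plain Python. Both transactions
# start from the same snapshot of the table.
mytab = [(1, 10), (1, 20), (2, 100), (2, 200)]  # (class, value)

def class_sum(rows, cls):
    return sum(v for c, v in rows if c == cls)

# Concurrent execution: each sum sees only the original snapshot.
a_inserts = (2, class_sum(mytab, 1))   # A inserts (2, 30)
b_inserts = (1, class_sum(mytab, 2))   # B inserts (1, 300)

# Serial execution A-then-B: B would also see A's new class-2 row.
after_a = mytab + [a_inserts]
b_serial = (1, class_sum(after_a, 2))  # (1, 330), not (1, 300)

print(a_inserts, b_inserts, b_serial)  # → (2, 30) (1, 300) (1, 330)
```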
<para> <para>
In those cases where the possibility of nonserializable execution In cases where the possibility of non-serializable execution
is a real hazard, problems can be prevented by appropriate use of is a real hazard, problems can be prevented by appropriate use of
explicit locking. Further discussion appears in the following explicit locking. Further discussion appears in the following
sections. sections.
@ -588,7 +590,7 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
<para> <para>
The <command>SELECT</command> command acquires a lock of this mode on The <command>SELECT</command> command acquires a lock of this mode on
referenced tables. In general, any query that only reads a table referenced tables. In general, any query that only <emphasis>reads</> a table
and does not modify it will acquire this lock mode. and does not modify it will acquire this lock mode.
</para> </para>
</listitem> </listitem>
@ -632,7 +634,7 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
acquire this lock mode on the target table (in addition to acquire this lock mode on the target table (in addition to
<literal>ACCESS SHARE</literal> locks on any other referenced <literal>ACCESS SHARE</literal> locks on any other referenced
tables). In general, this lock mode will be acquired by any tables). In general, this lock mode will be acquired by any
command that modifies the data in a table. command that <emphasis>modifies data</> in a table.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
@ -664,10 +666,9 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
</term> </term>
<listitem> <listitem>
<para> <para>
Conflicts with the <literal>ROW EXCLUSIVE</literal>, Conflicts with all lock modes except <literal>ACCESS SHARE</literal>,
<literal>SHARE UPDATE EXCLUSIVE</literal>, <literal>SHARE ROW <literal>ROW SHARE</literal>, and <literal>SHARE</literal> (it
EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and does not conflict with itself).
<literal>ACCESS EXCLUSIVE</literal> lock modes.
This mode protects a table against concurrent data changes. This mode protects a table against concurrent data changes.
</para> </para>
@ -684,11 +685,8 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
</term> </term>
<listitem> <listitem>
<para> <para>
Conflicts with the <literal>ROW EXCLUSIVE</literal>, Conflicts with all lock modes except <literal>ACCESS SHARE</literal>
<literal>SHARE UPDATE EXCLUSIVE</literal>, and <literal>ROW SHARE</literal>.
<literal>SHARE</literal>, <literal>SHARE ROW
EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal> lock modes.
</para> </para>
<para> <para>
@ -704,11 +702,7 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
</term> </term>
<listitem> <listitem>
<para> <para>
Conflicts with the <literal>ROW SHARE</literal>, <literal>ROW Conflicts with all lock modes except <literal>ACCESS SHARE</literal>.
EXCLUSIVE</literal>, <literal>SHARE UPDATE
EXCLUSIVE</literal>, <literal>SHARE</literal>, <literal>SHARE
ROW EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal> lock modes.
This mode allows only concurrent <literal>ACCESS SHARE</literal> locks, This mode allows only concurrent <literal>ACCESS SHARE</literal> locks,
i.e., only reads from the table can proceed in parallel with a i.e., only reads from the table can proceed in parallel with a
transaction holding this lock mode. transaction holding this lock mode.
@ -717,7 +711,7 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
<para> <para>
This lock mode is not automatically acquired on user tables by any This lock mode is not automatically acquired on user tables by any
<productname>PostgreSQL</productname> command. However it is <productname>PostgreSQL</productname> command. However it is
acquired on certain system catalogs in some operations. acquired during certain internal system catalog operations.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
@ -728,12 +722,7 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
</term> </term>
<listitem> <listitem>
<para> <para>
Conflicts with locks of all modes (<literal>ACCESS Conflicts with all lock modes.
SHARE</literal>, <literal>ROW SHARE</literal>, <literal>ROW
EXCLUSIVE</literal>, <literal>SHARE UPDATE
EXCLUSIVE</literal>, <literal>SHARE</literal>, <literal>SHARE
ROW EXCLUSIVE</literal>, <literal>EXCLUSIVE</literal>, and
<literal>ACCESS EXCLUSIVE</literal>).
This mode guarantees that the This mode guarantees that the
holder is the only transaction accessing the table in any way. holder is the only transaction accessing the table in any way.
</para> </para>
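The "conflicts with all lock modes except ..." phrasing used above is easy to check mechanically when the conflict matrix is written out as data. The following is a reader's aid transcribed from the four mode descriptions above, not an authoritative table:

```python
# Table-level lock conflict matrix for the four strongest modes,
# expressed as "the set of modes this one does NOT conflict with".

MODES = ["ACCESS SHARE", "ROW SHARE", "ROW EXCLUSIVE",
         "SHARE UPDATE EXCLUSIVE", "SHARE", "SHARE ROW EXCLUSIVE",
         "EXCLUSIVE", "ACCESS EXCLUSIVE"]   # weakest to strongest

COMPATIBLE = {
    "SHARE":               {"ACCESS SHARE", "ROW SHARE", "SHARE"},
    "SHARE ROW EXCLUSIVE": {"ACCESS SHARE", "ROW SHARE"},
    "EXCLUSIVE":           {"ACCESS SHARE"},
    "ACCESS EXCLUSIVE":    set(),           # conflicts with everything
}

def conflicts(held, requested):
    return requested not in COMPATIBLE[held]

# SHARE does not conflict with itself, so concurrent readers coexist...
print(conflicts("SHARE", "SHARE"))                              # → False
# ...but SHARE ROW EXCLUSIVE does conflict with itself.
print(conflicts("SHARE ROW EXCLUSIVE", "SHARE ROW EXCLUSIVE"))  # → True
```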
@ -760,7 +749,7 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
<para> <para>
Once acquired, a lock is normally held till end of transaction. But if a Once acquired, a lock is normally held till end of transaction. But if a
lock is acquired after establishing a savepoint, the lock is released lock is acquired after establishing a savepoint, the lock is released
immediately if the savepoint is rolled back to. This is consistent with immediately if the savepoint is rolled back. This is consistent with
the principle that <command>ROLLBACK</> cancels all effects of the the principle that <command>ROLLBACK</> cancels all effects of the
commands since the savepoint. The same holds for locks acquired within a commands since the savepoint. The same holds for locks acquired within a
<application>PL/pgSQL</> exception block: an error escape from the block <application>PL/pgSQL</> exception block: an error escape from the block
@ -893,9 +882,9 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
can be exclusive or shared locks. An exclusive row-level lock on a can be exclusive or shared locks. An exclusive row-level lock on a
specific row is automatically acquired when the row is updated or specific row is automatically acquired when the row is updated or
deleted. The lock is held until the transaction commits or rolls deleted. The lock is held until the transaction commits or rolls
back, in just the same way as for table-level locks. Row-level locks do back, like table-level locks. Row-level locks do
not affect data querying; they block <emphasis>writers to the same not affect data querying; they only block <emphasis>writers to the same
row</emphasis> only. row</emphasis>.
</para> </para>
<para> <para>
@ -917,10 +906,10 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
<para> <para>
<productname>PostgreSQL</productname> doesn't remember any <productname>PostgreSQL</productname> doesn't remember any
information about modified rows in memory, so it has no limit to information about modified rows in memory, so there is no limit on
the number of rows locked at one time. However, locking a row the number of rows locked at one time. However, locking a row
might cause a disk write; thus, for example, <command>SELECT FOR might cause a disk write, e.g., <command>SELECT FOR
UPDATE</command> will modify selected rows to mark them locked, and so UPDATE</command> modifies selected rows to mark them locked, and so
will result in disk writes. will result in disk writes.
</para> </para>
@ -929,7 +918,7 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
used to control read/write access to table pages in the shared buffer used to control read/write access to table pages in the shared buffer
pool. These locks are released immediately after a row is fetched or pool. These locks are released immediately after a row is fetched or
updated. Application developers normally need not be concerned with updated. Application developers normally need not be concerned with
page-level locks, but we mention them for completeness. page-level locks, but they are mentioned for completeness.
</para> </para>
</sect2> </sect2>
@ -953,14 +942,14 @@ SELECT SUM(value) FROM mytab WHERE class = 2;
deadlock situations and resolves them by aborting one of the deadlock situations and resolves them by aborting one of the
transactions involved, allowing the other(s) to complete. transactions involved, allowing the other(s) to complete.
(Exactly which transaction will be aborted is difficult to (Exactly which transaction will be aborted is difficult to
predict and should not be relied on.) predict and should not be relied upon.)
</para> </para>
<para> <para>
Note that deadlocks can also occur as the result of row-level Note that deadlocks can also occur as the result of row-level
locks (and thus, they can occur even if explicit locking is not locks (and thus, they can occur even if explicit locking is not
used). Consider the case in which there are two concurrent used). Consider the case in which two concurrent
transactions modifying a table. The first transaction executes: transactions modify a table. The first transaction executes:
<screen> <screen>
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 11111; UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 11111;
@ -1003,10 +992,10 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
above, if both transactions above, if both transactions
had updated the rows in the same order, no deadlock would have had updated the rows in the same order, no deadlock would have
occurred. One should also ensure that the first lock acquired on occurred. One should also ensure that the first lock acquired on
an object in a transaction is the highest mode that will be an object in a transaction is the most restrictive mode that will be
needed for that object. If it is not feasible to verify this in needed for that object. If it is not feasible to verify this in
advance, then deadlocks can be handled on-the-fly by retrying advance, then deadlocks can be handled on-the-fly by retrying
transactions that are aborted due to deadlock. transactions that abort due to deadlocks.
</para> </para>
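The lock-ordering rule can be sketched with ordinary mutexes: if every transaction acquires its row locks in one agreed order (here, ascending account number), no cycle of waiters can form, so the deadlock described above cannot occur. The account numbers match the example; everything else is an illustrative stand-in for row locks:

```python
import threading

locks = {11111: threading.Lock(), 22222: threading.Lock()}

def transfer(debit_acct, credit_acct, amount, balances):
    # Always lock the lower-numbered account first, regardless of
    # which side is debited -- the ordering that prevents deadlock.
    first, second = sorted((debit_acct, credit_acct))
    with locks[first], locks[second]:
        balances[debit_acct] -= amount
        balances[credit_acct] += amount

balances = {11111: 500.00, 22222: 500.00}
# The two transactions from the example, run concurrently.
t1 = threading.Thread(target=transfer, args=(11111, 22222, 100.00, balances))
t2 = threading.Thread(target=transfer, args=(22222, 11111, 100.00, balances))
t1.start(); t2.start(); t1.join(); t2.join()
print(balances)  # → {11111: 500.0, 22222: 500.0}
```

Without the `sorted()` call, the two threads could each grab one lock and wait forever for the other, which is exactly the cross-wise `UPDATE` deadlock in the text.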
<para> <para>
@ -1055,7 +1044,7 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
<xref linkend="guc-max-locks-per-transaction"> and <xref linkend="guc-max-locks-per-transaction"> and
<xref linkend="guc-max-connections">. <xref linkend="guc-max-connections">.
Care must be taken not to exhaust this Care must be taken not to exhaust this
memory or the server will not be able to grant any locks at all. memory or the server will be unable to grant any locks at all.
This imposes an upper limit on the number of advisory locks This imposes an upper limit on the number of advisory locks
grantable by the server, typically in the tens to hundreds of thousands grantable by the server, typically in the tens to hundreds of thousands
depending on how the server is configured. depending on how the server is configured.
@ -1068,7 +1057,7 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
While a flag stored in a table could be used for the same purpose, While a flag stored in a table could be used for the same purpose,
advisory locks are faster, avoid MVCC bloat, and are automatically advisory locks are faster, avoid MVCC bloat, and are automatically
cleaned up by the server at the end of the session. cleaned up by the server at the end of the session.
In certain cases using this method, especially in queries In certain cases using this advisory locking method, especially in queries
involving explicit ordering and <literal>LIMIT</> clauses, care must be involving explicit ordering and <literal>LIMIT</> clauses, care must be
taken to control the locks acquired because of the order in which SQL taken to control the locks acquired because of the order in which SQL
expressions are evaluated. For example: expressions are evaluated. For example:
@ -1109,9 +1098,9 @@ SELECT pg_advisory_lock(q.id) FROM
if a row is returned by <command>SELECT</command> it doesn't mean that if a row is returned by <command>SELECT</command> it doesn't mean that
the row is still current at the instant it is returned (i.e., sometime the row is still current at the instant it is returned (i.e., sometime
after the current query began). The row might have been modified or after the current query began). The row might have been modified or
deleted by an already-committed transaction that committed after this one deleted by an already-committed transaction that committed after
started. the <command>SELECT</command> started.
Even if the row is still valid <quote>now</quote>, it could be changed or Even if the row is still valid <emphasis>now</>, it could be changed or
deleted deleted
before the current transaction does a commit or rollback. before the current transaction does a commit or rollback.
</para> </para>
@ -1132,7 +1121,7 @@ SELECT pg_advisory_lock(q.id) FROM
concurrent updates one must use <command>SELECT FOR UPDATE</command>, concurrent updates one must use <command>SELECT FOR UPDATE</command>,
<command>SELECT FOR SHARE</command>, or an appropriate <command>LOCK <command>SELECT FOR SHARE</command>, or an appropriate <command>LOCK
TABLE</command> statement. (<command>SELECT FOR UPDATE</command> TABLE</command> statement. (<command>SELECT FOR UPDATE</command>
or <command>SELECT FOR SHARE</command> locks just the or <command>SELECT FOR SHARE</command> lock just the
returned rows against concurrent updates, while <command>LOCK returned rows against concurrent updates, while <command>LOCK
TABLE</command> locks the whole table.) This should be taken into TABLE</command> locks the whole table.) This should be taken into
account when porting applications to account when porting applications to
@ -1144,10 +1133,10 @@ SELECT pg_advisory_lock(q.id) FROM
For example, a banking application might wish to check that the sum of For example, a banking application might wish to check that the sum of
all credits in one table equals the sum of debits in another table, all credits in one table equals the sum of debits in another table,
when both tables are being actively updated. Comparing the results of two when both tables are being actively updated. Comparing the results of two
successive <literal>SELECT sum(...)</literal> commands will not work reliably under successive <literal>SELECT sum(...)</literal> commands will not work reliably in
Read Committed mode, since the second query will likely include the results Read Committed mode, since the second query will likely include the results
of transactions not counted by the first. Doing the two sums in a of transactions not counted by the first. Doing the two sums in a
single serializable transaction will give an accurate picture of the single serializable transaction will give an accurate picture of only the
effects of transactions that committed before the serializable transaction effects of transactions that committed before the serializable transaction
started &mdash; but one might legitimately wonder whether the answer is still started &mdash; but one might legitimately wonder whether the answer is still
relevant by the time it is delivered. If the serializable transaction relevant by the time it is delivered. If the serializable transaction
@ -1164,8 +1153,8 @@ SELECT pg_advisory_lock(q.id) FROM
<para> <para>
Note also that if one is Note also that if one is
relying on explicit locking to prevent concurrent changes, one should use relying on explicit locking to prevent concurrent changes, one should use
Read Committed mode, or in Serializable mode be careful to obtain the either Read Committed mode, or in Serializable mode be careful to obtain
lock(s) before performing queries. A lock obtained by a locks before performing queries. A lock obtained by a
serializable transaction guarantees that no other transactions modifying serializable transaction guarantees that no other transactions modifying
the table are still running, but if the snapshot seen by the the table are still running, but if the snapshot seen by the
transaction predates obtaining the lock, it might predate some now-committed transaction predates obtaining the lock, it might predate some now-committed
@ -1173,7 +1162,7 @@ SELECT pg_advisory_lock(q.id) FROM
frozen at the start of its first query or data-modification command frozen at the start of its first query or data-modification command
(<literal>SELECT</>, <literal>INSERT</>, (<literal>SELECT</>, <literal>INSERT</>,
<literal>UPDATE</>, or <literal>DELETE</>), so <literal>UPDATE</>, or <literal>DELETE</>), so
it's possible to obtain locks explicitly before the snapshot is it is often desirable to obtain locks explicitly before the snapshot is
frozen. frozen.
</para> </para>
</sect1> </sect1>
@ -1189,7 +1178,7 @@ SELECT pg_advisory_lock(q.id) FROM
<para> <para>
Though <productname>PostgreSQL</productname> Though <productname>PostgreSQL</productname>
provides nonblocking read/write access to table provides nonblocking read/write access to table
data, nonblocking read/write access is not currently offered for every data, nonblocking read/write access is currently not offered for every
index access method implemented index access method implemented
in <productname>PostgreSQL</productname>. in <productname>PostgreSQL</productname>.
The various index types are handled as follows: The various index types are handled as follows:
@ -1232,8 +1221,8 @@ SELECT pg_advisory_lock(q.id) FROM
<para> <para>
Short-term share/exclusive page-level locks are used for Short-term share/exclusive page-level locks are used for
read/write access. Locks are released immediately after each read/write access. Locks are released immediately after each
index row is fetched or inserted. But note that a GIN-indexed index row is fetched or inserted. But note that insertion of a GIN-indexed
value insertion usually produces several index key insertions value usually produces several index key insertions
per row, so GIN might do substantial work for a single value's per row, so GIN might do substantial work for a single value's
insertion. insertion.
</para> </para>
@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.69 2008/12/13 19:13:43 tgl Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.70 2009/04/27 16:27:36 momjian Exp $ -->
<chapter id="performance-tips"> <chapter id="performance-tips">
<title>Performance Tips</title> <title>Performance Tips</title>
@ -9,7 +9,7 @@
<para> <para>
Query performance can be affected by many things. Some of these can Query performance can be affected by many things. Some of these can
be manipulated by the user, while others are fundamental to the underlying be controlled by the user, while others are fundamental to the underlying
design of the system. This chapter provides some hints about understanding design of the system. This chapter provides some hints about understanding
and tuning <productname>PostgreSQL</productname> performance. and tuning <productname>PostgreSQL</productname> performance.
</para> </para>
@ -27,10 +27,10 @@
<para> <para>
<productname>PostgreSQL</productname> devises a <firstterm>query <productname>PostgreSQL</productname> devises a <firstterm>query
plan</firstterm> for each query it is given. Choosing the right plan</firstterm> for each query it receives. Choosing the right
plan to match the query structure and the properties of the data plan to match the query structure and the properties of the data
is absolutely critical for good performance, so the system includes is absolutely critical for good performance, so the system includes
a complex <firstterm>planner</> that tries to select good plans. a complex <firstterm>planner</> that tries to choose good plans.
You can use the You can use the
<xref linkend="sql-explain" endterm="sql-explain-title"> command <xref linkend="sql-explain" endterm="sql-explain-title"> command
to see what query plan the planner creates for any query. to see what query plan the planner creates for any query.
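The tree-of-nodes shape that `EXPLAIN` prints (one line per plan node, child nodes indented under their parents) can be mimicked with a few lines of code. The node labels and cost numbers below are invented for illustration, not drawn from a real plan:

```python
# Toy rendering of a plan tree, one output line per node.

class PlanNode:
    def __init__(self, label, startup, total, children=()):
        self.label, self.startup, self.total = label, startup, total
        self.children = children

    def render(self, depth=0):
        pad = "  " * depth + ("->  " if depth else "")
        lines = [f"{pad}{self.label}  (cost={self.startup:.2f}..{self.total:.2f})"]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

# A Sort node sitting atop a sequential scan of tenk1.
plan = PlanNode("Sort", 809.39, 821.89,
                [PlanNode("Seq Scan on tenk1", 0.00, 458.00)])
for line in plan.render():
    print(line)
```

The topmost node's line comes first, matching the statement in the text that the first line of `EXPLAIN` output reports the total estimated cost of the whole plan.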
@ -40,14 +40,13 @@
<para> <para>
The structure of a query plan is a tree of <firstterm>plan nodes</>. The structure of a query plan is a tree of <firstterm>plan nodes</>.
Nodes at the bottom level are table scan nodes: they return raw rows Nodes at the bottom level of the tree are table scan nodes: they return raw rows
from a table. There are different types of scan nodes for different from a table. There are different types of scan nodes for different
table access methods: sequential scans, index scans, and bitmap index table access methods: sequential scans, index scans, and bitmap index
scans. If the query requires joining, aggregation, sorting, or other scans. If the query requires joining, aggregation, sorting, or other
operations on the raw rows, then there will be additional nodes operations on the raw rows, then there will be additional nodes
<quote>atop</> the scan nodes to perform these operations. Again, above the scan nodes to perform these operations. Other node types
there is usually more than one possible way to do these operations, are also supported. The output
so different node types can appear here too. The output
of <command>EXPLAIN</command> has one line for each node in the plan of <command>EXPLAIN</command> has one line for each node in the plan
tree, showing the basic node type plus the cost estimates that the planner tree, showing the basic node type plus the cost estimates that the planner
made for the execution of that plan node. The first line (topmost node) made for the execution of that plan node. The first line (topmost node)
@ -56,15 +55,15 @@
</para> </para>
<para> <para>
Here is a trivial example, just to show what the output looks like. Here is a trivial example, just to show what the output looks like:
<footnote> <footnote>
<para> <para>
Examples in this section are drawn from the regression test database Examples in this section are drawn from the regression test database
after doing a <command>VACUUM ANALYZE</>, using 8.2 development sources. after doing a <command>VACUUM ANALYZE</>, using 8.2 development sources.
You should be able to get similar results if you try the examples yourself, You should be able to get similar results if you try the examples yourself,
but your estimated costs and row counts will probably vary slightly but your estimated costs and row counts might vary slightly
because <command>ANALYZE</>'s statistics are random samples rather because <command>ANALYZE</>'s statistics are random samples rather
than being exact. than exact.
</para> </para>
</footnote> </footnote>
@@ -78,22 +77,23 @@ EXPLAIN SELECT * FROM tenk1;
 </para>
 <para>
-The numbers that are quoted by <command>EXPLAIN</command> are:
+The numbers that are quoted by <command>EXPLAIN</command> are (left
+to right):
 <itemizedlist>
 <listitem>
 <para>
-Estimated start-up cost (Time expended before output scan can start,
-e.g., time to do the sorting in a sort node.)
+Estimated start-up cost, e.g., time expended before the output scan can start,
+time to do the sorting in a sort node
 </para>
 </listitem>
 <listitem>
 <para>
-Estimated total cost (If all rows were to be retrieved, though they might
-not be: for example, a query with a <literal>LIMIT</> clause will stop
-short of paying the total cost of the <literal>Limit</> plan node's
-input node.)
+Estimated total cost if all rows were to be retrieved (though they might
+not be, e.g., a query with a <literal>LIMIT</> clause will stop
+short of paying the total cost of the <literal>Limit</> node's
+input node)
 </para>
 </listitem>
@@ -119,8 +119,8 @@ EXPLAIN SELECT * FROM tenk1;
 Traditional practice is to measure the costs in units of disk page
 fetches; that is, <xref linkend="guc-seq-page-cost"> is conventionally
 set to <literal>1.0</> and the other cost parameters are set relative
-to that. The examples in this section are run with the default cost
-parameters.
+to that. (The examples in this section are run with the default cost
+parameters.)
 </para>
 <para>
@@ -129,17 +129,18 @@ EXPLAIN SELECT * FROM tenk1;
 the cost only reflects things that the planner cares about.
 In particular, the cost does not consider the time spent transmitting
 result rows to the client, which could be an important
-factor in the true elapsed time; but the planner ignores it because
+factor in the total elapsed time; but the planner ignores it because
 it cannot change it by altering the plan. (Every correct plan will
 output the same row set, we trust.)
 </para>
 <para>
-Rows output is a little tricky because it is <emphasis>not</emphasis> the
+The <command>EXPLAIN</command> <literal>rows=</> value is a little tricky
+because it is <emphasis>not</emphasis> the
 number of rows processed or scanned by the plan node. It is usually less,
 reflecting the estimated selectivity of any <literal>WHERE</>-clause
 conditions that are being
-applied at the node. Ideally the top-level rows estimate will
+applied to the node. Ideally the top-level rows estimate will
 approximate the number of rows actually returned, updated, or deleted
 by the query.
 </para>
@@ -163,16 +164,16 @@ EXPLAIN SELECT * FROM tenk1;
 SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1';
 </programlisting>
-you will find out that <classname>tenk1</classname> has 358 disk
-pages and 10000 rows. The estimated cost is (disk pages read *
+you will find that <classname>tenk1</classname> has 358 disk
+pages and 10000 rows. The estimated cost is computed as (disk pages read *
 <xref linkend="guc-seq-page-cost">) + (rows scanned *
 <xref linkend="guc-cpu-tuple-cost">). By default,
-<varname>seq_page_cost</> is 1.0 and <varname>cpu_tuple_cost</> is 0.01.
-So the estimated cost is (358 * 1.0) + (10000 * 0.01) = 458.
+<varname>seq_page_cost</> is 1.0 and <varname>cpu_tuple_cost</> is 0.01,
+so the estimated cost is (358 * 1.0) + (10000 * 0.01) = 458,
 </para>
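The sequential-scan cost arithmetic in the hunk above can be sanity-checked with a short sketch (reviewer's note, not part of the patch; the function name is made up, but the formula and default parameter values are the ones quoted in the text):

```python
# Sketch of the planner's sequential-scan cost formula described above;
# seq_page_cost and cpu_tuple_cost default to the values quoted in the docs.
def seq_scan_cost(pages, rows, seq_page_cost=1.0, cpu_tuple_cost=0.01):
    # (disk pages read * seq_page_cost) + (rows scanned * cpu_tuple_cost)
    return pages * seq_page_cost + rows * cpu_tuple_cost

print(seq_scan_cost(358, 10000))  # → 458.0, matching the documented estimate
```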
 <para>
-Now let's modify the query to add a <literal>WHERE</> condition:
+Now let's modify the original query to add a <literal>WHERE</> condition:
 <programlisting>
 EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 7000;
@@ -187,7 +188,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 7000;
 clause being applied as a <quote>filter</> condition; this means that
 the plan node checks the condition for each row it scans, and outputs
 only the ones that pass the condition.
-The estimate of output rows has gone down because of the <literal>WHERE</>
+The estimate of output rows has been reduced because of the <literal>WHERE</>
 clause.
 However, the scan will still have to visit all 10000 rows, so the cost
 hasn't decreased; in fact it has gone up a bit (by 10000 * <xref
@@ -196,7 +197,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 7000;
 </para>
 <para>
-The actual number of rows this query would select is 7000, but the rows
+The actual number of rows this query would select is 7000, but the <literal>rows=</>
 estimate is only approximate. If you try to duplicate this experiment,
 you will probably get a slightly different estimate; moreover, it will
 change after each <command>ANALYZE</command> command, because the
@@ -224,16 +225,16 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 100;
 from the table itself. Fetching the rows separately is much more
 expensive than sequentially reading them, but because not all the pages
 of the table have to be visited, this is still cheaper than a sequential
-scan. (The reason for using two levels of plan is that the upper plan
+scan. (The reason for using two plan levels is that the upper plan
 node sorts the row locations identified by the index into physical order
-before reading them, so as to minimize the costs of the separate fetches.
+before reading them, to minimize the cost of separate fetches.
 The <quote>bitmap</> mentioned in the node names is the mechanism that
 does the sorting.)
 </para>
 <para>
 If the <literal>WHERE</> condition is selective enough, the planner might
-switch to a <quote>simple</> index scan plan:
+switch to a <emphasis>simple</> index scan plan:
 <programlisting>
 EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 3;
@@ -247,8 +248,8 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 3;
 In this case the table rows are fetched in index order, which makes them
 even more expensive to read, but there are so few that the extra cost
 of sorting the row locations is not worth it. You'll most often see
-this plan type for queries that fetch just a single row, and for queries
-that request an <literal>ORDER BY</> condition that matches the index
+this plan type in queries that fetch just a single row, and for queries
+with an <literal>ORDER BY</> condition that matches the index
 order.
 </para>
@@ -271,11 +272,11 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 3 AND stringu1 = 'xxx';
 cannot be applied as an index condition (since this index is only on
 the <literal>unique1</> column). Instead it is applied as a filter on
 the rows retrieved by the index. Thus the cost has actually gone up
-a little bit to reflect this extra checking.
+slightly to reflect this extra checking.
 </para>
 <para>
-If there are indexes on several columns used in <literal>WHERE</>, the
+If there are indexes on several columns referenced in <literal>WHERE</>, the
 planner might choose to use an AND or OR combination of the indexes:
 <programlisting>
@@ -302,7 +303,9 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 100 AND unique2 &gt; 9000;
 Let's try joining two tables, using the columns we have been discussing:
 <programlisting>
-EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
+EXPLAIN SELECT *
+FROM tenk1 t1, tenk2 t2
+WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 QUERY PLAN
 --------------------------------------------------------------------------------------
@@ -317,12 +320,12 @@ EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t1.unique
 </para>
 <para>
-In this nested-loop join, the outer scan is the same bitmap index scan we
+In this nested-loop join, the outer scan (upper) is the same bitmap index scan we
 saw earlier, and so its cost and row count are the same because we are
 applying the <literal>WHERE</> clause <literal>unique1 &lt; 100</literal>
 at that node.
 The <literal>t1.unique2 = t2.unique2</literal> clause is not relevant yet,
-so it doesn't affect row count of the outer scan. For the inner scan, the
+so it doesn't affect the row count of the outer scan. For the inner (lower) scan, the
 <literal>unique2</> value of the current outer-scan row is plugged into
 the inner index scan to produce an index condition like
 <literal>t2.unique2 = <replaceable>constant</replaceable></literal>.
@@ -335,8 +338,8 @@ EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t1.unique
 <para>
 In this example the join's output row count is the same as the product
-of the two scans' row counts, but that's not true in general, because
-in general you can have <literal>WHERE</> clauses that mention both tables
+of the two scans' row counts, but that's not true in all cases because
+you can have <literal>WHERE</> clauses that mention both tables
 and so can only be applied at the join point, not to either input scan.
 For example, if we added
 <literal>WHERE ... AND t1.hundred &lt; t2.hundred</literal>,
@@ -346,14 +349,16 @@ EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t1.unique
 <para>
 One way to look at variant plans is to force the planner to disregard
-whatever strategy it thought was the winner, using the enable/disable
+whatever strategy it thought was the cheapest, using the enable/disable
 flags described in <xref linkend="runtime-config-query-enable">.
 (This is a crude tool, but useful. See
 also <xref linkend="explicit-joins">.)
 <programlisting>
 SET enable_nestloop = off;
-EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
+EXPLAIN SELECT *
+FROM tenk1 t1, tenk2 t2
+WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 QUERY PLAN
 ------------------------------------------------------------------------------------------
@@ -370,9 +375,9 @@ EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t1.unique
 This plan proposes to extract the 100 interesting rows of <classname>tenk1</classname>
 using that same old index scan, stash them into an in-memory hash table,
 and then do a sequential scan of <classname>tenk2</classname>, probing into the hash table
-for possible matches of <literal>t1.unique2 = t2.unique2</literal> at each <classname>tenk2</classname> row.
-The cost to read <classname>tenk1</classname> and set up the hash table is entirely start-up
-cost for the hash join, since we won't get any rows out until we can
+for possible matches of <literal>t1.unique2 = t2.unique2</literal> for each <classname>tenk2</classname> row.
+The cost to read <classname>tenk1</classname> and set up the hash table is a start-up
+cost for the hash join, since there will be no output until we can
 start reading <classname>tenk2</classname>. The total time estimate for the join also
 includes a hefty charge for the CPU time to probe the hash table
 10000 times. Note, however, that we are <emphasis>not</emphasis> charging 10000 times 232.35;
@@ -380,14 +385,16 @@ EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t1.unique
 </para>
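The charging rule the hunk above describes — hash-table build paid once as start-up cost, then only a small per-probe CPU charge per outer row rather than the inner node's full cost — can be sketched roughly (reviewer's note, not part of the patch; the function name and all numeric values below are hypothetical):

```python
# Rough sketch of how a hash join's estimated cost is assembled, per the
# text: building the hash table is a one-time start-up cost, and each
# probe pays only a small CPU charge, not the inner relation's full cost.
def hash_join_cost(build_cost, probe_rows, cpu_cost_per_probe):
    startup = build_cost                              # paid before any output
    total = startup + probe_rows * cpu_cost_per_probe  # probes are cheap
    return startup, total

startup, total = hash_join_cost(230.0, 10000, 0.0025)  # hypothetical values
print(startup, round(total, 2))
```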
 <para>
-It is possible to check on the accuracy of the planner's estimated costs
+It is possible to check the accuracy of the planner's estimated costs
 by using <command>EXPLAIN ANALYZE</>. This command actually executes the query,
 and then displays the true run time accumulated within each plan node
 along with the same estimated costs that a plain <command>EXPLAIN</command> shows.
 For example, we might get a result like this:
 <screen>
-EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
+EXPLAIN ANALYZE SELECT *
+FROM tenk1 t1, tenk2 t2
+WHERE t1.unique1 &lt; 100 AND t1.unique2 = t2.unique2;
 QUERY PLAN
 ----------------------------------------------------------------------------------------------------------------------------------
@@ -402,7 +409,7 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t
 </screen>
 Note that the <quote>actual time</quote> values are in milliseconds of
-real time, whereas the <quote>cost</quote> estimates are expressed in
+real time, whereas the <literal>cost=</> estimates are expressed in
 arbitrary units; so they are unlikely to match up.
 The thing to pay attention to is whether the ratios of actual time and
 estimated costs are consistent.
@@ -412,11 +419,11 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t
 In some query plans, it is possible for a subplan node to be executed more
 than once. For example, the inner index scan is executed once per outer
 row in the above nested-loop plan. In such cases, the
-<quote>loops</quote> value reports the
+<literal>loops=</> value reports the
 total number of executions of the node, and the actual time and rows
 values shown are averages per-execution. This is done to make the numbers
 comparable with the way that the cost estimates are shown. Multiply by
-the <quote>loops</quote> value to get the total time actually spent in
+the <literal>loops=</> value to get the total time actually spent in
 the node.
 </para>
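The per-loop averaging described in the hunk above amounts to a one-line multiplication, sketched here for clarity (reviewer's note, not part of the patch; the sample numbers are invented):

```python
# EXPLAIN ANALYZE's "actual time" is a per-execution average, so the
# total time spent in a node is that average times its loops= count.
def total_node_time(avg_time_ms, loops):
    return avg_time_ms * loops

# e.g., a hypothetical inner index scan averaging 0.032 ms over 100 loops
print(round(total_node_time(0.032, 100), 3))  # → 3.2 ms total
```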
@@ -429,9 +436,9 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t
 reported for the top-level plan node. For <command>INSERT</>,
 <command>UPDATE</>, and <command>DELETE</> commands, the total run time
 might be considerably larger, because it includes the time spent processing
-the result rows. In these commands, the time for the top plan node
-essentially is the time spent computing the new rows and/or locating the
-old ones, but it doesn't include the time spent applying the changes.
+the result rows. For these commands, the time for the top plan node is
+essentially the time spent locating the old rows and/or computing
+the new ones, but it doesn't include the time spent applying the changes.
 Time spent firing triggers, if any, is also outside the top plan node,
 and is shown separately for each trigger.
 </para>
@@ -475,7 +482,9 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 &lt; 100 AND t
 queries similar to this one:
 <screen>
-SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE 'tenk1%';
+SELECT relname, relkind, reltuples, relpages
+FROM pg_class
+WHERE relname LIKE 'tenk1%';
 relname | relkind | reltuples | relpages
 ----------------------+---------+-----------+----------
@@ -512,7 +521,7 @@ SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE 't
 <para>
 Most queries retrieve only a fraction of the rows in a table, due
-to having <literal>WHERE</> clauses that restrict the rows to be
+to <literal>WHERE</> clauses that restrict the rows to be
 examined. The planner thus needs to make an estimate of the
 <firstterm>selectivity</> of <literal>WHERE</> clauses, that is,
 the fraction of rows that match each condition in the
@@ -544,7 +553,9 @@ SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE 't
 For example, we might do:
 <screen>
-SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'road';
+SELECT attname, n_distinct, most_common_vals
+FROM pg_stats
+WHERE tablename = 'road';
 attname | n_distinct | most_common_vals
 ---------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -769,7 +780,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 </indexterm>
 <para>
-Turn off autocommit and just do one commit at the end. (In plain
+When doing <command>INSERT</>s, turn off autocommit and just do
+one commit at the end. (In plain
 SQL, this means issuing <command>BEGIN</command> at the start and
 <command>COMMIT</command> at the end. Some client libraries might
 do this behind your back, in which case you need to make sure the
@@ -812,7 +824,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 <para>
 Note that loading a large number of rows using
 <command>COPY</command> is almost always faster than using
-<command>INSERT</command>, even if <command>PREPARE</> is used and
+<command>INSERT</command>, even if <command>PREPARE ... INSERT</> is used and
 multiple insertions are batched into a single transaction.
 </para>
@@ -823,7 +835,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 needs to be written, because in case of an error, the files
 containing the newly loaded data will be removed anyway.
 However, this consideration does not apply when
-<xref linkend="guc-archive-mode"> is set, as all commands
+<xref linkend="guc-archive-mode"> is on, as all commands
 must write WAL in that case.
 </para>
@@ -833,7 +845,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 <title>Remove Indexes</title>
 <para>
-If you are loading a freshly created table, the fastest way is to
+If you are loading a freshly created table, the fastest method is to
 create the table, bulk load the table's data using
 <command>COPY</command>, then create any indexes needed for the
 table. Creating an index on pre-existing data is quicker than
@@ -844,8 +856,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 If you are adding large amounts of data to an existing table,
 it might be a win to drop the index,
 load the table, and then recreate the index. Of course, the
-database performance for other users might be adversely affected
-during the time that the index is missing. One should also think
+database performance for other users might suffer
+during the time the index is missing. One should also think
 twice before dropping unique indexes, since the error checking
 afforded by the unique constraint will be lost while the index is
 missing.
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/pgbuffercache.sgml,v 2.3 2008/08/14 12:56:41 heikki Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/pgbuffercache.sgml,v 2.4 2009/04/27 16:27:36 momjian Exp $ -->
 <sect1 id="pgbuffercache">
 <title>pg_buffercache</title>
@@ -141,7 +141,8 @@
 b.reldatabase IN (0, (SELECT oid FROM pg_database
 WHERE datname = current_database()))
 GROUP BY c.relname
-ORDER BY 2 DESC LIMIT 10;
+ORDER BY 2 DESC
+LIMIT 10;
 relname | buffers
 ---------------------------------+---------
 tenk2 | 345
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/postgres.sgml,v 1.86 2008/05/07 16:36:43 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/postgres.sgml,v 1.87 2009/04/27 16:27:36 momjian Exp $ -->
 <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
@@ -78,7 +78,7 @@
 chapters individually as they choose. The information in this
 part is presented in a narrative fashion in topical units.
 Readers looking for a complete description of a particular command
-should look into <xref linkend="reference">.
+should see <xref linkend="reference">.
 </para>
 <para>
@@ -127,14 +127,14 @@
 self-contained and can be read individually as desired. The
 information in this part is presented in a narrative fashion in
 topical units. Readers looking for a complete description of a
-particular command should look into <xref linkend="reference">.
+particular command should see <xref linkend="reference">.
 </para>
 <para>
-The first few chapters are written so that they can be understood
-without prerequisite knowledge, so that new users who need to set
+The first few chapters are written so they can be understood
+without prerequisite knowledge, so new users who need to set
 up their own server can begin their exploration with this part.
-The rest of this part is about tuning and management; that material
+The rest of this part is about tuning and management; the material
 assumes that the reader is familiar with the general use of
 the <productname>PostgreSQL</> database system. Readers are
 encouraged to look at <xref linkend="tutorial"> and <xref
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/problems.sgml,v 2.29 2009/01/06 17:27:06 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/problems.sgml,v 2.30 2009/04/27 16:27:36 momjian Exp $ -->
 <sect1 id="bug-reporting">
 <title>Bug Reporting Guidelines</title>
@@ -136,7 +136,7 @@
 file that can be run through the <application>psql</application>
 frontend that shows the problem. (Be sure to not have anything
 in your <filename>~/.psqlrc</filename> start-up file.) An easy
-start at this file is to use <application>pg_dump</application>
+way to create this file is to use <application>pg_dump</application>
 to dump out the table declarations and data needed to set the
 scene, then add the problem query. You are encouraged to
 minimize the size of your example, but this is not absolutely
@@ -252,7 +252,7 @@
 C library, processor, memory information, and so on. In most
 cases it is sufficient to report the vendor and version, but do
 not assume everyone knows what exactly <quote>Debian</quote>
-contains or that everyone runs on Pentiums. If you have
+contains or that everyone runs on i386s. If you have
 installation problems then information about the toolchain on
 your machine (compiler, <application>make</application>, and so
 on) is also necessary.
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/queries.sgml,v 1.53 2009/02/07 20:11:16 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/queries.sgml,v 1.54 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter id="queries">
 <title>Queries</title>
@@ -14,7 +14,7 @@
 <para>
 The previous chapters explained how to create tables, how to fill
 them with data, and how to manipulate that data. Now we finally
-discuss how to retrieve the data out of the database.
+discuss how to retrieve the data from the database.
 </para>
@@ -63,7 +63,7 @@ SELECT a, b + c FROM table1;
 </para>
 <para>
-<literal>FROM table1</literal> is a particularly simple kind of
+<literal>FROM table1</literal> is a simple kind of
 table expression: it reads just one table. In general, table
 expressions can be complex constructs of base tables, joins, and
 subqueries. But you can also omit the table expression entirely and
@@ -133,8 +133,8 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
 <para>
 When a table reference names a table that is the parent of a
-table inheritance hierarchy, the table reference produces rows of
-not only that table but all of its descendant tables, unless the
+table inheritance hierarchy, the table reference produces rows
+not only of that table but all of its descendant tables, unless the
 key word <literal>ONLY</> precedes the table name. However, the
 reference produces only the columns that appear in the named table
 &mdash; any columns added in subtables are ignored.
@@ -174,11 +174,12 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
 </synopsis>
 <para>
-For each combination of rows from
+Produce every possible combination of rows from
 <replaceable>T1</replaceable> and
-<replaceable>T2</replaceable>, the derived table will contain a
-row consisting of all columns in <replaceable>T1</replaceable>
-followed by all columns in <replaceable>T2</replaceable>. If
+<replaceable>T2</replaceable> (i.e., a Cartesian product),
+with output columns consisting of
+all <replaceable>T1</replaceable> columns
followed by all <replaceable>T2</replaceable> columns. If
the tables have N and M rows respectively, the joined the tables have N and M rows respectively, the joined
table will have N * M rows. table will have N * M rows.
</para> </para>
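The N * M row count described here can be checked directly. The following sketch uses Python's sqlite3 module purely as a convenient stand-in for a SQL session; the t1/t2 tables are illustrative:

```python
import sqlite3

# Illustrative tables: t1 has N = 3 rows, t2 has M = 2 rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1(num INTEGER, name TEXT);
    INSERT INTO t1 VALUES (1,'a'),(2,'b'),(3,'c');
    CREATE TABLE t2(num INTEGER, value TEXT);
    INSERT INTO t2 VALUES (1,'xxx'),(3,'yyy');
""")
rows = conn.execute("SELECT * FROM t1 CROSS JOIN t2").fetchall()
# Every t1 row is paired with every t2 row: 3 * 2 = 6 rows,
# each consisting of t1's columns followed by t2's columns.
print(len(rows))       # 6
print(len(rows[0]))    # 4
```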
@ -242,14 +243,15 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
comma-separated list of column names, which the joined tables comma-separated list of column names, which the joined tables
must have in common, and forms a join condition specifying must have in common, and forms a join condition specifying
equality of each of these pairs of columns. Furthermore, the equality of each of these pairs of columns. Furthermore, the
output of a <literal>JOIN USING</> has one column for each of output of <literal>JOIN USING</> has one column for each of
the equated pairs of input columns, followed by all of the the equated pairs of input columns, followed by the
other columns from each table. Thus, <literal>USING (a, b, other columns from each table. Thus, <literal>USING (a, b,
c)</literal> is equivalent to <literal>ON (t1.a = t2.a AND c)</literal> is equivalent to <literal>ON (t1.a = t2.a AND
t1.b = t2.b AND t1.c = t2.c)</literal> with the exception that t1.b = t2.b AND t1.c = t2.c)</literal> with the exception that
if <literal>ON</> is used there will be two columns if <literal>ON</> is used there will be two columns
<literal>a</>, <literal>b</>, and <literal>c</> in the result, <literal>a</>, <literal>b</>, and <literal>c</> in the result,
whereas with <literal>USING</> there will be only one of each. whereas with <literal>USING</> there will be only one of each
(and they will appear first if <command>SELECT *</> is used).
</para> </para>
<para> <para>
@ -262,7 +264,7 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
</indexterm> </indexterm>
Finally, <literal>NATURAL</> is a shorthand form of Finally, <literal>NATURAL</> is a shorthand form of
<literal>USING</>: it forms a <literal>USING</> list <literal>USING</>: it forms a <literal>USING</> list
consisting of exactly those column names that appear in both consisting of all column names that appear in both
input tables. As with <literal>USING</>, these columns appear input tables. As with <literal>USING</>, these columns appear
only once in the output table. only once in the output table.
</para> </para>
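The column-collapsing behavior of USING and NATURAL described above can be observed by inspecting the result column names. A sketch using Python's sqlite3 module as a stand-in, with illustrative tables s and t sharing only the column id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE s(id INTEGER, a TEXT);
    CREATE TABLE t(id INTEGER, b TEXT);
    INSERT INTO s VALUES (1,'s1'),(2,'s2');
    INSERT INTO t VALUES (1,'t1'),(3,'t3');
""")
on_cols = [d[0] for d in conn.execute(
    "SELECT * FROM s JOIN t ON s.id = t.id").description]
use_cols = [d[0] for d in conn.execute(
    "SELECT * FROM s JOIN t USING (id)").description]
nat_cols = [d[0] for d in conn.execute(
    "SELECT * FROM s NATURAL JOIN t").description]
print(on_cols)    # ['id', 'a', 'id', 'b'] -- ON keeps both id columns
print(use_cols)   # ['id', 'a', 'b']       -- USING emits the pair once
print(nat_cols)   # ['id', 'a', 'b']       -- NATURAL = USING over common names
```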
@ -298,8 +300,8 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
<para> <para>
First, an inner join is performed. Then, for each row in First, an inner join is performed. Then, for each row in
T1 that does not satisfy the join condition with any row in T1 that does not satisfy the join condition with any row in
T2, a joined row is added with null values in columns of T2, a row is added with null values in columns of
T2. Thus, the joined table unconditionally has at least T2. Thus, the joined table always has at least
one row for each row in T1. one row for each row in T1.
</para> </para>
</listitem> </listitem>
@ -321,9 +323,9 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
<para> <para>
First, an inner join is performed. Then, for each row in First, an inner join is performed. Then, for each row in
T2 that does not satisfy the join condition with any row in T2 that does not satisfy the join condition with any row in
T1, a joined row is added with null values in columns of T1, a row is added with null values in columns of
T1. This is the converse of a left join: the result table T1. This is the converse of a left join: the result table
will unconditionally have a row for each row in T2. will always have a row for each row in T2.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
@ -335,9 +337,9 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
<para> <para>
First, an inner join is performed. Then, for each row in First, an inner join is performed. Then, for each row in
T1 that does not satisfy the join condition with any row in T1 that does not satisfy the join condition with any row in
T2, a joined row is added with null values in columns of T2, a row is added with null values in columns of
T2. Also, for each row of T2 that does not satisfy the T2. Also, for each row of T2 that does not satisfy the
join condition with any row in T1, a joined row with null join condition with any row in T1, a row with null
values in the columns of T1 is added. values in the columns of T1 is added.
</para> </para>
</listitem> </listitem>
@ -350,8 +352,8 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
<para> <para>
Joins of all types can be chained together or nested: either or Joins of all types can be chained together or nested: either or
both of <replaceable>T1</replaceable> and both <replaceable>T1</replaceable> and
<replaceable>T2</replaceable> might be joined tables. Parentheses <replaceable>T2</replaceable> can be joined tables. Parentheses
can be used around <literal>JOIN</> clauses to control the join can be used around <literal>JOIN</> clauses to control the join
order. In the absence of parentheses, <literal>JOIN</> clauses order. In the absence of parentheses, <literal>JOIN</> clauses
nest left-to-right. nest left-to-right.
@ -460,6 +462,19 @@ FROM <replaceable>table_reference</replaceable> <optional>, <replaceable>table_r
3 | c | | 3 | c | |
(3 rows) (3 rows)
</screen> </screen>
Notice that placing the restriction in the <literal>WHERE</> clause
produces a different result:
<screen>
<prompt>=&gt;</> <userinput>SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num WHERE t2.value = 'xxx';</>
num | name | num | value
-----+------+-----+-------
1 | a | 1 | xxx
(1 row)
</screen>
This is because a restriction placed in the <literal>ON</>
clause is processed <emphasis>before</> the join, while
a restriction placed in the <literal>WHERE</> clause is processed
<emphasis>after</> the join.
</para> </para>
</sect3> </sect3>
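The before-the-join versus after-the-join distinction can be verified mechanically. This sketch uses Python's sqlite3 module as a stand-in, with the same illustrative t1/t2 data as the surrounding examples:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1(num INTEGER, name TEXT);
    INSERT INTO t1 VALUES (1,'a'),(2,'b'),(3,'c');
    CREATE TABLE t2(num INTEGER, value TEXT);
    INSERT INTO t2 VALUES (1,'xxx'),(3,'yyy'),(5,'zzz');
""")
# Restriction inside ON: applied while matching, so unmatched
# t1 rows still survive with NULLs on the t2 side.
in_on = conn.execute("""
    SELECT * FROM t1 LEFT JOIN t2
      ON t1.num = t2.num AND t2.value = 'xxx'
""").fetchall()
# Restriction in WHERE: applied after the join, so the NULL-extended
# rows fail the comparison and are discarded.
in_where = conn.execute("""
    SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num
     WHERE t2.value = 'xxx'
""").fetchall()
print(len(in_on))     # 3 -- every t1 row survives
print(len(in_where))  # 1 -- only the actually matched row
```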
@ -513,7 +528,7 @@ SELECT * FROM some_very_long_table_name s JOIN another_fairly_long_name a ON s.i
SELECT * FROM my_table AS m WHERE my_table.a &gt; 5; SELECT * FROM my_table AS m WHERE my_table.a &gt; 5;
</programlisting> </programlisting>
is not valid according to the SQL standard. In is not valid according to the SQL standard. In
<productname>PostgreSQL</productname> this will draw an error if the <productname>PostgreSQL</productname> this will draw an error, assuming the
<xref linkend="guc-add-missing-from"> configuration variable is <xref linkend="guc-add-missing-from"> configuration variable is
<literal>off</> (as it is by default). If it is <literal>on</>, <literal>off</> (as it is by default). If it is <literal>on</>,
an implicit table reference will be added to the an implicit table reference will be added to the
@ -559,8 +574,8 @@ FROM <replaceable>table_reference</replaceable> <optional>AS</optional> <replace
<para> <para>
When an alias is applied to the output of a <literal>JOIN</> When an alias is applied to the output of a <literal>JOIN</>
clause, using any of these forms, the alias hides the original clause, the alias hides the original
names within the <literal>JOIN</>. For example: name referenced in the <literal>JOIN</>. For example:
<programlisting> <programlisting>
SELECT a.* FROM my_table AS a JOIN your_table AS b ON ... SELECT a.* FROM my_table AS a JOIN your_table AS b ON ...
</programlisting> </programlisting>
@ -568,7 +583,7 @@ SELECT a.* FROM my_table AS a JOIN your_table AS b ON ...
<programlisting> <programlisting>
SELECT a.* FROM (my_table AS a JOIN your_table AS b ON ...) AS c SELECT a.* FROM (my_table AS a JOIN your_table AS b ON ...) AS c
</programlisting> </programlisting>
is not valid: the table alias <literal>a</> is not visible is not valid; the table alias <literal>a</> is not visible
outside the alias <literal>c</>. outside the alias <literal>c</>.
</para> </para>
</sect3> </sect3>
@ -631,7 +646,7 @@ FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow'))
<para> <para>
If a table function returns a base data type, the single result If a table function returns a base data type, the single result
column is named like the function. If the function returns a column name matches the function name. If the function returns a
composite type, the result columns get the same names as the composite type, the result columns get the same names as the
individual attributes of the type. individual attributes of the type.
</para> </para>
@ -655,8 +670,11 @@ $$ LANGUAGE SQL;
SELECT * FROM getfoo(1) AS t1; SELECT * FROM getfoo(1) AS t1;
SELECT * FROM foo SELECT * FROM foo
WHERE foosubid IN (select foosubid from getfoo(foo.fooid) z WHERE foosubid IN (
where z.fooid = foo.fooid); SELECT foosubid
FROM getfoo(foo.fooid) z
WHERE z.fooid = foo.fooid
);
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1); CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);
@ -668,13 +686,14 @@ SELECT * FROM vw_getfoo;
In some cases it is useful to define table functions that can In some cases it is useful to define table functions that can
return different column sets depending on how they are invoked. return different column sets depending on how they are invoked.
To support this, the table function can be declared as returning To support this, the table function can be declared as returning
the pseudotype <type>record</>. When such a function is used in the pseudotype <type>record</>, rather than <literal>SETOF</>.
When such a function is used in
a query, the expected row structure must be specified in the a query, the expected row structure must be specified in the
query itself, so that the system can know how to parse and plan query itself, so that the system can know how to parse and plan
the query. Consider this example: the query. Consider this example:
<programlisting> <programlisting>
SELECT * SELECT *
FROM dblink('dbname=mydb', 'select proname, prosrc from pg_proc') FROM dblink('dbname=mydb', 'SELECT proname, prosrc FROM pg_proc')
AS t1(proname name, prosrc text) AS t1(proname name, prosrc text)
WHERE proname LIKE 'bytea%'; WHERE proname LIKE 'bytea%';
</programlisting> </programlisting>
@ -710,9 +729,9 @@ WHERE <replaceable>search_condition</replaceable>
After the processing of the <literal>FROM</> clause is done, each After the processing of the <literal>FROM</> clause is done, each
row of the derived virtual table is checked against the search row of the derived virtual table is checked against the search
condition. If the result of the condition is true, the row is condition. If the result of the condition is true, the row is
kept in the output table, otherwise (that is, if the result is kept in the output table, otherwise (i.e., if the result is
false or null) it is discarded. The search condition typically false or null) it is discarded. The search condition typically
references at least some column of the table generated in the references at least one column of the table generated in the
<literal>FROM</> clause; this is not required, but otherwise the <literal>FROM</> clause; this is not required, but otherwise the
<literal>WHERE</> clause will be fairly useless. <literal>WHERE</> clause will be fairly useless.
</para> </para>
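In particular, a null result of the search condition discards the row exactly as a false result does. A sketch with Python's sqlite3 module (fdt is an illustrative one-column table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fdt(c1 INTEGER)")
conn.executemany("INSERT INTO fdt VALUES (?)", [(1,), (7,), (None,)])
# For the NULL row, "c1 > 5" evaluates to NULL, which is treated
# like FALSE by WHERE; only the row where the condition is TRUE remains.
kept = conn.execute("SELECT c1 FROM fdt WHERE c1 > 5").fetchall()
print(kept)   # [(7,)]
```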
@ -735,11 +754,12 @@ FROM a NATURAL JOIN b WHERE b.val &gt; 5
</programlisting> </programlisting>
Which one of these you use is mainly a matter of style. The Which one of these you use is mainly a matter of style. The
<literal>JOIN</> syntax in the <literal>FROM</> clause is <literal>JOIN</> syntax in the <literal>FROM</> clause is
probably not as portable to other SQL database management systems. For probably not as portable to other SQL database management systems,
outer joins there is no choice in any case: they must be done in even though it is in the SQL standard. For
the <literal>FROM</> clause. An <literal>ON</>/<literal>USING</> outer joins there is no choice: they must be done in
the <literal>FROM</> clause. The <literal>ON</>/<literal>USING</>
clause of an outer join is <emphasis>not</> equivalent to a clause of an outer join is <emphasis>not</> equivalent to a
<literal>WHERE</> condition, because it determines the addition <literal>WHERE</> condition, because it affects the addition
of rows (for unmatched input rows) as well as the removal of rows of rows (for unmatched input rows) as well as the removal of rows
from the final result. from the final result.
</para> </para>
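For inner joins the two styles really are interchangeable, as this sqlite3-based sketch suggests (tables a and b are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a(id INTEGER);
    CREATE TABLE b(id INTEGER, val INTEGER);
    INSERT INTO a VALUES (1),(2),(3);
    INSERT INTO b VALUES (1,10),(2,3),(3,8);
""")
# Join condition in the WHERE clause ...
where_style = conn.execute(
    "SELECT a.id FROM a, b WHERE a.id = b.id AND b.val > 5").fetchall()
# ... versus the explicit JOIN syntax in the FROM clause.
join_style = conn.execute(
    "SELECT a.id FROM a INNER JOIN b ON a.id = b.id WHERE b.val > 5").fetchall()
print(sorted(where_style))                        # [(1,), (3,)]
print(sorted(where_style) == sorted(join_style))  # True
```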
@ -760,7 +780,7 @@ SELECT ... FROM fdt WHERE c1 BETWEEN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10)
SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 &gt; fdt.c1) SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 &gt; fdt.c1)
</programlisting> </programlisting>
<literal>fdt</literal> is the table derived in the <literal>fdt</literal> is the table used in the
<literal>FROM</> clause. Rows that do not meet the search <literal>FROM</> clause. Rows that do not meet the search
condition of the <literal>WHERE</> clause are eliminated from condition of the <literal>WHERE</> clause are eliminated from
<literal>fdt</literal>. Notice the use of scalar subqueries as <literal>fdt</literal>. Notice the use of scalar subqueries as
@ -803,11 +823,11 @@ SELECT <replaceable>select_list</replaceable>
<para> <para>
The <xref linkend="sql-groupby" endterm="sql-groupby-title"> is The <xref linkend="sql-groupby" endterm="sql-groupby-title"> is
used to group together those rows in a table that share the same used to group together those rows in a table that have the same
values in all the columns listed. The order in which the columns values in all the columns listed. The order in which the columns
are listed does not matter. The effect is to combine each set are listed does not matter. The effect is to combine each set
of rows sharing common values into one group row that is of rows having common values into one group row that
representative of all rows in the group. This is done to represents all rows in the group. This is done to
eliminate redundancy in the output and/or compute aggregates that eliminate redundancy in the output and/or compute aggregates that
apply to these groups. For instance: apply to these groups. For instance:
<screen> <screen>
@ -840,7 +860,7 @@ SELECT <replaceable>select_list</replaceable>
<para> <para>
In general, if a table is grouped, columns that are not In general, if a table is grouped, columns that are not
used in the grouping cannot be referenced except in aggregate the same in the group cannot be referenced except in aggregate
expressions. An example with aggregate expressions is: expressions. An example with aggregate expressions is:
<screen> <screen>
<prompt>=&gt;</> <userinput>SELECT x, sum(y) FROM test1 GROUP BY x;</> <prompt>=&gt;</> <userinput>SELECT x, sum(y) FROM test1 GROUP BY x;</>
@ -860,7 +880,7 @@ SELECT <replaceable>select_list</replaceable>
<tip> <tip>
<para> <para>
Grouping without aggregate expressions effectively calculates the Grouping without aggregate expressions effectively calculates the
set of distinct values in a column. This can also be achieved set of distinct values in a column. This can more clearly be achieved
using the <literal>DISTINCT</> clause (see <xref using the <literal>DISTINCT</> clause (see <xref
linkend="queries-distinct">). linkend="queries-distinct">).
</para> </para>
@ -868,7 +888,7 @@ SELECT <replaceable>select_list</replaceable>
<para> <para>
Here is another example: it calculates the total sales for each Here is another example: it calculates the total sales for each
product (rather than the total sales on all products): product (rather than the total sales of all products):
<programlisting> <programlisting>
SELECT product_id, p.name, (sum(s.units) * p.price) AS sales SELECT product_id, p.name, (sum(s.units) * p.price) AS sales
FROM products p LEFT JOIN sales s USING (product_id) FROM products p LEFT JOIN sales s USING (product_id)
@ -877,10 +897,10 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales
In this example, the columns <literal>product_id</literal>, In this example, the columns <literal>product_id</literal>,
<literal>p.name</literal>, and <literal>p.price</literal> must be <literal>p.name</literal>, and <literal>p.price</literal> must be
in the <literal>GROUP BY</> clause since they are referenced in in the <literal>GROUP BY</> clause since they are referenced in
the query select list. (Depending on how exactly the products the query select list. (Depending on how the products
table is set up, name and price might be fully dependent on the table is set up, name and price might be fully dependent on the
product ID, so the additional groupings could theoretically be product ID, so the additional groupings could theoretically be
unnecessary, but this is not implemented yet.) The column unnecessary, though this is not implemented.) The column
<literal>s.units</> does not have to be in the <literal>GROUP <literal>s.units</> does not have to be in the <literal>GROUP
BY</> list since it is only used in an aggregate expression BY</> list since it is only used in an aggregate expression
(<literal>sum(...)</literal>), which represents the sales (<literal>sum(...)</literal>), which represents the sales
@ -901,11 +921,11 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales
</indexterm> </indexterm>
<para> <para>
If a table has been grouped using a <literal>GROUP BY</literal> If a table has been grouped using <literal>GROUP BY</literal>,
clause, but then only certain groups are of interest, the but only certain groups are of interest, the
<literal>HAVING</literal> clause can be used, much like a <literal>HAVING</literal> clause can be used, much like a
<literal>WHERE</> clause, to eliminate groups from a grouped <literal>WHERE</> clause, to eliminate groups from the result.
table. The syntax is: The syntax is:
<synopsis> <synopsis>
SELECT <replaceable>select_list</replaceable> FROM ... <optional>WHERE ...</optional> GROUP BY ... HAVING <replaceable>boolean_expression</replaceable> SELECT <replaceable>select_list</replaceable> FROM ... <optional>WHERE ...</optional> GROUP BY ... HAVING <replaceable>boolean_expression</replaceable>
</synopsis> </synopsis>
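A minimal HAVING example, sketched with Python's sqlite3 module and the same illustrative test1 table used in the grouping examples:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test1(x TEXT, y INTEGER);
    INSERT INTO test1 VALUES ('a',3),('c',2),('b',5),('a',1);
""")
# HAVING filters whole groups after grouping, where WHERE would
# filter individual rows before grouping.
rows = conn.execute("""
    SELECT x, sum(y) FROM test1
     GROUP BY x
    HAVING sum(y) > 3
     ORDER BY x
""").fetchall()
print(rows)   # [('a', 4), ('b', 5)] -- group 'c' (sum 2) was eliminated
```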
@ -1068,8 +1088,7 @@ SELECT tbl1.*, tbl2.a FROM ...
the row's values substituted for any column references. But the the row's values substituted for any column references. But the
expressions in the select list do not have to reference any expressions in the select list do not have to reference any
columns in the table expression of the <literal>FROM</> clause; columns in the table expression of the <literal>FROM</> clause;
they could be constant arithmetic expressions as well, for they can be constant arithmetic expressions as well.
instance.
</para> </para>
</sect2> </sect2>
@ -1083,9 +1102,8 @@ SELECT tbl1.*, tbl2.a FROM ...
<para> <para>
The entries in the select list can be assigned names for further The entries in the select list can be assigned names for further
processing. The <quote>further processing</quote> in this case is processing, perhaps for reference in an <literal>ORDER BY</> clause
an optional sort specification and the client application (e.g., or for display by the client application. For example:
column headers for display). For example:
<programlisting> <programlisting>
SELECT a AS value, b + c AS sum FROM ... SELECT a AS value, b + c AS sum FROM ...
</programlisting> </programlisting>
@ -1122,8 +1140,8 @@ SELECT a "value", b + c AS sum FROM ...
<para> <para>
The naming of output columns here is different from that done in The naming of output columns here is different from that done in
the <literal>FROM</> clause (see <xref the <literal>FROM</> clause (see <xref
linkend="queries-table-aliases">). This pipeline will in fact linkend="queries-table-aliases">). It is possible
allow you to rename the same column twice, but the name chosen in to rename the same column twice, but the name used in
the select list is the one that will be passed on. the select list is the one that will be passed on.
</para> </para>
</note> </note>
@ -1181,7 +1199,7 @@ SELECT DISTINCT ON (<replaceable>expression</replaceable> <optional>, <replaceab
The <literal>DISTINCT ON</> clause is not part of the SQL standard The <literal>DISTINCT ON</> clause is not part of the SQL standard
and is sometimes considered bad style because of the potentially and is sometimes considered bad style because of the potentially
indeterminate nature of its results. With judicious use of indeterminate nature of its results. With judicious use of
<literal>GROUP BY</> and subqueries in <literal>FROM</> the <literal>GROUP BY</> and subqueries in <literal>FROM</>, this
construct can be avoided, but it is often the most convenient construct can be avoided, but it is often the most convenient
alternative. alternative.
</para> </para>
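One such GROUP BY rewrite of a typical DISTINCT ON query (latest report per location) can be sketched as follows. The weather_reports table is illustrative, and sqlite3 is used as the stand-in precisely because it lacks DISTINCT ON, which is why the portable form matters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE weather_reports(location TEXT, time INTEGER, report TEXT);
    INSERT INTO weather_reports VALUES
        ('SFO', 1, 'fog'), ('SFO', 2, 'sun'), ('OAK', 1, 'rain');
""")
# Portable rewrite of the PostgreSQL-only
#   SELECT DISTINCT ON (location) location, time, report
#     FROM weather_reports ORDER BY location, time DESC;
rows = conn.execute("""
    SELECT w.location, w.time, w.report
      FROM weather_reports w
      JOIN (SELECT location, max(time) AS time
              FROM weather_reports GROUP BY location) latest
        ON latest.location = w.location AND latest.time = w.time
     ORDER BY w.location
""").fetchall()
print(rows)   # [('OAK', 1, 'rain'), ('SFO', 2, 'sun')]
```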
@ -1229,7 +1247,7 @@ SELECT DISTINCT ON (<replaceable>expression</replaceable> <optional>, <replaceab
<synopsis> <synopsis>
<replaceable>query1</replaceable> UNION <replaceable>query2</replaceable> UNION <replaceable>query3</replaceable> <replaceable>query1</replaceable> UNION <replaceable>query2</replaceable> UNION <replaceable>query3</replaceable>
</synopsis> </synopsis>
which really says which is executed as:
<synopsis> <synopsis>
(<replaceable>query1</replaceable> UNION <replaceable>query2</replaceable>) UNION <replaceable>query3</replaceable> (<replaceable>query1</replaceable> UNION <replaceable>query2</replaceable>) UNION <replaceable>query3</replaceable>
</synopsis> </synopsis>
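The left-to-right grouping (and UNION's duplicate elimination) can be sketched with Python's sqlite3 module; the explicit grouping is expressed as a subquery, which is the portable way to parenthesize a compound select:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
chained = conn.execute(
    "SELECT 1 UNION SELECT 2 UNION SELECT 2 UNION SELECT 3").fetchall()
# The same operands with the first two grouped explicitly.
grouped = conn.execute(
    "SELECT * FROM (SELECT 1 UNION SELECT 2) UNION SELECT 2 UNION SELECT 3"
).fetchall()
print(sorted(chained))                     # [(1,), (2,), (3,)] -- duplicates removed
print(sorted(chained) == sorted(grouped))  # True
```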
@ -1328,9 +1346,9 @@ SELECT a, b FROM table1 ORDER BY a + b, c;
<para> <para>
The <literal>NULLS FIRST</> and <literal>NULLS LAST</> options can be The <literal>NULLS FIRST</> and <literal>NULLS LAST</> options can be
used to determine whether nulls appear before or after non-null values used to determine whether nulls appear before or after non-null values
in the sort ordering. By default, null values sort as if larger than any in the sort ordering. The default behavior is for null values to sort as
non-null value; that is, <literal>NULLS FIRST</> is the default for if larger than all non-null values (<literal>NULLS LAST</>), except
<literal>DESC</> order, and <literal>NULLS LAST</> otherwise. in <literal>DESC</> ordering, where <literal>NULLS FIRST</> is the default.
</para> </para>
<para> <para>
@ -1341,15 +1359,14 @@ SELECT a, b FROM table1 ORDER BY a + b, c;
</para> </para>
<para> <para>
For backwards compatibility with the SQL92 version of the standard, A <replaceable>sort_expression</> can also be the column label or number
a <replaceable>sort_expression</> can instead be the name or number
of an output column, as in: of an output column, as in:
<programlisting> <programlisting>
SELECT a + b AS sum, c FROM table1 ORDER BY sum; SELECT a + b AS sum, c FROM table1 ORDER BY sum;
SELECT a, max(b) FROM table1 GROUP BY a ORDER BY 1; SELECT a, max(b) FROM table1 GROUP BY a ORDER BY 1;
</programlisting> </programlisting>
both of which sort by the first output column. Note that an output both of which sort by the first output column. Note that an output
column name has to stand alone, it's not allowed as part of an expression column name has to stand alone, i.e., it cannot be used in an expression
&mdash; for example, this is <emphasis>not</> correct: &mdash; for example, this is <emphasis>not</> correct:
<programlisting> <programlisting>
SELECT a + b AS sum, c FROM table1 ORDER BY sum + c; -- wrong SELECT a + b AS sum, c FROM table1 ORDER BY sum + c; -- wrong
@ -1412,16 +1429,16 @@ SELECT <replaceable>select_list</replaceable>
<para> <para>
When using <literal>LIMIT</>, it is important to use an When using <literal>LIMIT</>, it is important to use an
<literal>ORDER BY</> clause that constrains the result rows into a <literal>ORDER BY</> clause that constrains the result rows in a
unique order. Otherwise you will get an unpredictable subset of unique order. Otherwise you will get an unpredictable subset of
the query's rows. You might be asking for the tenth through the query's rows. You might be asking for the tenth through
twentieth rows, but tenth through twentieth in what ordering? The twentieth rows, but tenth through twentieth using what ordering? The
ordering is unknown, unless you specified <literal>ORDER BY</>. ordering is unknown, unless you specified <literal>ORDER BY</>.
</para> </para>
<para> <para>
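A sketch of a well-defined LIMIT/OFFSET query, using Python's sqlite3 module (table t is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(v,) for v in (5, 1, 4, 2, 3)])
# Without ORDER BY, which two rows LIMIT returns is unspecified;
# with ORDER BY the sorted order is 1,2,3,4,5, so skipping one row
# and taking two is deterministic.
rows = conn.execute("SELECT x FROM t ORDER BY x LIMIT 2 OFFSET 1").fetchall()
print(rows)   # [(2,), (3,)]
```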
The query optimizer takes <literal>LIMIT</> into account when The query optimizer takes <literal>LIMIT</> into account when
generating a query plan, so you are very likely to get different generating query plans, so you are very likely to get different
plans (yielding different row orders) depending on what you give plans (yielding different row orders) depending on what you give
for <literal>LIMIT</> and <literal>OFFSET</>. Thus, using for <literal>LIMIT</> and <literal>OFFSET</>. Thus, using
different <literal>LIMIT</>/<literal>OFFSET</> values to select different <literal>LIMIT</>/<literal>OFFSET</> values to select
@ -1455,7 +1472,7 @@ SELECT <replaceable>select_list</replaceable>
<synopsis> <synopsis>
VALUES ( <replaceable class="PARAMETER">expression</replaceable> [, ...] ) [, ...] VALUES ( <replaceable class="PARAMETER">expression</replaceable> [, ...] ) [, ...]
</synopsis> </synopsis>
Each parenthesized list of expressions generates a row in the table. Each parenthesized list of expressions generates a row in the table expression.
The lists must all have the same number of elements (i.e., the number The lists must all have the same number of elements (i.e., the number
of columns in the table), and corresponding entries in each list must of columns in the table), and corresponding entries in each list must
have compatible data types. The actual data type assigned to each column have compatible data types. The actual data type assigned to each column
@ -1489,12 +1506,12 @@ SELECT 3, 'three';
<para> <para>
Syntactically, <literal>VALUES</> followed by expression lists is Syntactically, <literal>VALUES</> followed by expression lists is
treated as equivalent to treated as equivalent to:
<synopsis> <synopsis>
SELECT <replaceable>select_list</replaceable> FROM <replaceable>table_expression</replaceable> SELECT <replaceable>select_list</replaceable> FROM <replaceable>table_expression</replaceable>
</synopsis> </synopsis>
and can appear anywhere a <literal>SELECT</> can. For example, you can and can appear anywhere a <literal>SELECT</> can. For example, you can
use it as an arm of a <literal>UNION</>, or attach a use it as part of a <literal>UNION</>, or attach a
<replaceable>sort_specification</replaceable> (<literal>ORDER BY</>, <replaceable>sort_specification</replaceable> (<literal>ORDER BY</>,
<literal>LIMIT</>, and/or <literal>OFFSET</>) to it. <literal>VALUES</> <literal>LIMIT</>, and/or <literal>OFFSET</>) to it. <literal>VALUES</>
is most commonly used as the data source in an <command>INSERT</> command, is most commonly used as the data source in an <command>INSERT</> command,
@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/query.sgml,v 1.51 2008/12/28 18:53:54 tgl Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/query.sgml,v 1.52 2009/04/27 16:27:36 momjian Exp $ -->
<chapter id="tutorial-sql"> <chapter id="tutorial-sql">
<title>The <acronym>SQL</acronym> Language</title> <title>The <acronym>SQL</acronym> Language</title>
@ -38,7 +38,7 @@
functions and types. (If you installed a pre-packaged version of functions and types. (If you installed a pre-packaged version of
<productname>PostgreSQL</productname> rather than building from source, <productname>PostgreSQL</productname> rather than building from source,
look for a directory named <filename>tutorial</> within the look for a directory named <filename>tutorial</> within the
<productname>PostgreSQL</productname> documentation. The <quote>make</> <productname>PostgreSQL</productname> distribution. The <quote>make</>
 part should already have been done for you.)
 Then, to start the tutorial, do the following:
@@ -53,7 +53,7 @@
 </screen>
 The <literal>\i</literal> command reads in commands from the
-specified file. The <literal>-s</literal> option puts you in
+specified file. The <command>psql</command> <literal>-s</> option puts you in
 single step mode which pauses before sending each statement to the
 server. The commands used in this section are in the file
 <filename>basics.sql</filename>.
@@ -165,7 +165,7 @@ CREATE TABLE weather (
 and a rich set of geometric types.
 <productname>PostgreSQL</productname> can be customized with an
 arbitrary number of user-defined data types. Consequently, type
-names are not syntactical key words, except where required to
+names are not special key words in the syntax except where required to
 support special cases in the <acronym>SQL</acronym> standard.
 </para>
@@ -421,7 +421,7 @@ SELECT DISTINCT city
 <literal>DISTINCT</literal> automatically orders the rows and
 so <literal>ORDER BY</literal> is unnecessary. But this is not
 required by the SQL standard, and current
-<productname>PostgreSQL</productname> doesn't guarantee that
+<productname>PostgreSQL</productname> does not guarantee that
 <literal>DISTINCT</literal> causes the rows to be ordered.
 </para>
 </footnote>
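The footnote's caveat is easy to demonstrate with the tutorial's <literal>weather</> table: if a sorted list of cities is wanted, the ordering must be requested explicitly rather than relying on <literal>DISTINCT</>. A sketch, assuming the tutorial's sample data has been loaded:

```sql
-- Row order here is unspecified; some implementations sort as a
-- side effect of duplicate elimination, but PostgreSQL does not
-- guarantee it.
SELECT DISTINCT city FROM weather;

-- Portable form: state the desired ordering explicitly.
SELECT DISTINCT city FROM weather ORDER BY city;
```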
@@ -451,8 +451,8 @@ SELECT DISTINCT city
 <firstterm>join</firstterm> query. As an example, say you wish to
 list all the weather records together with the location of the
 associated city. To do that, we need to compare the city column of
-each row of the weather table with the name column of all rows in
-the cities table, and select the pairs of rows where these values match.
+each row of the <literal>weather</> table with the name column of all rows in
+the <literal>cities</> table, and select the pairs of rows where these values match.
 <note>
 <para>
 This is only a conceptual model. The join is usually performed
@@ -486,7 +486,7 @@ SELECT *
 There is no result row for the city of Hayward. This is
 because there is no matching entry in the
 <classname>cities</classname> table for Hayward, so the join
-ignores the unmatched rows in the weather table. We will see
+ignores the unmatched rows in the <literal>weather</> table. We will see
 shortly how this can be fixed.
 </para>
 </listitem>
@@ -494,9 +494,9 @@ SELECT *
 <listitem>
 <para>
 There are two columns containing the city name. This is
-correct because the lists of columns of the
+correct because the columns from the
 <classname>weather</classname> and the
-<classname>cities</classname> table are concatenated. In
+<classname>cities</classname> tables are concatenated. In
 practice this is undesirable, though, so you will probably want
 to list the output columns explicitly rather than using
 <literal>*</literal>:
@@ -514,14 +514,14 @@ SELECT city, temp_lo, temp_hi, prcp, date, location
 <title>Exercise:</title>
 <para>
-Attempt to find out the semantics of this query when the
+Attempt to determine the semantics of this query when the
 <literal>WHERE</literal> clause is omitted.
 </para>
 </formalpara>
 <para>
 Since the columns all had different names, the parser
-automatically found out which table they belong to. If there
+automatically found which table they belong to. If there
 were duplicate column names in the two tables you'd need to
 <firstterm>qualify</> the column names to show which one you
 meant, as in:
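With qualified column names, the join described above reads as follows (a sketch based on the tutorial's <literal>weather</> and <literal>cities</> tables):

```sql
SELECT weather.city, weather.temp_lo, weather.temp_hi,
       weather.prcp, weather.date, cities.location
    FROM weather, cities
    WHERE cities.name = weather.city;
```

Qualifying every column this way is always allowed, and it keeps the query valid even if a column with the same name is later added to one of the tables.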
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/regress.sgml,v 1.62 2009/02/12 13:26:03 petere Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/regress.sgml,v 1.63 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter id="regress">
 <title id="regress-title">Regression Tests</title>
@@ -37,7 +37,7 @@
 <para>
 To run the regression tests after building but before installation,
-type
+type:
 <screen>
 gmake check
 </screen>
@@ -45,7 +45,7 @@ gmake check
 <filename>src/test/regress</filename> and run the command there.)
 This will first build several auxiliary files, such as
 some sample user-defined trigger functions, and then run the test driver
-script. At the end you should see something like
+script. At the end you should see something like:
 <screen>
 <computeroutput>
 =======================
@@ -64,7 +64,7 @@ gmake check
 If you already did the build as root, you do not have to start all
 over. Instead, make the regression test directory writable by
 some other user, log in as that user, and restart the tests.
-For example
+For example:
 <screen>
 <prompt>root# </prompt><userinput>chmod -R a+w src/test/regress</userinput>
 <prompt>root# </prompt><userinput>su - joeuser</userinput>
@@ -101,7 +101,7 @@ gmake check
 make sure this limit is at least fifty or so, else you might get
 random-seeming failures in the parallel test. If you are not in
 a position to raise the limit, you can cut down the degree of parallelism
-by setting the <literal>MAX_CONNECTIONS</> parameter. For example,
+by setting the <literal>MAX_CONNECTIONS</> parameter. For example:
 <screen>
 gmake MAX_CONNECTIONS=10 check
 </screen>
@@ -111,11 +111,11 @@ gmake MAX_CONNECTIONS=10 check
 <para>
 To run the tests after installation<![%standalone-ignore;[ (see <xref linkend="installation">)]]>,
 initialize a data area and start the
-server, <![%standalone-ignore;[as explained in <xref linkend="runtime">, ]]> then type
+server, <![%standalone-ignore;[as explained in <xref linkend="runtime">, ]]> then type:
 <screen>
 gmake installcheck
 </screen>
-or for a parallel test
+or for a parallel test:
 <screen>
 gmake installcheck-parallel
 </screen>
@@ -130,14 +130,14 @@ gmake installcheck-parallel
 At present, these tests can be used only against an already-installed
 server. To run the tests for all procedural languages that have been
 built and installed, change to the <filename>src/pl</> directory of the
-build tree and type
+build tree and type:
 <screen>
 gmake installcheck
 </screen>
 You can also do this in any of the subdirectories of <filename>src/pl</>
 to run tests for just one procedural language. To run the tests for all
 <filename>contrib</> modules that have them, change to the
-<filename>contrib</> directory of the build tree and type
+<filename>contrib</> directory of the build tree and type:
 <screen>
 gmake installcheck
 </screen>
@@ -479,7 +479,7 @@ gmake coverage-html
 </para>
 <para>
-To reset the execution counts between test runs, run
+To reset the execution counts between test runs, run:
 <screen>
 gmake coverage-clean
 </screen>
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/rowtypes.sgml,v 2.9 2007/02/01 00:28:18 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/rowtypes.sgml,v 2.10 2009/04/27 16:27:36 momjian Exp $ -->
 <sect1 id="rowtypes">
 <title>Composite Types</title>
@@ -12,9 +12,9 @@
 </indexterm>
 <para>
-A <firstterm>composite type</> describes the structure of a row or record;
-it is in essence just a list of field names and their data types.
-<productname>PostgreSQL</productname> allows values of composite types to be
+A <firstterm>composite type</> represents the structure of a row or record;
+it is essentially just a list of field names and their data types.
+<productname>PostgreSQL</productname> allows composite types to be
 used in many of the same ways that simple types can be used. For example, a
 column of a table can be declared to be of a composite type.
 </para>
@@ -39,9 +39,9 @@ CREATE TYPE inventory_item AS (
 The syntax is comparable to <command>CREATE TABLE</>, except that only
 field names and types can be specified; no constraints (such as <literal>NOT
 NULL</>) can presently be included. Note that the <literal>AS</> keyword
-is essential; without it, the system will think a quite different kind
-of <command>CREATE TYPE</> command is meant, and you'll get odd syntax
-errors.
+is essential; without it, the system will think a different kind
+of <command>CREATE TYPE</> command is meant, and you will get odd syntax
+errors.
 </para>
 <para>
@@ -68,8 +68,8 @@ SELECT price_extension(item, 10) FROM on_hand;
 </para>
 <para>
-Whenever you create a table, a composite type is also automatically
-created, with the same name as the table, to represent the table's
+Whenever you create a table, a composite type is automatically
+created also, with the same name as the table, to represent the table's
 row type. For example, had we said:
 <programlisting>
 CREATE TABLE inventory_item (
@@ -135,7 +135,7 @@ CREATE TABLE inventory_item (
 <para>
 The <literal>ROW</literal> expression syntax can also be used to
 construct composite values. In most cases this is considerably
-simpler to use than the string-literal syntax, since you don't have
+simpler to use than the string-literal syntax since you don't have
 to worry about multiple layers of quoting. We already used this
 method above:
 <programlisting>
@@ -169,7 +169,8 @@ SELECT item.name FROM on_hand WHERE item.price &gt; 9.99;
 </programlisting>
 This will not work since the name <literal>item</> is taken to be a table
-name, not a field name, per SQL syntax rules. You must write it like this:
+name, not a column name of <literal>on_hand</>, per SQL syntax rules.
+You must write it like this:
 <programlisting>
 SELECT (item).name FROM on_hand WHERE (item).price &gt; 9.99;
@@ -195,7 +196,7 @@ SELECT (on_hand.item).name FROM on_hand WHERE (on_hand.item).price &gt; 9.99;
 SELECT (my_func(...)).field FROM ...
 </programlisting>
-Without the extra parentheses, this will provoke a syntax error.
+Without the extra parentheses, this will generate a syntax error.
 </para>
 </sect2>
@@ -249,7 +250,7 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2);
 The external text representation of a composite value consists of items that
 are interpreted according to the I/O conversion rules for the individual
 field types, plus decoration that indicates the composite structure.
-The decoration consists of parentheses (<literal>(</> and <literal>)</>)
+The decoration consists of parentheses
 around the whole value, plus commas (<literal>,</>) between adjacent
 items. Whitespace outside the parentheses is ignored, but within the
 parentheses it is considered part of the field value, and might or might not be
@@ -263,7 +264,7 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2);
 </para>
 <para>
-As shown previously, when writing a composite value you can write double
+As shown previously, when writing a composite value you can use double
 quotes around any individual field value.
 You <emphasis>must</> do so if the field value would otherwise
 confuse the composite-value parser. In particular, fields containing
@@ -272,7 +273,8 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2);
 precede it with a backslash. (Also, a pair of double quotes within a
 double-quoted field value is taken to represent a double quote character,
 analogously to the rules for single quotes in SQL literal strings.)
-Alternatively, you can use backslash-escaping to protect all data characters
+Alternatively, you can avoid quoting and use backslash-escaping to
+protect all data characters
 that would otherwise be taken as composite syntax.
 </para>
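To make these quoting rules concrete, here is a sketch using the <literal>inventory_item</> type from this section; the field value containing a comma must be protected either by double quotes or by a backslash (the specific values are illustrative):

```sql
-- Double-quoted field value; the comma is part of the data:
SELECT '("dice, fuzzy",42,1.99)'::inventory_item;

-- Backslash-escaped form of the same composite value:
SELECT '(dice\, fuzzy,42,1.99)'::inventory_item;
```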
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/runtime.sgml,v 1.427 2009/04/24 20:46:16 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/runtime.sgml,v 1.428 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter Id="runtime">
 <title>Server Setup and Operation</title>
@@ -76,7 +76,7 @@
 linkend="app-initdb">,<indexterm><primary>initdb</></> which is
 installed with <productname>PostgreSQL</productname>. The desired
 file system location of your database cluster is indicated by the
-<option>-D</option> option, for example
+<option>-D</option> option, for example:
 <screen>
 <prompt>$</> <userinput>initdb -D /usr/local/pgsql/data</userinput>
 </screen>
@@ -382,7 +382,7 @@ FATAL: could not create TCP/IP listen socket
 </para>
 <para>
-A message like
+A message like:
 <screen>
 FATAL: could not create shared memory segment: Invalid argument
 DETAIL: Failed system call was shmget(key=5440001, size=4011376640, 03600).
@@ -401,7 +401,7 @@ DETAIL: Failed system call was shmget(key=5440001, size=4011376640, 03600).
 </para>
 <para>
-An error like
+An error like:
 <screen>
 FATAL: could not create semaphores: No space left on device
 DETAIL: Failed system call was semget(5440126, 17, 03600).
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/sources.sgml,v 2.32 2008/10/27 19:37:21 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/sources.sgml,v 2.33 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter id="source">
 <title>PostgreSQL Coding Conventions</title>
@@ -661,10 +661,10 @@ BETTER: unrecognized node type: 42
 <formalpara>
 <title>May vs. Can vs. Might</title>
 <para>
-<quote>May</quote> suggests permission (e.g. "You may borrow my rake."),
+<quote>May</quote> suggests permission (e.g., "You may borrow my rake."),
 and has little use in documentation or error messages.
-<quote>Can</quote> suggests ability (e.g. "I can lift that log."),
-and <quote>might</quote> suggests possibility (e.g. "It might rain
+<quote>Can</quote> suggests ability (e.g., "I can lift that log."),
+and <quote>might</quote> suggests possibility (e.g., "It might rain
 today."). Using the proper word clarifies meaning and assists
 translation.
 </para>
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/sql.sgml,v 1.47 2008/02/15 22:17:06 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/sql.sgml,v 1.48 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter id="sql-intro">
 <title>SQL</title>
@@ -95,7 +95,7 @@
 as <firstterm><acronym>SQL3</acronym></firstterm>
 is under development. It is planned to make <acronym>SQL</acronym>
 a Turing-complete
-language, i.e. all computable queries (e.g. recursive queries) will be
+language, i.e., all computable queries (e.g., recursive queries) will be
 possible. This has now been completed as SQL:2003.
 </para>
@@ -761,7 +761,7 @@ x(A) &mid; F(x)
 <para>
 The relational algebra and the relational calculus have the same
-<firstterm>expressive power</firstterm>; i.e. all queries that
+<firstterm>expressive power</firstterm>; i.e., all queries that
 can be formulated using relational algebra can also be formulated
 using the relational calculus and vice versa.
 This was first proved by E. F. Codd in
@@ -811,7 +811,7 @@ x(A) &mid; F(x)
 <para>
 Arithmetic capability: In <acronym>SQL</acronym> it is possible
 to involve
-arithmetic operations as well as comparisons, e.g.
+arithmetic operations as well as comparisons, e.g.:
 <programlisting>
 A &lt; B + 3.
@@ -1027,7 +1027,7 @@ SELECT S.SNAME, P.PNAME
 SUPPLIER &times; PART &times; SELLS
 is derived. Now only those tuples satisfying the
-conditions given in the WHERE clause are selected (i.e. the common
+conditions given in the WHERE clause are selected (i.e., the common
 named attributes have to be equal). Finally we project out all
 columns but S.SNAME and P.PNAME.
 </para>
@@ -1312,7 +1312,7 @@ SELECT COUNT(PNO)
 <acronym>SQL</acronym> allows one to partition the tuples of a table
 into groups. Then the
 aggregate functions described above can be applied to the groups &mdash;
-i.e. the value of the aggregate function is no longer calculated over
+i.e., the value of the aggregate function is no longer calculated over
 all the values of the specified column but over all values of a
 group. Thus the aggregate function is evaluated separately for every
 group.
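The grouping described above can be sketched with this chapter's supplier/part schema: to count, for every supplier, the parts that supplier sells, the tuples of SELLS are grouped by supplier number and the aggregate is applied once per group (a sketch; column names follow the tables used throughout the chapter):

```sql
SELECT S.SNO, S.SNAME, COUNT(SE.PNO)
    FROM SUPPLIER S, SELLS SE
    WHERE S.SNO = SE.SNO
    GROUP BY S.SNO, S.SNAME;
```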
@@ -1517,7 +1517,7 @@ SELECT *
 <para>
 If we want to know all suppliers that do not sell any part
-(e.g. to be able to remove these suppliers from the database) we use:
+(e.g., to be able to remove these suppliers from the database) we use:
 <programlisting>
 SELECT *
@@ -1533,7 +1533,7 @@ SELECT *
 sells at least one part. Note that we use S.SNO from the outer
 <command>SELECT</command> within the WHERE clause of the inner
 <command>SELECT</command>. Here the subquery must be evaluated
-afresh for each tuple from the outer query, i.e. the value for
+afresh for each tuple from the outer query, i.e., the value for
 S.SNO is always taken from the current tuple of the outer
 <command>SELECT</command>.
 </para>
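Written out in full, the correlated subquery described above has roughly this shape (a sketch using the chapter's SUPPLIER and SELLS tables):

```sql
SELECT *
    FROM SUPPLIER S
    WHERE NOT EXISTS
        (SELECT *
             FROM SELLS SE
             WHERE SE.SNO = S.SNO);
```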
@@ -1811,7 +1811,7 @@ CREATE INDEX I ON SUPPLIER (SNAME);
 </para>
 <para>
-The created index is maintained automatically, i.e. whenever a new
+The created index is maintained automatically, i.e., whenever a new
 tuple is inserted into the relation SUPPLIER the index I is
 adapted. Note that the only changes a user can perceive when an
 index is present are increased speed for <command>SELECT</command>
@@ -1826,7 +1826,7 @@ CREATE INDEX I ON SUPPLIER (SNAME);
 <para>
 A view can be regarded as a <firstterm>virtual table</firstterm>,
-i.e. a table that
+i.e., a table that
 does not <emphasis>physically</emphasis> exist in the database
 but looks to the user
 as if it does. By contrast, when we talk of a
@@ -1838,7 +1838,7 @@ CREATE INDEX I ON SUPPLIER (SNAME);
 <para>
 Views do not have their own, physically separate, distinguishable
 stored data. Instead, the system stores the definition of the
-view (i.e. the rules about how to access physically stored base
+view (i.e., the rules about how to access physically stored base
 tables in order to materialize the view) somewhere in the system
 catalogs (see
 <xref linkend="tutorial-catalogs-title" endterm="tutorial-catalogs-title">). For a
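A concrete view over this chapter's schema might be sketched as follows; the view name and the selection condition are illustrative:

```sql
CREATE VIEW London_Suppliers AS
    SELECT S.SNAME, P.PNAME
        FROM SUPPLIER S, SELLS SE, PART P
        WHERE S.SNO = SE.SNO AND
              P.PNO = SE.PNO AND
              S.CITY = 'London';
```

Queries against such a view are answered by consulting its stored definition and accessing the base tables each time the view is used.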
@@ -2082,7 +2082,7 @@ DELETE FROM SUPPLIER
 <para>
 In this section we will sketch how <acronym>SQL</acronym> can be
-embedded into a host language (e.g. <literal>C</literal>).
+embedded into a host language (e.g., <literal>C</literal>).
 There are two main reasons why we want to use <acronym>SQL</acronym>
 from a host language:
@@ -2090,7 +2090,7 @@ DELETE FROM SUPPLIER
 <listitem>
 <para>
 There are queries that cannot be formulated using pure <acronym>SQL</acronym>
-(i.e. recursive queries). To be able to perform such queries we need a
+(i.e., recursive queries). To be able to perform such queries we need a
 host language with a greater expressive power than
 <acronym>SQL</acronym>.
 </para>
@@ -2099,7 +2099,7 @@ DELETE FROM SUPPLIER
 <listitem>
 <para>
 We simply want to access a database from some application that
-is written in the host language (e.g. a ticket reservation system
+is written in the host language (e.g., a ticket reservation system
 with a graphical user interface is written in C and the information
 about which tickets are still left is stored in a database that can be
 accessed using embedded <acronym>SQL</acronym>).
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/start.sgml,v 1.48 2009/01/06 03:05:23 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/start.sgml,v 1.49 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter id="tutorial-start">
 <title>Getting Started</title>
@@ -74,7 +74,7 @@
 <para>
 A server process, which manages the database files, accepts
 connections to the database from client applications, and
-performs actions on the database on behalf of the clients. The
+performs database actions on the behalf of the clients. The
 database server program is called
 <filename>postgres</filename>.
 <indexterm><primary>postgres</primary></indexterm>
@@ -108,7 +108,7 @@
 <para>
 The <productname>PostgreSQL</productname> server can handle
-multiple concurrent connections from clients. For that purpose it
+multiple concurrent connections from clients. To achieve this it
 starts (<quote>forks</quote>) a new process for each connection.
 From that point on, the client and the new server process
 communicate without intervention by the original
@@ -159,25 +159,26 @@
 </para>
 <para>
-If you see a message similar to
+If you see a message similar to:
 <screen>
 createdb: command not found
 </screen>
 then <productname>PostgreSQL</> was not installed properly. Either it was not
-installed at all or the search path was not set correctly. Try
+installed at all or your shell's search path was not set correctly. Try
 calling the command with an absolute path instead:
 <screen>
 <prompt>$</prompt> <userinput>/usr/local/pgsql/bin/createdb mydb</userinput>
 </screen>
 The path at your site might be different. Contact your site
-administrator or check back in the installation instructions to
+administrator or check the installation instructions to
 correct the situation.
 </para>
 <para>
 Another response could be this:
 <screen>
-createdb: could not connect to database postgres: could not connect to server: No such file or directory
+createdb: could not connect to database postgres: could not connect
+to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
 </screen>
@@ -246,7 +247,7 @@ createdb: database creation failed: ERROR: permission denied to create database
 length. A convenient choice is to create a database with the same
 name as your current user name. Many tools assume that database
 name as the default, so it can save you some typing. To create
-that database, simply type
+that database, simply type:
 <screen>
 <prompt>$</prompt> <userinput>createdb</userinput>
 </screen>
@@ -299,7 +300,7 @@ createdb: database creation failed: ERROR: permission denied to create database
 <para>
 Using an existing graphical frontend tool like
 <application>pgAdmin</application> or an office suite with
-<acronym>ODBC</acronym> support to create and manipulate a
+<acronym>ODBC</> or <acronym>JDBC</> support to create and manipulate a
 database. These possibilities are not covered in this
 tutorial.
 </para>
@@ -314,15 +315,15 @@ createdb: database creation failed: ERROR: permission denied to create database
 </listitem>
 </itemizedlist>
-You probably want to start up <command>psql</command>, to try out
+You probably want to start up <command>psql</command> to try
 the examples in this tutorial. It can be activated for the
 <literal>mydb</literal> database by typing the command:
 <screen>
 <prompt>$</prompt> <userinput>psql mydb</userinput>
 </screen>
-If you leave off the database name then it will default to your
+If you do not supply the database name then it will default to your
 user account name. You already discovered this scheme in the
-previous section.
+previous section using <command>createdb</command>.
 </para>
 <para>
@@ -335,15 +336,15 @@ Type "help" for help.
 mydb=&gt;
 </screen>
 <indexterm><primary>superuser</primary></indexterm>
-The last line could also be
+The last line could also be:
 <screen>
 mydb=#
 </screen>
 That would mean you are a database superuser, which is most likely
 the case if you installed <productname>PostgreSQL</productname>
 yourself. Being a superuser means that you are not subject to
-access controls. For the purposes of this tutorial that is not of
-importance.
+access controls. For the purposes of this tutorial that is not
+important.
 </para>
 <para>
@@ -395,7 +396,7 @@ mydb=#
 </para>
 <para>
-To get out of <command>psql</command>, type
+To get out of <command>psql</command>, type:
 <screen>
 <prompt>mydb=&gt;</prompt> <userinput>\q</userinput>
 </screen>
@@ -407,7 +408,7 @@ mydb=#
 installed correctly you can also type <literal>man psql</literal>
 at the operating system shell prompt to see the documentation. In
 this tutorial we will not use these features explicitly, but you
-can use them yourself when you see fit.
+can use them yourself when it is helpful.
 </para>
 </sect1>


@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/syntax.sgml,v 1.130 2009/02/04 21:30:41 alvherre Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/syntax.sgml,v 1.131 2009/04/27 16:27:36 momjian Exp $ -->
<chapter id="sql-syntax"> <chapter id="sql-syntax">
<title>SQL Syntax</title> <title>SQL Syntax</title>
@ -11,12 +11,12 @@
<para> <para>
This chapter describes the syntax of SQL. It forms the foundation This chapter describes the syntax of SQL. It forms the foundation
for understanding the following chapters which will go into detail for understanding the following chapters which will go into detail
about how the SQL commands are applied to define and modify data. about how SQL commands are applied to define and modify data.
</para> </para>
<para> <para>
We also advise users who are already familiar with SQL to read this We also advise users who are already familiar with SQL to read this
chapter carefully because there are several rules and concepts that chapter carefully because it contains several rules and concepts that
are implemented inconsistently among SQL databases or that are are implemented inconsistently among SQL databases or that are
specific to <productname>PostgreSQL</productname>. specific to <productname>PostgreSQL</productname>.
</para> </para>
@ -293,7 +293,7 @@ U&amp;"d!0061t!+000061" UESCAPE '!'
bounded by single quotes (<literal>'</literal>), for example bounded by single quotes (<literal>'</literal>), for example
<literal>'This is a string'</literal>. To include <literal>'This is a string'</literal>. To include
a single-quote character within a string constant, a single-quote character within a string constant,
write two adjacent single quotes, e.g. write two adjacent single quotes, e.g.,
<literal>'Dianne''s horse'</literal>. <literal>'Dianne''s horse'</literal>.
Note that this is <emphasis>not</> the same as a double-quote Note that this is <emphasis>not</> the same as a double-quote
character (<literal>"</>). <!-- font-lock sanity: " --> character (<literal>"</>). <!-- font-lock sanity: " -->
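A minimal illustration of the quote-doubling rule described above (the doubled quote is part of the string syntax, not of the resulting value):

```sql
-- Each '' inside the literal produces one literal single quote:
SELECT 'Dianne''s horse' AS phrase;
--      phrase
-- ----------------
--  Dianne's horse
```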
@ -337,7 +337,7 @@ SELECT 'foo' 'bar';
string constants, which are an extension to the SQL standard. string constants, which are an extension to the SQL standard.
An escape string constant is specified by writing the letter An escape string constant is specified by writing the letter
<literal>E</literal> (upper or lower case) just before the opening single <literal>E</literal> (upper or lower case) just before the opening single
quote, e.g. <literal>E'foo'</>. (When continuing an escape string quote, e.g., <literal>E'foo'</>. (When continuing an escape string
constant across lines, write <literal>E</> only before the first opening constant across lines, write <literal>E</> only before the first opening
quote.) quote.)
Within an escape string, a backslash character (<literal>\</>) begins a Within an escape string, a backslash character (<literal>\</>) begins a
@ -422,14 +422,14 @@ SELECT 'foo' 'bar';
<xref linkend="guc-standard-conforming-strings"> is <literal>off</>, <xref linkend="guc-standard-conforming-strings"> is <literal>off</>,
then <productname>PostgreSQL</productname> recognizes backslash escapes then <productname>PostgreSQL</productname> recognizes backslash escapes
in both regular and escape string constants. This is for backward in both regular and escape string constants. This is for backward
compatibility with the historical behavior, in which backslash escapes compatibility with the historical behavior, where backslash escapes
were always recognized. were always recognized.
Although <varname>standard_conforming_strings</> currently defaults to Although <varname>standard_conforming_strings</> currently defaults to
<literal>off</>, the default will change to <literal>on</> in a future <literal>off</>, the default will change to <literal>on</> in a future
release for improved standards compliance. Applications are therefore release for improved standards compliance. Applications are therefore
encouraged to migrate away from using backslash escapes. If you need encouraged to migrate away from using backslash escapes. If you need
to use a backslash escape to represent a special character, write the to use a backslash escape to represent a special character, write the
constant with an <literal>E</> to be sure it will be handled the same string constant with an <literal>E</> to be sure it will be handled the same
way in future releases. way in future releases.
</para> </para>
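A sketch contrasting the two kinds of string constant discussed above. The behavior of the second statement depends on the <varname>standard_conforming_strings</varname> setting; with it <literal>on</>, the backslash is an ordinary character:

```sql
-- Escape string constant: \n is interpreted as a newline
SELECT E'first line\nsecond line';

-- Regular string constant with standard_conforming_strings = on:
-- the backslash and the n are two ordinary characters
SELECT 'first line\nsecond line';
```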
@ -442,7 +442,7 @@ SELECT 'foo' 'bar';
</caution> </caution>
<para> <para>
The character with the code zero cannot be in a string constant. The zero-byte (null byte) character cannot be in a string constant.
</para> </para>
</sect3> </sect3>
@ -896,7 +896,7 @@ CAST ( '<replaceable>string</replaceable>' AS <replaceable>type</replaceable> )
</indexterm> </indexterm>
<para> <para>
A comment is an arbitrary sequence of characters beginning with A comment is a sequence of characters beginning with
double dashes and extending to the end of the line, e.g.: double dashes and extending to the end of the line, e.g.:
<programlisting> <programlisting>
-- This is a standard SQL comment -- This is a standard SQL comment
@ -918,8 +918,8 @@ CAST ( '<replaceable>string</replaceable>' AS <replaceable>type</replaceable> )
</para> </para>
<para> <para>
A comment is removed from the input stream before further syntax Comments are removed from the input stream before further syntax
analysis and is effectively replaced by whitespace. analysis and are effectively replaced by whitespace.
</para> </para>
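Both comment styles side by side; note that, unlike in C, PostgreSQL allows block comments to nest:

```sql
-- A standard single-line SQL comment
/* A block comment
   /* that may be nested */
   spanning several lines */
SELECT 1;
```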
</sect2> </sect2>
@ -1112,7 +1112,7 @@ SELECT 3 OPERATOR(pg_catalog.+) 4;
</programlisting> </programlisting>
the <literal>OPERATOR</> construct is taken to have the default precedence the <literal>OPERATOR</> construct is taken to have the default precedence
shown in <xref linkend="sql-precedence-table"> for <quote>any other</> operator. This is true no matter shown in <xref linkend="sql-precedence-table"> for <quote>any other</> operator. This is true no matter
which specific operator name appears inside <literal>OPERATOR()</>. which specific operator appears inside <literal>OPERATOR()</>.
</para> </para>
</sect2> </sect2>
</sect1> </sect1>
@ -1154,80 +1154,80 @@ SELECT 3 OPERATOR(pg_catalog.+) 4;
<itemizedlist> <itemizedlist>
<listitem> <listitem>
<para> <para>
A constant or literal value. A constant or literal value
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A column reference. A column reference
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A positional parameter reference, in the body of a function definition A positional parameter reference, in the body of a function definition
or prepared statement. or prepared statement
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A subscripted expression. A subscripted expression
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A field selection expression. A field selection expression
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
An operator invocation. An operator invocation
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A function call. A function call
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
An aggregate expression. An aggregate expression
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A window function call. A window function call
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A type cast. A type cast
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A scalar subquery. A scalar subquery
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
An array constructor. An array constructor
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
A row constructor. A row constructor
</para> </para>
</listitem> </listitem>
@ -1264,7 +1264,7 @@ SELECT 3 OPERATOR(pg_catalog.+) 4;
</indexterm> </indexterm>
<para> <para>
A column can be referenced in the form A column can be referenced in the form:
<synopsis> <synopsis>
<replaceable>correlation</replaceable>.<replaceable>columnname</replaceable> <replaceable>correlation</replaceable>.<replaceable>columnname</replaceable>
</synopsis> </synopsis>
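For example, using a table alias as the correlation name (the table and column names here are purely illustrative):

```sql
SELECT w.city, w.temp_lo
FROM weather AS w
WHERE w.temp_lo < 10;
```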
@ -1426,7 +1426,7 @@ $1.somecolumn
where the <replaceable>operator</replaceable> token follows the syntax where the <replaceable>operator</replaceable> token follows the syntax
rules of <xref linkend="sql-syntax-operators">, or is one of the rules of <xref linkend="sql-syntax-operators">, or is one of the
key words <token>AND</token>, <token>OR</token>, and key words <token>AND</token>, <token>OR</token>, and
<token>NOT</token>, or is a qualified operator name in the form <token>NOT</token>, or is a qualified operator name in the form:
<synopsis> <synopsis>
<literal>OPERATOR(</><replaceable>schema</><literal>.</><replaceable>operatorname</><literal>)</> <literal>OPERATOR(</><replaceable>schema</><literal>.</><replaceable>operatorname</><literal>)</>
</synopsis> </synopsis>
@ -1714,7 +1714,7 @@ CAST ( <replaceable>expression</replaceable> AS <replaceable>type</replaceable>
casts that are marked <quote>OK to apply implicitly</> casts that are marked <quote>OK to apply implicitly</>
in the system catalogs. Other casts must be invoked with in the system catalogs. Other casts must be invoked with
explicit casting syntax. This restriction is intended to prevent explicit casting syntax. This restriction is intended to prevent
surprising conversions from being applied silently. surprising conversions from being silently applied.
</para> </para>
<para> <para>
@ -1730,7 +1730,7 @@ CAST ( <replaceable>expression</replaceable> AS <replaceable>type</replaceable>
<literal>timestamp</> can only be used in this fashion if they are <literal>timestamp</> can only be used in this fashion if they are
double-quoted, because of syntactic conflicts. Therefore, the use of double-quoted, because of syntactic conflicts. Therefore, the use of
the function-like cast syntax leads to inconsistencies and should the function-like cast syntax leads to inconsistencies and should
probably be avoided in new applications. probably be avoided.
</para> </para>
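The two explicit cast syntaxes recommended above are equivalent for ordinary type names:

```sql
SELECT CAST('42' AS integer);  -- SQL-standard syntax
SELECT '42'::integer;          -- historical PostgreSQL syntax
```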
<note> <note>
@ -1794,7 +1794,7 @@ SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name)
<para> <para>
An array constructor is an expression that builds an An array constructor is an expression that builds an
array value from values for its member elements. A simple array array using values for its member elements. A simple array
constructor constructor
consists of the key word <literal>ARRAY</literal>, a left square bracket consists of the key word <literal>ARRAY</literal>, a left square bracket
<literal>[</>, a list of expressions (separated by commas) for the <literal>[</>, a list of expressions (separated by commas) for the
@ -1925,8 +1925,8 @@ SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
</indexterm> </indexterm>
<para> <para>
A row constructor is an expression that builds a row value (also A row constructor is an expression that builds a row (also
called a composite value) from values called a composite value) using values
for its member fields. A row constructor consists of the key word for its member fields. A row constructor consists of the key word
<literal>ROW</literal>, a left parenthesis, zero or more <literal>ROW</literal>, a left parenthesis, zero or more
expressions (separated by commas) for the row field values, and finally expressions (separated by commas) for the row field values, and finally

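A sketch of the two-argument form described above, assuming a table <literal>pgweb</literal> with columns <literal>config_name regconfig</literal> and <literal>body text</literal>; the query's <function>to_tsvector</function> call must match the indexed expression for the index to be used:

```sql
CREATE INDEX pgweb_idx ON pgweb
    USING gin (to_tsvector(config_name, body));

SELECT title FROM pgweb
WHERE to_tsvector(config_name, body) @@ to_tsquery('friend');
```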

@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/textsearch.sgml,v 1.50 2009/04/19 20:36:06 tgl Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/textsearch.sgml,v 1.51 2009/04/27 16:27:36 momjian Exp $ -->
<chapter id="textsearch"> <chapter id="textsearch">
<title id="textsearch-title">Full Text Search</title> <title id="textsearch-title">Full Text Search</title>
@ -74,7 +74,7 @@
<listitem> <listitem>
<para> <para>
<emphasis>Parsing documents into <firstterm>tokens</></emphasis>. It is <emphasis>Parsing documents into <firstterm>tokens</></emphasis>. It is
useful to identify various classes of tokens, e.g. numbers, words, useful to identify various classes of tokens, e.g., numbers, words,
complex words, email addresses, so that they can be processed complex words, email addresses, so that they can be processed
differently. In principle token classes depend on the specific differently. In principle token classes depend on the specific
application, but for most purposes it is adequate to use a predefined application, but for most purposes it is adequate to use a predefined
@ -323,7 +323,7 @@ text @@ text
The above are all simple text search examples. As mentioned before, full The above are all simple text search examples. As mentioned before, full
text search functionality includes the ability to do many more things: text search functionality includes the ability to do many more things:
skip indexing certain words (stop words), process synonyms, and use skip indexing certain words (stop words), process synonyms, and use
sophisticated parsing, e.g. parse based on more than just white space. sophisticated parsing, e.g., parse based on more than just white space.
This functionality is controlled by <firstterm>text search This functionality is controlled by <firstterm>text search
configurations</>. <productname>PostgreSQL</> comes with predefined configurations</>. <productname>PostgreSQL</> comes with predefined
configurations for many languages, and you can easily create your own configurations for many languages, and you can easily create your own
@ -389,7 +389,7 @@ text @@ text
<para> <para>
Text search parsers and templates are built from low-level C functions; Text search parsers and templates are built from low-level C functions;
therefore it requires C programming ability to develop new ones, and therefore C programming ability is required to develop new ones, and
superuser privileges to install one into a database. (There are examples superuser privileges to install one into a database. (There are examples
of add-on parsers and templates in the <filename>contrib/</> area of the of add-on parsers and templates in the <filename>contrib/</> area of the
<productname>PostgreSQL</> distribution.) Since dictionaries and <productname>PostgreSQL</> distribution.) Since dictionaries and
@ -416,7 +416,7 @@ text @@ text
<title>Searching a Table</title> <title>Searching a Table</title>
<para> <para>
It is possible to do full text search with no index. A simple query It is possible to do a full text search without an index. A simple query
to print the <structname>title</> of each row that contains the word to print the <structname>title</> of each row that contains the word
<literal>friend</> in its <structfield>body</> field is: <literal>friend</> in its <structfield>body</> field is:
@ -455,7 +455,8 @@ WHERE to_tsvector(body) @@ to_tsquery('friend');
SELECT title SELECT title
FROM pgweb FROM pgweb
WHERE to_tsvector(title || ' ' || body) @@ to_tsquery('create &amp; table') WHERE to_tsvector(title || ' ' || body) @@ to_tsquery('create &amp; table')
ORDER BY last_mod_date DESC LIMIT 10; ORDER BY last_mod_date DESC
LIMIT 10;
</programlisting> </programlisting>
For clarity we omitted the <function>coalesce</function> function calls For clarity we omitted the <function>coalesce</function> function calls
@ -518,7 +519,7 @@ CREATE INDEX pgweb_idx ON pgweb USING gin(to_tsvector(config_name, body));
recording which configuration was used for each index entry. This recording which configuration was used for each index entry. This
would be useful, for example, if the document collection contained would be useful, for example, if the document collection contained
documents in different languages. Again, documents in different languages. Again,
queries that are to use the index must be phrased to match, e.g. queries that wish to use the index must be phrased to match, e.g.,
<literal>WHERE to_tsvector(config_name, body) @@ 'a &amp; b'</>. <literal>WHERE to_tsvector(config_name, body) @@ 'a &amp; b'</>.
</para> </para>
@ -555,7 +556,8 @@ CREATE INDEX textsearch_idx ON pgweb USING gin(textsearchable_index_col);
SELECT title SELECT title
FROM pgweb FROM pgweb
WHERE textsearchable_index_col @@ to_tsquery('create &amp; table') WHERE textsearchable_index_col @@ to_tsquery('create &amp; table')
ORDER BY last_mod_date DESC LIMIT 10; ORDER BY last_mod_date DESC
LIMIT 10;
</programlisting> </programlisting>
</para> </para>
@ -840,7 +842,7 @@ SELECT plainto_tsquery('english', 'The Fat &amp; Rats:C');
document, and how important is the part of the document where they occur. document, and how important is the part of the document where they occur.
However, the concept of relevancy is vague and very application-specific. However, the concept of relevancy is vague and very application-specific.
Different applications might require additional information for ranking, Different applications might require additional information for ranking,
e.g. document modification time. The built-in ranking functions are only e.g., document modification time. The built-in ranking functions are only
examples. You can write your own ranking functions and/or combine their examples. You can write your own ranking functions and/or combine their
results with additional factors to fit your specific needs. results with additional factors to fit your specific needs.
</para> </para>
@ -877,7 +879,8 @@ SELECT plainto_tsquery('english', 'The Fat &amp; Rats:C');
<term> <term>
<synopsis> <synopsis>
ts_rank_cd(<optional> <replaceable class="PARAMETER">weights</replaceable> <type>float4[]</>, </optional> <replaceable class="PARAMETER">vector</replaceable> <type>tsvector</>, <replaceable class="PARAMETER">query</replaceable> <type>tsquery</> <optional>, <replaceable class="PARAMETER">normalization</replaceable> <type>integer</> </optional>) returns <type>float4</> ts_rank_cd(<optional> <replaceable class="PARAMETER">weights</replaceable> <type>float4[]</>, </optional> <replaceable class="PARAMETER">vector</replaceable> <type>tsvector</>,
<replaceable class="PARAMETER">query</replaceable> <type>tsquery</> <optional>, <replaceable class="PARAMETER">normalization</replaceable> <type>integer</> </optional>) returns <type>float4</>
</synopsis> </synopsis>
</term> </term>
@ -921,13 +924,13 @@ SELECT plainto_tsquery('english', 'The Fat &amp; Rats:C');
</programlisting> </programlisting>
Typically weights are used to mark words from special areas of the Typically weights are used to mark words from special areas of the
document, like the title or an initial abstract, so that they can be document, like the title or an initial abstract, so they can be
treated as more or less important than words in the document body. treated with more or less importance than words in the document body.
</para> </para>
<para> <para>
Since a longer document has a greater chance of containing a query term Since a longer document has a greater chance of containing a query term
it is reasonable to take into account document size, e.g. a hundred-word it is reasonable to take into account document size, e.g., a hundred-word
document with five instances of a search word is probably more relevant document with five instances of a search word is probably more relevant
than a thousand-word document with five instances. Both ranking functions than a thousand-word document with five instances. Both ranking functions
take an integer <replaceable>normalization</replaceable> option that take an integer <replaceable>normalization</replaceable> option that
@ -996,7 +999,8 @@ SELECT plainto_tsquery('english', 'The Fat &amp; Rats:C');
SELECT title, ts_rank_cd(textsearch, query) AS rank SELECT title, ts_rank_cd(textsearch, query) AS rank
FROM apod, to_tsquery('neutrino|(dark &amp; matter)') query FROM apod, to_tsquery('neutrino|(dark &amp; matter)') query
WHERE query @@ textsearch WHERE query @@ textsearch
ORDER BY rank DESC LIMIT 10; ORDER BY rank DESC
LIMIT 10;
title | rank title | rank
-----------------------------------------------+---------- -----------------------------------------------+----------
Neutrinos in the Sun | 3.1 Neutrinos in the Sun | 3.1
@ -1017,7 +1021,8 @@ ORDER BY rank DESC LIMIT 10;
SELECT title, ts_rank_cd(textsearch, query, 32 /* rank/(rank+1) */ ) AS rank SELECT title, ts_rank_cd(textsearch, query, 32 /* rank/(rank+1) */ ) AS rank
FROM apod, to_tsquery('neutrino|(dark &amp; matter)') query FROM apod, to_tsquery('neutrino|(dark &amp; matter)') query
WHERE query @@ textsearch WHERE query @@ textsearch
ORDER BY rank DESC LIMIT 10; ORDER BY rank DESC
LIMIT 10;
title | rank title | rank
-----------------------------------------------+------------------- -----------------------------------------------+-------------------
Neutrinos in the Sun | 0.756097569485493 Neutrinos in the Sun | 0.756097569485493
@ -1037,7 +1042,7 @@ ORDER BY rank DESC LIMIT 10;
Ranking can be expensive since it requires consulting the Ranking can be expensive since it requires consulting the
<type>tsvector</type> of each matching document, which can be I/O bound and <type>tsvector</type> of each matching document, which can be I/O bound and
therefore slow. Unfortunately, it is almost impossible to avoid since therefore slow. Unfortunately, it is almost impossible to avoid since
practical queries often result in large numbers of matches. practical queries often result in a large number of matches.
</para> </para>
</sect2> </sect2>
@ -1063,7 +1068,7 @@ ORDER BY rank DESC LIMIT 10;
<para> <para>
<function>ts_headline</function> accepts a document along <function>ts_headline</function> accepts a document along
with a query, and returns an excerpt from with a query, and returns an excerpt of
the document in which terms from the query are highlighted. The the document in which terms from the query are highlighted. The
configuration to be used to parse the document can be specified by configuration to be used to parse the document can be specified by
<replaceable>config</replaceable>; if <replaceable>config</replaceable> <replaceable>config</replaceable>; if <replaceable>config</replaceable>
@ -1080,8 +1085,8 @@ ORDER BY rank DESC LIMIT 10;
<itemizedlist spacing="compact" mark="bullet"> <itemizedlist spacing="compact" mark="bullet">
<listitem> <listitem>
<para> <para>
<literal>StartSel</>, <literal>StopSel</literal>: the strings with which <literal>StartSel</>, <literal>StopSel</literal>: the strings to delimit
query words appearing in the document should be delimited to distinguish query words appearing in the document, to distinguish
them from other excerpted words. You must double-quote these strings them from other excerpted words. You must double-quote these strings
if they contain spaces or commas. if they contain spaces or commas.
</para> </para>
@ -1183,7 +1188,8 @@ SELECT id, ts_headline(body, q), rank
FROM (SELECT id, body, q, ts_rank_cd(ti, q) AS rank FROM (SELECT id, body, q, ts_rank_cd(ti, q) AS rank
FROM apod, to_tsquery('stars') q FROM apod, to_tsquery('stars') q
WHERE ti @@ q WHERE ti @@ q
ORDER BY rank DESC LIMIT 10) AS foo; ORDER BY rank DESC
LIMIT 10) AS foo;
</programlisting> </programlisting>
</para> </para>
@ -1267,7 +1273,7 @@ FROM (SELECT id, body, q, ts_rank_cd(ti, q) AS rank
<listitem> <listitem>
<para> <para>
This function returns a copy of the input vector in which every <function>setweight</> returns a copy of the input vector in which every
position has been labeled with the given <replaceable>weight</>, either position has been labeled with the given <replaceable>weight</>, either
<literal>A</literal>, <literal>B</literal>, <literal>C</literal>, or <literal>A</literal>, <literal>B</literal>, <literal>C</literal>, or
<literal>D</literal>. (<literal>D</literal> is the default for new <literal>D</literal>. (<literal>D</literal> is the default for new
@ -1467,7 +1473,7 @@ SELECT querytree(to_tsquery('!defined'));
<para> <para>
The <function>ts_rewrite</function> family of functions search a The <function>ts_rewrite</function> family of functions search a
given <type>tsquery</> for occurrences of a target given <type>tsquery</> for occurrences of a target
subquery, and replace each occurrence with another subquery, and replace each occurrence with a
substitute subquery. In essence this operation is a substitute subquery. In essence this operation is a
<type>tsquery</>-specific version of substring replacement. <type>tsquery</>-specific version of substring replacement.
A target and substitute combination can be A target and substitute combination can be
@ -1567,7 +1573,9 @@ SELECT ts_rewrite(to_tsquery('supernovae &amp; crab'), 'SELECT * FROM aliases');
We can change the rewriting rules just by updating the table: We can change the rewriting rules just by updating the table:
<programlisting> <programlisting>
UPDATE aliases SET s = to_tsquery('supernovae|sn &amp; !nebulae') WHERE t = to_tsquery('supernovae'); UPDATE aliases
SET s = to_tsquery('supernovae|sn &amp; !nebulae')
WHERE t = to_tsquery('supernovae');
SELECT ts_rewrite(to_tsquery('supernovae &amp; crab'), 'SELECT * FROM aliases'); SELECT ts_rewrite(to_tsquery('supernovae &amp; crab'), 'SELECT * FROM aliases');
ts_rewrite ts_rewrite
@ -1578,7 +1586,7 @@ SELECT ts_rewrite(to_tsquery('supernovae &amp; crab'), 'SELECT * FROM aliases');
<para> <para>
Rewriting can be slow when there are many rewriting rules, since it Rewriting can be slow when there are many rewriting rules, since it
checks every rule for a possible hit. To filter out obvious non-candidate checks every rule for a possible match. To filter out obvious non-candidate
rules we can use the containment operators for the <type>tsquery</type> rules we can use the containment operators for the <type>tsquery</type>
type. In the example below, we select only those rules which might match type. In the example below, we select only those rules which might match
the original query: the original query:
@ -1670,9 +1678,9 @@ SELECT title, body FROM messages WHERE tsv @@ to_tsquery('title &amp; body');
</para> </para>
<para> <para>
A limitation of the built-in triggers is that they treat all the A limitation of built-in triggers is that they treat all the
input columns alike. To process columns differently &mdash; for input columns alike. To process columns differently &mdash; for
example, to weight title differently from body &mdash; it is necessary example, to weigh title differently from body &mdash; it is necessary
to write a custom trigger. Here is an example using to write a custom trigger. Here is an example using
<application>PL/pgSQL</application> as the trigger language: <application>PL/pgSQL</application> as the trigger language:
@ -1714,11 +1722,13 @@ ON messages FOR EACH ROW EXECUTE PROCEDURE messages_trigger();
</para> </para>
<synopsis> <synopsis>
ts_stat(<replaceable class="PARAMETER">sqlquery</replaceable> <type>text</>, <optional> <replaceable class="PARAMETER">weights</replaceable> <type>text</>, </optional> OUT <replaceable class="PARAMETER">word</replaceable> <type>text</>, OUT <replaceable class="PARAMETER">ndoc</replaceable> <type>integer</>, OUT <replaceable class="PARAMETER">nentry</replaceable> <type>integer</>) returns <type>setof record</> ts_stat(<replaceable class="PARAMETER">sqlquery</replaceable> <type>text</>, <optional> <replaceable class="PARAMETER">weights</replaceable> <type>text</>,
</optional> OUT <replaceable class="PARAMETER">word</replaceable> <type>text</>, OUT <replaceable class="PARAMETER">ndoc</replaceable> <type>integer</>,
OUT <replaceable class="PARAMETER">nentry</replaceable> <type>integer</>) returns <type>setof record</>
</synopsis> </synopsis>
<para> <para>
<replaceable>sqlquery</replaceable> is a text value containing a SQL <replaceable>sqlquery</replaceable> is a text value containing an SQL
query which must return a single <type>tsvector</type> column. query which must return a single <type>tsvector</type> column.
<function>ts_stat</> executes the query and returns statistics about <function>ts_stat</> executes the query and returns statistics about
each distinct lexeme (word) contained in the <type>tsvector</type> each distinct lexeme (word) contained in the <type>tsvector</type>
@ -1930,7 +1940,7 @@ LIMIT 10;
only the basic ASCII letters are reported as a separate token type, only the basic ASCII letters are reported as a separate token type,
since it is sometimes useful to distinguish them. In most European since it is sometimes useful to distinguish them. In most European
languages, token types <literal>word</> and <literal>asciiword</> languages, token types <literal>word</> and <literal>asciiword</>
should always be treated alike. should be treated alike.
</para> </para>
</note> </note>
@ -2077,7 +2087,7 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
by the parser, each dictionary in the list is consulted in turn, by the parser, each dictionary in the list is consulted in turn,
until some dictionary recognizes it as a known word. If it is identified until some dictionary recognizes it as a known word. If it is identified
 as a stop word, or if no dictionary recognizes the token, it will be
-discarded and not indexed or searched for.
+discarded and not indexed or searched.
 The general rule for configuring a list of dictionaries
 is to place first the most narrow, most specific dictionary, then the more
 general dictionaries, finishing with a very general dictionary, like
@@ -2268,7 +2278,8 @@ CREATE TEXT SEARCH DICTIONARY my_synonym (
 );
 ALTER TEXT SEARCH CONFIGURATION english
-    ALTER MAPPING FOR asciiword WITH my_synonym, english_stem;
+    ALTER MAPPING FOR asciiword
+    WITH my_synonym, english_stem;
 SELECT * FROM ts_debug('english', 'Paris');
  alias | description | token | dictionaries | dictionary | lexemes
@@ -2428,7 +2439,8 @@ CREATE TEXT SEARCH DICTIONARY thesaurus_simple (
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION russian
-    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart WITH thesaurus_simple;
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
+    WITH thesaurus_simple;
 </programlisting>
 </para>
@@ -2457,7 +2469,8 @@ CREATE TEXT SEARCH DICTIONARY thesaurus_astro (
 );
 ALTER TEXT SEARCH CONFIGURATION russian
-    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart WITH thesaurus_astro, english_stem;
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
+    WITH thesaurus_astro, english_stem;
 </programlisting>
 Now we can see how it works.
@@ -2520,7 +2533,7 @@ SELECT plainto_tsquery('supernova star');
 <firstterm>morphological dictionaries</>, which can normalize many
 different linguistic forms of a word into the same lexeme. For example,
 an English <application>Ispell</> dictionary can match all declensions and
-conjugations of the search term <literal>bank</literal>, e.g.
+conjugations of the search term <literal>bank</literal>, e.g.,
 <literal>banking</>, <literal>banked</>, <literal>banks</>,
 <literal>banks'</>, and <literal>bank's</>.
 </para>
@@ -2567,9 +2580,8 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 </para>
 <para>
-Ispell dictionaries support splitting compound words.
-This is a nice feature and
-<productname>PostgreSQL</productname> supports it.
+Ispell dictionaries support splitting compound words;
+a useful feature.
 Notice that the affix file should specify a special flag using the
 <literal>compoundwords controlled</literal> statement that marks dictionary
 words that can participate in compound formation:
@@ -2603,8 +2615,8 @@ SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk');
 <title><application>Snowball</> Dictionary</title>
 <para>
-The <application>Snowball</> dictionary template is based on the project
-of Martin Porter, inventor of the popular Porter's stemming algorithm
+The <application>Snowball</> dictionary template is based on a project
+by Martin Porter, inventor of the popular Porter's stemming algorithm
 for the English language. Snowball now provides stemming algorithms for
 many languages (see the <ulink url="http://snowball.tartarus.org">Snowball
 site</ulink> for more information). Each algorithm understands how to
@@ -2668,7 +2680,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem (
 <para>
 As an example, we will create a configuration
-<literal>pg</literal>, starting from a duplicate of the built-in
+<literal>pg</literal> by duplicating the built-in
 <literal>english</> configuration.
 <programlisting>
@@ -2767,7 +2779,7 @@ SHOW default_text_search_config;
 <para>
 The behavior of a custom text search configuration can easily become
-complicated enough to be confusing or undesirable. The functions described
+confusing. The functions described
 in this section are useful for testing text search objects. You can
 test a complete configuration, or test parsers and dictionaries separately.
 </para>
@@ -2938,7 +2950,7 @@ SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
 </para>
 <para>
-You can reduce the volume of output by explicitly specifying which columns
+You can reduce the width of the output by explicitly specifying which columns
 you want to see:
 <programlisting>
@@ -2968,8 +2980,10 @@ FROM ts_debug('public.english','The Brightest supernovaes');
 </indexterm>
 <synopsis>
-ts_parse(<replaceable class="PARAMETER">parser_name</replaceable> <type>text</>, <replaceable class="PARAMETER">document</replaceable> <type>text</>, OUT <replaceable class="PARAMETER">tokid</> <type>integer</>, OUT <replaceable class="PARAMETER">token</> <type>text</>) returns <type>setof record</>
-ts_parse(<replaceable class="PARAMETER">parser_oid</replaceable> <type>oid</>, <replaceable class="PARAMETER">document</replaceable> <type>text</>, OUT <replaceable class="PARAMETER">tokid</> <type>integer</>, OUT <replaceable class="PARAMETER">token</> <type>text</>) returns <type>setof record</>
+ts_parse(<replaceable class="PARAMETER">parser_name</replaceable> <type>text</>, <replaceable class="PARAMETER">document</replaceable> <type>text</>,
+         OUT <replaceable class="PARAMETER">tokid</> <type>integer</>, OUT <replaceable class="PARAMETER">token</> <type>text</>) returns <type>setof record</>
+ts_parse(<replaceable class="PARAMETER">parser_oid</replaceable> <type>oid</>, <replaceable class="PARAMETER">document</replaceable> <type>text</>,
+         OUT <replaceable class="PARAMETER">tokid</> <type>integer</>, OUT <replaceable class="PARAMETER">token</> <type>text</>) returns <type>setof record</>
 </synopsis>
 <para>
@@ -2997,8 +3011,10 @@ SELECT * FROM ts_parse('default', '123 - a number');
 </indexterm>
 <synopsis>
-ts_token_type(<replaceable class="PARAMETER">parser_name</> <type>text</>, OUT <replaceable class="PARAMETER">tokid</> <type>integer</>, OUT <replaceable class="PARAMETER">alias</> <type>text</>, OUT <replaceable class="PARAMETER">description</> <type>text</>) returns <type>setof record</>
-ts_token_type(<replaceable class="PARAMETER">parser_oid</> <type>oid</>, OUT <replaceable class="PARAMETER">tokid</> <type>integer</>, OUT <replaceable class="PARAMETER">alias</> <type>text</>, OUT <replaceable class="PARAMETER">description</> <type>text</>) returns <type>setof record</>
+ts_token_type(<replaceable class="PARAMETER">parser_name</> <type>text</>, OUT <replaceable class="PARAMETER">tokid</> <type>integer</>,
+              OUT <replaceable class="PARAMETER">alias</> <type>text</>, OUT <replaceable class="PARAMETER">description</> <type>text</>) returns <type>setof record</>
+ts_token_type(<replaceable class="PARAMETER">parser_oid</> <type>oid</>, OUT <replaceable class="PARAMETER">tokid</> <type>integer</>,
+              OUT <replaceable class="PARAMETER">alias</> <type>text</>, OUT <replaceable class="PARAMETER">description</> <type>text</>) returns <type>setof record</>
 </synopsis>
 <para>
@@ -3121,11 +3137,11 @@ SELECT plainto_tsquery('supernovae stars');
 </indexterm>
 <para>
-There are two kinds of indexes that can be used to speed up full text
+There are two kinds of indexes which can be used to speed up full text
 searches.
 Note that indexes are not mandatory for full text searching, but in
-cases where a column is searched on a regular basis, an index will
-usually be desirable.
+cases where a column is searched on a regular basis, an index is
+usually desirable.
 <variablelist>
@@ -3179,7 +3195,7 @@ SELECT plainto_tsquery('supernovae stars');
 <para>
 There are substantial performance differences between the two index types,
-so it is important to understand which to use.
+so it is important to understand their characteristics.
 </para>
 <para>
@@ -3188,7 +3204,7 @@ SELECT plainto_tsquery('supernovae stars');
 to check the actual table row to eliminate such false matches.
 (<productname>PostgreSQL</productname> does this automatically when needed.)
 GiST indexes are lossy because each document is represented in the
-index by a fixed-length signature. The signature is generated by hashing
+index using a fixed-length signature. The signature is generated by hashing
 each word into a random bit in an n-bit string, with all these bits OR-ed
 together to produce an n-bit document signature. When two words hash to
 the same bit position there will be a false match. If all words in
@@ -3197,7 +3213,7 @@ SELECT plainto_tsquery('supernovae stars');
 </para>
 <para>
-Lossiness causes performance degradation due to useless fetches of table
+Lossiness causes performance degradation due to unnecessary fetches of table
 records that turn out to be false matches. Since random access to table
 records is slow, this limits the usefulness of GiST indexes. The
 likelihood of false matches depends on several factors, in particular the
@@ -3284,7 +3300,7 @@ SELECT plainto_tsquery('supernovae stars');
 </para>
 <para>
-The optional parameter <literal>PATTERN</literal> should be the name of
+The optional parameter <literal>PATTERN</literal> can be the name of
 a text search object, optionally schema-qualified. If
 <literal>PATTERN</literal> is omitted then information about all
 visible objects will be displayed. <literal>PATTERN</literal> can be a
@@ -3565,7 +3581,7 @@ Parser: "pg_catalog.default"
 Text search configuration setup is completely different now.
 Instead of manually inserting rows into configuration tables,
 search is configured through the specialized SQL commands shown
-earlier in this chapter. There is not currently any automated
+earlier in this chapter. There is no automated
 support for converting an existing custom configuration for 8.3;
 you're on your own here.
 </para>
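The lossy-signature scheme described in the GiST hunks above (hash each word to one bit, OR the bits together, recheck the heap row on a hit) can be sketched in a few lines of Python. This is a toy illustration only; the hash function, signature width, and names here are stand-ins, not PostgreSQL's actual implementation.

```python
import hashlib

N_BITS = 64  # toy signature width; real index signatures are larger


def word_bit(word):
    # Map a word to one bit position via a hash (stand-in for the real hash).
    digest = hashlib.sha256(word.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_BITS


def signature(words):
    # OR together one bit per word to form the n-bit document signature.
    sig = 0
    for w in words:
        sig |= 1 << word_bit(w)
    return sig


def might_contain(doc_sig, query_words):
    # Lossy containment test: every query bit must be set in the document
    # signature.  A True result can be a false match (two words can hash to
    # the same bit), so the actual table row must still be rechecked;
    # a False result is always correct.
    q = signature(query_words)
    return doc_sig & q == q


doc = signature(["supernova", "star", "bright"])
assert might_contain(doc, ["star"])  # an indexed word always matches
# A word that was not indexed usually fails this test, but can collide
# with an indexed word's bit -- exactly the false match the text describes.
```

The recheck step is why lossiness costs performance: every positive from the signature test triggers a (possibly wasted) heap fetch.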
--- doc/src/sgml/typeconv.sgml
+++ doc/src/sgml/typeconv.sgml
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/typeconv.sgml,v 1.58 2008/12/18 18:20:33 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/typeconv.sgml,v 1.59 2009/04/27 16:27:36 momjian Exp $ -->
 <chapter Id="typeconv">
 <title>Type Conversion</title>
@@ -10,15 +10,15 @@
 <para>
 <acronym>SQL</acronym> statements can, intentionally or not, require
-mixing of different data types in the same expression.
+the mixing of different data types in the same expression.
 <productname>PostgreSQL</productname> has extensive facilities for
 evaluating mixed-type expressions.
 </para>
 <para>
-In many cases a user will not need
+In many cases a user does not need
 to understand the details of the type conversion mechanism.
-However, the implicit conversions done by <productname>PostgreSQL</productname>
+However, implicit conversions done by <productname>PostgreSQL</productname>
 can affect the results of a query. When necessary, these results
 can be tailored by using <emphasis>explicit</emphasis> type conversion.
 </para>
@@ -38,21 +38,21 @@ operators.
 <acronym>SQL</acronym> is a strongly typed language. That is, every data item
 has an associated data type which determines its behavior and allowed usage.
 <productname>PostgreSQL</productname> has an extensible type system that is
-much more general and flexible than other <acronym>SQL</acronym> implementations.
+more general and flexible than other <acronym>SQL</acronym> implementations.
 Hence, most type conversion behavior in <productname>PostgreSQL</productname>
 is governed by general rules rather than by <foreignphrase>ad hoc</>
-heuristics. This allows
-mixed-type expressions to be meaningful even with user-defined types.
+heuristics. This allows the use of mixed-type expressions even with
+user-defined types.
 </para>
 <para>
 The <productname>PostgreSQL</productname> scanner/parser divides lexical
-elements into only five fundamental categories: integers, non-integer numbers,
+elements into five fundamental categories: integers, non-integer numbers,
 strings, identifiers, and key words. Constants of most non-numeric types are
 first classified as strings. The <acronym>SQL</acronym> language definition
 allows specifying type names with strings, and this mechanism can be used in
 <productname>PostgreSQL</productname> to start the parser down the correct
-path. For example, the query
+path. For example, the query:
 <screen>
 SELECT text 'Origin' AS "label", point '(0,0)' AS "value";
@@ -99,7 +99,7 @@ Operators
 <productname>PostgreSQL</productname> allows expressions with
 prefix and postfix unary (one-argument) operators,
 as well as binary (two-argument) operators. Like functions, operators can
-be overloaded, and so the same problem of selecting the right operator
+be overloaded, so the same problem of selecting the right operator
 exists.
 </para>
 </listitem>
@@ -136,13 +136,13 @@ and for the <function>GREATEST</> and <function>LEAST</> functions.
 </para>
 <para>
-The system catalogs store information about which conversions, called
-<firstterm>casts</firstterm>, between data types are valid, and how to
+The system catalogs store information about which conversions, or
+<firstterm>casts</firstterm>, exist between which data types, and how to
 perform those conversions. Additional casts can be added by the user
 with the <xref linkend="sql-createcast" endterm="sql-createcast-title">
 command. (This is usually
 done in conjunction with defining new data types. The set of casts
-between the built-in types has been carefully crafted and is best not
+between built-in types has been carefully crafted and is best not
 altered.)
 </para>
@@ -152,8 +152,8 @@ altered.)
 </indexterm>
 <para>
-An additional heuristic is provided in the parser to allow better guesses
-at proper casting behavior among groups of types that have implicit casts.
+An additional heuristic provided by the parser allows improved determination
+of the proper casting behavior among groups of types that have implicit casts.
 Data types are divided into several basic <firstterm>type
 categories</firstterm>, including <type>boolean</type>, <type>numeric</type>,
 <type>string</type>, <type>bitstring</type>, <type>datetime</type>,
@@ -161,7 +161,7 @@ categories</firstterm>, including <type>boolean</type>, <type>numeric</type>,
 user-defined. (For a list see <xref linkend="catalog-typcategory-table">;
 but note it is also possible to create custom type categories.) Within each
 category there can be one or more <firstterm>preferred types</firstterm>, which
-are preferentially selected when there is ambiguity. With careful selection
+are selected when there is ambiguity. With careful selection
 of preferred types and available implicit casts, it is possible to ensure that
 ambiguous expressions (those with multiple candidate parsing solutions) can be
 resolved in a useful way.
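The category/preferred-type mechanism edited in the hunks above can be modeled with a small Python sketch. The type names and categories below are a simplified hypothetical subset, not the full pg_type catalog, and the real parser applies more rules than this.

```python
# Each type belongs to a category; each category names one preferred type
# that wins when a choice among same-category candidates is ambiguous.
TYPE_CATEGORY = {
    "text": "string", "varchar": "string",
    "int4": "numeric", "int8": "numeric", "numeric": "numeric",
    "bool": "boolean",
}
PREFERRED = {"string": "text", "numeric": "numeric", "boolean": "bool"}


def resolve_ambiguous(candidate_types):
    # If every candidate falls in one category, pick that category's
    # preferred type; otherwise give up, as the parser does when the
    # correct choice cannot be deduced without more clues.
    categories = {TYPE_CATEGORY[t] for t in candidate_types}
    if len(categories) != 1:
        return None
    return PREFERRED[categories.pop()]


assert resolve_ambiguous(["varchar", "text"]) == "text"
assert resolve_ambiguous(["int4", "int8"]) == "numeric"
assert resolve_ambiguous(["text", "int4"]) is None  # mixed categories: fail
```

This is the "careful selection of preferred types" idea in miniature: ambiguity within one category is resolvable, ambiguity across categories is not.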
@@ -179,17 +179,17 @@ Implicit conversions should never have surprising or unpredictable outcomes.
 <listitem>
 <para>
-There should be no extra overhead from the parser or executor
+There should be no extra overhead in the parser or executor
 if a query does not need implicit type conversion.
-That is, if a query is well formulated and the types already match up, then the query should proceed
+That is, if a query is well-formed and the types already match, then the query should execute
 without spending extra time in the parser and without introducing unnecessary implicit conversion
-calls into the query.
+calls in the query.
 </para>
 <para>
 Additionally, if a query usually requires an implicit conversion for a function, and
 if then the user defines a new function with the correct argument types, the parser
-should use this new function and will no longer do the implicit conversion using the old function.
+should use this new function and no longer do implicit conversion using the old function.
 </para>
 </listitem>
 </itemizedlist>
@@ -206,9 +206,8 @@ should use this new function and will no longer do the implicit conversion using
 </indexterm>
 <para>
-The specific operator to be used in an operator invocation is determined
-by following
-the procedure below. Note that this procedure is indirectly affected
+The specific operator invoked is determined by the following
+steps. Note that this procedure is affected
 by the precedence of the involved operators. See <xref
 linkend="sql-precedence"> for more information.
 </para>
@@ -219,9 +218,9 @@ should use this new function and will no longer do the implicit conversion using
 <step performance="required">
 <para>
 Select the operators to be considered from the
-<classname>pg_operator</classname> system catalog. If an unqualified
+<classname>pg_operator</classname> system catalog. If a non-schema-qualified
 operator name was used (the usual case), the operators
-considered are those of the right name and argument count that are
+considered are those with a matching name and argument count that are
 visible in the current search path (see <xref linkend="ddl-schemas-path">).
 If a qualified operator name was given, only operators in the specified
 schema are considered.
@@ -230,8 +229,8 @@ schema are considered.
 <substeps>
 <step performance="optional">
 <para>
-If the search path finds multiple operators of identical argument types,
-only the one appearing earliest in the path is considered. But operators of
+If the search path finds multiple operators with identical argument types,
+only the one appearing earliest in the path is considered. Operators with
 different argument types are considered on an equal footing regardless of
 search path position.
 </para>
@@ -251,7 +250,7 @@ operators considered), use it.
 <para>
 If one argument of a binary operator invocation is of the <type>unknown</type> type,
 then assume it is the same type as the other argument for this check.
-Other cases involving <type>unknown</type> will never find a match at
+Cases involving two <type>unknown</type> types will never find a match at
 this step.
 </para>
 </step>
@@ -276,7 +275,7 @@ candidate remains, use it; else continue to the next step.
 <para>
 Run through all candidates and keep those with the most exact matches
 on input types. (Domains are considered the same as their base type
-for this purpose.) Keep all candidates if none have any exact matches.
+for this purpose.) Keep all candidates if none have exact matches.
 If only one candidate remains, use it; else continue to the next step.
 </para>
 </step>
@@ -296,7 +295,7 @@ categories accepted at those argument positions by the remaining
 candidates. At each position, select the <type>string</type> category
 if any
 candidate accepts that category. (This bias towards string is appropriate
-since an unknown-type literal does look like a string.) Otherwise, if
+since an unknown-type literal looks like a string.) Otherwise, if
 all the remaining candidates accept the same type category, select that
 category; otherwise fail because the correct choice cannot be deduced
 without more clues. Now discard
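The candidate-narrowing steps in the hunks above (discard non-convertible candidates, then keep those with the most exact input-type matches, treating `unknown` as convertible to anything) can be sketched as follows. This is a deliberately simplified model with made-up type names and a tiny hypothetical cast table; the real algorithm lives in PostgreSQL's parser and has more steps.

```python
# A hypothetical implicit-cast table: (from_type, to_type) pairs.
IMPLICIT_CASTS = {("int4", "int8"), ("int4", "numeric")}


def best_candidates(arg_types, candidates):
    """Narrow operator/function candidates for the given argument types.

    Each candidate is a tuple of its declared input types.
    """
    def convertible(sig):
        # 'unknown' literals are assumed convertible to anything here.
        return all(a == "unknown" or a == s or (a, s) in IMPLICIT_CASTS
                   for a, s in zip(arg_types, sig))

    def exact_matches(sig):
        # Count exact input-type matches (unknowns never count as exact).
        return sum(1 for a, s in zip(arg_types, sig)
                   if a != "unknown" and a == s)

    viable = [sig for sig in candidates if convertible(sig)]
    if not viable:
        return []
    top = max(exact_matches(sig) for sig in viable)
    return [sig for sig in viable if exact_matches(sig) == top]


# '||' with (text, unknown): the (text, text) candidate wins; the
# bit-string candidate is discarded because text cannot convert to bit.
cands = [("text", "text"), ("bit", "bit")]
assert best_candidates(("text", "unknown"), cands) == [("text", "text")]
```

If more than one candidate survives these filters, the real procedure goes on to the type-category and preferred-type rules described in the surrounding text.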
@@ -339,7 +338,7 @@ SELECT 40 ! AS "40 factorial";
 </screen>
 So the parser does a type conversion on the operand and the query
-is equivalent to
+is equivalent to:
 <screen>
 SELECT CAST(40 AS bigint) ! AS "40 factorial";
@@ -351,7 +350,7 @@ SELECT CAST(40 AS bigint) ! AS "40 factorial";
 <title>String Concatenation Operator Type Resolution</title>
 <para>
-A string-like syntax is used for working with string types as well as for
+A string-like syntax is used for working with string types and for
 working with complex extension types.
 Strings with unspecified type are matched with likely operator candidates.
 </para>
@@ -371,7 +370,7 @@ SELECT text 'abc' || 'def' AS "text and unknown";
 <para>
 In this case the parser looks to see if there is an operator taking <type>text</type>
 for both arguments. Since there is, it assumes that the second argument should
-be interpreted as of type <type>text</type>.
+be interpreted as type <type>text</type>.
 </para>
 <para>
@@ -391,9 +390,9 @@ In this case there is no initial hint for which type to use, since no types
 are specified in the query. So, the parser looks for all candidate operators
 and finds that there are candidates accepting both string-category and
 bit-string-category inputs. Since string category is preferred when available,
-that category is selected, and then the
+that category is selected, and the
 preferred type for strings, <type>text</type>, is used as the specific
-type to resolve the unknown literals to.
+type to resolve the unknown literals.
 </para>
 </example>
@@ -460,7 +459,7 @@ SELECT ~ CAST('20' AS int8) AS "negation";
 </indexterm>
 <para>
-The specific function to be used in a function invocation is determined
+The specific function to be invoked is determined
 according to the following steps.
 </para>
@@ -470,9 +469,9 @@ SELECT ~ CAST('20' AS int8) AS "negation";
 <step performance="required">
 <para>
 Select the functions to be considered from the
-<classname>pg_proc</classname> system catalog. If an unqualified
+<classname>pg_proc</classname> system catalog. If a non-schema-qualified
 function name was used, the functions
-considered are those of the right name and argument count that are
+considered are those with a matching name and argument count that are
 visible in the current search path (see <xref linkend="ddl-schemas-path">).
 If a qualified function name was given, only functions in the specified
 schema are considered.
@@ -482,7 +481,7 @@ schema are considered.
 <step performance="optional">
 <para>
 If the search path finds multiple functions of identical argument types,
-only the one appearing earliest in the path is considered. But functions of
+only the one appearing earliest in the path is considered. Functions of
 different argument types are considered on an equal footing regardless of
 search path position.
 </para>
@@ -527,7 +526,7 @@ this step.)
 <step performance="required">
 <para>
-If no exact match is found, see whether the function call appears
+If no exact match is found, see if the function call appears
 to be a special type conversion request. This happens if the function call
 has just one argument and the function name is the same as the (internal)
 name of some data type. Furthermore, the function argument must be either
@@ -555,7 +554,7 @@ Look for the best match.
 <substeps>
 <step performance="required">
 <para>
-Discard candidate functions for which the input types do not match
+Discard candidate functions in which the input types do not match
 and cannot be converted (using an implicit conversion) to match.
 <type>unknown</type> literals are
 assumed to be convertible to anything for this purpose. If only one
@@ -566,7 +565,7 @@ candidate remains, use it; else continue to the next step.
 <para>
 Run through all candidates and keep those with the most exact matches
 on input types. (Domains are considered the same as their base type
-for this purpose.) Keep all candidates if none have any exact matches.
+for this purpose.) Keep all candidates if none have exact matches.
 If only one candidate remains, use it; else continue to the next step.
 </para>
 </step>
@@ -586,7 +585,7 @@ accepted
 at those argument positions by the remaining candidates. At each position,
 select the <type>string</type> category if any candidate accepts that category.
 (This bias towards string
-is appropriate since an unknown-type literal does look like a string.)
+is appropriate since an unknown-type literal looks like a string.)
 Otherwise, if all the remaining candidates accept the same type category,
 select that category; otherwise fail because
 the correct choice cannot be deduced without more clues.
@ -616,9 +615,9 @@ Some examples follow.
<title>Rounding Function Argument Type Resolution</title> <title>Rounding Function Argument Type Resolution</title>
<para> <para>
There is only one <function>round</function> function with two There is only one <function>round</function> function which takes two
arguments. (The first is <type>numeric</type>, the second is arguments; it takes a first argument of <type>numeric</type> and
<type>integer</type>.) So the following query automatically converts a second argument of <type>integer</type>. So the following query automatically converts
the first argument of type <type>integer</type> to the first argument of type <type>integer</type> to
<type>numeric</type>: <type>numeric</type>:
@ -631,7 +630,7 @@ SELECT round(4, 4);
(1 row) (1 row)
</screen> </screen>
That query is actually transformed by the parser to That query is actually transformed by the parser to:
<screen> <screen>
SELECT round(CAST (4 AS numeric), 4); SELECT round(CAST (4 AS numeric), 4);
</screen> </screen>
@ -640,7 +639,7 @@ SELECT round(CAST (4 AS numeric), 4);
<para> <para>
Since numeric constants with decimal points are initially assigned the Since numeric constants with decimal points are initially assigned the
type <type>numeric</type>, the following query will require no type type <type>numeric</type>, the following query will require no type
conversion and might therefore be slightly more efficient: conversion and therefore might be slightly more efficient:
<screen> <screen>
SELECT round(4.0, 4); SELECT round(4.0, 4);
</screen> </screen>
@ -679,7 +678,7 @@ SELECT substr(varchar '1234', 3);
(1 row) (1 row)
</screen> </screen>
This is transformed by the parser to effectively become This is transformed by the parser to effectively become:
<screen> <screen>
SELECT substr(CAST (varchar '1234' AS text), 3); SELECT substr(CAST (varchar '1234' AS text), 3);
</screen> </screen>
@ -863,7 +862,7 @@ their underlying base types.
<para> <para>
If all inputs are of type <type>unknown</type>, resolve as type If all inputs are of type <type>unknown</type>, resolve as type
<type>text</type> (the preferred type of the string category). <type>text</type> (the preferred type of the string category).
Otherwise, the <type>unknown</type> inputs will be ignored. Otherwise, <type>unknown</type> inputs are ignored.
</para> </para>
</step> </step>
@ -914,7 +913,7 @@ SELECT text 'a' AS "text" UNION SELECT 'b';
b b
(2 rows) (2 rows)
</screen> </screen>
Here, the unknown-type literal <literal>'b'</literal> will be resolved as type <type>text</type>. Here, the unknown-type literal <literal>'b'</literal> will be resolved to type <type>text</type>.
</para> </para>
</example> </example>
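The all-unknown case mentioned in the rule above ("resolve as type `text`") can be illustrated with a companion sketch, assuming no explicitly typed input appears in either branch:

```sql
-- Every input is an unknown-type literal, so the UNION's result column
-- is resolved as text, the preferred type of the string category.
SELECT 'a' AS col UNION SELECT 'b';
```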


@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/xfunc.sgml,v 1.136 2008/12/18 18:20:33 tgl Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/xfunc.sgml,v 1.137 2009/04/27 16:27:36 momjian Exp $ -->
<sect1 id="xfunc"> <sect1 id="xfunc">
<title>User-Defined Functions</title> <title>User-Defined Functions</title>
@ -2866,7 +2866,7 @@ typedef struct
/* /*
* OPTIONAL pointer to struct containing tuple description * OPTIONAL pointer to struct containing tuple description
* *
* tuple_desc is for use when returning tuples (i.e. composite data types) * tuple_desc is for use when returning tuples (i.e., composite data types)
* and is only needed if you are going to build the tuples with * and is only needed if you are going to build the tuples with
* heap_form_tuple() rather than with BuildTupleFromCStrings(). Note that * heap_form_tuple() rather than with BuildTupleFromCStrings(). Note that
* the TupleDesc pointer stored here should usually have been run through * the TupleDesc pointer stored here should usually have been run through


@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/xml2.sgml,v 1.5 2008/05/08 16:49:37 tgl Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/xml2.sgml,v 1.6 2009/04/27 16:27:36 momjian Exp $ -->
<sect1 id="xml2"> <sect1 id="xml2">
<title>xml2</title> <title>xml2</title>
@ -173,7 +173,7 @@
<entry> <entry>
<para> <para>
the name of the <quote>key</> field &mdash; this is just a field to be used as the name of the <quote>key</> field &mdash; this is just a field to be used as
the first column of the output table, i.e. it identifies the record from the first column of the output table, i.e., it identifies the record from
which each output row came (see note below about multiple values) which each output row came (see note below about multiple values)
</para> </para>
</entry> </entry>