diff --git a/doc/src/sgml/arch-pg.sgml b/doc/src/sgml/arch-pg.sgml index af9b6c9ba7..21dbf58685 100644 --- a/doc/src/sgml/arch-pg.sgml +++ b/doc/src/sgml/arch-pg.sgml @@ -36,7 +36,7 @@ databases on a single host. Such a collection of databases is called a cluster (of databases). A frontend application that wishes to access a given database - within a cluster makes calls to an interface library (eg, libpq) + within a cluster makes calls to an interface library (e.g., libpq) that is linked into the application. The library sends user requests over the network to the postmaster @@ -81,7 +81,7 @@ From that point on, the frontend process and the backend You should also be aware that the postmaster and - postgres servers run with the user-id of the PostgreSQL + postgres servers run with the user ID of the PostgreSQL superuser. Note that the PostgreSQL superuser does not have to be any particular user (e.g., a user named diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 1696d61c25..4cffd3cfb7 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -1,4 +1,4 @@ - + Arrays @@ -244,7 +244,7 @@ SELECT * FROM sal_emp WHERE pay_by_quarter **= 10000; A limitation of the present array implementation is that individual elements of an array cannot be SQL NULLs. The entire array can be set to NULL, but you can't have an array with some elements NULL and some - not. Fixing this is on the TODO list. + not. Fixing this is on the to-do list. @@ -272,9 +272,9 @@ INSERT ... VALUES ('{"\\\\","\\""}'); The string-literal processor removes one level of backslashes, so that what arrives at the array-value parser looks like {"\\","\""}. - In turn, the strings fed to the text datatype's input routine + In turn, the strings fed to the text data type's input routine become \ and " respectively. 
(If we were working - with a datatype whose input routine also treated backslashes specially, + with a data type whose input routine also treated backslashes specially, bytea for example, we might need as many as eight backslashes in the query to get one backslash into the stored array element.) diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index 75a29efd25..ebe91eb4ee 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -1,4 +1,4 @@ - + Localization</> @@ -834,7 +834,7 @@ RESET CLIENT_ENCODING; </para> <para> For 7.2, <option>--enable-unicode-conversion</option> is not necessary. - The unicode conversion functionality is automatically enabled + The Unicode conversion functionality is automatically enabled if <option>--enable-multibyte</option> is specified. </para> </sect2> @@ -933,7 +933,7 @@ Mar 23, 1999 * Fix problem with MB and locale Jan 26, 1999 - * Add support for Big5 for fronend encoding + * Add support for Big5 for frontend encoding (you need to create a database with EUC_TW to use Big5) * Add regression test case for EUC_TW (contributed by Jonah Kuo <email>jonahkuo@mail.ttn.com.tw</email>) @@ -959,7 +959,7 @@ Jun 5, 1998 and the frontend * new command SET CLIENT_ENCODING etc. 
added * add support for LATIN1 character set - * enhance 8 bit cleaness + * enhance 8-bit cleanliness April 21, 1998 some enhancements/fixes * character_length(), position(), substring() are now aware of diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 932b5acc0f..0f0b2eeac6 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -1,5 +1,5 @@ <!-- -$Header: /cvsroot/pgsql/doc/src/sgml/client-auth.sgml,v 1.31 2001/12/27 21:37:34 tgl Exp $ +$Header: /cvsroot/pgsql/doc/src/sgml/client-auth.sgml,v 1.32 2002/01/20 22:19:55 petere Exp $ --> <chapter id="client-authentication"> @@ -457,10 +457,10 @@ local all md5 admins <para> <literal>trust</> authentication is appropriate and very convenient for local connections on a single-user workstation. It is usually - <emphasis>not</> appropriate by itself on a multi-user machine. - However, you may be able to use <literal>trust</> even on a multi-user + <emphasis>not</> appropriate by itself on a multiuser machine. + However, you may be able to use <literal>trust</> even on a multiuser machine, if you restrict access to the postmaster's socket file using - filesystem permissions. To do this, set the parameter + file-system permissions. To do this, set the parameter <varname>unix_socket_permissions</varname> (and possibly <varname>unix_socket_group</varname>) in <filename>postgresql.conf</>, as described in <xref linkend="runtime-config-general">. Or you could @@ -469,7 +469,7 @@ local all md5 admins </para> <para> - Setting filesystem permissions only helps for Unix-socket connections. + Setting file-system permissions only helps for Unix-socket connections. Local TCP connections are not restricted by it; therefore, if you want to use permissions for local security, remove the <literal>host ... 
127.0.0.1 ...</> line from <filename>pg_hba.conf</>, or change it to a diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index f596205a1c..68fdb9cbe8 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -1,5 +1,5 @@ <!-- -$Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.83 2002/01/07 02:29:11 petere Exp $ +$Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.84 2002/01/20 22:19:55 petere Exp $ --> <chapter id="datatype"> @@ -978,7 +978,7 @@ SELECT b, char_length(b) FROM test2; </thead> <tbody> <row> - <entry>bytea</entry> + <entry><type>bytea</type></entry> <entry>4 bytes plus the actual binary string</entry> <entry>Variable (not specifically limited) length binary string</entry> diff --git a/doc/src/sgml/datetime.sgml b/doc/src/sgml/datetime.sgml index 32ddd2365b..2bafee8853 100644 --- a/doc/src/sgml/datetime.sgml +++ b/doc/src/sgml/datetime.sgml @@ -1,5 +1,5 @@ <!-- -$Header: /cvsroot/pgsql/doc/src/sgml/datetime.sgml,v 2.25 2002/01/04 17:02:25 thomas Exp $ +$Header: /cvsroot/pgsql/doc/src/sgml/datetime.sgml,v 2.26 2002/01/20 22:19:55 petere Exp $ Date/time details --> @@ -211,8 +211,8 @@ Date/time details The following table of time zones recognized by <productname>PostgreSQL</productname> is organized by time zone offset from UTC, rather than alphabetically; this is intended - to faciliate - matching local usage with recognized abreviations for cases where + to facilitate + matching local usage with recognized abbreviations for cases where these might differ. <table tocentry="1"> @@ -1078,8 +1078,8 @@ Date/time details <note> <para> SQL92 states that - <quote>Within the definition of a <sgmltag>datetime literal</sgmltag>, - the <sgmltag>datetime value</sgmltag>s are constrained by the + <quote>Within the definition of a <quote>datetime literal</quote>, + the <quote>datetime value</quote>s are constrained by the natural rules for dates and times according to the Gregorian calendar</quote>. 
Dates between 1752-09-03 and 1752-09-13, although eliminated in diff --git a/doc/src/sgml/dfunc.sgml b/doc/src/sgml/dfunc.sgml index 705040fcf8..cc1fb54f96 100644 --- a/doc/src/sgml/dfunc.sgml +++ b/doc/src/sgml/dfunc.sgml @@ -1,5 +1,5 @@ <!-- -$Header: /cvsroot/pgsql/doc/src/sgml/dfunc.sgml,v 1.19 2002/01/09 00:52:37 petere Exp $ +$Header: /cvsroot/pgsql/doc/src/sgml/dfunc.sgml,v 1.20 2002/01/20 22:19:55 petere Exp $ --> <sect2 id="dfunc"> @@ -26,7 +26,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/dfunc.sgml,v 1.19 2002/01/09 00:52:37 peter <para> <indexterm><primary>PIC</></> - Creating shared libraries is generally analoguous to linking + Creating shared libraries is generally analogous to linking executables: first the source files are compiled into object files, then the object files are linked together. The object files need to be created as <firstterm>position-independent code</firstterm> diff --git a/doc/src/sgml/docguide.sgml b/doc/src/sgml/docguide.sgml index b068138bae..4d975d73b5 100644 --- a/doc/src/sgml/docguide.sgml +++ b/doc/src/sgml/docguide.sgml @@ -1,4 +1,4 @@ -<!-- $Header: /cvsroot/pgsql/doc/src/sgml/docguide.sgml,v 1.39 2002/01/08 15:16:16 petere Exp $ --> +<!-- $Header: /cvsroot/pgsql/doc/src/sgml/docguide.sgml,v 1.40 2002/01/20 22:19:55 petere Exp $ --> <appendix id="docguide"> <title>Documentation @@ -515,7 +515,7 @@ CATALOG "docbook-dsssl--1.xx/catalog will need to increase the size of TeX's internal data structures. Details on this can be found in the JadeTeX - installation insructions. + installation instructions. @@ -738,7 +738,7 @@ gmake man styles, in particular the default style. If the document contains REFENTRY sections, one must also replace formatting hints which tie a - preceeding paragraph to the current + preceding paragraph to the current paragraph, and instead tie the current paragraph to the following one. 
A utility, fixrtf is available in @@ -1124,7 +1124,7 @@ Later stylesheets seem to not need this adjustment - thomas 2001-11-29 powerful mode for editing SGML documents. When properly configured, it will allow you to use Emacs to insert tags and check markup - consistancy. You could use it for HTML as + consistency. You could use it for HTML as well. Check the PSGML web site for downloads, installation instructions, and diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index 45c2518c5b..6dc7399198 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -1,5 +1,5 @@ @@ -397,7 +397,7 @@ struct sqlca -240 (ECPG_UNKNOWN_DESCRIPTOR): Descriptor %s not found in line %d. - The descriptor specified was not foundstatement you are trying to use has not been prepared. + The descriptor specified was not found. The statement you are trying to use has not been prepared. @@ -415,7 +415,7 @@ struct sqlca -242 (ECPG_UNKNOWN_DESCRIPTOR_ITEM): Descriptor %s not found in line %d. - The descriptor specified was not foundstatement you are trying to use has not been prepared. + The descriptor specified was not found. The statement you are trying to use has not been prepared. diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 2cf4466d07..354f73643a 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -1,5 +1,5 @@ @@ -3925,7 +3925,7 @@ SELECT TIMESTAMP 'now'; just sequences) are special single-row tables created with CREATE SEQUENCE. A sequence object is usually used to generate unique identifiers for rows of a table. The sequence functions - provide simple, multi-user-safe methods for obtaining successive + provide simple, multiuser-safe methods for obtaining successive sequence values from sequence objects. @@ -4367,7 +4367,7 @@ SELECT NULLIF(value, '(none)') ... 
pg_get_userbyid(userid) name - Get user name given sysid + Get user name given ID @@ -4767,7 +4767,7 @@ OR As usual, NULLs in the expressions or subquery rows are combined per - the normal rules of SQL boolean expressions. Two rows are considered + the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (NULL). @@ -4856,7 +4856,7 @@ AND As usual, NULLs in the expressions or subquery rows are combined per - the normal rules of SQL boolean expressions. Two rows are considered + the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (NULL). @@ -4875,7 +4875,7 @@ AND The right-hand side of this form of ANY is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the - given operator, which must yield a boolean + given operator, which must yield a Boolean result. The result of ANY is TRUE if any true result is obtained. The result is FALSE if no true result is found (including the special @@ -4921,7 +4921,7 @@ AND As usual, NULLs in the expressions or subquery rows are combined per - the normal rules of SQL boolean expressions. Two rows are considered + the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (NULL). @@ -4939,7 +4939,7 @@ AND The right-hand side of this form of ALL is a parenthesized subquery, which must return exactly one column. 
The left-hand expression is evaluated and compared to each row of the subquery result using the - given operator, which must yield a boolean + given operator, which must yield a Boolean result. The result of ALL is TRUE if all rows yield TRUE (including the special case where the subquery returns no rows). @@ -4984,7 +4984,7 @@ AND As usual, NULLs in the expressions or subquery rows are combined per - the normal rules of SQL boolean expressions. Two rows are considered + the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (NULL). @@ -5014,7 +5014,7 @@ AND As usual, NULLs in the expressions or subquery rows are combined per - the normal rules of SQL boolean expressions. Two rows are considered + the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of the row comparison is unknown (NULL). diff --git a/doc/src/sgml/geqo.sgml b/doc/src/sgml/geqo.sgml index 5483c633c6..b1d9a9670a 100644 --- a/doc/src/sgml/geqo.sgml +++ b/doc/src/sgml/geqo.sgml @@ -1,5 +1,5 @@ @@ -93,7 +93,7 @@ Genetic Optimizer determined, randomized search. The set of possible solutions for the optimization problem is considered as a population of individuals. - The degree of adaption of an individual to its environment is specified + The degree of adaptation of an individual to its environment is specified by its fitness. 
@@ -147,7 +147,7 @@ Genetic Optimizer +=========================================+ | INITIALIZE P(t) | +=========================================+ -| evalute FITNESS of P(t) | +| evaluate FITNESS of P(t) | +=========================================+ | while not STOPPING CRITERION do | | +-------------------------------------+ @@ -157,7 +157,7 @@ Genetic Optimizer | +-------------------------------------+ | | P(t+1) := SELECTION{P''(t) + P(t)} | | +-------------------------------------+ -| | evalute FITNESS of P''(t) | +| | evaluate FITNESS of P''(t) | | +-------------------------------------+ | | t := t + 1 | +===+=====================================+ @@ -182,7 +182,7 @@ Genetic Optimizer is encoded by the integer string '4-1-3-2', which means, first join relation '4' and '1', then '3', and - then '2', where 1, 2, 3, 4 are relids within the + then '2', where 1, 2, 3, 4 are relation IDs within the PostgreSQL optimizer. diff --git a/doc/src/sgml/jdbc.sgml b/doc/src/sgml/jdbc.sgml index 01fb72532d..241be88054 100644 --- a/doc/src/sgml/jdbc.sgml +++ b/doc/src/sgml/jdbc.sgml @@ -1,5 +1,5 @@ @@ -592,7 +592,7 @@ st.close(); PostgreSQL provides two distinct ways to store binary data. Binary data can be stored in a table using - PostgreSQL's binary datatype + PostgreSQL's binary data type bytea, or by using the Large Object feature which stores the binary data in a separate table in a special format, and refers to that table by storing a value of type @@ -602,7 +602,7 @@ st.close(); In order to determine which method is appropriate you need to understand the limitations of each method. The - bytea datatype is not well suited for storing very + bytea data type is not well suited for storing very large amounts of binary data. While a column of type bytea can hold upto 1Gig of binary data, it would require a huge amount of memory (RAM) to @@ -619,14 +619,14 @@ st.close(); 7.2 is the first release of the JDBC Driver - that supports the bytea datatype. 
The introduction of + that supports the bytea data type. The introduction of this functionality in 7.2 has introduced a change in behavior as compared to previous releases. In 7.2 the methods getBytes(), setBytes(), getBinaryStream(), and setBinaryStream() operate on - the bytea datatype. In 7.1 these methods operated - on the OID datatype associated with Large Objects. + the bytea data type. In 7.1 these methods operated + on the OID data type associated with Large Objects. It is possible to revert the driver back to the old 7.1 behavior by setting the compatible property on the Connection to a value of @@ -634,7 +634,7 @@ st.close(); - To use the bytea datatype you should simply use + To use the bytea data type you should simply use the getBytes(), setBytes(), getBinaryStream(), or setBinaryStream() methods. @@ -663,7 +663,7 @@ st.close(); JDBC Driver, the getBLOB() and setBLOB() methods may no longer interact with Large Objects and will instead work on - bytea datatypes. So it is recommended that you + bytea data types. So it is recommended that you use the LargeObject API if you intend to use Large Objects. @@ -728,7 +728,7 @@ ps.close(); - Alternativly you could be storing a very large file and want to use + Alternatively you could be storing a very large file and want to use the LargeObject API to store the file: diff --git a/doc/src/sgml/libpgtcl.sgml b/doc/src/sgml/libpgtcl.sgml index 2025dc0ff0..a6ebe9bf5a 100644 --- a/doc/src/sgml/libpgtcl.sgml +++ b/doc/src/sgml/libpgtcl.sgml @@ -1194,8 +1194,8 @@ The oid of the large object created. Usage -mode can be any OR'ing together of INV_READ and INV_WRITE. -The OR delimiter character is |. +mode can be any or'ing together of INV_READ and INV_WRITE. +The or operator is |. 
[pg_lo_creat $conn "INV_READ|INV_WRITE"] diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index 29a6a19635..cd3e3bb8af 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -1,5 +1,5 @@ @@ -175,7 +175,7 @@ Oid lo_import(PGconn *conn, const c int lo_export(PGconn *conn, Oid lobjId, const char *filename) - The lobjId argument specifies the Oid of the large + The lobjId argument specifies the OID of the large object to export and the filename argument specifies the operating system name name of the file. @@ -189,7 +189,7 @@ int lo_export(PGconn *conn, Oid int lo_open(PGconn *conn, Oid lobjId, int mode) - The lobjId argument specifies the Oid of the large + The lobjId argument specifies the OID of the large object to open. The mode bits control whether the object is opened for reading (INV_READ), writing (INV_WRITE), or both. @@ -269,7 +269,7 @@ int lo_close(PGconn *conn, int fd) Oid lo_unlink(PGconn *conn, Oid lobjId) - The lobjId argument specifies the Oid of the large + The lobjId argument specifies the OID of the large object to remove. diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml index 76225bd2f9..68a978b6ff 100644 --- a/doc/src/sgml/maintenance.sgml +++ b/doc/src/sgml/maintenance.sgml @@ -1,5 +1,5 @@ @@ -377,16 +377,16 @@ VACUUM It's a good idea to save the database server's log output somewhere, rather than just routing it to /dev/null. The log output is invaluable when it comes time to diagnose problems. However, the - log output tends to be voluminuous (especially at higher debug levels) + log output tends to be voluminous (especially at higher debug levels) and you won't want to save it indefinitely. You need to rotate the log files so that new log files are started and old ones thrown away every so often. 
- If you simply direct the postmaster's stderr into a file, the only way + If you simply direct the postmaster's stderr into a file, the only way to truncate the log file is to stop and restart the postmaster. This - may be okay for development setups but you won't want to run a production + may be OK for development setups but you won't want to run a production server that way. @@ -406,9 +406,9 @@ VACUUM On many systems, however, syslog is not very reliable, particularly with large log messages; it may truncate or drop messages just when you need them the most. You may find it more useful to pipe the - postmaster's stderr to some type of log rotation script. + postmaster's stderr to some type of log rotation script. If you start the postmaster with pg_ctl, then the - postmaster's stderr is already redirected to stdout, so you just need a + postmaster's stderr is already redirected to stdout, so you just need a pipe command: diff --git a/doc/src/sgml/manage.sgml b/doc/src/sgml/manage.sgml index a9b3a41ea1..43dd8d5a95 100644 --- a/doc/src/sgml/manage.sgml +++ b/doc/src/sgml/manage.sgml @@ -1,5 +1,5 @@ @@ -94,7 +94,7 @@ ERROR: CREATE DATABASE: Permission denied. Running the PostgreSQL interactive - terminal program, called psql, which allows you + terminal program, called psql, which allows you to interactively enter, edit, and execute SQL commands. @@ -145,7 +145,7 @@ mydb=> This prompt indicates that psql is listening to you and that you can type SQL queries into a - workspace maintained by the terminal monitor. + work space maintained by the terminal monitor. The psql program itself responds to special commands that begin with the backslash character, \. 
For example, you @@ -156,7 +156,7 @@ mydb=> \h Once you have finished entering your queries into the - workspace, you can pass the contents of the workspace + work space, you can pass the contents of the work space to the PostgreSQL server by typing: mydb=> \g diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml index c978288575..fb20f4a12b 100644 --- a/doc/src/sgml/mvcc.sgml +++ b/doc/src/sgml/mvcc.sgml @@ -1,9 +1,9 @@ - Multi-Version Concurrency Control + Multiversion Concurrency Control concurrency @@ -11,10 +11,10 @@ $Header: /cvsroot/pgsql/doc/src/sgml/mvcc.sgml,v 2.21 2001/11/28 20:49:10 petere - Multi-Version Concurrency Control + Multiversion Concurrency Control (MVCC) is an advanced technique for improving database performance in a - multi-user environment. + multiuser environment. Vadim Mikheev (vadim@krs.ru) provided the implementation for PostgreSQL. diff --git a/doc/src/sgml/nls.sgml b/doc/src/sgml/nls.sgml index d27f23a72b..8bef35c572 100644 --- a/doc/src/sgml/nls.sgml +++ b/doc/src/sgml/nls.sgml @@ -1,4 +1,4 @@ - + @@ -333,7 +333,7 @@ textdomain("progname"); - Whereever a message that is a candidate for translation is found, + Wherever a message that is a candidate for translation is found, a call to gettext() needs to be inserted. E.g., fprintf(stderr, "panic level %d\n", lvl); @@ -473,7 +473,7 @@ printf("number of copied files: %d", n); If you want to communicate something to the translator, such as about how a message is intended to line up with other output, - precede the occurance of the string with a comment that starts + precede the occurrence of the string with a comment that starts with translator, e.g., /* translator: This message is not what it seems to be.
*/ diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml index 8ac48226f7..be161ead74 100644 --- a/doc/src/sgml/perform.sgml +++ b/doc/src/sgml/perform.sgml @@ -1,5 +1,5 @@ @@ -403,7 +403,7 @@ regression=# - pg_stats Columns + <structname>pg_stats</structname> Columns @@ -416,31 +416,31 @@ regression=# - tablename + tablename name Name of table containing column - attname + attname name Column described by this row - null_frac + null_frac real Fraction of column's entries that are NULL - avg_width + avg_width integer Average width in bytes of column's entries - n_distinct + n_distinct real If greater than zero, the estimated number of distinct values in the column. If less than zero, the negative of the number of @@ -454,14 +454,14 @@ regression=# - most_common_vals + most_common_vals text[] A list of the most common values in the column. (Omitted if no values seem to be more common than any others.) - most_common_freqs + most_common_freqs real[] A list of the frequencies of the most common values, ie, number of occurrences of each divided by total number of rows. @@ -474,7 +474,7 @@ regression=# A list of values that divide the column's values into groups of approximately equal population. The most_common_vals, if present, are omitted from the - histogram calculation. (Omitted if column datatype does not have a + histogram calculation. (Omitted if column data type does not have a < operator, or if the most_common_vals list accounts for the entire population.) @@ -485,9 +485,9 @@ regression=# real Statistical correlation between physical row ordering and logical ordering of the column values. This ranges from -1 to +1. - When the value is near -1 or +1, an indexscan on the column will + When the value is near -1 or +1, an index scan on the column will be estimated to be cheaper than when it is near zero, due to reduction - of random access to the disk. (Omitted if column datatype does + of random access to the disk. 
(Omitted if column data type does not have a < operator.) diff --git a/doc/src/sgml/problems.sgml b/doc/src/sgml/problems.sgml index 18c534f0ba..91626d90b2 100644 --- a/doc/src/sgml/problems.sgml +++ b/doc/src/sgml/problems.sgml @@ -1,5 +1,5 @@ @@ -265,7 +265,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/problems.sgml,v 2.12 2002/01/07 02:29:13 pe backend server process is quite different from crash of the parent postmaster process; please don't say the postmaster crashed when you mean a single backend went down, nor vice versa. - Also, client programs such as the interactive frontend psql + Also, client programs such as the interactive frontend psql are completely separate from the backend. Please try to be specific about whether the problem is on the client or server side. @@ -305,7 +305,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/problems.sgml,v 2.12 2002/01/07 02:29:13 pe development of PostgreSQL and it would be nice if we could keep the bug reports separate. We might choose to take up a discussion - about your bug report on pgsql-hackers, if the problem needs more review. + about your bug report on pgsql-hackers, if the problem needs more review. diff --git a/doc/src/sgml/query.sgml b/doc/src/sgml/query.sgml index c5cc76ff05..7a37e7b99a 100644 --- a/doc/src/sgml/query.sgml +++ b/doc/src/sgml/query.sgml @@ -1,5 +1,5 @@ @@ -21,7 +21,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/query.sgml,v 1.24 2002/01/07 02:29:13 peter In the examples that follow, we assume that you have created a - database named mydb, as described in the previous + database named mydb, as described in the previous chapter, and have started psql. @@ -693,7 +693,7 @@ SELECT city, max(temp_lo) It is important to understand the interaction between aggregates and - SQL's WHERE and HAVING clauses. + SQL's WHERE and HAVING clauses. 
The fundamental difference between WHERE and HAVING is this: WHERE selects input rows before groups and aggregates are computed (thus, it controls diff --git a/doc/src/sgml/recovery.sgml b/doc/src/sgml/recovery.sgml index 910314a7f6..aa10b8cf64 100644 --- a/doc/src/sgml/recovery.sgml +++ b/doc/src/sgml/recovery.sgml @@ -13,7 +13,7 @@ Database recovery is necessary in the event of hardware or software failure. There are several categories of failures; some of these require relatively minor adjustments to the database, while others - may depend on the existance of previously prepared database dumps + may depend on the existence of previously prepared database dumps and other recovery data sets. It should be emphasized that if your data is important and/or difficult to regenerate, then you should have considered and prepared for various failure scenarios. @@ -44,7 +44,7 @@ that the database be recovered from a previously prepared database dump. This dump must be prepared using pg_dumpall, and updates to the database - occuring after the database installation was dumped will be lost. + occurring after the database installation was dumped will be lost. 
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 6b1b427fae..014a184783 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -1,5 +1,5 @@ @@ -246,7 +246,7 @@ DROP TABLE temp; Usage - To add a column of type VARCHAR to a table: + To add a column of type varchar to a table: ALTER TABLE distributors ADD COLUMN address VARCHAR(30); @@ -288,7 +288,7 @@ ALTER TABLE distributors ADD CONSTRAINT distfk FOREIGN KEY (address) REFERENCES - To add a (multi-column) unique constraint to a table: + To add a (multicolumn) unique constraint to a table: ALTER TABLE distributors ADD CONSTRAINT dist_id_zipcode_key UNIQUE (dist_id, zipcode); diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml index 89628d0b04..b4208e8878 100644 --- a/doc/src/sgml/ref/analyze.sgml +++ b/doc/src/sgml/ref/analyze.sgml @@ -1,5 +1,5 @@ @@ -152,7 +152,7 @@ ANALYZE [ VERBOSE ] [ table [ (ANALYZE deems them uninteresting (for example, in a unique-key column, there are no common values) or if the column - datatype does not support the appropriate operators. There is more + data type does not support the appropriate operators. There is more information about the statistics in the User's Guide. diff --git a/doc/src/sgml/ref/close.sgml b/doc/src/sgml/ref/close.sgml index 718209ff3f..41746ed49a 100644 --- a/doc/src/sgml/ref/close.sgml +++ b/doc/src/sgml/ref/close.sgml @@ -1,5 +1,5 @@ @@ -125,7 +125,7 @@ NOTICE PerformPortalClose: portal "cursor - Close the cursor liahona: + Close the cursor liahona: CLOSE liahona; diff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml index 7a49f4cd77..850af1f077 100644 --- a/doc/src/sgml/ref/copy.sgml +++ b/doc/src/sgml/ref/copy.sgml @@ -1,5 +1,5 @@ @@ -74,7 +74,7 @@ COPY [ BINARY ] table [ WITH OIDS ] filename - The absolute Unix pathname of the input or output file. + The absolute Unix file name of the input or output file. 
@@ -181,7 +181,7 @@ ERROR: reason - COPY with a filename instructs + COPY with a file name instructs the PostgreSQL backend to directly read from or write to a file. The file must be accessible to the backend and the name must be specified @@ -262,8 +262,8 @@ ERROR: reason directly by the backend, not by the client application. Therefore, they must reside on or be accessible to the database server machine, not the client. They must be accessible to and readable or writable - by the PostgreSQL user (the userid the - backend runs as), not the client. + by the PostgreSQL user (the user ID the + server runs as), not the client. COPY naming a file is only allowed to database superusers, since it allows writing on any file that the backend has privileges to write on. @@ -483,7 +483,7 @@ The typlen field is interpreted thus: > 0 - Field is a fixed-length datatype. Exactly N + Field is a fixed-length data type. Exactly N bytes of data follow the typlen word. @@ -493,7 +493,7 @@ The typlen field is interpreted thus: -1 - Field is a varlena datatype. The next four + Field is a varlena data type. The next four bytes are the varlena header, which contains the total value length including itself. @@ -519,7 +519,7 @@ but very useful check that the data is as expected. There is no alignment padding or any other extra data between fields. -Note also that the format does not distinguish whether a datatype is +Note also that the format does not distinguish whether a data type is pass-by-reference or pass-by-value. 
Both of these provisions are deliberate: they might help improve portability of the files (although of course endianness and floating-point-format issues can still keep diff --git a/doc/src/sgml/ref/create_constraint.sgml b/doc/src/sgml/ref/create_constraint.sgml index 0a72f1fa44..92120950e7 100644 --- a/doc/src/sgml/ref/create_constraint.sgml +++ b/doc/src/sgml/ref/create_constraint.sgml @@ -1,5 +1,5 @@ @@ -67,7 +67,7 @@ CREATE CONSTRAINT TRIGGER name constraint - Actual onstraint specification. + Actual constraint specification. diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml index 1c20644502..b208f100ea 100644 --- a/doc/src/sgml/ref/create_database.sgml +++ b/doc/src/sgml/ref/create_database.sgml @@ -1,5 +1,5 @@ @@ -51,7 +51,7 @@ CREATE DATABASE name dbpath - An alternate filesystem location in which to store the new database, + An alternate file-system location in which to store the new database, specified as a string literal; or DEFAULT to use the default location. diff --git a/doc/src/sgml/ref/create_group.sgml b/doc/src/sgml/ref/create_group.sgml index 1a91013353..7784dc4543 100644 --- a/doc/src/sgml/ref/create_group.sgml +++ b/doc/src/sgml/ref/create_group.sgml @@ -1,5 +1,5 @@ @@ -110,7 +110,7 @@ where option can be: CREATE GROUP will create a new group in the database installation. - Refer to the adminstrator's guide for information about using groups + Refer to the Administrator's Guide for information about using groups for authentication. You must be a database superuser to use this command. diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index 58392864cd..a3f11d8b0e 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -1,5 +1,5 @@ @@ -77,7 +77,7 @@ CREATE [ UNIQUE ] INDEX index_name The name of the access method to be used for - the index. The default access method is BTREE. + the index. The default access method is BTREE. 
PostgreSQL provides four access methods for indexes: @@ -86,7 +86,7 @@ CREATE [ UNIQUE ] INDEX index_name an implementation of Lehman-Yao - high-concurrency btrees. + high-concurrency B-trees. @@ -94,7 +94,7 @@ CREATE [ UNIQUE ] INDEX index_name RTREE - implements standard rtrees using Guttman's + implements standard R-trees using Guttman's quadratic split algorithm. @@ -221,7 +221,7 @@ ERROR: Cannot create index: 'index_name' already exists. In the first syntax shown above, the key field(s) for the index are specified as column names. Multiple fields can be specified if the index access method supports - multi-column indexes. + multicolumn indexes. @@ -236,10 +236,10 @@ ERROR: Cannot create index: 'index_name' already exists. - PostgreSQL provides btree, rtree, hash, and GiST access methods for - indexes. The btree access method is an implementation of - Lehman-Yao high-concurrency btrees. The rtree access method - implements standard rtrees using Guttman's quadratic split algorithm. + PostgreSQL provides B-tree, R-tree, hash, and GiST access methods for + indexes. The B-tree access method is an implementation of + Lehman-Yao high-concurrency B-trees. The R-tree access method + implements standard R-trees using Guttman's quadratic split algorithm. The hash access method is an implementation of Litwin's linear hashing. We mention the algorithms used solely to indicate that all of these access methods are fully dynamic and do not have to be @@ -293,7 +293,7 @@ ERROR: Cannot create index: 'index_name' already exists. The PostgreSQL - query optimizer will consider using a btree index whenever + query optimizer will consider using a B-tree index whenever an indexed attribute is involved in a comparison using one of: @@ -307,7 +307,7 @@ ERROR: Cannot create index: 'index_name' already exists. 
The PostgreSQL
- query optimizer will consider using an rtree index whenever
+ query optimizer will consider using an R-tree index whenever
an indexed attribute is involved in a comparison using
one of:
@@ -329,17 +329,17 @@ ERROR: Cannot create index: 'index_name' already exists.
- Currently, only the btree and gist access methods support multi-column
+ Currently, only the B-tree and GiST access methods support multicolumn
indexes. Up to 16 keys may be specified by default (this limit
can be altered when building
- PostgreSQL). Only btree currently supports
+ PostgreSQL). Only B-tree currently supports
unique indexes.
An operator class can be specified for each
column of an index. The operator class identifies the operators to
- be used by the index for that column. For example, a btree index on
+ be used by the index for that column. For example, a B-tree index on
four-byte integers would use the int4_ops
class; this operator class includes comparison functions for four-byte
integers. In practice the default operator class for the field's
@@ -355,7 +355,7 @@ ERROR: Cannot create index: 'index_name' already exists.
The operator classes box_ops and
- bigbox_ops both support rtree indexes on the
+ bigbox_ops both support R-tree indexes on the
box data type.
The difference between them is that bigbox_ops
scales box coordinates down, to avoid floating-point exceptions from
@@ -389,7 +389,7 @@ SELECT am.amname AS acc_method,
Usage
- To create a btree index on the field title
+ To create a B-tree index on the field title
in the table films:
@@ -402,7 +402,7 @@ CREATE UNIQUE INDEX title_idx
Is this example correct?
- To create a rtree index on a point attribute so that we
+ To create an R-tree index on a point attribute so that we
can efficiently use box operators on the result of the
conversion function:
diff --git a/doc/src/sgml/ref/create_operator.sgml b/doc/src/sgml/ref/create_operator.sgml
index 93cc39770e..7f33ca47fa 100644
--- a/doc/src/sgml/ref/create_operator.sgml
+++ b/doc/src/sgml/ref/create_operator.sgml
@@ -1,5 +1,5 @@
@@ -175,7 +175,7 @@ CREATE
The operator name
- is a sequence of up to NAMEDATALEN-1 (31 by default) characters
+ is a sequence of up to NAMEDATALEN-1 (31 by default) characters
from the following list:
+ - * / < > = ~ ! @ # % ^ & | ` ? $
@@ -186,7 +186,7 @@ CREATE
$ cannot be defined as a single-character operator,
- although it can be part of a multi-character operator name.
+ although it can be part of a multicharacter operator name.
@@ -197,7 +197,7 @@ CREATE
- A multi-character operator name cannot end in + or
+ A multicharacter operator name cannot end in + or
-, unless the name also contains at least one of these
characters:
@@ -299,7 +299,7 @@ MYBOXES.description !== box '((0,0), (1,1))'
query optimizer in performing joins.
PostgreSQL can always
evaluate a join (i.e., processing a clause with two tuple
- variables separated by an operator that returns a boolean)
+ variables separated by an operator that returns a boolean)
by iterative substitution [WONG76].
In addition,
PostgreSQL can use a hash-join algorithm along
diff --git a/doc/src/sgml/ref/create_sequence.sgml b/doc/src/sgml/ref/create_sequence.sgml
index e528658d0d..115b833d49 100644
--- a/doc/src/sgml/ref/create_sequence.sgml
+++ b/doc/src/sgml/ref/create_sequence.sgml
@@ -1,5 +1,5 @@
@@ -265,9 +265,9 @@ SELECT * FROM seqname;
will be lost when that session ends. Furthermore, although multiple backends
are guaranteed to allocate distinct sequence values, the values may be
generated out of sequence when all the backends are considered.
(For example, with a cache - setting of 10, backend A might reserve values 1..10 and return nextval=1, + setting of 10, backend A might reserve values 1..10 and return nextval=1, then - backend B might reserve values 11..20 and return nextval=11 before backend + backend B might reserve values 11..20 and return nextval=11 before backend A has generated nextval=2.) Thus, with a cache setting of one it is safe to assume that nextval values are generated sequentially; with a cache setting greater than one you should only assume that the nextval values diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index 87ebda032e..0ae9222d0f 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -1,5 +1,5 @@ @@ -367,7 +367,7 @@ and table_constraint is: match type. There are three match types: MATCH FULL, MATCH PARTIAL, and a default match type if none is specified. MATCH FULL will not allow one - column of a multi-column foreign key to be NULL unless all + column of a multicolumn foreign key to be NULL unless all foreign key columns are NULL. The default match type allows some foreign key columns to be NULL while other parts of the foreign key are not NULL. MATCH PARTIAL is not yet @@ -583,7 +583,7 @@ and table_constraint is: Unique constraints and primary keys are not inherited in the current implementation. This makes the combination of - inheritance and unique constraints rather disfunctional. + inheritance and unique constraints rather dysfunctional. @@ -800,7 +800,7 @@ CREATE { GLOBAL | LOCAL } TEMPORARY TABLE tableNULL constraint (actually a non-constraint) is a PostgreSQL extension to SQL92 that is included for compatibility with some - other RDBMSes (and for symmetry with the NOT + other RDBMS (and for symmetry with the NOT NULL constraint). Since it is the default for any column, its presence is simply noise. 
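The out-of-order allocation described in the create_sequence.sgml hunk above can be sketched in a few lines. This is an illustrative toy model only, not PostgreSQL code: each backend preallocates a block of `cache` values, so interleaved calls return values out of global order.

```python
# Toy model of sequence caching: each backend reserves a whole block
# of values up front, so globally the values come out of order.
class Sequence:
    def __init__(self, cache):
        self.cache = cache
        self.next_block_start = 1  # next unreserved value

    def reserve(self):
        # Hand an entire block of `cache` values to one backend.
        start = self.next_block_start
        self.next_block_start += self.cache
        return list(range(start, start + self.cache))

class Backend:
    def __init__(self, seq):
        self.seq = seq
        self.cached = []

    def nextval(self):
        if not self.cached:
            self.cached = self.seq.reserve()
        return self.cached.pop(0)

seq = Sequence(cache=10)
a, b = Backend(seq), Backend(seq)
print(a.nextval())  # 1  (backend A reserves 1..10)
print(b.nextval())  # 11 (backend B reserves 11..20)
print(a.nextval())  # 2  -- 11 was handed out before 2
```

Values remain distinct across backends, but only with a cache setting of one are they globally sequential, matching the caveat in the text.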
diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index 846bc5cf87..f3e8276638 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -1,5 +1,5 @@ @@ -232,7 +232,7 @@ CREATE TABLE distributors ( SQL99 allows you to define aliases for the old - and new rows or tables for use in the definiton + and new rows or tables for use in the definition of the triggered action (e.g., CREATE TRIGGER ... ON tablename REFERENCING OLD ROW AS somename NEW ROW AS othername ...). Since diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml index 916163c1dd..7f62c05612 100644 --- a/doc/src/sgml/ref/create_type.sgml +++ b/doc/src/sgml/ref/create_type.sgml @@ -1,5 +1,5 @@ @@ -238,7 +238,7 @@ CREATE declared as taking one argument of type opaque, or as taking two arguments of types opaque, OID. - (The first argument is actually of the datatype itself, but since the + (The first argument is actually of the data type itself, but since the output function must be declared first, it's easier to declare it as accepting type opaque. The second argument is again the array element type for array types.) @@ -282,7 +282,7 @@ CREATE A default value may be specified, in case a user wants columns of the - datatype to default to something other than NULL. + data type to default to something other than NULL. Specify the default with the keyword. (Such a default may be overridden by an explicit clause attached to a particular column.) @@ -311,7 +311,7 @@ CREATE The alignment keyword - specifies the storage alignment required for the datatype. The + specifies the storage alignment required for the data type. The allowed values equate to alignment on 1, 2, 4, or 8 byte boundaries. Note that variable-length types must have an alignment of at least 4, since they necessarily contain an int4 as their first component. 
@@ -339,7 +339,7 @@ CREATE Array Types - Whenever a user-defined datatype is created, + Whenever a user-defined data type is created, PostgreSQL automatically creates an associated array type, whose name consists of the base type's name prepended with an underscore. The parser understands this @@ -358,7 +358,7 @@ CREATE identical things, and you want to allow the N things to be accessed directly by subscripting, in addition to whatever operations you plan to provide for the type as a whole. For example, type name - allows its constitutent chars to be accessed this way. + allows its constituent chars to be accessed this way. A 2-D point type could allow its two component floats to be accessed like point[0] and point[1]. Note that diff --git a/doc/src/sgml/ref/createdb.sgml b/doc/src/sgml/ref/createdb.sgml index 6b14b26837..2e90888ee9 100644 --- a/doc/src/sgml/ref/createdb.sgml +++ b/doc/src/sgml/ref/createdb.sgml @@ -1,5 +1,5 @@ @@ -238,7 +238,7 @@ PostgreSQL documentation To create the database demo using the - server on host eden, port 5000, using the + server on host eden, port 5000, using the LATIN1 encoding scheme with a look at the underlying query: diff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml index 9dcb7a1bb9..40d8a7afe6 100644 --- a/doc/src/sgml/ref/createuser.sgml +++ b/doc/src/sgml/ref/createuser.sgml @@ -1,5 +1,5 @@ @@ -255,7 +255,7 @@ PostgreSQL documentation To create the same user joe using the - server on host eden, port 5000, avoiding the prompts and + server on host eden, port 5000, avoiding the prompts and taking a look at the underlying query: $ createuser -p 5000 -h eden -D -A -e joe diff --git a/doc/src/sgml/ref/drop_aggregate.sgml b/doc/src/sgml/ref/drop_aggregate.sgml index d60c9cb8f4..3cc3d9dea1 100644 --- a/doc/src/sgml/ref/drop_aggregate.sgml +++ b/doc/src/sgml/ref/drop_aggregate.sgml @@ -1,5 +1,5 @@ @@ -47,7 +47,7 @@ DROP AGGREGATE name ( type - The input datatype of an existing aggregate function, + The 
input data type of an existing aggregate function, or * if the function accepts any input type. (Refer to the PostgreSQL User's Guide for further information about data types.) diff --git a/doc/src/sgml/ref/drop_operator.sgml b/doc/src/sgml/ref/drop_operator.sgml index 715e2e4d81..1a2dd8df72 100644 --- a/doc/src/sgml/ref/drop_operator.sgml +++ b/doc/src/sgml/ref/drop_operator.sgml @@ -1,5 +1,5 @@ @@ -176,7 +176,7 @@ DROP OPERATOR ^ (int4, int4); - Remove left unary negation operator (! b) for booleans: + Remove left unary negation operator (! b) for boolean: DROP OPERATOR ! (none, bool); diff --git a/doc/src/sgml/ref/drop_type.sgml b/doc/src/sgml/ref/drop_type.sgml index 997607b1ae..6cb53f25ff 100644 --- a/doc/src/sgml/ref/drop_type.sgml +++ b/doc/src/sgml/ref/drop_type.sgml @@ -1,5 +1,5 @@ @@ -105,7 +105,7 @@ ERROR: RemoveType: type 'typename' It is the user's responsibility to remove any operators, functions, aggregates, access methods, subtypes, and tables that - use a deleted type. However, the associated array datatype + use a deleted type. However, the associated array data type (which was automatically created by CREATE TYPE) will be removed automatically. diff --git a/doc/src/sgml/ref/drop_user.sgml b/doc/src/sgml/ref/drop_user.sgml index 2a20e55eed..d0a9087c06 100644 --- a/doc/src/sgml/ref/drop_user.sgml +++ b/doc/src/sgml/ref/drop_user.sgml @@ -1,5 +1,5 @@ @@ -72,7 +72,7 @@ ERROR: DROP USER: user "name" does - This message occurs if the username is not found. + This message occurs if the user name is not found. diff --git a/doc/src/sgml/ref/dropdb.sgml b/doc/src/sgml/ref/dropdb.sgml index 79750a900b..b60af8e70f 100644 --- a/doc/src/sgml/ref/dropdb.sgml +++ b/doc/src/sgml/ref/dropdb.sgml @@ -1,5 +1,5 @@ @@ -40,7 +40,7 @@ PostgreSQL documentation Specifies the host name of the machine on which the server is running. If host begins with a slash, it is used - as the directory for the unix domain socket. + as the directory for the Unix domain socket. 
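The "host begins with a slash" convention mentioned in the dropdb.sgml hunk above can be sketched as follows. This is an illustrative function (the name `resolve_host` is invented, not libpq's API); the socket file name follows PostgreSQL's `.s.PGSQL.<port>` convention.

```python
# Illustrative sketch (not libpq source): a host value starting with
# "/" names a Unix-domain socket directory, anything else a TCP host.
def resolve_host(host, port=5432):
    if host.startswith("/"):
        # A PostgreSQL server creates a socket file named
        # .s.PGSQL.<port> inside the given directory.
        return ("unix", f"{host}/.s.PGSQL.{port}")
    return ("tcp", (host, port))

print(resolve_host("/tmp"))        # ('unix', '/tmp/.s.PGSQL.5432')
print(resolve_host("eden", 5000))  # ('tcp', ('eden', 5000))
```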
@@ -193,7 +193,7 @@ PostgreSQL documentation To destroy the database demo using the - server on host eden, port 5000, with verification and a peek + server on host eden, port 5000, with verification and a peek at the underlying query: $ dropdb -p 5000 -h eden -i -e demo diff --git a/doc/src/sgml/ref/dropuser.sgml b/doc/src/sgml/ref/dropuser.sgml index 07d77691ea..e20ae5a739 100644 --- a/doc/src/sgml/ref/dropuser.sgml +++ b/doc/src/sgml/ref/dropuser.sgml @@ -1,5 +1,5 @@ @@ -184,7 +184,7 @@ PostgreSQL documentation To remove user joe using the postmaster on host - eden, port 5000, with verification and a peek at the underlying + eden, port 5000, with verification and a peek at the underlying query: $ dropuser -p 5000 -h eden -i -e joe diff --git a/doc/src/sgml/ref/ecpg-ref.sgml b/doc/src/sgml/ref/ecpg-ref.sgml index 1d6019b628..cf9b588c17 100644 --- a/doc/src/sgml/ref/ecpg-ref.sgml +++ b/doc/src/sgml/ref/ecpg-ref.sgml @@ -1,5 +1,5 @@ @@ -84,7 +84,7 @@ PostgreSQL documentation -o outfile - Specifies that ecpg should write all its output to outfile. + Specifies that ecpg should write all its output to outfile. If no such option is given the output is written to name.c, assuming the input file was @@ -255,7 +255,7 @@ EXEC SQL INCLUDE sqlca; The sqlca is in lowercase. While SQL convention may be followed, i.e., using uppercase to separate embedded SQL - from C statements, sqlca (which includes the sqlca.h + from C statements, sqlca (which includes the sqlca.h header file) must be lowercase. This is because the EXEC SQL prefix indicates that this inclusion will be parsed by ecpg. 
@@ -268,7 +268,7 @@ EXEC SQL INCLUDE sqlca; - The sqlprint command is used with the EXEC SQL WHENEVER + The sqlprint command is used with the EXEC SQL WHENEVER statement to turn on error handling throughout the program: diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml index 6d8f193b78..8f07f4adf5 100644 --- a/doc/src/sgml/ref/grant.sgml +++ b/doc/src/sgml/ref/grant.sgml @@ -1,5 +1,5 @@ @@ -216,7 +216,7 @@ GRANT INSERT ON films TO PUBLIC; - Grant all privileges to user manuel on view kinds: + Grant all privileges to user manuel on view kinds: GRANT ALL PRIVILEGES ON kinds TO manuel; diff --git a/doc/src/sgml/ref/load.sgml b/doc/src/sgml/ref/load.sgml index 5368d9a926..38fad94aaa 100644 --- a/doc/src/sgml/ref/load.sgml +++ b/doc/src/sgml/ref/load.sgml @@ -1,5 +1,5 @@ @@ -33,10 +33,10 @@ LOAD 'filename' - The filename is specified in the same way as for shared library + The file name is specified in the same way as for shared library names in ; in particular, one may rely on a search path and automatic addition of the system's standard - shared library filename extension. See the + shared library file name extension. See the Programmer's Guide for more detail. diff --git a/doc/src/sgml/ref/pg_config-ref.sgml b/doc/src/sgml/ref/pg_config-ref.sgml index 56796b094d..056f41db23 100644 --- a/doc/src/sgml/ref/pg_config-ref.sgml +++ b/doc/src/sgml/ref/pg_config-ref.sgml @@ -1,4 +1,4 @@ - + @@ -50,7 +50,7 @@ To use pg_config, supply one or more of the following options: - --bindir + Print the location of user executables. Use this, for example, to find @@ -61,7 +61,7 @@ - --includedir + Print the location of C and C++ header files of the client interfaces. @@ -70,7 +70,7 @@ - --includedir-server + Print the location of C and C++ header files for server @@ -80,7 +80,7 @@ - --libdir + Print the location of object code libraries. 
@@ -89,7 +89,7 @@ - --pkglibdir + Print the location of dynamically loadable modules, or where @@ -101,7 +101,7 @@ - --configure + Print the options that were given to the configure @@ -115,7 +115,7 @@ - --version + Print the version of PostgreSQL and exit. diff --git a/doc/src/sgml/ref/select_into.sgml b/doc/src/sgml/ref/select_into.sgml index 8c8b6fc448..c52f3f38de 100644 --- a/doc/src/sgml/ref/select_into.sgml +++ b/doc/src/sgml/ref/select_into.sgml @@ -1,5 +1,5 @@ @@ -123,7 +123,7 @@ where from_item can be: SELECT INTO creates a new table and fills it with data computed by a query. The data is not returned to the client, as it is with a normal SELECT. The new - table's columns have the names and datatypes associated with the + table's columns have the names and data types associated with the output columns of the SELECT. @@ -132,7 +132,7 @@ where from_item can be: is functionally equivalent to SELECT INTO. CREATE TABLE AS is the recommended syntax, since SELECT INTO is not standard. In fact, this form of - SELECT INTO is not available in PL/pgSQL or ecpg, + SELECT INTO is not available in PL/pgSQL or , because they interpret the INTO clause differently. @@ -147,7 +147,7 @@ where from_item can be: SQL92 uses SELECT ... INTO to represent selecting values into scalar variables of a host program, rather than creating - a new table. This indeed is the usage found in PL/pgSQL and ecpg. + a new table. This indeed is the usage found in PL/pgSQL and . The PostgreSQL usage of SELECT INTO to represent table creation is historical. It's best to use CREATE TABLE AS for this purpose in new code. 
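The recommendation above (prefer CREATE TABLE AS over SELECT INTO in new code) can be tried out without a PostgreSQL server; here SQLite stands in purely for illustration, and the table and column names are invented for the example.

```python
import sqlite3

# CREATE TABLE AS: the new table's columns take the names and types
# of the SELECT output, as the section describes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE films (title TEXT, kind TEXT)")
conn.executemany("INSERT INTO films VALUES (?, ?)",
                 [("Alien", "sf"), ("Vertigo", "drama")])
conn.execute("CREATE TABLE sf_films AS "
             "SELECT title FROM films WHERE kind = 'sf'")
rows = conn.execute("SELECT title FROM sf_films").fetchall()
print(rows)  # [('Alien',)]
```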
diff --git a/doc/src/sgml/ref/set_session_auth.sgml b/doc/src/sgml/ref/set_session_auth.sgml index 09a94249b6..fa427c1025 100644 --- a/doc/src/sgml/ref/set_session_auth.sgml +++ b/doc/src/sgml/ref/set_session_auth.sgml @@ -1,4 +1,4 @@ - + 2001-04-21 @@ -25,7 +25,7 @@ SET SESSION AUTHORIZATION 'username' This command sets the session user identifier and the current user - identifer of the current SQL-session context to be + identifier of the current SQL-session context to be username. @@ -34,7 +34,7 @@ SET SESSION AUTHORIZATION 'username' authenticated) user name provided by the client. The current user identifier is normally equal to the session user identifier, but may change temporarily in the context of setuid - functions and similar mechanisms. The current user identifer is + functions and similar mechanisms. The current user identifier is relevant for permission checking. diff --git a/doc/src/sgml/ref/set_transaction.sgml b/doc/src/sgml/ref/set_transaction.sgml index 45bed28955..5a253d8580 100644 --- a/doc/src/sgml/ref/set_transaction.sgml +++ b/doc/src/sgml/ref/set_transaction.sgml @@ -1,4 +1,4 @@ - + 2000-11-24 @@ -29,7 +29,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL This command sets the transaction isolation level. The SET TRANSACTION command sets the characteristics for the current SQL-transaction. It has no effect on any subsequent - transactions. This command cannot be used after the first DML + transactions. This command cannot be used after the first query or data-modification statement (SELECT, INSERT, DELETE, UPDATE, FETCH, COPY) of a transaction @@ -59,7 +59,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL The current transaction can only see rows committed before - first DML statement was executed in this transaction. + first query or data-modification statement was executed in this transaction. @@ -101,7 +101,7 @@ SET default_transaction_isolation = 'value' PostgreSQL does not provide the isolation levels and . 
Because - of multi-version concurrency control, the serializable level is not + of multiversion concurrency control, the serializable level is not truly serializable. See the User's Guide for details. diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml index 291cb4838f..aece9978b7 100644 --- a/doc/src/sgml/regress.sgml +++ b/doc/src/sgml/regress.sgml @@ -1,4 +1,4 @@ - + Regression Tests @@ -86,7 +86,7 @@ The parallel regression test starts quite a few processes under your - userid. Presently, the maximum concurrency is twenty parallel test + user ID. Presently, the maximum concurrency is twenty parallel test scripts, which means sixty processes --- there's a backend, a psql, and usually a shell parent process for the psql for each test script. So if your system enforces a per-user limit on the number of processes, diff --git a/doc/src/sgml/release.sgml b/doc/src/sgml/release.sgml index c093e5945d..118efb1196 100644 --- a/doc/src/sgml/release.sgml +++ b/doc/src/sgml/release.sgml @@ -1,5 +1,5 @@ @@ -182,7 +182,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/release.sgml,v 1.112 2002/01/09 00:52:37 pe Server Operation Create temporary files in a separate directory (Bruce) -Delete orphanded temporary files on postmaster startup (Bruce) +Delete orphaned temporary files on postmaster startup (Bruce) Added unique indexes to some system tables (Tom) System table operator reorganization (Oleg Bartunov, Teodor Sigaev, Tom) Renamed pg_log to pg_clog (Tom) @@ -268,7 +268,7 @@ New IS UNKNOWN, IS NOT UNKNOWN Boolean tests (Tom) New SHARE UPDATE EXCLUSIVE lock mode (Tom) New EXPLAIN ANALYZE command that shows run times and row counts (Martijn van Oosterhout) Fix problem with LIMIT and subqueries (Tom) -Fix for LIMIT, DISTINCT ON pushed into subqueryies (Tom) +Fix for LIMIT, DISTINCT ON pushed into subqueries (Tom) Fix nested EXCEPT/INTERSECT (Tom) @@ -336,7 +336,7 @@ New TIMESTAMP WITHOUT TIMEZONE data type (Thomas) Add ISO date/time specification with "T", 
yyyy-mm-ddThh:mm:ss (Thomas) New xid/int comparison functions (Hiroshi) Add precision to TIME, TIMESTAMP, and INVERVAL data types (Thomas) -Modify type coersion logic to attempt binary-compatible functions first (Tom) +Modify type coercion logic to attempt binary-compatible functions first (Tom) New encode() function installed by default (Marko Kreen) Improved to_*() conversion functions (Karel Zak) Optimize LIKE/ILIKE when using single-byte encodings (Tatsuo) diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index ccbf7fb93a..34ee7473db 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -1,4 +1,4 @@ - + The Rule System @@ -282,15 +282,15 @@ between a - CREATE VIEW myview AS SELECT * FROM mytab; +CREATE VIEW myview AS SELECT * FROM mytab; compared against the two commands - CREATE TABLE myview (same attribute list as for mytab); - CREATE RULE "_RETmyview" AS ON SELECT TO myview DO INSTEAD - SELECT * FROM mytab; +CREATE TABLE myview (same attribute list as for mytab); +CREATE RULE "_RETmyview" AS ON SELECT TO myview DO INSTEAD + SELECT * FROM mytab; because this is exactly what the CREATE VIEW command does internally. @@ -342,14 +342,14 @@ integer values. 
We create that as - CREATE FUNCTION min(integer, integer) RETURNS integer AS - 'BEGIN - IF $1 < $2 THEN - RETURN $1; - END IF; - RETURN $2; - END;' - LANGUAGE 'plpgsql'; +CREATE FUNCTION min(integer, integer) RETURNS integer AS ' + BEGIN + IF $1 < $2 THEN + RETURN $1; + END IF; + RETURN $2; + END; +' LANGUAGE plpgsql; @@ -358,27 +358,27 @@ are these: - CREATE TABLE shoe_data ( - shoename char(10), -- primary key - sh_avail integer, -- available # of pairs - slcolor char(10), -- preferred shoelace color - slminlen float, -- miminum shoelace length - slmaxlen float, -- maximum shoelace length - slunit char(8) -- length unit - ); +CREATE TABLE shoe_data ( + shoename char(10), -- primary key + sh_avail integer, -- available # of pairs + slcolor char(10), -- preferred shoelace color + slminlen float, -- miminum shoelace length + slmaxlen float, -- maximum shoelace length + slunit char(8) -- length unit +); - CREATE TABLE shoelace_data ( - sl_name char(10), -- primary key - sl_avail integer, -- available # of pairs - sl_color char(10), -- shoelace color - sl_len float, -- shoelace length - sl_unit char(8) -- length unit - ); +CREATE TABLE shoelace_data ( + sl_name char(10), -- primary key + sl_avail integer, -- available # of pairs + sl_color char(10), -- shoelace color + sl_len float, -- shoelace length + sl_unit char(8) -- length unit +); - CREATE TABLE unit ( - un_name char(8), -- the primary key - un_fact float -- factor to transform to cm - ); +CREATE TABLE unit ( + un_name char(8), -- the primary key + un_fact float -- factor to transform to cm +); I think most of us wear shoes and can realize that this is @@ -391,38 +391,38 @@ The views are created as - CREATE VIEW shoe AS - SELECT sh.shoename, - sh.sh_avail, - sh.slcolor, - sh.slminlen, - sh.slminlen * un.un_fact AS slminlen_cm, - sh.slmaxlen, - sh.slmaxlen * un.un_fact AS slmaxlen_cm, - sh.slunit - FROM shoe_data sh, unit un - WHERE sh.slunit = un.un_name; +CREATE VIEW shoe AS + SELECT sh.shoename, + 
sh.sh_avail, + sh.slcolor, + sh.slminlen, + sh.slminlen * un.un_fact AS slminlen_cm, + sh.slmaxlen, + sh.slmaxlen * un.un_fact AS slmaxlen_cm, + sh.slunit + FROM shoe_data sh, unit un + WHERE sh.slunit = un.un_name; - CREATE VIEW shoelace AS - SELECT s.sl_name, - s.sl_avail, - s.sl_color, - s.sl_len, - s.sl_unit, - s.sl_len * u.un_fact AS sl_len_cm - FROM shoelace_data s, unit u - WHERE s.sl_unit = u.un_name; +CREATE VIEW shoelace AS + SELECT s.sl_name, + s.sl_avail, + s.sl_color, + s.sl_len, + s.sl_unit, + s.sl_len * u.un_fact AS sl_len_cm + FROM shoelace_data s, unit u + WHERE s.sl_unit = u.un_name; - CREATE VIEW shoe_ready AS - SELECT rsh.shoename, - rsh.sh_avail, - rsl.sl_name, - rsl.sl_avail, - min(rsh.sh_avail, rsl.sl_avail) AS total_avail - FROM shoe rsh, shoelace rsl - WHERE rsl.sl_color = rsh.slcolor - AND rsl.sl_len_cm >= rsh.slminlen_cm - AND rsl.sl_len_cm <= rsh.slmaxlen_cm; +CREATE VIEW shoe_ready AS + SELECT rsh.shoename, + rsh.sh_avail, + rsl.sl_name, + rsl.sl_avail, + min(rsh.sh_avail, rsl.sl_avail) AS total_avail + FROM shoe rsh, shoelace rsl + WHERE rsl.sl_color = rsh.slcolor + AND rsl.sl_len_cm >= rsh.slminlen_cm + AND rsl.sl_len_cm <= rsh.slmaxlen_cm; The CREATE VIEW command for the shoelace @@ -457,60 +457,60 @@ SELECT in his life: - al_bundy=> INSERT INTO unit VALUES ('cm', 1.0); - al_bundy=> INSERT INTO unit VALUES ('m', 100.0); - al_bundy=> INSERT INTO unit VALUES ('inch', 2.54); - al_bundy=> - al_bundy=> INSERT INTO shoe_data VALUES - al_bundy-> ('sh1', 2, 'black', 70.0, 90.0, 'cm'); - al_bundy=> INSERT INTO shoe_data VALUES - al_bundy-> ('sh2', 0, 'black', 30.0, 40.0, 'inch'); - al_bundy=> INSERT INTO shoe_data VALUES - al_bundy-> ('sh3', 4, 'brown', 50.0, 65.0, 'cm'); - al_bundy=> INSERT INTO shoe_data VALUES - al_bundy-> ('sh4', 3, 'brown', 40.0, 50.0, 'inch'); - al_bundy=> - al_bundy=> INSERT INTO shoelace_data VALUES - al_bundy-> ('sl1', 5, 'black', 80.0, 'cm'); - al_bundy=> INSERT INTO shoelace_data VALUES - al_bundy-> ('sl2', 6, 
'black', 100.0, 'cm'); - al_bundy=> INSERT INTO shoelace_data VALUES - al_bundy-> ('sl3', 0, 'black', 35.0 , 'inch'); - al_bundy=> INSERT INTO shoelace_data VALUES - al_bundy-> ('sl4', 8, 'black', 40.0 , 'inch'); - al_bundy=> INSERT INTO shoelace_data VALUES - al_bundy-> ('sl5', 4, 'brown', 1.0 , 'm'); - al_bundy=> INSERT INTO shoelace_data VALUES - al_bundy-> ('sl6', 0, 'brown', 0.9 , 'm'); - al_bundy=> INSERT INTO shoelace_data VALUES - al_bundy-> ('sl7', 7, 'brown', 60 , 'cm'); - al_bundy=> INSERT INTO shoelace_data VALUES - al_bundy-> ('sl8', 1, 'brown', 40 , 'inch'); - al_bundy=> - al_bundy=> SELECT * FROM shoelace; - sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm - ----------+--------+----------+------+--------+--------- - sl1 | 5|black | 80|cm | 80 - sl2 | 6|black | 100|cm | 100 - sl7 | 7|brown | 60|cm | 60 - sl3 | 0|black | 35|inch | 88.9 - sl4 | 8|black | 40|inch | 101.6 - sl8 | 1|brown | 40|inch | 101.6 - sl5 | 4|brown | 1|m | 100 - sl6 | 0|brown | 0.9|m | 90 - (8 rows) +al_bundy=> INSERT INTO unit VALUES ('cm', 1.0); +al_bundy=> INSERT INTO unit VALUES ('m', 100.0); +al_bundy=> INSERT INTO unit VALUES ('inch', 2.54); +al_bundy=> +al_bundy=> INSERT INTO shoe_data VALUES +al_bundy-> ('sh1', 2, 'black', 70.0, 90.0, 'cm'); +al_bundy=> INSERT INTO shoe_data VALUES +al_bundy-> ('sh2', 0, 'black', 30.0, 40.0, 'inch'); +al_bundy=> INSERT INTO shoe_data VALUES +al_bundy-> ('sh3', 4, 'brown', 50.0, 65.0, 'cm'); +al_bundy=> INSERT INTO shoe_data VALUES +al_bundy-> ('sh4', 3, 'brown', 40.0, 50.0, 'inch'); +al_bundy=> +al_bundy=> INSERT INTO shoelace_data VALUES +al_bundy-> ('sl1', 5, 'black', 80.0, 'cm'); +al_bundy=> INSERT INTO shoelace_data VALUES +al_bundy-> ('sl2', 6, 'black', 100.0, 'cm'); +al_bundy=> INSERT INTO shoelace_data VALUES +al_bundy-> ('sl3', 0, 'black', 35.0 , 'inch'); +al_bundy=> INSERT INTO shoelace_data VALUES +al_bundy-> ('sl4', 8, 'black', 40.0 , 'inch'); +al_bundy=> INSERT INTO shoelace_data VALUES +al_bundy-> ('sl5', 4, 'brown', 1.0 , 
'm'); +al_bundy=> INSERT INTO shoelace_data VALUES +al_bundy-> ('sl6', 0, 'brown', 0.9 , 'm'); +al_bundy=> INSERT INTO shoelace_data VALUES +al_bundy-> ('sl7', 7, 'brown', 60 , 'cm'); +al_bundy=> INSERT INTO shoelace_data VALUES +al_bundy-> ('sl8', 1, 'brown', 40 , 'inch'); +al_bundy=> +al_bundy=> SELECT * FROM shoelace; +sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm +----------+--------+----------+------+--------+--------- +sl1 | 5|black | 80|cm | 80 +sl2 | 6|black | 100|cm | 100 +sl7 | 7|brown | 60|cm | 60 +sl3 | 0|black | 35|inch | 88.9 +sl4 | 8|black | 40|inch | 101.6 +sl8 | 1|brown | 40|inch | 101.6 +sl5 | 4|brown | 1|m | 100 +sl6 | 0|brown | 0.9|m | 90 +(8 rows) It's the simplest SELECT Al can do on our views, so we take this to explain the basics of view rules. The SELECT * FROM shoelace was interpreted by the parser and - produced the parsetree + produced the parse tree - SELECT shoelace.sl_name, shoelace.sl_avail, - shoelace.sl_color, shoelace.sl_len, - shoelace.sl_unit, shoelace.sl_len_cm - FROM shoelace shoelace; +SELECT shoelace.sl_name, shoelace.sl_avail, + shoelace.sl_color, shoelace.sl_len, + shoelace.sl_unit, shoelace.sl_len_cm + FROM shoelace shoelace; and this is given to the rule system. 
The rule system walks through the @@ -520,12 +520,12 @@ rule _RETshoelace with the parse tree - SELECT s.sl_name, s.sl_avail, - s.sl_color, s.sl_len, s.sl_unit, - float8mul(s.sl_len, u.un_fact) AS sl_len_cm - FROM shoelace *OLD*, shoelace *NEW*, - shoelace_data s, unit u - WHERE bpchareq(s.sl_unit, u.un_name); +SELECT s.sl_name, s.sl_avail, + s.sl_color, s.sl_len, s.sl_unit, + float8mul(s.sl_len, u.un_fact) AS sl_len_cm + FROM shoelace *OLD*, shoelace *NEW*, + shoelace_data s, unit u + WHERE bpchareq(s.sl_unit, u.un_name); Note that the parser changed the calculation and qualification into @@ -534,23 +534,23 @@ - To expand the view, the rewriter simply creates a subselect rangetable - entry containing the rule's action parsetree, and substitutes this + To expand the view, the rewriter simply creates a subselect range-table + entry containing the rule's action parse tree, and substitutes this range table entry for the original one that referenced the view. The resulting rewritten parse tree is almost the same as if Al had typed - SELECT shoelace.sl_name, shoelace.sl_avail, - shoelace.sl_color, shoelace.sl_len, - shoelace.sl_unit, shoelace.sl_len_cm - FROM (SELECT s.sl_name, - s.sl_avail, - s.sl_color, - s.sl_len, - s.sl_unit, - s.sl_len * u.un_fact AS sl_len_cm - FROM shoelace_data s, unit u - WHERE s.sl_unit = u.un_name) shoelace; +SELECT shoelace.sl_name, shoelace.sl_avail, + shoelace.sl_color, shoelace.sl_len, + shoelace.sl_unit, shoelace.sl_len_cm + FROM (SELECT s.sl_name, + s.sl_avail, + s.sl_color, + s.sl_len, + s.sl_unit, + s.sl_len * u.un_fact AS sl_len_cm + FROM shoelace_data s, unit u + WHERE s.sl_unit = u.un_name) shoelace; There is one difference however: the sub-query's range table has two @@ -558,7 +558,7 @@ participate directly in the query, since they aren't referenced by the sub-query's join tree or target list. 
The rewriter uses them to store the access permission check info that was originally present - in the rangetable entry that referenced the view. In this way, the + in the range-table entry that referenced the view. In this way, the executor will still check that the user has proper permissions to access the view, even though there's no direct use of the view in the rewritten query. @@ -566,8 +566,8 @@ That was the first rule applied. The rule system will continue checking - the remaining rangetable entries in the top query (in this example there - are no more), and it will recursively check the rangetable entries in + the remaining range-table entries in the top query (in this example there + are no more), and it will recursively check the range-table entries in the added sub-query to see if any of them reference views. (But it won't expand *OLD* or *NEW* --- otherwise we'd have infinite recursion!) In this example, there are no rewrite rules for shoelace_data or unit, @@ -590,12 +590,12 @@ We teach him what to do and he asks his database: - al_bundy=> SELECT * FROM shoe_ready WHERE total_avail >= 2; - shoename |sh_avail|sl_name |sl_avail|total_avail - ----------+--------+----------+--------+----------- - sh1 | 2|sl1 | 5| 2 - sh3 | 4|sl7 | 7| 4 - (2 rows) +al_bundy=> SELECT * FROM shoe_ready WHERE total_avail >= 2; +shoename |sh_avail|sl_name |sl_avail|total_avail +----------+--------+----------+--------+----------- +sh1 | 2|sl1 | 5| 2 +sh3 | 4|sl7 | 7| 4 +(2 rows) Al is a shoe guru and so he knows that only shoes of type sh1 @@ -607,11 +607,11 @@ The output of the parser this time is the parse tree - SELECT shoe_ready.shoename, shoe_ready.sh_avail, - shoe_ready.sl_name, shoe_ready.sl_avail, - shoe_ready.total_avail - FROM shoe_ready shoe_ready - WHERE int4ge(shoe_ready.total_avail, 2); +SELECT shoe_ready.shoename, shoe_ready.sh_avail, + shoe_ready.sl_name, shoe_ready.sl_avail, + shoe_ready.total_avail + FROM shoe_ready shoe_ready + WHERE 
int4ge(shoe_ready.total_avail, 2); The first rule applied will be the one for the @@ -619,19 +619,19 @@ parse tree - SELECT shoe_ready.shoename, shoe_ready.sh_avail, - shoe_ready.sl_name, shoe_ready.sl_avail, - shoe_ready.total_avail - FROM (SELECT rsh.shoename, - rsh.sh_avail, - rsl.sl_name, - rsl.sl_avail, - min(rsh.sh_avail, rsl.sl_avail) AS total_avail - FROM shoe rsh, shoelace rsl - WHERE rsl.sl_color = rsh.slcolor - AND rsl.sl_len_cm >= rsh.slminlen_cm - AND rsl.sl_len_cm <= rsh.slmaxlen_cm) shoe_ready - WHERE int4ge(shoe_ready.total_avail, 2); +SELECT shoe_ready.shoename, shoe_ready.sh_avail, + shoe_ready.sl_name, shoe_ready.sl_avail, + shoe_ready.total_avail + FROM (SELECT rsh.shoename, + rsh.sh_avail, + rsl.sl_name, + rsl.sl_avail, + min(rsh.sh_avail, rsl.sl_avail) AS total_avail + FROM shoe rsh, shoelace rsl + WHERE rsl.sl_color = rsh.slcolor + AND rsl.sl_len_cm >= rsh.slminlen_cm + AND rsl.sl_len_cm <= rsh.slmaxlen_cm) shoe_ready + WHERE int4ge(shoe_ready.total_avail, 2); Similarly, the rules for shoe and @@ -639,36 +639,36 @@ the sub-query, leading to a three-level final query tree: - SELECT shoe_ready.shoename, shoe_ready.sh_avail, - shoe_ready.sl_name, shoe_ready.sl_avail, - shoe_ready.total_avail - FROM (SELECT rsh.shoename, - rsh.sh_avail, - rsl.sl_name, - rsl.sl_avail, - min(rsh.sh_avail, rsl.sl_avail) AS total_avail - FROM (SELECT sh.shoename, - sh.sh_avail, - sh.slcolor, - sh.slminlen, - sh.slminlen * un.un_fact AS slminlen_cm, - sh.slmaxlen, - sh.slmaxlen * un.un_fact AS slmaxlen_cm, - sh.slunit - FROM shoe_data sh, unit un - WHERE sh.slunit = un.un_name) rsh, - (SELECT s.sl_name, - s.sl_avail, - s.sl_color, - s.sl_len, - s.sl_unit, - s.sl_len * u.un_fact AS sl_len_cm - FROM shoelace_data s, unit u - WHERE s.sl_unit = u.un_name) rsl - WHERE rsl.sl_color = rsh.slcolor - AND rsl.sl_len_cm >= rsh.slminlen_cm - AND rsl.sl_len_cm <= rsh.slmaxlen_cm) shoe_ready - WHERE int4ge(shoe_ready.total_avail, 2); +SELECT shoe_ready.shoename, 
shoe_ready.sh_avail, + shoe_ready.sl_name, shoe_ready.sl_avail, + shoe_ready.total_avail + FROM (SELECT rsh.shoename, + rsh.sh_avail, + rsl.sl_name, + rsl.sl_avail, + min(rsh.sh_avail, rsl.sl_avail) AS total_avail + FROM (SELECT sh.shoename, + sh.sh_avail, + sh.slcolor, + sh.slminlen, + sh.slminlen * un.un_fact AS slminlen_cm, + sh.slmaxlen, + sh.slmaxlen * un.un_fact AS slmaxlen_cm, + sh.slunit + FROM shoe_data sh, unit un + WHERE sh.slunit = un.un_name) rsh, + (SELECT s.sl_name, + s.sl_avail, + s.sl_color, + s.sl_len, + s.sl_unit, + s.sl_len * u.un_fact AS sl_len_cm + FROM shoelace_data s, unit u + WHERE s.sl_unit = u.un_name) rsl + WHERE rsl.sl_color = rsh.slcolor + AND rsl.sl_len_cm >= rsh.slminlen_cm + AND rsl.sl_len_cm <= rsh.slmaxlen_cm) shoe_ready + WHERE int4ge(shoe_ready.total_avail, 2); It turns out that the planner will collapse this tree into a two-level @@ -718,9 +718,9 @@ a and b, the parse trees for the two statements - SELECT t2.b FROM t1, t2 WHERE t1.a = t2.a; +SELECT t2.b FROM t1, t2 WHERE t1.a = t2.a; - UPDATE t1 SET b = t2.b WHERE t1.a = t2.a; +UPDATE t1 SET b = t2.b WHERE t1.a = t2.a; are nearly identical. @@ -759,14 +759,14 @@ and the final parse tree will read as - UPDATE t1 SET a = t1.a, b = t2.b WHERE t1.a = t2.a; +UPDATE t1 SET a = t1.a, b = t2.b WHERE t1.a = t2.a; and thus the executor run over the join will produce exactly the same result set as a - SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a; +SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a; will do. But there is a little problem in UPDATE. The executor does @@ -781,21 +781,21 @@ To resolve this problem, another entry is added to the target list - in UPDATE (and also in DELETE) statements: the current tuple ID (ctid). + in UPDATE (and also in DELETE) statements: the current tuple ID (CTID). This is a system attribute containing the file block number and position in the block for the row. Knowing the table, - the ctid can be used to retrieve the original t1 row to be updated. 
- After adding the ctid to the target list, the query actually looks like + the CTID can be used to retrieve the original t1 row to be updated. + After adding the CTID to the target list, the query actually looks like - SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; +SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; Now another detail of PostgreSQL enters the stage. At this moment, table rows aren't overwritten and this is why ABORT TRANSACTION is fast. In an UPDATE, the new result row is inserted - into the table (after stripping ctid) and in the tuple header of the row - that ctid pointed to the cmax and xmax entries are set to the current + into the table (after stripping CTID) and in the tuple header of the row + that CTID pointed to the cmax and xmax entries are set to the current command counter and current transaction ID. Thus the old row is hidden and after the transaction committed the vacuum cleaner can really move it out. @@ -915,9 +915,9 @@ Keep the syntax - CREATE RULE rule_name AS ON event - TO object [WHERE rule_qualification] - DO [INSTEAD] [action | (actions) | NOTHING]; +CREATE RULE rule_name AS ON event + TO object [WHERE rule_qualification] + DO [INSTEAD] [action | (actions) | NOTHING]; in mind. @@ -1046,20 +1046,20 @@ - The parsetrees found in the actions of the pg_rewrite + The parse trees found in the actions of the pg_rewrite system catalog are only templates. Since they can reference the - rangetable entries for NEW and OLD, some substitutions have to be made - before they can be used. For any reference to NEW, the targetlist of + range-table entries for NEW and OLD, some substitutions have to be made + before they can be used. For any reference to NEW, the target list of the original query is searched for a corresponding entry. If found, that entry's expression replaces the reference. Otherwise NEW means the same as OLD (for an UPDATE) or is replaced by NULL (for an INSERT). 
Any reference to OLD is replaced by a - reference to the rangetable entry which is the resultrelation. + reference to the range-table entry which is the result relation. After we are done applying update rules, we apply view rules to the - produced parsetree(s). Views cannot insert new update actions so + produced parse tree(s). Views cannot insert new update actions so there is no need to apply update rules to the output of view rewriting. @@ -1073,21 +1073,21 @@ an UPDATE is performed on shoelace_data. - CREATE TABLE shoelace_log ( - sl_name char(10), -- shoelace changed - sl_avail integer, -- new available value - log_who text, -- who did it - log_when timestamp -- when - ); +CREATE TABLE shoelace_log ( + sl_name char(10), -- shoelace changed + sl_avail integer, -- new available value + log_who text, -- who did it + log_when timestamp -- when +); - CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data - WHERE NEW.sl_avail != OLD.sl_avail - DO INSERT INTO shoelace_log VALUES ( - NEW.sl_name, - NEW.sl_avail, - current_user, - current_timestamp - ); +CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data + WHERE NEW.sl_avail != OLD.sl_avail + DO INSERT INTO shoelace_log VALUES ( + NEW.sl_name, + NEW.sl_avail, + current_user, + current_timestamp + ); @@ -1095,50 +1095,50 @@ Now Al does - al_bundy=> UPDATE shoelace_data SET sl_avail = 6 - al_bundy-> WHERE sl_name = 'sl7'; +al_bundy=> UPDATE shoelace_data SET sl_avail = 6 +al_bundy-> WHERE sl_name = 'sl7'; - and we look at the logtable. + and we look at the log table. - al_bundy=> SELECT * FROM shoelace_log; - sl_name |sl_avail|log_who|log_when - ----------+--------+-------+-------------------------------- - sl7 | 6|Al |Tue Oct 20 16:14:45 1998 MET DST - (1 row) +al_bundy=> SELECT * FROM shoelace_log; +sl_name |sl_avail|log_who|log_when +----------+--------+-------+-------------------------------- +sl7 | 6|Al |Tue Oct 20 16:14:45 1998 MET DST +(1 row) That's what we expected. 
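The observable behavior of the log_shoelace rule can be sketched as a small toy model (Python, purely illustrative; `update_sl_avail` is a made-up helper, not anything in PostgreSQL):

```python
# Toy model, NOT the real rule system: an UPDATE of sl_avail appends a
# log row only when the value actually changes, mirroring the rule
# qualification NEW.sl_avail != OLD.sl_avail.
shoelace_data = {"sl7": {"sl_avail": 7}}
shoelace_log = []

def update_sl_avail(sl_name, new_avail, user="Al"):
    old = shoelace_data[sl_name]["sl_avail"]
    if new_avail != old:                                 # rule qualification
        shoelace_log.append((sl_name, new_avail, user))  # rule action
    shoelace_data[sl_name]["sl_avail"] = new_avail       # original UPDATE

update_sl_avail("sl7", 6)   # value changes -> one log entry
update_sl_avail("sl7", 6)   # no change -> no log entry
print(shoelace_log)         # [('sl7', 6, 'Al')]
```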
What happened in the background is the following. - The parser created the parsetree (this time the parts of the original - parsetree are highlighted because the base of operations is the + The parser created the parse tree (this time the parts of the original + parse tree are highlighted because the base of operations is the rule action for update rules). - UPDATE shoelace_data SET sl_avail = 6 - FROM shoelace_data shoelace_data - WHERE bpchareq(shoelace_data.sl_name, 'sl7'); +UPDATE shoelace_data SET sl_avail = 6 + FROM shoelace_data shoelace_data + WHERE bpchareq(shoelace_data.sl_name, 'sl7'); There is a rule log_shoelace that is ON UPDATE with the rule qualification expression - int4ne(NEW.sl_avail, OLD.sl_avail) +int4ne(NEW.sl_avail, OLD.sl_avail) and one action - INSERT INTO shoelace_log VALUES( - *NEW*.sl_name, *NEW*.sl_avail, - current_user, current_timestamp - FROM shoelace_data *NEW*, shoelace_data *OLD*; +INSERT INTO shoelace_log VALUES( + *NEW*.sl_name, *NEW*.sl_avail, + current_user, current_timestamp + FROM shoelace_data *NEW*, shoelace_data *OLD*; This is a little strange-looking since you can't normally write INSERT ... VALUES ... FROM. The FROM clause here is just to indicate - that there are rangetable entries in the parsetree for *NEW* and *OLD*. + that there are range-table entries in the parse tree for *NEW* and *OLD*. These are needed so that they can be referenced by variables in the - INSERT command's querytree. + INSERT command's query tree. @@ -1146,27 +1146,27 @@ The rule is a qualified non-INSTEAD rule, so the rule system has to return two parse trees: the modified rule action and the original - parsetree. In the first step the range table of the original query is + parse tree. In the first step the range table of the original query is incorporated into the rule's action parse tree.
This results in - INSERT INTO shoelace_log VALUES( - *NEW*.sl_name, *NEW*.sl_avail, - current_user, current_timestamp - FROM shoelace_data *NEW*, shoelace_data *OLD*, - shoelace_data shoelace_data; +INSERT INTO shoelace_log VALUES( + *NEW*.sl_name, *NEW*.sl_avail, + current_user, current_timestamp + FROM shoelace_data *NEW*, shoelace_data *OLD*, + shoelace_data shoelace_data; In step 2 the rule qualification is added to it, so the result set is restricted to rows where sl_avail changes. - INSERT INTO shoelace_log VALUES( - *NEW*.sl_name, *NEW*.sl_avail, - current_user, current_timestamp - FROM shoelace_data *NEW*, shoelace_data *OLD*, - shoelace_data shoelace_data - WHERE int4ne(*NEW*.sl_avail, *OLD*.sl_avail); +INSERT INTO shoelace_log VALUES( + *NEW*.sl_name, *NEW*.sl_avail, + current_user, current_timestamp + FROM shoelace_data *NEW*, shoelace_data *OLD*, + shoelace_data shoelace_data + WHERE int4ne(*NEW*.sl_avail, *OLD*.sl_avail); This is even stranger-looking, since INSERT ... VALUES doesn't have @@ -1179,13 +1179,13 @@ by the original parse tree. - INSERT INTO shoelace_log VALUES( - *NEW*.sl_name, *NEW*.sl_avail, - current_user, current_timestamp - FROM shoelace_data *NEW*, shoelace_data *OLD*, - shoelace_data shoelace_data - WHERE int4ne(*NEW*.sl_avail, *OLD*.sl_avail) - AND bpchareq(shoelace_data.sl_name, 'sl7'); +INSERT INTO shoelace_log VALUES( + *NEW*.sl_name, *NEW*.sl_avail, + current_user, current_timestamp + FROM shoelace_data *NEW*, shoelace_data *OLD*, + shoelace_data shoelace_data + WHERE int4ne(*NEW*.sl_avail, *OLD*.sl_avail) + AND bpchareq(shoelace_data.sl_name, 'sl7'); Step 4 substitutes NEW references by the target list entries from the @@ -1193,25 +1193,25 @@ from the result relation. 
- INSERT INTO shoelace_log VALUES( - shoelace_data.sl_name, 6, - current_user, current_timestamp - FROM shoelace_data *NEW*, shoelace_data *OLD*, - shoelace_data shoelace_data - WHERE int4ne(6, *OLD*.sl_avail) - AND bpchareq(shoelace_data.sl_name, 'sl7'); +INSERT INTO shoelace_log VALUES( + shoelace_data.sl_name, 6, + current_user, current_timestamp + FROM shoelace_data *NEW*, shoelace_data *OLD*, + shoelace_data shoelace_data + WHERE int4ne(6, *OLD*.sl_avail) + AND bpchareq(shoelace_data.sl_name, 'sl7'); Step 5 changes OLD references into result relation references. - INSERT INTO shoelace_log VALUES( - shoelace_data.sl_name, 6, - current_user, current_timestamp - FROM shoelace_data *NEW*, shoelace_data *OLD*, - shoelace_data shoelace_data - WHERE int4ne(6, shoelace_data.sl_avail) - AND bpchareq(shoelace_data.sl_name, 'sl7'); +INSERT INTO shoelace_log VALUES( + shoelace_data.sl_name, 6, + current_user, current_timestamp + FROM shoelace_data *NEW*, shoelace_data *OLD*, + shoelace_data shoelace_data + WHERE int4ne(6, shoelace_data.sl_avail) + AND bpchareq(shoelace_data.sl_name, 'sl7'); That's it. 
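Steps 4 and 5 can be sketched at the string level (a toy only; the real system substitutes expressions inside parse trees, and the helper names below are made up for this illustration):

```python
# String-level sketch of substitution steps 4 and 5:
# step 4: NEW.col -> the original query's target-list expression for col,
#         falling back to the old (result-relation) value if col is not
#         assigned by the UPDATE;
# step 5: remaining OLD references -> the result relation.
action = ("INSERT INTO shoelace_log VALUES("
          "NEW.sl_name, NEW.sl_avail, current_user, current_timestamp) "
          "WHERE NEW.sl_avail != OLD.sl_avail")

targetlist = {"sl_avail": "6"}   # from: UPDATE shoelace_data SET sl_avail = 6
result_relation = "shoelace_data"

def substitute(action, targetlist, result_relation,
               columns=("sl_name", "sl_avail")):
    for col in columns:
        # Step 4: assigned expression, else the result relation's old value
        expr = targetlist.get(col, f"{result_relation}.{col}")
        action = action.replace(f"NEW.{col}", expr)
    # Step 5: OLD.col -> result_relation.col
    return action.replace("OLD.", f"{result_relation}.")

print(substitute(action, targetlist, result_relation))
```

The printed query matches the step-5 result above (modulo the operator-name notation): sl_avail references become the constant 6, and sl_name and the OLD reference both point at shoelace_data.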
Since the rule is not INSTEAD, we also output the @@ -1219,15 +1219,15 @@ is a list of two parse trees that are the same as the statements: - INSERT INTO shoelace_log VALUES( - shoelace_data.sl_name, 6, - current_user, current_timestamp - FROM shoelace_data - WHERE 6 != shoelace_data.sl_avail - AND shoelace_data.sl_name = 'sl7'; +INSERT INTO shoelace_log VALUES( + shoelace_data.sl_name, 6, + current_user, current_timestamp + FROM shoelace_data + WHERE 6 != shoelace_data.sl_avail + AND shoelace_data.sl_name = 'sl7'; - UPDATE shoelace_data SET sl_avail = 6 - WHERE sl_name = 'sl7'; +UPDATE shoelace_data SET sl_avail = 6 + WHERE sl_name = 'sl7'; These are executed in this order and that is exactly what @@ -1235,8 +1235,8 @@ added ensure that if the original query would be, say, - UPDATE shoelace_data SET sl_color = 'green' - WHERE sl_name = 'sl7'; +UPDATE shoelace_data SET sl_color = 'green' + WHERE sl_name = 'sl7'; no log entry would get written. This @@ -1245,12 +1245,12 @@ shoelace_data.sl_avail resulting in the extra query - INSERT INTO shoelace_log VALUES( - shoelace_data.sl_name, shoelace_data.sl_avail, - current_user, current_timestamp) - FROM shoelace_data - WHERE shoelace_data.sl_avail != shoelace_data.sl_avail - AND shoelace_data.sl_name = 'sl7'; +INSERT INTO shoelace_log VALUES( + shoelace_data.sl_name, shoelace_data.sl_avail, + current_user, current_timestamp) + FROM shoelace_data + WHERE shoelace_data.sl_avail != shoelace_data.sl_avail + AND shoelace_data.sl_name = 'sl7'; and that qualification will never be true. It will also @@ -1258,8 +1258,8 @@ would issue the command - UPDATE shoelace_data SET sl_avail = 0 - WHERE sl_color = 'black'; +UPDATE shoelace_data SET sl_avail = 0 + WHERE sl_color = 'black'; four rows in fact get updated (sl1, sl2, sl3 and sl4). 
@@ -1268,12 +1268,12 @@ in the extra parse tree - INSERT INTO shoelace_log SELECT - shoelace_data.sl_name, 0, - current_user, current_timestamp - FROM shoelace_data - WHERE 0 != shoelace_data.sl_avail - AND shoelace_data.sl_color = 'black'; +INSERT INTO shoelace_log SELECT + shoelace_data.sl_name, 0, + current_user, current_timestamp + FROM shoelace_data + WHERE 0 != shoelace_data.sl_avail + AND shoelace_data.sl_color = 'black'; This parse tree will surely insert three new log entries. And @@ -1301,12 +1301,12 @@ thrown away. We create the rules - CREATE RULE shoe_ins_protect AS ON INSERT TO shoe - DO INSTEAD NOTHING; - CREATE RULE shoe_upd_protect AS ON UPDATE TO shoe - DO INSTEAD NOTHING; - CREATE RULE shoe_del_protect AS ON DELETE TO shoe - DO INSTEAD NOTHING; +CREATE RULE shoe_ins_protect AS ON INSERT TO shoe + DO INSTEAD NOTHING; +CREATE RULE shoe_upd_protect AS ON UPDATE TO shoe + DO INSTEAD NOTHING; +CREATE RULE shoe_del_protect AS ON DELETE TO shoe + DO INSTEAD NOTHING; If Al now tries to do any of these operations on the view @@ -1338,29 +1338,29 @@ the following rules: - CREATE RULE shoelace_ins AS ON INSERT TO shoelace - DO INSTEAD - INSERT INTO shoelace_data VALUES ( - NEW.sl_name, - NEW.sl_avail, - NEW.sl_color, - NEW.sl_len, - NEW.sl_unit); +CREATE RULE shoelace_ins AS ON INSERT TO shoelace + DO INSTEAD + INSERT INTO shoelace_data VALUES ( + NEW.sl_name, + NEW.sl_avail, + NEW.sl_color, + NEW.sl_len, + NEW.sl_unit); - CREATE RULE shoelace_upd AS ON UPDATE TO shoelace - DO INSTEAD - UPDATE shoelace_data SET - sl_name = NEW.sl_name, - sl_avail = NEW.sl_avail, - sl_color = NEW.sl_color, - sl_len = NEW.sl_len, - sl_unit = NEW.sl_unit - WHERE sl_name = OLD.sl_name; +CREATE RULE shoelace_upd AS ON UPDATE TO shoelace + DO INSTEAD + UPDATE shoelace_data SET + sl_name = NEW.sl_name, + sl_avail = NEW.sl_avail, + sl_color = NEW.sl_color, + sl_len = NEW.sl_len, + sl_unit = NEW.sl_unit + WHERE sl_name = OLD.sl_name; - CREATE RULE shoelace_del AS ON DELETE TO 
shoelace - DO INSTEAD - DELETE FROM shoelace_data - WHERE sl_name = OLD.sl_name; +CREATE RULE shoelace_del AS ON DELETE TO shoelace + DO INSTEAD + DELETE FROM shoelace_data + WHERE sl_name = OLD.sl_name; Now there is a pack of shoelaces arriving in Al's shop and it has @@ -1371,83 +1371,83 @@ trick. The create commands for these are: - CREATE TABLE shoelace_arrive ( - arr_name char(10), - arr_quant integer - ); +CREATE TABLE shoelace_arrive ( + arr_name char(10), + arr_quant integer +); - CREATE TABLE shoelace_ok ( - ok_name char(10), - ok_quant integer - ); +CREATE TABLE shoelace_ok ( + ok_name char(10), + ok_quant integer +); - CREATE RULE shoelace_ok_ins AS ON INSERT TO shoelace_ok - DO INSTEAD - UPDATE shoelace SET - sl_avail = sl_avail + NEW.ok_quant - WHERE sl_name = NEW.ok_name; +CREATE RULE shoelace_ok_ins AS ON INSERT TO shoelace_ok + DO INSTEAD + UPDATE shoelace SET + sl_avail = sl_avail + NEW.ok_quant + WHERE sl_name = NEW.ok_name; Now Al can sit down and do whatever until - al_bundy=> SELECT * FROM shoelace_arrive; - arr_name |arr_quant - ----------+--------- - sl3 | 10 - sl6 | 20 - sl8 | 20 - (3 rows) +al_bundy=> SELECT * FROM shoelace_arrive; +arr_name |arr_quant +----------+--------- +sl3 | 10 +sl6 | 20 +sl8 | 20 +(3 rows) is exactly what's on the part list. 
We take a quick look at the current data, - al_bundy=> SELECT * FROM shoelace; - sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm - ----------+--------+----------+------+--------+--------- - sl1 | 5|black | 80|cm | 80 - sl2 | 6|black | 100|cm | 100 - sl7 | 6|brown | 60|cm | 60 - sl3 | 0|black | 35|inch | 88.9 - sl4 | 8|black | 40|inch | 101.6 - sl8 | 1|brown | 40|inch | 101.6 - sl5 | 4|brown | 1|m | 100 - sl6 | 0|brown | 0.9|m | 90 - (8 rows) +al_bundy=> SELECT * FROM shoelace; +sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm +----------+--------+----------+------+--------+--------- +sl1 | 5|black | 80|cm | 80 +sl2 | 6|black | 100|cm | 100 +sl7 | 6|brown | 60|cm | 60 +sl3 | 0|black | 35|inch | 88.9 +sl4 | 8|black | 40|inch | 101.6 +sl8 | 1|brown | 40|inch | 101.6 +sl5 | 4|brown | 1|m | 100 +sl6 | 0|brown | 0.9|m | 90 +(8 rows) move the arrived shoelaces in - al_bundy=> INSERT INTO shoelace_ok SELECT * FROM shoelace_arrive; +al_bundy=> INSERT INTO shoelace_ok SELECT * FROM shoelace_arrive; and check the results - al_bundy=> SELECT * FROM shoelace ORDER BY sl_name; - sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm - ----------+--------+----------+------+--------+--------- - sl1 | 5|black | 80|cm | 80 - sl2 | 6|black | 100|cm | 100 - sl7 | 6|brown | 60|cm | 60 - sl4 | 8|black | 40|inch | 101.6 - sl3 | 10|black | 35|inch | 88.9 - sl8 | 21|brown | 40|inch | 101.6 - sl5 | 4|brown | 1|m | 100 - sl6 | 20|brown | 0.9|m | 90 - (8 rows) +al_bundy=> SELECT * FROM shoelace ORDER BY sl_name; +sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm +----------+--------+----------+------+--------+--------- +sl1 | 5|black | 80|cm | 80 +sl2 | 6|black | 100|cm | 100 +sl7 | 6|brown | 60|cm | 60 +sl4 | 8|black | 40|inch | 101.6 +sl3 | 10|black | 35|inch | 88.9 +sl8 | 21|brown | 40|inch | 101.6 +sl5 | 4|brown | 1|m | 100 +sl6 | 20|brown | 0.9|m | 90 +(8 rows) - al_bundy=> SELECT * FROM shoelace_log; - sl_name |sl_avail|log_who|log_when - 
----------+--------+-------+-------------------------------- - sl7 | 6|Al |Tue Oct 20 19:14:45 1998 MET DST - sl3 | 10|Al |Tue Oct 20 19:25:16 1998 MET DST - sl6 | 20|Al |Tue Oct 20 19:25:16 1998 MET DST - sl8 | 21|Al |Tue Oct 20 19:25:16 1998 MET DST - (4 rows) +al_bundy=> SELECT * FROM shoelace_log; +sl_name |sl_avail|log_who|log_when +----------+--------+-------+-------------------------------- +sl7 | 6|Al |Tue Oct 20 19:14:45 1998 MET DST +sl3 | 10|Al |Tue Oct 20 19:25:16 1998 MET DST +sl6 | 20|Al |Tue Oct 20 19:25:16 1998 MET DST +sl8 | 21|Al |Tue Oct 20 19:25:16 1998 MET DST +(4 rows) It's a long way from the one INSERT ... SELECT to these @@ -1455,21 +1455,21 @@ document (but not the last example :-). First there was the parser's output - INSERT INTO shoelace_ok SELECT - shoelace_arrive.arr_name, shoelace_arrive.arr_quant - FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok; +INSERT INTO shoelace_ok SELECT + shoelace_arrive.arr_name, shoelace_arrive.arr_quant + FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok; Now the first rule shoelace_ok_ins is applied and turns it into - UPDATE shoelace SET - sl_avail = int4pl(shoelace.sl_avail, shoelace_arrive.arr_quant) - FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok, - shoelace_ok *OLD*, shoelace_ok *NEW*, - shoelace shoelace - WHERE bpchareq(shoelace.sl_name, showlace_arrive.arr_name); +UPDATE shoelace SET + sl_avail = int4pl(shoelace.sl_avail, shoelace_arrive.arr_quant) + FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok, + shoelace_ok *OLD*, shoelace_ok *NEW*, + shoelace shoelace + WHERE bpchareq(shoelace.sl_name, showlace_arrive.arr_name); and throws away the original INSERT on shoelace_ok. 
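The effect of an unconditional INSTEAD rule, as opposed to the non-INSTEAD log rule seen earlier, can be sketched as a minimal toy (assumed simplification; `apply_rule` is a made-up helper):

```python
# Toy sketch: at the query-list level, an unconditional INSTEAD rule
# discards the original parse tree, while a non-INSTEAD rule's action is
# executed in addition to (here: in front of) the original query.
def apply_rule(query, rule_action, instead):
    if instead:
        return [rule_action]        # original query is thrown away
    return [rule_action, query]     # both survive, action first

# shoelace_ok_ins is INSTEAD, so only the rewritten UPDATE survives:
result = apply_rule("INSERT INTO shoelace_ok ...",
                    "UPDATE shoelace SET sl_avail = sl_avail + ...",
                    instead=True)
print(result)
```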
@@ -1477,18 +1477,18 @@ the second applied rule shoelace_upd produced - UPDATE shoelace_data SET - sl_name = shoelace.sl_name, - sl_avail = int4pl(shoelace.sl_avail, shoelace_arrive.arr_quant), - sl_color = shoelace.sl_color, - sl_len = shoelace.sl_len, - sl_unit = shoelace.sl_unit - FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok, - shoelace_ok *OLD*, shoelace_ok *NEW*, - shoelace shoelace, shoelace *OLD*, - shoelace *NEW*, shoelace_data showlace_data - WHERE bpchareq(shoelace.sl_name, showlace_arrive.arr_name) - AND bpchareq(shoelace_data.sl_name, shoelace.sl_name); +UPDATE shoelace_data SET + sl_name = shoelace.sl_name, + sl_avail = int4pl(shoelace.sl_avail, shoelace_arrive.arr_quant), + sl_color = shoelace.sl_color, + sl_len = shoelace.sl_len, + sl_unit = shoelace.sl_unit + FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok, + shoelace_ok *OLD*, shoelace_ok *NEW*, + shoelace shoelace, shoelace *OLD*, + shoelace *NEW*, shoelace_data showlace_data + WHERE bpchareq(shoelace.sl_name, showlace_arrive.arr_name) + AND bpchareq(shoelace_data.sl_name, shoelace.sl_name); Again it's an INSTEAD rule and the previous parse tree is trashed. 
@@ -1497,20 +1497,20 @@ and applies the rule _RETshoelace on it and we get - UPDATE shoelace_data SET - sl_name = s.sl_name, - sl_avail = int4pl(s.sl_avail, shoelace_arrive.arr_quant), - sl_color = s.sl_color, - sl_len = s.sl_len, - sl_unit = s.sl_unit - FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok, - shoelace_ok *OLD*, shoelace_ok *NEW*, - shoelace shoelace, shoelace *OLD*, - shoelace *NEW*, shoelace_data showlace_data, - shoelace *OLD*, shoelace *NEW*, - shoelace_data s, unit u - WHERE bpchareq(s.sl_name, showlace_arrive.arr_name) - AND bpchareq(shoelace_data.sl_name, s.sl_name); +UPDATE shoelace_data SET + sl_name = s.sl_name, + sl_avail = int4pl(s.sl_avail, shoelace_arrive.arr_quant), + sl_color = s.sl_color, + sl_len = s.sl_len, + sl_unit = s.sl_unit + FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok, + shoelace_ok *OLD*, shoelace_ok *NEW*, + shoelace shoelace, shoelace *OLD*, + shoelace *NEW*, shoelace_data showlace_data, + shoelace *OLD*, shoelace *NEW*, + shoelace_data s, unit u + WHERE bpchareq(s.sl_name, showlace_arrive.arr_name) + AND bpchareq(shoelace_data.sl_name, s.sl_name); Again an update rule has been applied and so the wheel @@ -1519,23 +1519,22 @@ parse tree - INSERT INTO shoelace_log SELECT - s.sl_name, - int4pl(s.sl_avail, shoelace_arrive.arr_quant), - current_user, - current_timestamp - FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok, - shoelace_ok *OLD*, shoelace_ok *NEW*, - shoelace shoelace, shoelace *OLD*, - shoelace *NEW*, shoelace_data showlace_data, - shoelace *OLD*, shoelace *NEW*, - shoelace_data s, unit u, - shoelace_data *OLD*, shoelace_data *NEW* - shoelace_log shoelace_log - WHERE bpchareq(s.sl_name, showlace_arrive.arr_name) - AND bpchareq(shoelace_data.sl_name, s.sl_name); - AND int4ne(int4pl(s.sl_avail, shoelace_arrive.arr_quant), - s.sl_avail); +INSERT INTO shoelace_log SELECT + s.sl_name, + int4pl(s.sl_avail, shoelace_arrive.arr_quant), + current_user, + current_timestamp + FROM 
shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok, + shoelace_ok *OLD*, shoelace_ok *NEW*, + shoelace shoelace, shoelace *OLD*, + shoelace *NEW*, shoelace_data showlace_data, + shoelace *OLD*, shoelace *NEW*, + shoelace_data s, unit u, + shoelace_data *OLD*, shoelace_data *NEW* + shoelace_log shoelace_log + WHERE bpchareq(s.sl_name, showlace_arrive.arr_name) + AND bpchareq(shoelace_data.sl_name, s.sl_name); + AND int4ne(int4pl(s.sl_avail, shoelace_arrive.arr_quant), s.sl_avail); After that the rule system runs out of rules and returns the @@ -1544,24 +1543,24 @@ SQL statements - INSERT INTO shoelace_log SELECT - s.sl_name, - s.sl_avail + shoelace_arrive.arr_quant, - current_user, - current_timestamp - FROM shoelace_arrive shoelace_arrive, shoelace_data shoelace_data, - shoelace_data s - WHERE s.sl_name = shoelace_arrive.arr_name - AND shoelace_data.sl_name = s.sl_name - AND s.sl_avail + shoelace_arrive.arr_quant != s.sl_avail; +INSERT INTO shoelace_log SELECT + s.sl_name, + s.sl_avail + shoelace_arrive.arr_quant, + current_user, + current_timestamp + FROM shoelace_arrive shoelace_arrive, shoelace_data shoelace_data, + shoelace_data s + WHERE s.sl_name = shoelace_arrive.arr_name + AND shoelace_data.sl_name = s.sl_name + AND s.sl_avail + shoelace_arrive.arr_quant != s.sl_avail; - UPDATE shoelace_data SET - sl_avail = shoelace_data.sl_avail + shoelace_arrive.arr_quant - FROM shoelace_arrive shoelace_arrive, - shoelace_data shoelace_data, - shoelace_data s - WHERE s.sl_name = shoelace_arrive.sl_name - AND shoelace_data.sl_name = s.sl_name; +UPDATE shoelace_data SET + sl_avail = shoelace_data.sl_avail + shoelace_arrive.arr_quant + FROM shoelace_arrive shoelace_arrive, + shoelace_data shoelace_data, + shoelace_data s + WHERE s.sl_name = shoelace_arrive.sl_name + AND shoelace_data.sl_name = s.sl_name; The result is that data coming from one relation inserted into another, @@ -1621,10 +1620,10 @@ Merge Join his database for pink ones. 
- al_bundy=> INSERT INTO shoelace VALUES - al_bundy-> ('sl9', 0, 'pink', 35.0, 'inch', 0.0); - al_bundy=> INSERT INTO shoelace VALUES - al_bundy-> ('sl10', 1000, 'magenta', 40.0, 'inch', 0.0); +al_bundy=> INSERT INTO shoelace VALUES +al_bundy-> ('sl9', 0, 'pink', 35.0, 'inch', 0.0); +al_bundy=> INSERT INTO shoelace VALUES +al_bundy-> ('sl10', 1000, 'magenta', 40.0, 'inch', 0.0); - Since this happens often, we must lookup for shoelace entries, + Since this happens often, we must look up shoelace entries, @@ -1633,19 +1632,19 @@ Merge Join for it. The view for this is - CREATE VIEW shoelace_obsolete AS - SELECT * FROM shoelace WHERE NOT EXISTS - (SELECT shoename FROM shoe WHERE slcolor = sl_color); +CREATE VIEW shoelace_obsolete AS + SELECT * FROM shoelace WHERE NOT EXISTS + (SELECT shoename FROM shoe WHERE slcolor = sl_color); Its output is - al_bundy=> SELECT * FROM shoelace_obsolete; - sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm - ----------+--------+----------+------+--------+--------- - sl9 | 0|pink | 35|inch | 88.9 - sl10 | 1000|magenta | 40|inch | 101.6 +al_bundy=> SELECT * FROM shoelace_obsolete; +sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm +----------+--------+----------+------+--------+--------- +sl9 | 0|pink | 35|inch | 88.9 +sl10 | 1000|magenta | 40|inch | 101.6 - For the 1000 magenta shoelaces we must debt Al before we can + For the 1000 magenta shoelaces we must debit Al before we can @@ -1654,34 +1653,34 @@ Merge Join we don't delete it directly.
Instead we create one more view - CREATE VIEW shoelace_candelete AS - SELECT * FROM shoelace_obsolete WHERE sl_avail = 0; +CREATE VIEW shoelace_candelete AS + SELECT * FROM shoelace_obsolete WHERE sl_avail = 0; and do it this way: - DELETE FROM shoelace WHERE EXISTS - (SELECT * FROM shoelace_candelete - WHERE sl_name = shoelace.sl_name); +DELETE FROM shoelace WHERE EXISTS + (SELECT * FROM shoelace_candelete + WHERE sl_name = shoelace.sl_name); - Voilą: + Voilà: - al_bundy=> SELECT * FROM shoelace; - sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm - ----------+--------+----------+------+--------+--------- - sl1 | 5|black | 80|cm | 80 - sl2 | 6|black | 100|cm | 100 - sl7 | 6|brown | 60|cm | 60 - sl4 | 8|black | 40|inch | 101.6 - sl3 | 10|black | 35|inch | 88.9 - sl8 | 21|brown | 40|inch | 101.6 - sl10 | 1000|magenta | 40|inch | 101.6 - sl5 | 4|brown | 1|m | 100 - sl6 | 20|brown | 0.9|m | 90 - (9 rows) +al_bundy=> SELECT * FROM shoelace; +sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm +----------+--------+----------+------+--------+--------- +sl1 | 5|black | 80|cm | 80 +sl2 | 6|black | 100|cm | 100 +sl7 | 6|brown | 60|cm | 60 +sl4 | 8|black | 40|inch | 101.6 +sl3 | 10|black | 35|inch | 88.9 +sl8 | 21|brown | 40|inch | 101.6 +sl10 | 1000|magenta | 40|inch | 101.6 +sl5 | 4|brown | 1|m | 100 +sl6 | 20|brown | 0.9|m | 90 +(9 rows) A DELETE on a view, with a subselect qualification that @@ -1738,10 +1737,10 @@ Merge Join He can construct the following: - CREATE TABLE phone_data (person text, phone text, private bool); - CREATE VIEW phone_number AS - SELECT person, phone FROM phone_data WHERE NOT private; - GRANT SELECT ON phone_number TO secretary; +CREATE TABLE phone_data (person text, phone text, private bool); +CREATE VIEW phone_number AS + SELECT person, phone FROM phone_data WHERE NOT private; +GRANT SELECT ON phone_number TO secretary; Nobody except him (and the database superusers) can access the @@ -1840,15 +1839,15 @@ Merge Join For example: There are
two tables - CREATE TABLE computer ( - hostname text, -- indexed - manufacturer text -- indexed - ); +CREATE TABLE computer ( + hostname text, -- indexed + manufacturer text -- indexed +); - CREATE TABLE software ( - software text, -- indexed - hostname text -- indexed - ); +CREATE TABLE software ( + software text, -- indexed + hostname text -- indexed +); Both tables have many @@ -1860,21 +1859,21 @@ Merge Join deleted from computer, it can use the statement - DELETE FROM software WHERE hostname = $1; +DELETE FROM software WHERE hostname = $1; in a prepared and saved plan and pass the hostname in the parameter. The rule would be written as - CREATE RULE computer_del AS ON DELETE TO computer - DO DELETE FROM software WHERE hostname = OLD.hostname; +CREATE RULE computer_del AS ON DELETE TO computer + DO DELETE FROM software WHERE hostname = OLD.hostname; Now we look at different types of deletes. In the case of a - DELETE FROM computer WHERE hostname = 'mypc.local.net'; +DELETE FROM computer WHERE hostname = 'mypc.local.net'; the table computer is scanned by index (fast) and the @@ -1882,17 +1881,17 @@ Merge Join The extra query from the rule would be a - DELETE FROM software WHERE computer.hostname = 'mypc.local.net' - AND software.hostname = computer.hostname; +DELETE FROM software WHERE computer.hostname = 'mypc.local.net' + AND software.hostname = computer.hostname; Since there are appropriate indexes setup, the planner will create a plan of - Nestloop - -> Index Scan using comp_hostidx on computer - -> Index Scan using soft_hostidx on software +Nestloop + -> Index Scan using comp_hostidx on computer + -> Index Scan using soft_hostidx on software So there would be not that much difference in speed between @@ -1901,31 +1900,31 @@ Merge Join with 'old'. There are two possible queries to do that. 
One is - DELETE FROM computer WHERE hostname >= 'old' - AND hostname < 'ole' +DELETE FROM computer WHERE hostname >= 'old' + AND hostname < 'ole' Where the plan for the rule query will be a - Hash Join - -> Seq Scan on software - -> Hash - -> Index Scan using comp_hostidx on computer +Hash Join + -> Seq Scan on software + -> Hash + -> Index Scan using comp_hostidx on computer The other possible query is a - DELETE FROM computer WHERE hostname ~ '^old'; +DELETE FROM computer WHERE hostname ~ '^old'; with the execution plan - Nestloop - -> Index Scan using comp_hostidx on computer - -> Index Scan using soft_hostidx on software +Nestloop + -> Index Scan using comp_hostidx on computer + -> Index Scan using soft_hostidx on software This shows, that the planner does not realize that the @@ -1948,7 +1947,7 @@ Merge Join The last query we look at is a - DELETE FROM computer WHERE manufacurer = 'bim'; +DELETE FROM computer WHERE manufacturer = 'bim'; Again this could result in many rows to be deleted from @@ -1957,16 +1956,16 @@ Merge Join two index scans.
Only using another index on computer: - Nestloop - -> Index Scan using comp_manufidx on computer - -> Index Scan using soft_hostidx on software +Nestloop + -> Index Scan using comp_manufidx on computer + -> Index Scan using soft_hostidx on software resulting from the rules query - DELETE FROM software WHERE computer.manufacurer = 'bim' - AND software.hostname = computer.hostname; +DELETE FROM software WHERE computer.manufacturer = 'bim' + AND software.hostname = computer.hostname; In any of these cases, the extra queries from the rule system will be diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index 9c8b3d131b..2c0441fdcc 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -1,5 +1,5 @@ @@ -187,7 +187,7 @@ $ postmaster -D /usr/local/pgsql/data $ postmaster -D /usr/local/pgsql/data > logfile 2>&1 & - It is an extremely good idea to keep the server's stdout and stderr + It is an extremely good idea to keep the server's stdout and stderr output around somewhere, as suggested here. It will help both for auditing purposes and to diagnose problems. (See for a more thorough discussion @@ -1208,9 +1208,9 @@ dynamic_library_path = '/usr/local/lib/postgresql:/home/my_project/lib:$libdir' It should be noted that the performance penalty from doing - fsyncs is considerably less in PostgreSQL version + fsyncs is considerably less in PostgreSQL version 7.1 than it was in prior releases. If you previously suppressed - fsyncs because of performance problems, you may wish to reconsider + fsyncs because of performance problems, you may wish to reconsider your choice. @@ -1263,9 +1263,9 @@ dynamic_library_path = '/usr/local/lib/postgresql:/home/my_project/lib:$libdir' subprocess. The default is 1000. The limit actually used by the code is the smaller of this setting and the result of sysconf(_SC_OPEN_MAX).
- Therefore, on systems where sysconf returns a reasonable limit, + Therefore, on systems where sysconf returns a reasonable limit, you don't need to worry about this setting. But on some platforms - (notably, most BSD systems), sysconf returns a value that is much + (notably, most BSD systems), sysconf returns a value that is much larger than the system can really support when a large number of processes all try to open that many files. If you find yourself seeing Too many open files failures, try reducing this @@ -1599,7 +1599,7 @@ dynamic_library_path = '/usr/local/lib/postgresql:/home/my_project/lib:$libdir' Time delay between writing a commit record to the WAL buffer and flushing the buffer out to disk, in microseconds. A nonzero delay - allows multiple transactions to be committed with only one fsync, + allows multiple transactions to be committed with only one fsync system call, if system load is high enough that additional transactions become ready to commit within the given interval. But the delay is just wasted time if no other transactions become ready to commit. @@ -1659,10 +1659,10 @@ dynamic_library_path = '/usr/local/lib/postgresql:/home/my_project/lib:$libdir' Method used for forcing WAL updates out to disk. Possible values are - FSYNC (call fsync() at each commit), - FDATASYNC (call fdatasync() at each commit), - OPEN_SYNC (write WAL files with open() option O_SYNC), or - OPEN_DATASYNC (write WAL files with open() option O_DSYNC). + FSYNC (call fsync() at each commit), + FDATASYNC (call fdatasync() at each commit), + OPEN_SYNC (write WAL files with open() option O_SYNC), or + OPEN_DATASYNC (write WAL files with open() option O_DSYNC). Not all of these choices are available on all platforms. This option can only be set at server start or in the postgresql.conf file. 
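The wal_sync_method options touched by the runtime.sgml hunks above correspond to plain system calls. As a hedged illustration (not PostgreSQL code; the file name and record contents are invented for the sketch), the following Python fragment contrasts the explicit-flush strategy (FSYNC, or FDATASYNC via os.fdatasync) with the synchronous-open strategy (OPEN_SYNC; O_DSYNC, where available, matches OPEN_DATASYNC):

```python
import os
import tempfile

# Illustrative only: a scratch file stands in for a WAL segment.
path = os.path.join(tempfile.mkdtemp(), "wal-demo")

# FSYNC-style: ordinary write, then an explicit flush at commit time.
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
os.write(fd, b"commit record 1\n")
os.fsync(fd)          # force data and metadata out to disk
os.close(fd)

# OPEN_SYNC-style: the file is opened with O_SYNC, so every write()
# is flushed implicitly, with no separate sync call needed.
fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_SYNC)
os.write(fd, b"commit record 2\n")
os.close(fd)

with open(path, "rb") as f:
    print(f.read())
```

The COMMIT_DELAY setting described in the same hunks simply postpones that flush slightly, so that several concurrently committing transactions can share a single one.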
diff --git a/doc/src/sgml/sources.sgml b/doc/src/sgml/sources.sgml index 94c5d1883c..c07264e5db 100644 --- a/doc/src/sgml/sources.sgml +++ b/doc/src/sgml/sources.sgml @@ -1,5 +1,5 @@ @@ -14,7 +14,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/sources.sgml,v 2.5 2001/11/21 06:09:45 thom - For emacs, add the following (or something similar) + For Emacs, add the following (or something similar) to your ~/.emacs initialization file: diff --git a/doc/src/sgml/start.sgml b/doc/src/sgml/start.sgml index 54af930dcd..439f65e98f 100644 --- a/doc/src/sgml/start.sgml +++ b/doc/src/sgml/start.sgml @@ -1,5 +1,5 @@ @@ -152,7 +152,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/start.sgml,v 1.21 2001/11/28 20:49:10 peter To create a new database, in this example named - mydb, you use the following command: + mydb, you use the following command: $ createdb mydb @@ -244,7 +244,7 @@ createdb: database creation failed If you don't want to use your database anymore you can remove it. For example, if you are the owner (creator) of the database - mydb, you can destroy it using the following + mydb, you can destroy it using the following command: $ dropdb mydb @@ -272,7 +272,7 @@ createdb: database creation failed Running the PostgreSQL interactive - terminal program, called psql, which allows you + terminal program, called psql, which allows you to interactively enter, edit, and execute SQL commands. @@ -300,7 +300,7 @@ createdb: database creation failed You probably want to start up psql, to try out the examples in this tutorial. It can be activated for the - mydb database by typing the command: + mydb database by typing the command: $ psql mydb @@ -346,7 +346,7 @@ mydb=# The last line printed out by psql is the prompt, and it indicates that psql is listening to you and that you can type SQL queries into a - workspace maintained by psql. Try out these + work space maintained by psql. 
Try out these commands: version diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml index 9c14bde93f..72fcc5da5f 100644 --- a/doc/src/sgml/syntax.sgml +++ b/doc/src/sgml/syntax.sgml @@ -1,5 +1,5 @@ @@ -1019,7 +1019,7 @@ sqrt(2) - A type cast specifies a conversion from one datatype to another. + A type cast specifies a conversion from one data type to another. PostgreSQL accepts two equivalent syntaxes for type casts: @@ -1039,8 +1039,8 @@ CAST ( expression AS type . A cast applied to an unadorned string literal represents the initial assignment of a type to a literal constant value, and so it will succeed for any type - (if the string literal's contents are acceptable input syntax for the - datatype). + (if the contents of the string literal are acceptable input syntax for the + data type). diff --git a/doc/src/sgml/typeconv.sgml b/doc/src/sgml/typeconv.sgml index 4b9f2f0c6a..031be06a2e 100644 --- a/doc/src/sgml/typeconv.sgml +++ b/doc/src/sgml/typeconv.sgml @@ -43,7 +43,7 @@ has an associated data type which determines its behavior and allowed usage. PostgreSQL has an extensible type system that is much more general and flexible than other RDBMS implementations. Hence, most type conversion behavior in PostgreSQL -should be governed by general rules rather than by ad-hoc heuristics, to allow +should be governed by general rules rather than by ad hoc heuristics, to allow mixed-type expressions to be meaningful even with user-defined types. @@ -178,7 +178,7 @@ Implicit conversions should never have surprising or unpredictable outcomes. -User-defined types, of which the parser has no a-priori knowledge, should be +User-defined types, of which the parser has no a priori knowledge, should be higher in the type hierarchy. In mixed-type expressions, native types shall always be converted to a user-defined type (of course, only if conversion is necessary). @@ -410,7 +410,7 @@ type to resolve the unknown literals to. 
The PostgreSQL operator catalog has several entries for the prefix operator @, all of which implement -absolute-value operations for various numeric datatypes. One of these +absolute-value operations for various numeric data types. One of these entries is for type float8, which is the preferred type in the numeric category. Therefore, PostgreSQL will use that entry when faced with a non-numeric input: @@ -432,7 +432,7 @@ ERROR: Input '-4.5e500' is out of range for float8 On the other hand, the postfix operator ! (factorial) -is defined only for integer datatypes, not for float8. So, if we +is defined only for integer data types, not for float8. So, if we try a similar case with !, we get: tgl=> select text '44' ! as "factorial"; @@ -657,7 +657,7 @@ system catalog. Values to be inserted into a table are coerced to the destination - column's datatype according to the + column's data type according to the following steps. @@ -712,7 +712,7 @@ What has really happened here is that the two unknown literals are resolved to text by default, allowing the || operator to be resolved as text concatenation. Then the text result of the operator is coerced to bpchar (blank-padded -char, the internal name of the character datatype) to match the target +char, the internal name of the character data type) to match the target column type. (Since the parser knows that text and bpchar are binary-compatible, this coercion is implicit and does not insert any real function call.) Finally, the sizing function @@ -734,7 +734,7 @@ to each output column of a union query. The INTERSECT and EXCEPT constructs resolve dissimilar types in the same way as UNION. A CASE construct also uses the identical algorithm to match up its -component expressions and select a result datatype. +component expressions and select a result data type. 
<literal>UNION</> and <literal>CASE</> Type Resolution diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index a52bc02eb4..20341077c9 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -1,5 +1,5 @@ @@ -215,7 +215,7 @@ SELECT one(); Notice that we defined a column alias within the function body for the result of the function (with the name RESULT), but this column alias is not visible - outside the function. Hence, the result is labelled one + outside the function. Hence, the result is labeled one instead of RESULT.
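The phone_data/phone_number hunk earlier in this patch hinges on a view filtering out private rows before any privilege check on the base table comes into play. A minimal sketch of just the filtering half, using SQLite through Python's sqlite3 module (so GRANT and the PostgreSQL rule machinery are omitted, and the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE phone_data (person TEXT, phone TEXT, private BOOL);
CREATE VIEW phone_number AS
    SELECT person, phone FROM phone_data WHERE NOT private;
INSERT INTO phone_data VALUES ('alice', '555-0100', 0);
INSERT INTO phone_data VALUES ('bob',   '555-0199', 1);
""")

# The view exposes only rows where private is false; in the
# documentation's example, SELECT rights on the view rather than on
# the table are what the secretary would be granted.
rows = conn.execute("SELECT person, phone FROM phone_number").fetchall()
print(rows)  # [('alice', '555-0100')]
```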